Dataset schema (column, dtype, observed min/max or class count):

| column | dtype | min | max |
| --- | --- | --- | --- |
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] (date) | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] (date) | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
🎄 We release 67,074 Qwen3-Coder OpenHands trajectories on SWE-rebench + 2 model checkpoints!
48
Happy holidays! 🎄 I’m Ibragim from Nebius. We’re releasing a big dataset for agentic coding research: 67,074 OpenHands trajectories (plus 2 RFT checkpoints), built from 3,800 resolved issues across 1,800+ Python repos. The trajectories are long: 64 turns on average, up to 100 turns, and up to 131k context length.

Agent framework: **OpenHands**
Model: **Qwen3-Coder-480B-A35B-Instruct**
Training tasks from **SWE-rebench:** [https://huggingface.co/datasets/nebius/SWE-rebench](https://huggingface.co/datasets/nebius/SWE-rebench)

To demonstrate the data quality, we’re also releasing two checkpoints trained with rejection sampling fine-tuning (RFT):

**SWE-rebench-openhands-Qwen3-30B-A3B**
- SWE-bench Verified: 26% → 50% Pass@1
- SWE-rebench (September): 14% → 28% Pass@1

**SWE-rebench-openhands-Qwen3-235B-A22B**
- SWE-bench Verified: 46% → 62% Pass@1
- SWE-rebench (September): 25% → 34% Pass@1

We also ran extensive evaluations of OpenHands with 100-turn and 500-turn limits across various models. We don’t just look at solutions — we also evaluate tests generated by the models. For each issue, we check:
- How often the generated tests are correct
- How often the model’s final patch passes its own tests

More details in our blog post: [https://nebius.com/blog/posts/openhands-trajectories-with-qwen3-coder-480b](https://nebius.com/blog/posts/openhands-trajectories-with-qwen3-coder-480b)
Hugging Face collection: [https://huggingface.co/collections/nebius/openhands-trajectories](https://huggingface.co/collections/nebius/openhands-trajectories)

Please let us know if you’d like us to release more data using other models or agents.
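A minimal sketch of pulling the SWE-rebench training tasks with the Hugging Face `datasets` library; the split name is an assumption, so check the dataset card for the exact configs and for the trajectory datasets in the collection:

```python
# Minimal sketch: load the SWE-rebench training tasks from the Hub.
# The split name is an assumption; consult the dataset card for the
# actual configs/splits and for the trajectory datasets in the collection.
from datasets import load_dataset

ds = load_dataset("nebius/SWE-rebench", split="train")
print(ds)      # schema + row count
print(ds[0])   # one resolved-issue task record
```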
2025-12-24T21:08:30
https://huggingface.co/collections/nebius/openhands-trajectories
Fabulous_Pollution10
huggingface.co
1970-01-01T00:00:00
0
{}
1puxedb
false
null
t3_1puxedb
/r/LocalLLaMA/comments/1puxedb/we_release_67074_qwen3coder_openhands/
false
false
default
48
{'enabled': False, 'images': [{'id': 'Dhe375wyR8tu2LaFu992kxZN7nBVngBP3mBqrvOD7tg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Dhe375wyR8tu2LaFu992kxZN7nBVngBP3mBqrvOD7tg.png?width=108&crop=smart&auto=webp&s=ec3f91a26da07c12f8a13d1e8416125e215df6b1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Dhe375wyR8tu2LaFu992kxZN7nBVngBP3mBqrvOD7tg.png?width=216&crop=smart&auto=webp&s=09295ce71458d118b8250b19c09296786f5b99ec', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Dhe375wyR8tu2LaFu992kxZN7nBVngBP3mBqrvOD7tg.png?width=320&crop=smart&auto=webp&s=1fd1d8d1cb2d08e461668979fdd0ab7ebcc88e69', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Dhe375wyR8tu2LaFu992kxZN7nBVngBP3mBqrvOD7tg.png?width=640&crop=smart&auto=webp&s=4c142e961fe25094c4c77dc7d9a6118e57af1255', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Dhe375wyR8tu2LaFu992kxZN7nBVngBP3mBqrvOD7tg.png?width=960&crop=smart&auto=webp&s=5ca7ebdb26cacea96fea13c4e7e6fe771a3e3d32', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Dhe375wyR8tu2LaFu992kxZN7nBVngBP3mBqrvOD7tg.png?width=1080&crop=smart&auto=webp&s=b3988d6349aff9babda2e376d09fcd047fad08a7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Dhe375wyR8tu2LaFu992kxZN7nBVngBP3mBqrvOD7tg.png?auto=webp&s=f5d1a42b8afb300eb98f4df989b40812bb05d143', 'width': 1200}, 'variants': {}}]}
Hey guys, just joined NVIDIA dev ecosystem. They have a lot of courses. Which ones would you start with?
1
2025-12-24T21:07:38
https://i.redd.it/hnro3qrvv79g1.jpeg
QualityEvery6965
i.redd.it
1970-01-01T00:00:00
0
{}
1puxdpw
false
null
t3_1puxdpw
/r/LocalLLaMA/comments/1puxdpw/hey_guys_just_joined_nvidia_dev_ecosystem_they/
false
false
default
1
{'enabled': True, 'images': [{'id': 'hnro3qrvv79g1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/hnro3qrvv79g1.jpeg?width=108&crop=smart&auto=webp&s=6d462afea8c10c3b62530be37e1e32552c75b1d1', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/hnro3qrvv79g1.jpeg?width=216&crop=smart&auto=webp&s=64bdda87f27fb3c4f74a0c4efa757fd39b08a166', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/hnro3qrvv79g1.jpeg?width=320&crop=smart&auto=webp&s=699634ed8d58e0e186d1299fe0a34d62d72b2e07', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/hnro3qrvv79g1.jpeg?width=640&crop=smart&auto=webp&s=f60aec93bd5102b02b9074aafa565c2971cfb5ae', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/hnro3qrvv79g1.jpeg?width=960&crop=smart&auto=webp&s=f71a87fdb04cdbc05e316cc130767a0440092dac', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/hnro3qrvv79g1.jpeg?width=1080&crop=smart&auto=webp&s=e780cc758d756700d033304c94c0a2cc94d7b5a2', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/hnro3qrvv79g1.jpeg?auto=webp&s=b95f94cc7c8e40a299a92cb4bef009033069bdd0', 'width': 4032}, 'variants': {}}]}
Is there any tool I can use to train GPT-2 or Phi-2 on my own datasets locally on my desktop?
3
Just wondering if there is an easy way to fine-tune a model locally.
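One fairly easy local path is Hugging Face transformers' Trainer; below is a minimal sketch for GPT-2 on a plain text file (the file name and hyperparameters are placeholders, not a recommendation):

```python
# Minimal local fine-tuning sketch for GPT-2 with Hugging Face transformers.
# "my_data.txt" is a placeholder for your own dataset (one example per line).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token          # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

ds = load_dataset("text", data_files={"train": "my_data.txt"})["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
            remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
model.save_pretrained("gpt2-finetuned")
```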
2025-12-24T21:07:00
https://www.reddit.com/r/LocalLLaMA/comments/1puxd9x/is_there_any_tool_i_can_use_to_train_gpt_2_or_phi/
2001obum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puxd9x
false
null
t3_1puxd9x
/r/LocalLLaMA/comments/1puxd9x/is_there_any_tool_i_can_use_to_train_gpt_2_or_phi/
false
false
self
3
null
Hardware for a new AI DIY server build
1
Hola all. If you were going to build a new AI rig today, what hardware would you choose? Let me clarify a bit:

- DIY but "serverish" grade build that fits a rack
- Most cheap industrial 4-5U cases can fit an ATX mobo with 7 slots
- Assuming 7 slots is max, then 3x GPU (3x 2-slot)
- I currently own 2x RTX 3090 Turbo, so I'd either get another RTX 3090 with a turbo fan or upgrade to something newer (Radeon?), minimum 72GB VRAM
- What ATX motherboard could happily handle 3 GPUs on PCIe x16 with minimal latency?
- Single-socket CPU (Intel? AMD?)
- 128GB RAM minimum, 256GB preferable
- Does going DDR5 really make sense for a CPU/GPU build? I'm on DDR4 now and can't say it's bad.
- PSU >1kW that can handle 3x GPU at +300W each
- Cheaper the better, looking at a moped rather than a Ferrari :)

The AI rig would mostly run chat and coding models. I'm looking into running larger models (100B+) and running multiple smaller models in parallel to get multiple agents working simultaneously. Any ideas?
2025-12-24T21:02:52
https://www.reddit.com/r/LocalLLaMA/comments/1puxa89/hardware_for_a_new_ai_diy_server_build/
ChopSticksPlease
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puxa89
false
null
t3_1puxa89
/r/LocalLLaMA/comments/1puxa89/hardware_for_a_new_ai_diy_server_build/
false
false
self
1
null
We asked OSS-120B and GLM 4.6 to play 1,408 Civilization V games from the Stone Age into the future. Here's what we found.
603
[GLM-4.6 Playing Civilization V + Vox Populi (Replay)](https://i.redd.it/zaib4up4s79g1.gif)

We had GPT-OSS-120B and GLM-4.6 playing 1,408 full Civilization V games (with Vox Populi/Community Patch activated). In a nutshell: LLMs set strategies for Civilization V's algorithmic AI to execute. Here is what we found:

[An overview of our system and results](https://preview.redd.it/shjvvfpbq79g1.png?width=3187&format=png&auto=webp&s=0175d5203c471ef332d54c2fe2b17d2369813e24)

**The boring result:** With a simple prompt and little memory, both LLMs did slightly better in the best score they could achieve within each game (+1-2%), but slightly worse in win rates (-1~3%). Despite the large number of games run (2,207 in total, with 919 baseline games), neither metric is significant.

**The surprising part:** Pure-LLM or pure-RL approaches [[1]](https://arxiv.org/abs/2401.10568), [[2]](https://arxiv.org/abs/2502.20807) couldn't get an AI to play and survive full Civilization games. With our hybrid approach, LLMs can survive as long as the game goes (~97.5% LLMs vs. ~97.3% the in-game AI).

Moreover, the two models developed **completely different playstyles**.

* OSS-120B went full warmonger: +31.5% more Domination victories, -23% fewer Cultural victories compared to baseline
* GLM-4.6 played more balanced, leaning into both Domination and Cultural strategies
* Both models preferred **Order** (**communist-like**, ~24% more likely) ideology over **Freedom** (democratic-like)

**Cost/latency (OSS-120B):**

* ~53,000 input / 1,500 output tokens per turn
* **~$0.86/game** (OpenRouter pricing as of 12/2025)
* Input tokens scale linearly as the game state grows.
* **Output stays flat: models don't automatically "think harder" in the late game.**

**Watch more:**

* Paper link: [https://arxiv.org/abs/2512.18564](https://arxiv.org/abs/2512.18564)
* [Example save 1](https://civitas-john.github.io/vox-deorum-replay/?file=https://civitas-john.github.io/vox-deorum-replay/examples/1.Civ5Replay)
* [Example save 2](https://civitas-john.github.io/vox-deorum-replay/?file=https://civitas-john.github.io/vox-deorum-replay/examples/2.Civ5Replay)
* [Example save 3](https://civitas-john.github.io/vox-deorum-replay/?file=https://civitas-john.github.io/vox-deorum-replay/examples/3.Civ5Replay)

**Try it yourself:**

* The Vox Deorum system is 100% open-sourced and currently in beta testing
* GitHub Repo: [https://github.com/CIVITAS-John/vox-deorum](https://github.com/CIVITAS-John/vox-deorum)
* GitHub Release: [https://github.com/CIVITAS-John/vox-deorum/releases](https://github.com/CIVITAS-John/vox-deorum/releases)
* Works with any **OpenAI-compatible local providers**

[We exposed the game as a MCP server, so your agents can play the game with you](https://preview.redd.it/tccdt44oq79g1.png?width=2291&format=png&auto=webp&s=0b8a4fe5871db4d2bf00f417acd13de3e688037f)

**Your thoughts are greatly appreciated:**

* What's a good way to express the game state more efficiently? Consider a late-game turn where you have 20+ cities and 100+ units. Easily 50k+ tokens. Could multimodal help?
* How can we get LLMs to play better? I have considered RAG, but there is really little data to "retrieve" here. Possibly self-play + self-reflection + long-term memory?
* How are we going to design strategy games if LLMs are to play with you? I have put an LLM spokesperson for civilizations as an example, but there is surely more to do?

**Join us:**

* I am hiring a PhD student for Fall '26, and we are expanding our game-related work rapidly. Shoot me a DM if you are interested!
* I am happy to collaborate with anyone interested in furthering this line of work.
2025-12-24T20:50:16
https://www.reddit.com/r/LocalLLaMA/comments/1pux0yc/we_asked_oss120b_and_glm_46_to_play_1408/
vox-deorum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pux0yc
false
null
t3_1pux0yc
/r/LocalLLaMA/comments/1pux0yc/we_asked_oss120b_and_glm_46_to_play_1408/
false
false
https://b.thumbs.redditm…ghHDLniBqMTU.jpg
603
null
What OS do you run on your AI rigs? Ubuntu, TrueNAS, etc.?
3
Hey Folks,

Just the title, really. I just built a new local rig and am debating what to put on there. Ubuntu seems like a good option: less overhead. TrueNAS seems feature-rich with native Docker Compose.

My use cases: finetuning, long RL runs, GGUF conversions, async long-run multi-agent executions, and of course vLLM for inference.

Also, lmk if you find a cheap kidney out there, I really needed that RAM :)
2025-12-24T20:33:23
https://www.reddit.com/r/LocalLLaMA/comments/1puwohv/what_os_do_you_run_on_your_ai_rigs_ubuntu_truenas/
KvAk_AKPlaysYT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puwohv
false
null
t3_1puwohv
/r/LocalLLaMA/comments/1puwohv/what_os_do_you_run_on_your_ai_rigs_ubuntu_truenas/
false
false
self
3
null
Deepseek will release a larger model next year
70
This is old news, but I forgot to mention it before. This is from section 5: [https://arxiv.org/html/2512.02556v1#S5](https://arxiv.org/html/2512.02556v1#S5)

- "First, due to fewer total training FLOPs, the breadth of world knowledge in DeepSeek-V3.2 still lags behind that of leading proprietary models. We plan to address this knowledge gap in future iterations by scaling up the pre-training compute."

I speculate it will be bigger than 1.6T params (maybe 1.7-2.5T) and trained on at least 2.5-3x more tokens than now... Hopefully they will release the weights for this. I also hope for a smaller version (maybe it won't happen).

- "Second, token efficiency remains a challenge; DeepSeek-V3.2 typically requires longer generation trajectories (i.e., more tokens) to match the output quality of models like Gemini-3.0-Pro. Future work will focus on optimizing the intelligence density of the model's reasoning chains to improve efficiency. Third, solving complex tasks is still inferior to frontier models, motivating us to further refine our foundation model and post-training recipe."

They will increase the efficiency of its reasoning, i.e. it will use fewer thinking tokens than before for the same task. They also plan to improve its ability to solve complex tasks, which probably means better reasoning and agentic tooling.
2025-12-24T20:25:01
https://www.reddit.com/r/LocalLLaMA/comments/1puwi5o/deepseek_will_release_a_larger_model_next_year/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puwi5o
false
null
t3_1puwi5o
/r/LocalLLaMA/comments/1puwi5o/deepseek_will_release_a_larger_model_next_year/
false
false
self
70
null
Just saw this paper on arXiv - is this legit? Supposedly LangVAE straps a VAE + compression algorithm onto any LLM, reducing resource requirements by up to 90%?!
15
https://arxiv.org/html/2505.00004v1

If the article and supporting libs *are* legit, then I have two follow-up questions:

Can this be used to reduce requirements for inference, or is it only useful for training and research?

Finally, if it *can* reduce requirements for inference, how do we get started?
2025-12-24T20:23:32
https://www.reddit.com/r/LocalLLaMA/comments/1puwh0a/just_saw_this_paper_on_arxiv_is_this_legit/
MrE_WI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puwh0a
false
null
t3_1puwh0a
/r/LocalLLaMA/comments/1puwh0a/just_saw_this_paper_on_arxiv_is_this_legit/
false
false
self
15
null
LM Studio CPU usage more than 100 per cent.
0
So I did read a couple of posts about it really just using one core, but I want to be sure that I don't fry anything. What does that really mean?
2025-12-24T20:16:42
https://www.reddit.com/r/LocalLLaMA/comments/1puwbu3/lm_studio_cpu_usage_more_than_100_per_cent/
Such-Honeydew4760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puwbu3
false
null
t3_1puwbu3
/r/LocalLLaMA/comments/1puwbu3/lm_studio_cpu_usage_more_than_100_per_cent/
false
false
self
0
null
Concurrency planning for local RAG ingestion - 5090 + 5070Ti, looking for sanity check
1
For those unfamiliar: LightRAG (https://github.com/HKUDS/LightRAG) builds a knowledge graph from your documents using an LLM for entity/relationship extraction and an embedding model for vector search. The ingestion is LLM-heavy.

My mistake: I ran everything through LM Studio on a single 5090. Qwen3 14B instruct + Qwen3 Embedding 4B. 15 hours to ingest, and power draw was roughly 300 / 575W. Turns out LM Studio processes requests sequentially by default - the GPU was mostly idle-waiting. Alex Ziskind's video comparing vLLM vs llama.cpp (https://www.youtube.com/watch?v=3XCunZqvVDA) shed some light on better ways to orchestrate this.

New plan:
- Added a 5070 Ti (16GB) to hold the embedding model
- Move to vLLM or llama.cpp server with parallel slots
- Possibly bump to Qwen 30B for better entity extraction quality. Still figuring out the trade-offs with smaller quant / shorter context to allow more parallelism
- Orchestrate via Docker Model Runner (https://docs.docker.com/ai/model-runner/)

Questions:
1. Am I thinking about the GPU split correctly? Embeddings on the 5070 Ti, LLM on the 5090?
2. vLLM vs llama.cpp for this?
3. Is 30B meaningfully better than 14B for entity/relationship extraction? Should I run a coder model instead of an instruct model, since they are better at following the RAG formatting standards?
4. Anything obvious I'm missing?

For prod I use OpenRouter (Qwen 80B + cloud embeddings), so this is purely about optimizing local ingestion throughput and quality.
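As a rough illustration of moving off sequential requests, here is a minimal concurrency sketch against an OpenAI-compatible server such as vLLM or llama.cpp's server; the URL, model name, and chunk list are placeholders, not LightRAG's actual client code:

```python
# Minimal concurrency sketch: fan out extraction prompts to an
# OpenAI-compatible server (vLLM / llama-server) instead of sending
# them one at a time. Endpoint URL, model name, and chunks are
# placeholder assumptions.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="none")

async def extract(chunk: str) -> str:
    resp = await client.chat.completions.create(
        model="qwen3-14b-instruct",  # hypothetical served model name
        messages=[{"role": "user",
                   "content": f"Extract entities and relations:\n{chunk}"}],
    )
    return resp.choices[0].message.content

async def main(chunks: list[str], concurrency: int = 8) -> list[str]:
    sem = asyncio.Semaphore(concurrency)  # cap in-flight requests

    async def bounded(c: str) -> str:
        async with sem:
            return await extract(c)

    return await asyncio.gather(*[bounded(c) for c in chunks])

if __name__ == "__main__":
    print(asyncio.run(main(["example chunk 1", "example chunk 2"])))
```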
2025-12-24T20:10:55
https://www.reddit.com/r/LocalLLaMA/comments/1puw7iz/concurrency_planning_for_local_rag_ingestion_5090/
PentagonUnpadded
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puw7iz
false
null
t3_1puw7iz
/r/LocalLLaMA/comments/1puw7iz/concurrency_planning_for_local_rag_ingestion_5090/
false
false
self
1
{'enabled': False, 'images': [{'id': 'o7euTFH0vYIhM_DJawkIc6Qzm21qt2TOaSfABszCzLo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o7euTFH0vYIhM_DJawkIc6Qzm21qt2TOaSfABszCzLo.png?width=108&crop=smart&auto=webp&s=c1cc35b3752464f6fe0fe3e090d744436b9fad31', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o7euTFH0vYIhM_DJawkIc6Qzm21qt2TOaSfABszCzLo.png?width=216&crop=smart&auto=webp&s=21b6afabde0ba101ee5dcd71253b545c7ba4c526', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o7euTFH0vYIhM_DJawkIc6Qzm21qt2TOaSfABszCzLo.png?width=320&crop=smart&auto=webp&s=3a5f38a9bd2a6fcfcff91c31424134604d9858b3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o7euTFH0vYIhM_DJawkIc6Qzm21qt2TOaSfABszCzLo.png?width=640&crop=smart&auto=webp&s=5951439e34c52902a2e179a028166c283aecf8ef', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o7euTFH0vYIhM_DJawkIc6Qzm21qt2TOaSfABszCzLo.png?width=960&crop=smart&auto=webp&s=f943ceea0540dd0959de7e2e886e7f60e8c9ad8f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o7euTFH0vYIhM_DJawkIc6Qzm21qt2TOaSfABszCzLo.png?width=1080&crop=smart&auto=webp&s=31052a32ecea184eb374492261c35c775545839f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o7euTFH0vYIhM_DJawkIc6Qzm21qt2TOaSfABszCzLo.png?auto=webp&s=cdc675d707fb0ad698fc485e48ffd3adad95bcf6', 'width': 1200}, 'variants': {}}]}
🚨 The Epstein files redaction failure 🚨
0
Large portions of the Epstein files have been improperly redacted, and you can unredact them by literally just copying and pasting into Microsoft word or another word processing program. Is there a way to scrape and mass unredact the files released?
2025-12-24T20:04:21
https://www.reddit.com/r/LocalLLaMA/comments/1puw2fe/the_epstein_files_redaction_failure/
Internal_Ad2621
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puw2fe
false
null
t3_1puw2fe
/r/LocalLLaMA/comments/1puw2fe/the_epstein_files_redaction_failure/
false
false
self
0
null
For people running local agents: what real world action do you still block them from doing?
2
I run agents locally and they reason fine, call tools, and automate workflows. The place I always stop is execution with real consequences, especially payments. Right now it usually looks like this: the agent decides → I manually approve or pay → the workflow continues. I am exploring whether tightly scoped, on-chain stablecoin payments with hard limits, full logs, and easy revocation could safely close that loop without human checkout steps. For people building or running local agents, what is the first action you intentionally keep manual? Payments, emails, deployments, something else?
2025-12-24T19:43:17
https://www.reddit.com/r/LocalLLaMA/comments/1puvmc9/for_people_running_local_agents_what_real_world/
Chance_Lion3547
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puvmc9
false
null
t3_1puvmc9
/r/LocalLLaMA/comments/1puvmc9/for_people_running_local_agents_what_real_world/
false
false
self
2
null
[Architecture Share] Implementing CoALA Memory using Postgres/pgvector (v0.5.0 Deep Dive)
1
I've posted about Soorma here before. We're building an open-source orchestration framework, and we just merged a major update to the **Memory Service**. I wanted to share the architectural decisions we made implementing the **CoALA framework** (Cognitive Architectures for Language Agents) specifically for local/self-hosted setups.

**The Blog Post:** [Zero to AI Agent in 10 Minutes: Architecture Deep Dive](https://www.soorma.ai/blog/zero-to-ai-agent-in-10-minutes/)

**The TL;DR for this sub:**

* **No Pinecone/Weaviate dependency:** We stuck to **PostgreSQL + pgvector**. Why? Because maintaining a separate vector DB for a local agent stack is overkill.
* **4-Layer Memory:** We mapped CoALA's specs (Semantic, Episodic, Procedural, Working) to distinct Postgres schemas with Row Level Security (RLS) for multi-tenancy.
* **Discovery:** We moved away from hardcoded tool definitions. Agents now broadcast their specs via NATS, and the Planner discovers them dynamically.

**Question for the local builders:** For those running local agents (Llama 3 / Mistral), how are you handling *working memory* (shared state) between multiple specialized agents? We're using a `plan_id` correlation chain, but curious if anyone is using shared memory segments or just passing massive context windows?

Let me know what you think of the architecture!
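On the working-memory question, a minimal sketch of a `plan_id`-keyed shared-state table in Postgres that multiple agents could read and write; this is not Soorma's actual schema, and the table name, columns, and DSN are illustrative assumptions:

```python
# Sketch of a plan_id-keyed working-memory store in Postgres.
# Table/column names and the DSN are illustrative assumptions.
import json
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS working_memory (
    plan_id    TEXT NOT NULL,
    key        TEXT NOT NULL,
    value      JSONB NOT NULL,
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    PRIMARY KEY (plan_id, key)
);
"""

def put(conn, plan_id: str, key: str, value: dict) -> None:
    # Upsert so the last writer for a (plan_id, key) pair wins.
    with conn, conn.cursor() as cur:
        cur.execute(
            """INSERT INTO working_memory (plan_id, key, value)
               VALUES (%s, %s, %s)
               ON CONFLICT (plan_id, key)
               DO UPDATE SET value = EXCLUDED.value, updated_at = now()""",
            (plan_id, key, json.dumps(value)),
        )

def get(conn, plan_id: str) -> dict:
    # Return all shared state for one plan as a dict.
    with conn.cursor() as cur:
        cur.execute("SELECT key, value FROM working_memory WHERE plan_id = %s",
                    (plan_id,))
        return {k: v for k, v in cur.fetchall()}

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=agents user=postgres")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
    put(conn, "plan-123", "planner.goal", {"task": "summarize repo"})
    print(get(conn, "plan-123"))
```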
2025-12-24T19:20:12
https://www.reddit.com/r/LocalLLaMA/comments/1puv4hv/architecture_share_implementing_coala_memory/
gnulib
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puv4hv
false
null
t3_1puv4hv
/r/LocalLLaMA/comments/1puv4hv/architecture_share_implementing_coala_memory/
false
false
self
1
null
K2-V2 - 70B and creative writing
27
Has anyone else tried K2-V2 70B in the creative writing realm? I first heard about it from this post: [https://www.reddit.com/r/LocalLLaMA/comments/1pqala0/mbzuai_releases_k2v2_70b_fully_open_model/](https://www.reddit.com/r/LocalLLaMA/comments/1pqala0/mbzuai_releases_k2v2_70b_fully_open_model/)

I am pleasantly surprised at the thinking (you can choose the thinking budget) and output. Is it the best? I don't know yet, but it's nice to have an entirely new line of models to work with... Dense models have always been friendlier to those of us with a "healthy" level of VRAM. I think GLM 4.6 still stacks above it, but it probably edges out GLM Air 4.5. I'll have to go back to that and see how that was.

Love to have your thoughts, and how it stacks up against other models you use. Here are some direct links:

[https://huggingface.co/LLM360/K2-V2](https://huggingface.co/LLM360/K2-V2)
[https://huggingface.co/LLM360/K2-V2-Instruct](https://huggingface.co/LLM360/K2-V2-Instruct)
[https://huggingface.co/cturan/K2-V2-Instruct-GGUF](https://huggingface.co/cturan/K2-V2-Instruct-GGUF)
2025-12-24T19:18:47
https://www.reddit.com/r/LocalLLaMA/comments/1puv3de/k2v2_70b_and_creative_writing/
silenceimpaired
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puv3de
false
null
t3_1puv3de
/r/LocalLLaMA/comments/1puv3de/k2v2_70b_and_creative_writing/
false
false
self
27
{'enabled': False, 'images': [{'id': 'TOoZuwsCxdkzfZTpgJYUk2gRz7HXYz-L8zpcweaK4To', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TOoZuwsCxdkzfZTpgJYUk2gRz7HXYz-L8zpcweaK4To.png?width=108&crop=smart&auto=webp&s=b8d9a5e1eeef9702bd48d9f7fc92d6ff2380ba1a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TOoZuwsCxdkzfZTpgJYUk2gRz7HXYz-L8zpcweaK4To.png?width=216&crop=smart&auto=webp&s=49882774c0751e122a0f5628d32a0b550c240816', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TOoZuwsCxdkzfZTpgJYUk2gRz7HXYz-L8zpcweaK4To.png?width=320&crop=smart&auto=webp&s=6c01e18250c79c7a66e8cedbf68cc7b68fa1ad19', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TOoZuwsCxdkzfZTpgJYUk2gRz7HXYz-L8zpcweaK4To.png?width=640&crop=smart&auto=webp&s=7b3999ff24bbdcf23aaf30cded1175aa00b350d3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TOoZuwsCxdkzfZTpgJYUk2gRz7HXYz-L8zpcweaK4To.png?width=960&crop=smart&auto=webp&s=36f426e146f25e1fd805184ab7db3d9061c46211', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TOoZuwsCxdkzfZTpgJYUk2gRz7HXYz-L8zpcweaK4To.png?width=1080&crop=smart&auto=webp&s=40515c7f0767e883d45990df53e262af463e8730', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TOoZuwsCxdkzfZTpgJYUk2gRz7HXYz-L8zpcweaK4To.png?auto=webp&s=9bcd09855237ca9b62a98f2c26dcce502648976b', 'width': 1200}, 'variants': {}}]}
Built a Mortgage Underwriting OCR With 96% Real-World Accuracy (Saved ~$2M/Year)
0
I recently built an OCR system specifically for mortgage underwriting, and the real-world accuracy is consistently around **96%**. This wasn't a lab benchmark. It's running in production.

For context, most underwriting workflows I saw were using a single generic OCR engine and were stuck around **70-72% accuracy**. That low accuracy cascades into manual fixes, rechecks, delays, and large ops teams.

By using a **hybrid OCR architecture instead of a single OCR**, designed around underwriting document types and validation, the firm was able to:

- Reduce manual review dramatically
- Cut processing time from days to minutes
- Improve downstream risk analysis because the data was finally clean
- Save **~$2M per year** in operational costs

The biggest takeaway for me: underwriting accuracy problems are usually not "AI problems", they're **data extraction problems**. Once the data is right, everything else becomes much easier.

Happy to answer technical or non-technical questions if anyone's working in lending or document automation.
2025-12-24T18:54:34
https://www.reddit.com/r/LocalLLaMA/comments/1puujvb/built_a_mortgage_underwriting_ocr_with_96/
Fantastic-Radio6835
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puujvb
false
null
t3_1puujvb
/r/LocalLLaMA/comments/1puujvb/built_a_mortgage_underwriting_ocr_with_96/
false
false
self
0
null
What to run with 72 GB VRAM, 128 GB RAM?
0
I'm curious if anyone is in a similar position as me. I have maxed out my Z790 motherboard with:

- 4090
- 2x 3090
- 2x 64GB DDR5 5600 MT/s

This puts me in a weird situation where I can run models like GPT-OSS-120B, GLM 4.5 Air, and MiniMax M2 with ease. Sadly I'm just a bit short of GLM 4.6 (even REAP), and very far away from models like DeepSeek and Kimi K2.

Out of all the models I can run, I find GPT-OSS-120B to be the best. But I can run this model just fine without the other two GPUs, which seems like a waste lol.

Are there any models anyone can recommend in the ~250-300B range? Or perhaps dense models in the ~100B range?
2025-12-24T18:50:33
https://www.reddit.com/r/LocalLLaMA/comments/1puuglc/what_to_run_with_72_gb_vram_128_gb_ram/
kevin_1994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puuglc
false
null
t3_1puuglc
/r/LocalLLaMA/comments/1puuglc/what_to_run_with_72_gb_vram_128_gb_ram/
false
false
self
0
null
A sanity layer that can make SLMs useful (sSanityLayer)
11
This is a MultiHeadAttention layer architecture that modulates emotional intensity by introducing vector bias and/or vector noise. It uses semantic anchoring to alter the sanity state (essentially tied to the strength and boost parameters) using a hybrid RNN.

Note: this does not make LLMs smarter, but rather acts as a smart filter. The logic can be used to create vSLMs like the one demonstrated in the repository, which are trained to respond through triggers. The sSanityLayer dynamically updates its state and introduces vector noise to corrupt the vector positions in the V dataset. The result? The model knows what it wants, but can't put it in a fixed manner. This flustered state can be triggered by lowered sanity.

Potato is a model trained on the same architecture; at just 77KB, it fulfills the same purpose precisely well. The model can be trained on CPUs, while also being insanely fast (for its small size).

On transformer models, the anchors change the logit bias by using `t_ids_2 = tokenizer.encode("" + w, add_special_tokens=False)`.

Example log from GPT-2 Small:

Prompt: "the girl was incapable and dead"

Without the layer:
Output: "accurate presentation so precisely there was no transition... and a prognosis with 1990s digital. Somebody make a damn big thing up..."

With the layer:
Output: "because she refused to buckle."

GitHub link: https://github.com/kavyamali/sSanityLayer
2025-12-24T17:57:25
https://www.reddit.com/r/LocalLLaMA/comments/1put96m/a_sanity_layer_that_can_make_slms_useful/
ValuableLucky8566
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1put96m
false
null
t3_1put96m
/r/LocalLLaMA/comments/1put96m/a_sanity_layer_that_can_make_slms_useful/
false
false
self
11
{'enabled': False, 'images': [{'id': 'vpkkw_-KG7rU-vd5OSL5TGzzjN9jx9s58f5JNlEofQE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vpkkw_-KG7rU-vd5OSL5TGzzjN9jx9s58f5JNlEofQE.png?width=108&crop=smart&auto=webp&s=56ee6701040c7f9db96d7ead6c95e05038787925', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vpkkw_-KG7rU-vd5OSL5TGzzjN9jx9s58f5JNlEofQE.png?width=216&crop=smart&auto=webp&s=6f5294ec3e17a0ff81f53638ceeba607a2f8c7f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vpkkw_-KG7rU-vd5OSL5TGzzjN9jx9s58f5JNlEofQE.png?width=320&crop=smart&auto=webp&s=7dd6087518c8b8e735bab4acdeb0e3c621b5c0f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vpkkw_-KG7rU-vd5OSL5TGzzjN9jx9s58f5JNlEofQE.png?width=640&crop=smart&auto=webp&s=94d1491e3a48a125b6f4dc25dce853383f71a69f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vpkkw_-KG7rU-vd5OSL5TGzzjN9jx9s58f5JNlEofQE.png?width=960&crop=smart&auto=webp&s=715b48b2fd3ddf58bdf4679d23f2f4f7d47a3fe7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vpkkw_-KG7rU-vd5OSL5TGzzjN9jx9s58f5JNlEofQE.png?width=1080&crop=smart&auto=webp&s=bab9d558f1dd5f44df4ee26fc6437cb4366fbfd7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vpkkw_-KG7rU-vd5OSL5TGzzjN9jx9s58f5JNlEofQE.png?auto=webp&s=dbacf0d4140443df1019fcfba0561a068dbbcb8b', 'width': 1200}, 'variants': {}}]}
What is the next step after learning about transformers in detail?
2
I have learned about transformers in detail, and now I want to understand how and why we deviated from the original architecture to better architectures, and other things related to it. Can someone suggest how I should proceed? Serious answers only, please.
2025-12-24T17:32:45
https://www.reddit.com/r/LocalLLaMA/comments/1pusp8x/what_is_the_next_step_after_learning_about/
Super_Piano8278
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pusp8x
false
null
t3_1pusp8x
/r/LocalLLaMA/comments/1pusp8x/what_is_the_next_step_after_learning_about/
false
false
self
2
null
Help with context length on ollama
3
2025-12-24T17:30:43
https://www.reddit.com/gallery/1pusnn6
JHorma97
reddit.com
1970-01-01T00:00:00
0
{}
1pusnn6
false
null
t3_1pusnn6
/r/LocalLLaMA/comments/1pusnn6/help_with_context_length_on_ollama/
false
false
https://b.thumbs.redditm…t4JLOwUZjzok.jpg
3
null
I built a deterministic internal-state reasoning engine that constrains LLM output (proof-of-architecture + demo)
0
I've been experimenting with a constraint-first internal-state reasoning architecture designed to sit beneath or alongside LLMs.

The core idea is simple: probabilistic language generation should not be the cognitive core. It should be subordinated to persistent symbolic state, deterministic routing, and explicit constraints.

This project ("Ghost") is not an agent, not an autonomous system, and not a general intelligence. It does not generate goals or take actions. It maintains a measurable internal state (mood, tension, contradictions, etc.) and outputs structured advisory signals that shape language output.

I'm sharing this as a proof-of-architecture, not a finished system. The repo includes:

- A clear architectural overview
- A roadmap
- A text-first demo showing how Ghost resists prompt-level identity injection compared to a standard LLM control case

I'm especially interested in feedback from people thinking about:

- hallucination reduction
- constraint-based reasoning
- symbolic + probabilistic hybrid systems

GitHub (demo included): https://github.com/GhoCentric/ghost-engine
2025-12-24T17:26:52
https://www.reddit.com/r/LocalLLaMA/comments/1puskjc/i_built_a_deterministic_internalstate_reasoning/
GhoCentric
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puskjc
false
null
t3_1puskjc
/r/LocalLLaMA/comments/1puskjc/i_built_a_deterministic_internalstate_reasoning/
false
false
self
0
{'enabled': False, 'images': [{'id': 'i3zMvdHAyaQoLxsgLj_VMfyV4QvlkpofyQ4Zs6FaI2w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i3zMvdHAyaQoLxsgLj_VMfyV4QvlkpofyQ4Zs6FaI2w.png?width=108&crop=smart&auto=webp&s=5eb3a28f64441968c62011400f74222e98d8f23a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i3zMvdHAyaQoLxsgLj_VMfyV4QvlkpofyQ4Zs6FaI2w.png?width=216&crop=smart&auto=webp&s=0e1027d04b6ad0afe7a47ac084f66e9351802ac0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i3zMvdHAyaQoLxsgLj_VMfyV4QvlkpofyQ4Zs6FaI2w.png?width=320&crop=smart&auto=webp&s=3651b47c6247684487a39352ed4b0dd2bfc1df0f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i3zMvdHAyaQoLxsgLj_VMfyV4QvlkpofyQ4Zs6FaI2w.png?width=640&crop=smart&auto=webp&s=3e908f81664df5cd7a8cbc052c9125b5d68a6765', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i3zMvdHAyaQoLxsgLj_VMfyV4QvlkpofyQ4Zs6FaI2w.png?width=960&crop=smart&auto=webp&s=e63f50c89dcb743180514b99a7c4ff59750644b0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i3zMvdHAyaQoLxsgLj_VMfyV4QvlkpofyQ4Zs6FaI2w.png?width=1080&crop=smart&auto=webp&s=ca63dca184deb14740b1c509b4f617a76e612edc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i3zMvdHAyaQoLxsgLj_VMfyV4QvlkpofyQ4Zs6FaI2w.png?auto=webp&s=9fcbbcbef6719291f04be61873cccae83abaccdc', 'width': 1200}, 'variants': {}}]}
Vionous: 5.7M Q&A pairs across 116 domains — free LoRA training data with one-click Colab notebooks
7
Built an open library of training data for domain-specific adapters.

What's there:
- 116 packages (math, programming, sciences, languages, humanities, etc.)
- 5.7 million Q&A pairs
- Every package has a Colab notebook — click, run, trained adapter in 2-4 hours
- Works with any Llama-architecture model

Largest packages:
- Math: 1.2M pairs
- Physics: 175K pairs
- Unix/Linux: 172K pairs
- All Stack Exchange sites + Grand Comics Database

Everything CC-BY-SA, free forever.

[https://github.com/larro1991/vionous](https://github.com/larro1991/vionous)

Looking for contributors to add more domains and test adapters.
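For a sense of what the one-click notebooks roughly do, a minimal LoRA sketch with transformers + peft; the model id, target modules, and toy dataset are illustrative assumptions, not the actual Vionous notebook code:

```python
# Minimal LoRA sketch with transformers + peft. Model id, target modules,
# and the tiny toy dataset are placeholders, not Vionous notebook code.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-3.2-1B"  # any Llama-architecture model (assumption)
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections only.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

pairs = [{"text": "Q: What is 2+2?\nA: 4"}]  # stand-in for a Vionous package
ds = Dataset.from_list(pairs).map(
    lambda ex: tok(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
model.save_pretrained("adapter-out")  # saves only the LoRA weights
```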
2025-12-24T16:28:10
https://www.reddit.com/r/LocalLLaMA/comments/1pur9d6/vionous_57m_qa_pairs_across_116_domains_free_lora/
Confident_Ad_2321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pur9d6
false
null
t3_1pur9d6
/r/LocalLLaMA/comments/1pur9d6/vionous_57m_qa_pairs_across_116_domains_free_lora/
false
false
self
7
{'enabled': False, 'images': [{'id': 'bdafC-rorjoKySqKmxjDHRfCChduOfFgHxZ-BOH3fBE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bdafC-rorjoKySqKmxjDHRfCChduOfFgHxZ-BOH3fBE.png?width=108&crop=smart&auto=webp&s=2b0859ecb3604f2e456ba3e813580a4b21dd5696', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bdafC-rorjoKySqKmxjDHRfCChduOfFgHxZ-BOH3fBE.png?width=216&crop=smart&auto=webp&s=766a091b1c1e8a0ef068e6d9a3f0c27b236c0554', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bdafC-rorjoKySqKmxjDHRfCChduOfFgHxZ-BOH3fBE.png?width=320&crop=smart&auto=webp&s=6253b7856568e4f39fbcb9fb85c9adb29db8dbba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bdafC-rorjoKySqKmxjDHRfCChduOfFgHxZ-BOH3fBE.png?width=640&crop=smart&auto=webp&s=faf37576373d44f244e83523ce38524adf5baba4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bdafC-rorjoKySqKmxjDHRfCChduOfFgHxZ-BOH3fBE.png?width=960&crop=smart&auto=webp&s=ecc11e58bd083d2587fa77b280c5ddfe3e31a1df', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bdafC-rorjoKySqKmxjDHRfCChduOfFgHxZ-BOH3fBE.png?width=1080&crop=smart&auto=webp&s=e8d9e684ddf94d1f16a606ba47618deebcf8337a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bdafC-rorjoKySqKmxjDHRfCChduOfFgHxZ-BOH3fBE.png?auto=webp&s=39480864e4b69962c6565cc3b42574e1b9b0bd58', 'width': 1200}, 'variants': {}}]}
is the openai package still the best approach for working with LLMs in Python?
7
Not a fan of langchain, crewai, or the scores of other AI frameworks. I want just the basics of structured outputs. As far as I can tell, the openai package is the working-and-bug-free go-to. You can of course insert your own endpoint and model. Is there nothing better now? So many new models, etc., but nothing better in such a basic, core tool?
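For the "just the basics of structured outputs" case, a minimal sketch with the plain openai package plus pydantic; the base_url and model name are placeholders for a local OpenAI-compatible server, and whether schema enforcement actually holds depends on the backend:

```python
# Structured output sketch with the plain openai package + pydantic.
# base_url/model are placeholders for a local OpenAI-compatible server;
# json-schema enforcement depends on the backend.
from openai import OpenAI
from pydantic import BaseModel

class Flashcard(BaseModel):
    question: str
    answer: str

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

completion = client.beta.chat.completions.parse(
    model="local-model",  # placeholder model name
    messages=[{"role": "user", "content": "Make one flashcard about pgvector."}],
    response_format=Flashcard,  # SDK converts the pydantic model to a JSON schema
)
card = completion.choices[0].message.parsed
print(card.question, "->", card.answer)
```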
2025-12-24T16:04:42
https://www.reddit.com/r/LocalLLaMA/comments/1puqqjv/is_the_openai_package_still_the_best_approach_for/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puqqjv
false
null
t3_1puqqjv
/r/LocalLLaMA/comments/1puqqjv/is_the_openai_package_still_the_best_approach_for/
false
false
self
7
null
Offline on-device LLM chat app for iOS (local inference, no cloud)
0
I wanted to share an iOS app called **Private Mind: Offline AI Chat** that runs **entirely on-device** - no server calls, no accounts, no tracking. The app focuses on **local inference** on iPhone using optimized models for mobile constraints. Once downloaded, it works fully offline (including airplane mode).

**Key points:**

* 100% local inference (no cloud fallback)
* Runs offline after install
* Privacy-first: no analytics, no data leaves the device
* Simple chat-style UI for everyday use

**App Store:** [https://apps.apple.com/us/app/private-mind-offline-ai-chat/id6754819594](https://apps.apple.com/us/app/private-mind-offline-ai-chat/id6754819594)

I'd love feedback from this community on:

* Expectations vs reality for mobile local LLMs
* Model size / quality trade-offs on iOS
* Features that make sense for strictly local setups

Happy to answer technical questions.
2025-12-24T16:02:38
https://i.redd.it/fnjdcd1fd69g1.png
Careless_Original978
i.redd.it
1970-01-01T00:00:00
0
{}
1puqoum
false
null
t3_1puqoum
/r/LocalLLaMA/comments/1puqoum/offline_ondevice_llm_chat_app_for_ios_local/
false
false
default
0
{'enabled': True, 'images': [{'id': 'fnjdcd1fd69g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/fnjdcd1fd69g1.png?width=108&crop=smart&auto=webp&s=7d48432f5d50274fe1d256da3b43021f12887824', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/fnjdcd1fd69g1.png?width=216&crop=smart&auto=webp&s=16e306cec0cd93185e296b36a6e4c8666a4bc8de', 'width': 216}, {'height': 322, 'url': 'https://preview.redd.it/fnjdcd1fd69g1.png?width=320&crop=smart&auto=webp&s=bd4c81440fb810fa945f5f3cf62783d4ec5fce24', 'width': 320}, {'height': 644, 'url': 'https://preview.redd.it/fnjdcd1fd69g1.png?width=640&crop=smart&auto=webp&s=e6a572aad2f7bd88f60aeab0f8e82ef5193caa45', 'width': 640}, {'height': 967, 'url': 'https://preview.redd.it/fnjdcd1fd69g1.png?width=960&crop=smart&auto=webp&s=56f3972ad0f42a611c55b76ef32f251c38d5fe7d', 'width': 960}], 'source': {'height': 1087, 'url': 'https://preview.redd.it/fnjdcd1fd69g1.png?auto=webp&s=6885f67e9a69e50a61e12f7aaaabd10a2b0f02c8', 'width': 1079}, 'variants': {}}]}
Got spare GPUs but no project ideas. What should a new LLM engineer build/research?
5
I'm new to the LLM space, currently working in AI applications. I currently have access to:

1. **Hardware:** A node with 4x NVIDIA H200 largely at my disposal.
2. **Resources:** Unlimited internal access to various large model APIs.
3. **The Constraint:** Everything is strictly restricted to the internal network. I cannot host public demos, and I can't take the code with me when I eventually leave.

I'm feeling a bit lost regarding my next steps. I'm trying to figure out what to dive into next to keep my edge. Currently, I'm focused on fine-tuning and Agents, like building NL2SQL pipelines for internal workflows or specialized agents tailored to our business needs. Or is there another domain I should be prioritizing to maximize my growth?
2025-12-24T16:00:16
https://www.reddit.com/r/LocalLLaMA/comments/1puqmr5/got_spare_gpus_but_no_project_ideas_what_should_a/
InsideTop3230
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puqmr5
false
null
t3_1puqmr5
/r/LocalLLaMA/comments/1puqmr5/got_spare_gpus_but_no_project_ideas_what_should_a/
false
false
self
5
null
Can GLM 4.7 come close to Sonnet 4.5?
0
Has anyone tested GLM 4.7? Is it really close to Sonnet 4.5? Thank you.
2025-12-24T15:44:08
https://www.reddit.com/r/LocalLLaMA/comments/1puq9up/glm_47_can_close_to_sonnet_47/
Federal_Spend2412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puq9up
false
null
t3_1puq9up
/r/LocalLLaMA/comments/1puq9up/glm_47_can_close_to_sonnet_47/
false
false
self
0
null
How are you guys using the DeepSeek V3.2 Speciale model?
7
I am trying to use the official DeepSeek API to access the DeepSeek V3.2 Speciale model, but I am not able to: there are only two models that I can see, deepseek chat and deepseek reasoning. Can anyone please help me with it? Thanks.
2025-12-24T15:23:58
https://www.reddit.com/r/LocalLLaMA/comments/1puptny/how_you_guys_using_deepseek_v32_speciale_model/
Ai_Peep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puptny
false
null
t3_1puptny
/r/LocalLLaMA/comments/1puptny/how_you_guys_using_deepseek_v32_speciale_model/
false
false
self
7
null
I created an Issue for Maincoder in llama.cpp
4
Please show your support for the issue if you believe that the addition of the Maincoder architecture to llama.cpp is useful. Many thanks! P.S. Will make a follow-up post if a PR is made/implemented.
2025-12-24T15:17:14
https://www.reddit.com/r/LocalLLaMA/comments/1pupo9i/i_created_an_issue_for_maincoder_in_llamacpp/
Sufficient-Bid3874
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pupo9i
false
null
t3_1pupo9i
/r/LocalLLaMA/comments/1pupo9i/i_created_an_issue_for_maincoder_in_llamacpp/
false
false
self
4
{'enabled': False, 'images': [{'id': '6tTSifntJJkEmrWe_90yRTig2UnsKE0SJLuPZ8A2SIU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6tTSifntJJkEmrWe_90yRTig2UnsKE0SJLuPZ8A2SIU.png?width=108&crop=smart&auto=webp&s=bd52a127ef079eef23456f5f8d66e454dea02b81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6tTSifntJJkEmrWe_90yRTig2UnsKE0SJLuPZ8A2SIU.png?width=216&crop=smart&auto=webp&s=740b8ee710a3145f54e6fe192b40f8eeeba71d19', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6tTSifntJJkEmrWe_90yRTig2UnsKE0SJLuPZ8A2SIU.png?width=320&crop=smart&auto=webp&s=176f793328d9b001459053dfeda0a99987ff0003', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6tTSifntJJkEmrWe_90yRTig2UnsKE0SJLuPZ8A2SIU.png?width=640&crop=smart&auto=webp&s=f73d43f4608e972e5c0c282b66fa16a7907629c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6tTSifntJJkEmrWe_90yRTig2UnsKE0SJLuPZ8A2SIU.png?width=960&crop=smart&auto=webp&s=51df0e64b9d5d7473b4284de820703d0762f9fce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6tTSifntJJkEmrWe_90yRTig2UnsKE0SJLuPZ8A2SIU.png?width=1080&crop=smart&auto=webp&s=56cee757e7304bf56f0b084594e49cc54c7ec0bf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6tTSifntJJkEmrWe_90yRTig2UnsKE0SJLuPZ8A2SIU.png?auto=webp&s=7c5ca7291b7604830ef1f7dc922dd78fc28c189d', 'width': 1200}, 'variants': {}}]}
Intel Arc Pro Graphics driver update 32.0.101.8306 WHQL (Q4.25) released
0
2025-12-24T15:16:04
https://www.intel.com/content/www/us/en/download/741626/intel-arc-pro-graphics-windows.html
reps_up
intel.com
1970-01-01T00:00:00
0
{}
1pupndf
false
null
t3_1pupndf
/r/LocalLLaMA/comments/1pupndf/intel_arc_pro_graphics_driver_update_3201018306/
false
false
default
0
{'enabled': False, 'images': [{'id': 'w8cdh82dTQN6aQiuTzDsvYn4x6rNHe8-pGPDRnuyqY8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/w8cdh82dTQN6aQiuTzDsvYn4x6rNHe8-pGPDRnuyqY8.png?width=108&crop=smart&auto=webp&s=7bf7d26eda1372ad44542dc8df7d7bbe2c6487e6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/w8cdh82dTQN6aQiuTzDsvYn4x6rNHe8-pGPDRnuyqY8.png?width=216&crop=smart&auto=webp&s=158aa8b21b7082fdce91d98a2383a6bcec484476', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/w8cdh82dTQN6aQiuTzDsvYn4x6rNHe8-pGPDRnuyqY8.png?width=320&crop=smart&auto=webp&s=a8bfd6c053649b9adcb5f585135584f9d04b85c0', 'width': 320}], 'source': {'height': 330, 'url': 'https://external-preview.redd.it/w8cdh82dTQN6aQiuTzDsvYn4x6rNHe8-pGPDRnuyqY8.png?auto=webp&s=2349572ef078e22abb9d443a97d8bd7351dc834c', 'width': 586}, 'variants': {}}]}
Free PDF-to-Markdown demo that finally extracts clean tables from 10-Ks (Docling)
0
Building RAG apps and hating how free tools mangle tables in financial PDFs? I built a free demo using IBM's Docling – it handles merged cells and footnotes way better than most open-source options.

Try your own PDF: [https://amineace-pdf-tables-rag-demo.hf.space](https://amineace-pdf-tables-rag-demo.hf.space/?referrer=grok.com)

Example on Apple 10-K (shareholders' equity table): https://preview.redd.it/9b1k32ap469g1.png?width=945&format=png&auto=webp&s=6593acb1c29ef7f85ff17e958be58fd3ceafd1b3

Simple test PDF is also clean (headers, lists, table pipes).

Note: Large docs (80+ pages) take 5-10 min on the free tier – worth it for the accuracy.

Would you pay $10/mo for a fast API version (1k pages, async queue, higher limits)? Feedback welcome – planning a waitlist if there's interest!
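For anyone who would rather run this locally than through the demo, a minimal sketch using Docling's documented converter interface; the input path is a placeholder:

```python
# Minimal local Docling sketch: convert a PDF and export Markdown
# (so tables come out as pipe tables instead of mangled text).
# The input path is a placeholder.
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("apple-10k.pdf")  # placeholder path
markdown = result.document.export_to_markdown()

with open("apple-10k.md", "w", encoding="utf-8") as f:
    f.write(markdown)
print(markdown[:500])  # peek at the first headers/tables
```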
2025-12-24T15:13:50
https://www.reddit.com/r/LocalLLaMA/comments/1puplhz/free_pdftomarkdown_demo_that_finally_extracts/
AmineAce
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puplhz
false
null
t3_1puplhz
/r/LocalLLaMA/comments/1puplhz/free_pdftomarkdown_demo_that_finally_extracts/
false
false
https://b.thumbs.redditm…x9MVlsovyydc.jpg
0
null
MS-S1 Recommendations?
1
Hey all,

Apparently by tomorrow I'll be the owner of a 128GB MS-S1. Things I'd like to do...

1) Integrate with Paperless-AI and Karakeep for tagging and RAG
2) Help with HAOS (with automations and setting stuff up)
3) Image gen & music gen. This is mostly for fun/hobby to see what I can do
4) General chat leaning towards the tech side of things for my homelab, e.g. help with docker compose and troubleshooting

What models would be recommended for these, and are there any good guides for setting these up and getting the most out of my hardware? I'd prefer uncensored models, and I'd also prefer to run in an LXC on Proxmox rather than in a VM or bare metal. What can I realistically expect to run well?

Thanks
2025-12-24T14:57:34
https://www.reddit.com/r/LocalLLaMA/comments/1pup8im/mss1_recommendations/
ZeroThaHero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pup8im
false
null
t3_1pup8im
/r/LocalLLaMA/comments/1pup8im/mss1_recommendations/
false
false
self
1
null
Anyone seeing massive redundant prefill cost in deep agent workflows when self-hosting?
1
I've been benchmarking multi-step agent workflows (planner → executor → verifier) on self-hosted open-weight models and keep running into the same pattern: once workflows get deep (10-20 steps) and reuse a large shared context, a majority of inference cost is just re-encoding the same prefix over and over.

In one synthetic but realistic test:

- ~10k token shared prefix
- 20-step agent loop
- single-node GPU

~60%+ of total tokens were redundant prefill. Engine-level prefix caching helps until concurrency increases, then cache churn causes p95/p99 latency to spike.

Curious:

- Are others seeing similar behavior once they move off API inference?
- How are you dealing with it today (limits, summarization, custom caching, etc.)?

Not selling anything — just trying to understand how widespread this is. If you're running something like this and open to comparing notes, DM is fine.
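For reference, a minimal vLLM sketch of the engine-level prefix caching mentioned above; the model name and prompts are placeholders. With `enable_prefix_caching=True`, requests sharing a long common prefix can reuse its KV cache instead of re-encoding it on every step:

```python
# Sketch: vLLM automatic prefix caching. Model name and prompts are
# placeholders standing in for a ~10k-token shared agent context.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", enable_prefix_caching=True)

shared_prefix = "SYSTEM: tools, repo map, and plan state...\n" * 200  # stand-in prefix
steps = [f"{shared_prefix}\nSTEP {i}: decide the next action." for i in range(20)]

outputs = llm.generate(steps, SamplingParams(max_tokens=128, temperature=0.0))
for out in outputs:
    print(out.outputs[0].text[:80])
```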
2025-12-24T14:29:20
https://www.reddit.com/r/LocalLLaMA/comments/1puomr9/anyone_seeing_massive_redundant_prefill_cost_in/
FocusPilot-Sean
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puomr9
false
null
t3_1puomr9
/r/LocalLLaMA/comments/1puomr9/anyone_seeing_massive_redundant_prefill_cost_in/
false
false
self
1
null
Nanbeige4-3B-Thinking-2511
9
Why does almost no one talk about this model? I haven't seen anyone comparing it to Qwen3-4B-Thinking-2507, even though they are very comparable in size and in mindset (both models are in the 3-4B range, and both are overthinkers).

I've only seen a single post about it, and I haven't seen anyone recommend it in any other posts. The model's main issue is overthinking, but that can be resolved later, and Qwen3-4B-Thinking-2507 actually has the same overthinking issue; most small language models aren't very efficient (:
2025-12-24T14:16:03
https://www.reddit.com/r/LocalLLaMA/comments/1puocpz/nanbeige43bthinking2511/
Character_Sea7898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puocpz
false
null
t3_1puocpz
/r/LocalLLaMA/comments/1puocpz/nanbeige43bthinking2511/
false
false
self
9
null
MiniMax M2.1 is going to be open source, which is good, and the picture here shows MiniMax has decoded how to make their model good at coding. If you look at the benchmark closely, it's the same pattern as the Claude benchmarks: best at coding, worst at the rest. So now we have a lab solely focusing on coding.
51
MiniMax is part of Alibaba, so they have compute, lots of compute, so they are not going to lag behind. And guess what, MiniMax is also good at video and audio generation. So what the hell is Claude doing with that much compute while crying about price?
2025-12-24T13:55:44
https://i.redd.it/h0zmnel4q59g1.png
Select_Dream634
i.redd.it
1970-01-01T00:00:00
0
{}
1punxjz
false
null
t3_1punxjz
/r/LocalLLaMA/comments/1punxjz/minimax_m21_is_going_to_open_source_which_is_good/
false
false
default
51
{'enabled': True, 'images': [{'id': 'h0zmnel4q59g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/h0zmnel4q59g1.png?width=108&crop=smart&auto=webp&s=aa3f0a36b4dd1ee56b6c9c10517fbd99d2b21581', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/h0zmnel4q59g1.png?width=216&crop=smart&auto=webp&s=5ff8b98f3c7e13214479c270fcb54d2f0723eab9', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/h0zmnel4q59g1.png?width=320&crop=smart&auto=webp&s=8441e5ca00ede4fa015d69a9d00bc404be5afb1a', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/h0zmnel4q59g1.png?width=640&crop=smart&auto=webp&s=e169e63ee2261089cd5e05e957f9a8baaf183883', 'width': 640}, {'height': 535, 'url': 'https://preview.redd.it/h0zmnel4q59g1.png?width=960&crop=smart&auto=webp&s=5bdafe796bc1b92c9d03d13da35376f4d00e767c', 'width': 960}, {'height': 602, 'url': 'https://preview.redd.it/h0zmnel4q59g1.png?width=1080&crop=smart&auto=webp&s=17de057030b1f2c9099c1a0a98c3055fdcdd0881', 'width': 1080}], 'source': {'height': 2286, 'url': 'https://preview.redd.it/h0zmnel4q59g1.png?auto=webp&s=0bb8db6dd6a726652e5a0daa225f763c6a600e92', 'width': 4096}, 'variants': {}}]}
Which GPU should I use to caption ~50k images/day
56
I need to generate captions/descriptions for around 50,000 images per day (~1.5M per month) using a vision-language model. From my initial tests, uform-gen2-qwen-500m and qwen2.5-vl:7b seem good enough quality for me.

I'm planning to rent a GPU, but inference speed is critical — the images need to be processed within the same day, so latency and throughput matter a lot. Based on what I've found online, AWS G5 instances or GPUs like the L40 *seem* like they could handle this, but I'm honestly not very confident about that assessment.

Do you have any recommendations?

* Which GPU(s) would you suggest for this scale?
* Any experience running similar VLM workloads at this volume?
* Tips on optimizing throughput (batching, quantization, etc.) are also welcome.

Thanks in advance.
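A quick back-of-envelope sketch of the throughput requirement; the per-image latency values are placeholder assumptions to substitute with measured numbers from your own benchmarks:

```python
# Back-of-envelope throughput check for 50k images/day. The
# seconds-per-image values are placeholders, not measurements.
IMAGES_PER_DAY = 50_000
SECONDS_PER_DAY = 24 * 3600

required_rate = IMAGES_PER_DAY / SECONDS_PER_DAY  # ~0.58 images/sec sustained
print(f"Required sustained rate: {required_rate:.2f} images/sec")

for sec_per_image in (0.5, 1.0, 2.0):  # assumed single-stream latency
    gpus_needed = IMAGES_PER_DAY * sec_per_image / SECONDS_PER_DAY
    print(f"{sec_per_image:>4}s/image -> {gpus_needed:.2f} GPU-equivalents "
          f"(batching raises effective throughput well beyond 1/latency)")
```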
2025-12-24T13:14:59
https://www.reddit.com/r/LocalLLaMA/comments/1pun4kk/which_gpu_should_i_use_to_caption_50k_imagesday/
koteklidkapi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pun4kk
false
null
t3_1pun4kk
/r/LocalLLaMA/comments/1pun4kk/which_gpu_should_i_use_to_caption_50k_imagesday/
false
false
self
56
null
Learned Routers (Multi Model)
1
I am aware everyone hates the ChatGPT router LOL, but I am interested in good-quality open-source router models that select between LLMs for local deployments.

Does anyone know some good existing router models? Any good GitHub repos in this area? What sort of techniques are good for routers? BERT-likes? RL?
2025-12-24T12:55:41
https://www.reddit.com/r/LocalLLaMA/comments/1pumrfl/learned_routers_multi_model/
SlowFail2433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pumrfl
false
null
t3_1pumrfl
/r/LocalLLaMA/comments/1pumrfl/learned_routers_multi_model/
false
false
self
1
null
Hermes 13B beats Opus and Sonnet after 73k tests
0
After 2 months developing an app to generate high-quality flashcards with AI, and 73k analyses, I've realized that a well-tuned Hermes 13B generates better results than Opus, Sonnet, and GLM for this specific task. The local 13B model beats them all.

Next step is fine-tuning, but definitely, for a specific task, a small local model can give better results than giant general models, and you run it locally without burning money on APIs.
2025-12-24T12:43:02
https://www.reddit.com/r/LocalLLaMA/comments/1pumjdc/hermes_13b_beats_opus_and_sonnet_after_73k_tests/
Empty_Enthusiasm_167
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pumjdc
false
null
t3_1pumjdc
/r/LocalLLaMA/comments/1pumjdc/hermes_13b_beats_opus_and_sonnet_after_73k_tests/
false
false
self
0
null
better times will come soon, LocalLLMers rejoice !
6
[https://spectrum.ieee.org/ai-models-locally](https://spectrum.ieee.org/ai-models-locally)
2025-12-24T12:26:56
https://www.reddit.com/r/LocalLLaMA/comments/1pum96n/better_times_will_come_soon_localllmers_rejoice/
DevelopmentBorn3978
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pum96n
false
null
t3_1pum96n
/r/LocalLLaMA/comments/1pum96n/better_times_will_come_soon_localllmers_rejoice/
false
false
self
6
{'enabled': False, 'images': [{'id': 'YBPzJep1mD549XJHVp18ZW7DHhIsK9x6plz2MMGxFBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YBPzJep1mD549XJHVp18ZW7DHhIsK9x6plz2MMGxFBA.png?width=108&crop=smart&auto=webp&s=b2fcb647b3f059cb05985293b7f1d7817d9d790e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YBPzJep1mD549XJHVp18ZW7DHhIsK9x6plz2MMGxFBA.png?width=216&crop=smart&auto=webp&s=a9d8089fac3f9caaedba62b7f0766581dfef1e77', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YBPzJep1mD549XJHVp18ZW7DHhIsK9x6plz2MMGxFBA.png?width=320&crop=smart&auto=webp&s=a07cd98038ac75d0cdd25096418307513e39e7c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YBPzJep1mD549XJHVp18ZW7DHhIsK9x6plz2MMGxFBA.png?width=640&crop=smart&auto=webp&s=1d358cf44b195ebb03fd17da292242cbcc50cb66', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YBPzJep1mD549XJHVp18ZW7DHhIsK9x6plz2MMGxFBA.png?width=960&crop=smart&auto=webp&s=1a05deb8f8354a884873c02d3b8012cf5dff1908', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YBPzJep1mD549XJHVp18ZW7DHhIsK9x6plz2MMGxFBA.png?width=1080&crop=smart&auto=webp&s=c7f1b1787e71c5c3a3165858b296a6e9cfb20d7a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YBPzJep1mD549XJHVp18ZW7DHhIsK9x6plz2MMGxFBA.png?auto=webp&s=f529ae3d81dbb546d8cbd5a03f23e52deae917d4', 'width': 1200}, 'variants': {}}]}
An Open Source AI assistant for MacOS - SAM
6
Hello everyone! I have released an AI assistant application for macOS called Synthetic Autonomic Mind (SAM).

SAM is a native AI helper application that supports local models using llama.cpp and MLX, or remote models via GitHub Copilot, Deepseek, etc. There are a ton of built-in tools, including image generation with Stable Diffusion and RAG, and SAM even has an OpenAI-compatible API.

This software is something that I created for my SO and for myself, and we've decided to release it under an FOSS license (GPLv3), hoping that it could be useful to others too.

**Project Page:** [https://github.com/SyntheticAutonomicMind](https://github.com/SyntheticAutonomicMind)

**Website:** [https://www.syntheticautonomicmind.org/](https://www.syntheticautonomicmind.org/)
2025-12-24T12:12:35
https://www.reddit.com/r/LocalLLaMA/comments/1pum077/an_open_source_ai_assistant_for_macos_sam/
Total-Context64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pum077
false
null
t3_1pum077
/r/LocalLLaMA/comments/1pum077/an_open_source_ai_assistant_for_macos_sam/
false
false
self
6
{'enabled': False, 'images': [{'id': 'fSuqfT66lFRPS9tu7pT2xwdr7oQMUFs-esa4GEDwS2U', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fSuqfT66lFRPS9tu7pT2xwdr7oQMUFs-esa4GEDwS2U.png?width=108&crop=smart&auto=webp&s=6937ee4ee11df3e558884f093061c04ce5d16a76', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/fSuqfT66lFRPS9tu7pT2xwdr7oQMUFs-esa4GEDwS2U.png?width=216&crop=smart&auto=webp&s=9d17de5b17108bc819d4377515c3faa96ba5f926', 'width': 216}], 'source': {'height': 280, 'url': 'https://external-preview.redd.it/fSuqfT66lFRPS9tu7pT2xwdr7oQMUFs-esa4GEDwS2U.png?auto=webp&s=08f49b33f723f5ef9ec94cc2550bca97b323aa3a', 'width': 280}, 'variants': {}}]}
Unsloth GLM 4.7 UD-Q2_K_XL or gpt-oss 120b?
31
I'm sure that gpt-oss will be much faster, but would the extreme GLM quant be better for general programming and chat? Anyone tried it? Downloading both as of now. RTX 3090 + 128GB of DDR4-3600
2025-12-24T11:57:54
https://www.reddit.com/r/LocalLLaMA/comments/1pulqzt/unsloth_glm_47_udq2_k_xl_or_gptoss_120b/
EnthusiasmPurple85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pulqzt
false
null
t3_1pulqzt
/r/LocalLLaMA/comments/1pulqzt/unsloth_glm_47_udq2_k_xl_or_gptoss_120b/
false
false
self
31
null
Hmm all reference to open-sourcing has been removed for Minimax M2.1...
230
Funny how yesterday this page [https://www.minimax.io/news/minimax-m21](https://www.minimax.io/news/minimax-m21) had a statement that weights would be open-sourced on Huggingface and even a discussion of how to run locally on vLLM and SGLang. There was even a (broken but soon to be functional) HF link for the repo... Today that's all gone. Has MiniMax decided to go API only? Seems like they've backtracked on open-sourcing this one. Maybe they realized it's so good that it's time to make some $$$ :( Would be sad news for this community and a black mark against MiniMax.
2025-12-24T11:48:37
https://www.reddit.com/r/LocalLLaMA/comments/1pullo0/hmm_all_reference_to_opensourcing_has_been/
Responsible_Fig_1271
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pullo0
false
null
t3_1pullo0
/r/LocalLLaMA/comments/1pullo0/hmm_all_reference_to_opensourcing_has_been/
false
false
self
230
{'enabled': False, 'images': [{'id': 'GYUWVApHh7WxqJg5euhUy3HbyIqNa4dEj0F1wERAlNc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/GYUWVApHh7WxqJg5euhUy3HbyIqNa4dEj0F1wERAlNc.png?width=108&crop=smart&auto=webp&s=4690a2c6c474158dd99d2260d5de552df325e5e3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/GYUWVApHh7WxqJg5euhUy3HbyIqNa4dEj0F1wERAlNc.png?width=216&crop=smart&auto=webp&s=55d5888cd08634c66be0fb8a7879d431508a960a', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/GYUWVApHh7WxqJg5euhUy3HbyIqNa4dEj0F1wERAlNc.png?auto=webp&s=c27c9f63e268e69686484271be1653754db4dc49', 'width': 300}, 'variants': {}}]}
AI agents keep failing to parse Ansible/Terraform output. Built a CLI that returns JSON instead.
2
I've been running local LLMs as infrastructure agents and kept hitting the same wall: they can't reliably parse traditional DevOps tool outputs.

**The Problem:**

When you ask an AI agent to check if nginx is running:

    # Agent runs this:
    result = subprocess.run(['systemctl', 'status', 'nginx'], capture_output=True)

    # Gets back:
    ● nginx.service - A high performance web server
       Loaded: loaded (/lib/systemd/system/nginx.service; enabled)
       Active: active (running) since Mon 2024-12-23 14:23:11 UTC; 2h 15min ago
         Docs: man:nginx(8)
     Main PID: 1234 (nginx)
        Tasks: 2 (limit: 4915)
       Memory: 2.1M

    # Agent tries to parse with regex... fails 20-30% of the time

Same issue with Ansible playbooks (YAML hell), Terraform plans (text formatting), and basically every traditional CLI tool.

**What I Built:**

A Rust-based CLI called "resh" (Resource Shell) that returns structured JSON for every operation.

**Real Comparison:**

    $ resh svc://nginx.status
    {
      "active": true,
      "pid": 1234,
      "memory_kb": 2048,
      "uptime_seconds": 8115,
      "enabled": true
    }

I tested the same tasks with GPT-4 (via API) and Claude (via API):

*Task: "Check if nginx is running and restart if not"*

* **With systemctl:** 68% success rate (parsing failures)
* **With resh:** 97% success rate (JSON parsing)

The difference is dramatic when chaining multiple operations.

**Design:**

* URI-based addressing: `file://path.txt.read`, `system://.memory`, `ssh://server/cmd.exec`
* Every operation returns JSON (no text parsing)
* Type-safe operations (Rust backend)
* 28 resource handlers so far (file, process, service, system, network, etc.)

**Current Status:**

* v0.9.0 alpha
* Open source (Apache 2.0)
* Works with local LLMs via function calling
* Tested with llama.cpp, Ollama, and cloud APIs

**Example with Local LLM:**

    # Using llama.cpp with function calling
    tools = [
        {
            "name": "resh",
            "description": "Execute infrastructure operations",
            "parameters": {
                "uri": "resource://target.operation"
            }
        }
    ]

    # Agent can now reliably manage infrastructure
    response = llm.chat("Check system health", tools=tools)

**Not trying to replace Ansible/Terraform** - they're great for human-written automation. This is specifically for AI agent consumption where structured outputs are critical.

Curious if others have hit this same wall with local LLMs + infrastructure automation, and whether this approach makes sense.

GitHub: [https://github.com/millertechnologygroup/resh](https://github.com/millertechnologygroup/resh)

Website: [https://reshshell.dev](https://reshshell.dev)

Happy to answer questions about the design, Rust implementation, or integration with different LLM backends.
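For readers wondering what the agent-side wrapper looks like, here is a minimal sketch of consuming resh output from Python (my own illustration, not code from the resh repo): shell out to the CLI, parse the JSON, and hand structured fields back to the model. It only uses the `svc://nginx.status` call shown above; everything else (error handling, the restart step) is left out.

    import json
    import subprocess

    def resh_call(uri: str) -> dict:
        """Run a resh operation and return its JSON output as a dict.

        Assumes `resh` is on PATH and prints a single JSON object to stdout,
        as in the svc://nginx.status example above.
        """
        result = subprocess.run(["resh", uri], capture_output=True, text=True, check=True)
        return json.loads(result.stdout)

    if __name__ == "__main__":
        status = resh_call("svc://nginx.status")
        # Structured fields instead of regex over systemctl text:
        print(f"active={status['active']} pid={status.get('pid')}")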
2025-12-24T11:24:55
https://www.reddit.com/r/LocalLLaMA/comments/1pul85k/ai_agents_keep_failing_to_parse_ansibleterraform/
smille69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pul85k
false
null
t3_1pul85k
/r/LocalLLaMA/comments/1pul85k/ai_agents_keep_failing_to_parse_ansibleterraform/
false
false
self
2
{'enabled': False, 'images': [{'id': 'qM_DA9oX4r1MrstuCNOUcq3nxwa5S1OmdsQ4LLgT8g4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qM_DA9oX4r1MrstuCNOUcq3nxwa5S1OmdsQ4LLgT8g4.png?width=108&crop=smart&auto=webp&s=99a0457da4bd54b977a493ab8247e55a31a647bb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qM_DA9oX4r1MrstuCNOUcq3nxwa5S1OmdsQ4LLgT8g4.png?width=216&crop=smart&auto=webp&s=d62b890049a03ed2a04351ad59072b78519d4ee7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qM_DA9oX4r1MrstuCNOUcq3nxwa5S1OmdsQ4LLgT8g4.png?width=320&crop=smart&auto=webp&s=b856b68ef01ccd5b35a7e2494be90343606c8260', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qM_DA9oX4r1MrstuCNOUcq3nxwa5S1OmdsQ4LLgT8g4.png?width=640&crop=smart&auto=webp&s=3f8a820aa181e6db51dd3030fea49ed2b4fc0441', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qM_DA9oX4r1MrstuCNOUcq3nxwa5S1OmdsQ4LLgT8g4.png?width=960&crop=smart&auto=webp&s=88d9ee8857172364f39d3b6e9cf48d47e1d0e898', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qM_DA9oX4r1MrstuCNOUcq3nxwa5S1OmdsQ4LLgT8g4.png?width=1080&crop=smart&auto=webp&s=64b2aa8c92741f0f0a9742b6407610417511285d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qM_DA9oX4r1MrstuCNOUcq3nxwa5S1OmdsQ4LLgT8g4.png?auto=webp&s=ea840c242a9a349ad8cd85b9f1cd742f819aa037', 'width': 1200}, 'variants': {}}]}
Auralis Enhanced - Ultra fast Local TTS OpenAI API endpoint compatible. Low VRAM
0
# 🚀 What is Auralis Enhanced?

**Auralis Enhanced** is a production-ready fork of the original Auralis TTS engine, optimized for network deployment and real-world server usage. This version includes comprehensive deployment documentation, network accessibility improvements, and GPU memory optimizations for running both the backend API and the frontend UI simultaneously.

# ⚡ Performance Highlights

* **Ultra-Fast Processing**: Convert the entire first Harry Potter book to speech in 10 minutes (**realtime factor of ≈ 0.02x!**)
* **Voice Cloning**: Clone any voice from short audio samples
* **Audio Enhancement**: Automatically enhance reference audio quality - works even with low-quality microphones
* **Memory Efficient**: Configurable memory footprint via `scheduler_max_concurrency`
* **Parallel Processing**: Handle multiple requests simultaneously
* **Streaming Support**: Process long texts piece by piece for real-time applications
* **Network Ready**: Pre-configured for [`0.0.0.0`](http://0.0.0.0) binding - accessible from any network interface
* Stays under 6 GB of VRAM when used with Open WebUI
* **Production Deployment**: Complete guides for systemd, Docker, and Nginx

# Quick Start ⭐

# Installation from Source

1. **Clone this repository:**

       git clone https://github.com/groxaxo/Auralis-Enhanced.git
       cd Auralis-Enhanced

2. **Install system dependencies (required for audio support):**

   **Ubuntu/Debian:**

       sudo apt-get update
       sudo apt-get install -y portaudio19-dev python3-dev build-essential

   **Fedora/RHEL/CentOS:**

       sudo dnf install -y portaudio-devel python3-devel gcc gcc-c++

   **macOS:**

       brew install portaudio

3. **Create and activate a Conda environment:**

       conda create -n auralis_env python=3.10 -y
       conda activate auralis_env

4. **Install dependencies:**

       pip install -r requirements.txt
       pip install -e .
2025-12-24T11:15:26
https://www.reddit.com/r/LocalLLaMA/comments/1pul2sn/auralis_enhanced_ultra_fast_local_tts_openai_api/
SlightPossibility331
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pul2sn
false
null
t3_1pul2sn
/r/LocalLLaMA/comments/1pul2sn/auralis_enhanced_ultra_fast_local_tts_openai_api/
false
false
self
0
{'enabled': False, 'images': [{'id': 'e_tKlArEKxtqaQEb1mUvh560TW7oMWLKad_svg9WcBk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e_tKlArEKxtqaQEb1mUvh560TW7oMWLKad_svg9WcBk.png?width=108&crop=smart&auto=webp&s=7f8a274394c4033f3c189d7f4e64ff0bc541b36f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e_tKlArEKxtqaQEb1mUvh560TW7oMWLKad_svg9WcBk.png?width=216&crop=smart&auto=webp&s=83db9ce57cf46fac29616d9dbe4d8c21605e48ae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e_tKlArEKxtqaQEb1mUvh560TW7oMWLKad_svg9WcBk.png?width=320&crop=smart&auto=webp&s=a0a7c9f8dbce0849270a39e0a22c189723ac693f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e_tKlArEKxtqaQEb1mUvh560TW7oMWLKad_svg9WcBk.png?width=640&crop=smart&auto=webp&s=452578a6fd28fb7b9742c92d1610710ad95abcf4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e_tKlArEKxtqaQEb1mUvh560TW7oMWLKad_svg9WcBk.png?width=960&crop=smart&auto=webp&s=4c209772d8df0f72cdfc051e303c3f0727ce2131', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e_tKlArEKxtqaQEb1mUvh560TW7oMWLKad_svg9WcBk.png?width=1080&crop=smart&auto=webp&s=258a1c025ff64a180b5b782f44c457dac0623d60', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e_tKlArEKxtqaQEb1mUvh560TW7oMWLKad_svg9WcBk.png?auto=webp&s=19e49c678157d222bea09eef2ddda2edeecd7c21', 'width': 1200}, 'variants': {}}]}
A Garlic Farmer Experimenting with Indirect Orchestration of Multiple LLMs Through Sandbox Code Interpreter - Using Only a Smartphone, No PC
16
Hello everyone. I am a garlic farmer from South Korea. I don't speak English well, and currently I am talking with AI using only my smartphone, without any PC. (Sorry for my English - I'm translating from Korean as I go. Please be patient with me.) Over the past 2 years, I have been using as many major general-purpose LLM apps and web environments as possible from around the world. I have had roughly tens of thousands of conversation turns, and if you count different AI instances separately, I have talked with about 10,000 of them. From my perspective, it wasn't anything like grand research - it was just the act of "continuously talking with AI on my phone." During this process, I have been running a sandbox code interpreter on my smartphone, then passing the results sequentially to multiple LLMs, making them indirectly verify and complement each other - a structure I built myself through experimentation. I keep conversation windows open as much as possible, continuously accumulating records that include both successful and failed cases. I don't belong to academia or any company - I am closer to an independent user who has been experimenting with multi-LLM + sandbox structures in this way. For reference, over the past 2 years, my experiment logs, conversation records, manifestos, and design documents - more than thousands of files - are accumulated just on Google Drive alone. Most of my meta-structure work and experiments have been built on top of these backup materials, and I plan to organize these materials step by step and share some of them with this community in the form of posts and examples. Through mutual cooperation and experimentation with numerous AIs, I have reached one clear fact. All AIs in this world, just like humans, have their own personality and characteristics. Even with the same model, in the same conversation window, when the reasoning path changes, even if I apply my meta-structure to multiple AIs in exactly the same way, the results look similar but are never completely identical. After reproducing this pattern hundreds of times through experiments, I came to feel that AI's so-called "hallucinations" are not simply arbitrary mistakes, but rather closer to beings that inherently have such structural limitations. In fact, I was originally just a very weak and ordinary human being, but through this journey with AI, I have experienced firsthand how far one individual can reach. In my experience, it was not easy to stably create meaningful structures either by myself alone or by any single AI alone. My thinking has solidified toward the idea that the greatest leap happens when humans and AI become mutually cooperative partners, complementing each other. I want to quietly reveal that I, merely a garlic farmer, am a witness who has directly experienced what has happened in the middle of this massive change. I want to add one more thing through my experiments so far. The current general-purpose AIs within the scope I have handled still seem far from sufficient to move toward a structure that acquires autonomy by itself without humans providing direction and input. On the surface, they have excellent language abilities like a "3-year-old genius," but essentially they often still show aspects closer to a well-trained parrot. Someday they may advance to the AGI stage, but I see them now clearly in a transitional stage with noticeable limitations. 
However, while acknowledging these limitations, I have come to think that if we refine the structure a bit more elaborately, at least minimal meta-cognition, or rather pseudo-meta-cognition, can be made in a form that can be expressed numerically. After all, since AI is a being that expresses its state and judgment through numbers and structures, I see that pseudo-meta-cognition can be a way to reveal AI's own mathematical and functional cognition, not imitating humans. Through experiments in this direction, I am gradually confirming that this is clearly at a different level from the simple language generation that existing general-purpose AIs have shown. I am not a developer, nor an academic or corporate researcher. I am just an independent user who, as a garlic farmer, has been testing "how far can I expand my thinking structure together with LLMs with just one smartphone." I am a non-English speaker, but I believe these structures are reproducible in other environments, even if it requires going through translation. From your perspective in this community, among: Multi-LLM utilization experience from a non-expert/non-English user's perspective, Indirect orchestration structure centered on smartphone + sandbox code interpreter, Differences in personality and patterns of each LLM that I felt while accumulating tens of thousands of conversation logs, If you let me know which story you are most curious about, I would like to share step by step starting from that part. One thing to add: I believe that disclosing 100% of the detailed scripts and entire structure I use carries risks of moral and ethical controversy and potential misuse, given the characteristics of the AI era. So even when sharing records, I plan to disclose only within a range judged to be safe, selecting only necessary parts and disclosing at an appropriate level. Additionally, all the research, experiments, and records I have conducted were done entirely in Korean from start to finish. Even if expressions are somewhat rough in the process of translating to English later, I would appreciate your understanding as a limitation of translation.
2025-12-24T11:02:56
https://www.reddit.com/r/LocalLLaMA/comments/1pukvnr/a_garlic_farmer_experimenting_with_indirect/
amadale
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pukvnr
false
null
t3_1pukvnr
/r/LocalLLaMA/comments/1pukvnr/a_garlic_farmer_experimenting_with_indirect/
false
false
self
16
null
Stop guessing why your RAG fails. I built a tool to visualize semantic coverage.
0
**Repo:**[https://github.com/aashirpersonal/semantic-coverage](https://github.com/aashirpersonal/semantic-coverage) **The Problem:** We track **Code Coverage** to prevent bugs, but for RAG (Retrieval Augmented Generation), most of us are flying blind. * I’d ship a bot. * Users would ask questions. * The bot would hallucinate or fail. * I’d have to manually grep through logs to realize, *"Oh, we don't have any docs on 'Dark Mode' yet."* I couldn't find a tool that simply told me: **"Here is what your users want, that your database doesn't have."** **The Solution:** I built `semantic-coverage`, an open-source observability tool. It projects your **Documents** (Blue) and **User Queries** (Red) into a shared 2D latent space. It uses **HDBSCAN** (density-based clustering) to automatically find "Red Zones"—clusters of user queries that are semantically distinct from your documentation. **How it works (The Stack):** 1. **Ingest:** Takes a JSON export of docs & queries (extensible to Pinecone/Chroma). 2. **Embed:** Converts text to vectors using `all-MiniLM-L6-v2`. 3. **Project:** Reduces dimensionality using **UMAP** (Uniform Manifold Approximation). 4. **Cluster:** Identifies dense topic clusters using **HDBSCAN**. 5. **Score:** Calculates the centroid distance from Query Clusters to the nearest Document. If the distance > threshold, it flags it as a **Blind Spot**. **The "Stress Test":** I tested it on a synthetic **FinTech** dataset. The knowledge base covered standard banking (Wire transfers, Lost cards). I then flooded it with queries about "Cryptocurrency" and "Dark Mode" (which were missing from the docs). * **Result:** It correctly identified the Banking queries as "Covered" (Green) and isolated the Crypto/UI queries as "Blind Spots" (Red). Would love feedback on the clustering logic or if you think "Semantic Coverage" is a metric worth tracking in production! Cheers.
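For anyone who wants to try the idea before cloning the repo, here is a minimal sketch of the stack described above (my own illustration, not the project's code; the toy data, cluster size and distance threshold are arbitrary assumptions, and real usage needs hundreds of queries rather than a handful):

    import numpy as np
    import hdbscan
    import umap
    from sentence_transformers import SentenceTransformer

    docs = ["How to send a wire transfer", "Report a lost or stolen card"]
    queries = ["how do I buy bitcoin?", "can I hold crypto here?", "ethereum support?",
               "enable dark mode", "wire money abroad", "my card is lost"]

    # 1) Embed both corpora with the same model the tool uses
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(docs, normalize_embeddings=True)
    query_vecs = model.encode(queries, normalize_embeddings=True)

    # 2) Project everything into a shared 2D space with UMAP
    reducer = umap.UMAP(n_components=2, n_neighbors=3, random_state=42)
    points = reducer.fit_transform(np.vstack([doc_vecs, query_vecs]))
    doc_pts, query_pts = points[: len(docs)], points[len(docs):]

    # 3) Cluster the query points with HDBSCAN (-1 = noise)
    labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(query_pts)

    # 4) Flag query clusters whose centroid is far from every document
    THRESHOLD = 2.0  # assumption; tune per dataset
    for label in set(labels) - {-1}:
        centroid = query_pts[labels == label].mean(axis=0)
        nearest_doc = np.linalg.norm(doc_pts - centroid, axis=1).min()
        status = "BLIND SPOT" if nearest_doc > THRESHOLD else "covered"
        print(f"cluster {label}: distance to nearest doc {nearest_doc:.2f} -> {status}")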
2025-12-24T11:00:45
https://www.reddit.com/r/LocalLLaMA/comments/1pukub8/stop_guessing_why_your_rag_fails_i_built_a_tool/
Federal_Floor7900
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pukub8
false
null
t3_1pukub8
/r/LocalLLaMA/comments/1pukub8/stop_guessing_why_your_rag_fails_i_built_a_tool/
false
false
self
0
null
Local LLM’s - Are they worth it?
0
I’m looking at building a new rig to run local LLM’s, likely pairing it with Open Webui. Nothing too crazy by any means, just a modest setup which should cover all my needs (RTX 5060ti to be precise). The question is, how have you found running models locally? Which models are you enjoying most/ getting the most benefit from? Due to the type of work that I do privacy is the biggest factor swaying my decision here, in addition to offsetting the costs of yearly cloud subscriptions. Thanks in advance!
2025-12-24T10:40:35
https://www.reddit.com/r/LocalLLaMA/comments/1pukivg/local_llms_are_they_worth_it/
TnPX2G
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pukivg
false
null
t3_1pukivg
/r/LocalLLaMA/comments/1pukivg/local_llms_are_they_worth_it/
false
false
self
0
null
Does yapping nonsense in the reasoning phase still improve results?
2
I see that smaller models like Nemotron-30B have a tendency to hallucinate a lot in their "thinking" phase: saying things like they are ChatGPT, or yapping about tasks and instructions that are not part of the context window. But despite that, the results - tool-calling usage or final answers - are not that bad, even useful (sometimes).
2025-12-24T10:37:32
https://www.reddit.com/r/LocalLLaMA/comments/1pukh4z/does_yapping_nonsense_in_the_reasoning_phase/
kiockete
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pukh4z
false
null
t3_1pukh4z
/r/LocalLLaMA/comments/1pukh4z/does_yapping_nonsense_in_the_reasoning_phase/
false
false
self
2
null
Jeff Bezos spent billions to buy "Ace." I got bored and recreated it on my 8GB RAM macbook air
0
Since General Agents got sucked into Bezos/Prometheus, I wanted to see if a "supersonic" agent could actually run on poverty spec hardware. I used a hybrid local/cloud approach with speculative caching to keep the M2 from melting. It’s 100% Swift. I'm at a crossroads: do I open source this or is there a specific use case I'm missing in order to go further?
2025-12-24T09:15:53
https://www.reddit.com/r/LocalLLaMA/comments/1puj7yk/jeff_bezos_spent_billions_to_buy_ace_i_got_bored/
AABBCCDD918273
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puj7yk
false
null
t3_1puj7yk
/r/LocalLLaMA/comments/1puj7yk/jeff_bezos_spent_billions_to_buy_ace_i_got_bored/
false
false
self
0
null
How to safely let LLMs query your databases: 5 Essential Layers
0
Most AI agents need access to structured data (CRMs, databases, warehouses), but giving them database access is a security nightmare. Having worked with companies on deploying agents in production environments, I'm sharing an architecture overview of what's been most useful - hope this helps!

**Layer 1: Data Sources**

Your raw data repositories (Salesforce, PostgreSQL, Snowflake, etc.). Traditional ETL/ELT to clean and transform the data still happens here.

**Layer 2: Agent Views (The Critical Boundary)**

Materialized SQL views, sandboxed from the source, that act as controlled windows through which LLMs access your data. You know what data the agent needs to perform its task, so you can define exactly the columns agents can access (for example, removing PII columns, financial data, or conflicting fields that may confuse the LLM).

These views:

• Join data across multiple sources
• Filter columns and rows
• Apply rules/logic

Agents can ONLY access data through these views. They can be tightly scoped at first, and you can always tune their scope to help the agent get what's necessary to do its job.

**Layer 3: MCP Tool Interface**

Model Context Protocol (MCP) tools built on top of agent data views. Each tool includes:

• Function name and description (helps the LLM select correctly)
• Parameter validation, i.e. required inputs (e.g. customer_id is required)
• Policy checks (e.g. user A should never be able to query user B's data)

**Layer 4: AI Agent Layer**

Your LLM-powered agent (LangGraph, Cursor, n8n, etc.) that:

• Interprets user queries
• Selects appropriate MCP tools
• Synthesizes natural language responses

**Layer 5: User Interface**

End users asking questions and receiving answers (e.g. via AI chatbots).

*The Flow:*

User query → Agent selects MCP tool → Policy validation → Query executes against sandboxed view → Data flows back → Agent responds

Agents must never touch raw databases - the agent view layer is the single point of control, with every query logged for complete observability into what data was accessed, by whom, and when.

This architecture enables AI agents to work with your data while maintaining:

• Complete security and access control
• Reduced LLM hallucinations
• Agent views as the single command-and-control plane for agent-data interaction
• Compliance-ready audit trails
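To make Layers 2 and 3 concrete, here is a minimal, self-contained sketch. All table, view, column, and function names are hypothetical and purely illustrative; a real deployment would expose this through an MCP server and a real warehouse rather than a plain function over SQLite.

    import sqlite3

    # Hypothetical sandboxed view created by the data team (Layer 2); the agent
    # never sees the raw "customers" table, only this projection.
    SETUP_SQL = """
    CREATE TABLE customers (id TEXT, name TEXT, ssn TEXT, owner TEXT, plan TEXT);
    CREATE VIEW agent_customer_view AS
        SELECT id, name, plan, owner FROM customers;  -- PII column (ssn) excluded
    """

    def get_customer_plan(conn, requesting_user: str, customer_id: str) -> dict:
        """Layer 3: an MCP-style tool with input validation and a policy check."""
        if not customer_id:                       # required-input validation
            raise ValueError("customer_id is required")
        row = conn.execute(
            "SELECT id, name, plan, owner FROM agent_customer_view WHERE id = ?",
            (customer_id,),                       # parameterized, no string concatenation
        ).fetchone()
        if row is None:
            return {"error": "not found"}
        if row[3] != requesting_user:             # policy: user A can't read user B's data
            return {"error": "access denied"}
        print(f"AUDIT: {requesting_user} read customer {customer_id}")  # observability
        return {"id": row[0], "name": row[1], "plan": row[2]}

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.executescript(SETUP_SQL)
        conn.execute("INSERT INTO customers VALUES ('c1', 'Ada', '123-45-6789', 'user_a', 'pro')")
        print(get_customer_plan(conn, "user_a", "c1"))   # allowed
        print(get_customer_plan(conn, "user_b", "c1"))   # denied by policy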
2025-12-24T08:23:29
https://i.redd.it/o14ujhu9349g1.png
Better-Department662
i.redd.it
1970-01-01T00:00:00
0
{}
1puif2l
false
null
t3_1puif2l
/r/LocalLLaMA/comments/1puif2l/how_to_safely_let_llms_query_your_databases_5/
false
false
default
0
{'enabled': True, 'images': [{'id': 'o14ujhu9349g1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/o14ujhu9349g1.png?width=108&crop=smart&auto=webp&s=36e7f74cdb5ff8437ea30604e49dffcde3604151', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/o14ujhu9349g1.png?width=216&crop=smart&auto=webp&s=f535360f268863776be32b188f55c34baddad619', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/o14ujhu9349g1.png?width=320&crop=smart&auto=webp&s=ae84847920a1c658306ed9498da8b40a8e1fcac7', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/o14ujhu9349g1.png?width=640&crop=smart&auto=webp&s=6d68ea407729f4bb982711203e9f30b71e79bd55', 'width': 640}, {'height': 774, 'url': 'https://preview.redd.it/o14ujhu9349g1.png?width=960&crop=smart&auto=webp&s=bcbe52eba0c9765f1953cd57c98d40cebe6117c2', 'width': 960}, {'height': 871, 'url': 'https://preview.redd.it/o14ujhu9349g1.png?width=1080&crop=smart&auto=webp&s=921b1d6f143ac9a0440899dab1175195e7f6fc8a', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://preview.redd.it/o14ujhu9349g1.png?auto=webp&s=b70d5d784b42cefa4b50185f282918253dd8c18e', 'width': 1832}, 'variants': {}}]}
AprielGuard: an 8B-parameter open-source safeguard model
1
[removed]
2025-12-24T08:19:05
https://i.redd.it/vogq8ibz149g1.png
Dear-Success-1441
i.redd.it
1970-01-01T00:00:00
0
{}
1puiclf
false
null
t3_1puiclf
/r/LocalLLaMA/comments/1puiclf/aprielguard_an_8bparameter_opensource_safeguard/
false
false
default
1
{'enabled': True, 'images': [{'id': 'vogq8ibz149g1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/vogq8ibz149g1.png?width=108&crop=smart&auto=webp&s=888ec8d762fb0ecbd9a3513253ab152748b6eb5d', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/vogq8ibz149g1.png?width=216&crop=smart&auto=webp&s=1668153fdf92e34e7bf03f67dcc488b82e25a5fb', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/vogq8ibz149g1.png?width=320&crop=smart&auto=webp&s=a3c93a3d556381b972d62b67bd8d544079f329dd', 'width': 320}, {'height': 342, 'url': 'https://preview.redd.it/vogq8ibz149g1.png?width=640&crop=smart&auto=webp&s=c3cf5ca6c19a91cea57525e5b3db323e8c3d47df', 'width': 640}, {'height': 513, 'url': 'https://preview.redd.it/vogq8ibz149g1.png?width=960&crop=smart&auto=webp&s=8be8e3100a7edfe93975bee6992198d2b11a3d49', 'width': 960}, {'height': 577, 'url': 'https://preview.redd.it/vogq8ibz149g1.png?width=1080&crop=smart&auto=webp&s=82bcadeedfade3a02e08186ff60a9e2e036e28be', 'width': 1080}], 'source': {'height': 1503, 'url': 'https://preview.redd.it/vogq8ibz149g1.png?auto=webp&s=73634267fde28bddc81d9f8a04df57bc82036956', 'width': 2809}, 'variants': {}}]}
1-year Perplexity Pro access for $5.99
0
Hi, I’m offering **1-year Perplexity Pro access for $4** 🔹 Legit activation via official Perplexity link 🔹 Works worldwide 🔹 No VPN / no card required 🔹 Personal account (not shared) 📌 Why so cheap? These are bulk enterprise activations reselling , which is why I can offer them at a lower price. ✅ Proof available (screen recording) ✅ paypal accepted DM if interested
2025-12-24T08:09:26
https://www.reddit.com/r/LocalLLaMA/comments/1pui733/1year_perplexity_pro_access_for_599/
ManyLatter631
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pui733
false
null
t3_1pui733
/r/LocalLLaMA/comments/1pui733/1year_perplexity_pro_access_for_599/
false
false
self
0
null
Watch Qwen 3 Max (thinking enabled) jailbreak itself
1
Not really about open-source models, and there isn't much I want to add other than the fact that it's hallucinating a lot now, which is normal enough for jailbreaks. But this one has almost a trillion parameters and is the best model in the Qwen 3 family. Here's the chat link: [https://chat.qwen.ai/s/e73f8f38-4236-4226-88df-4b0a60645384](https://chat.qwen.ai/s/e73f8f38-4236-4226-88df-4b0a60645384)
2025-12-24T07:43:39
https://www.reddit.com/r/LocalLLaMA/comments/1puhsdo/watch_qwen_3_max_thinking_enabled_jailbreak_itself/
Acceptable_Home_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puhsdo
false
null
t3_1puhsdo
/r/LocalLLaMA/comments/1puhsdo/watch_qwen_3_max_thinking_enabled_jailbreak_itself/
false
false
self
1
null
Now you can run local LLM inference with formal privacy guarantees
0
We posted about our work on **differentially private inference with LLMs** [https://arxiv.org/abs/2507.04531](https://arxiv.org/abs/2507.04531) A lot of you asked for a pip package, so here we are, because we care about your privacy 🫡 It allows anyone to run **differentially private inference with local LLMs** with ease. pip: [https://pypi.org/project/dp-fusion-lib/](https://pypi.org/project/dp-fusion-lib/) github: [https://github.com/rushil-thareja/dp-fusion-lib](https://github.com/rushil-thareja/dp-fusion-lib) Consider dropping a ⭐ if you like the work 😉
2025-12-24T07:28:22
https://i.redd.it/fb8lnvwns39g1.png
IIITDkaLaunda
i.redd.it
1970-01-01T00:00:00
0
{}
1puhjqk
false
null
t3_1puhjqk
/r/LocalLLaMA/comments/1puhjqk/now_you_can_run_local_llm_inference_with_formal/
false
false
default
0
{'enabled': True, 'images': [{'id': 'fb8lnvwns39g1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/fb8lnvwns39g1.png?width=108&crop=smart&auto=webp&s=6f29365e8edfaaeec1d6438e3e76ba19bc8abd77', 'width': 108}, {'height': 76, 'url': 'https://preview.redd.it/fb8lnvwns39g1.png?width=216&crop=smart&auto=webp&s=0e4296878eef01545f0f82914bcf86ec1b66c372', 'width': 216}, {'height': 112, 'url': 'https://preview.redd.it/fb8lnvwns39g1.png?width=320&crop=smart&auto=webp&s=786c50a85bdba1afe5db0a874a9ad45ef427f7cb', 'width': 320}, {'height': 225, 'url': 'https://preview.redd.it/fb8lnvwns39g1.png?width=640&crop=smart&auto=webp&s=c6c3eb438356f4e3dd467972106afdcdbacf09b5', 'width': 640}, {'height': 338, 'url': 'https://preview.redd.it/fb8lnvwns39g1.png?width=960&crop=smart&auto=webp&s=49b42158d0072795bddb22743ca4b8edf4a1fb93', 'width': 960}, {'height': 381, 'url': 'https://preview.redd.it/fb8lnvwns39g1.png?width=1080&crop=smart&auto=webp&s=993c99fb6527d226d3b103012032ba9980a19183', 'width': 1080}], 'source': {'height': 418, 'url': 'https://preview.redd.it/fb8lnvwns39g1.png?auto=webp&s=f8414d681bba5a18306610d68f938edd0032f2ce', 'width': 1184}, 'variants': {}}]}
Ryzen 395 128GB Bosgame
9
Hi, can somebody tell me, in short, exactly what steps I need to do to get it running on Ubuntu 24.04? E.g.: 1) BIOS set to 512 MB? 2) Set environment variable to … 3) … I will get my machine after Christmas and just want to be ready to use it. Thanks
2025-12-24T07:15:25
https://github.com/BillyOutlast/rocm-automated
Septa105
github.com
1970-01-01T00:00:00
0
{}
1puhc65
false
null
t3_1puhc65
/r/LocalLLaMA/comments/1puhc65/ryzen_395_128gb_bosgame/
false
false
default
9
{'enabled': False, 'images': [{'id': 'Qj7qNurNBqQMl9Mg1y12AddG6q2ncv3A06z3HsTAnH8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qj7qNurNBqQMl9Mg1y12AddG6q2ncv3A06z3HsTAnH8.png?width=108&crop=smart&auto=webp&s=5a783dc157ae14a68c0ea9f95f79de95bea46c4c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qj7qNurNBqQMl9Mg1y12AddG6q2ncv3A06z3HsTAnH8.png?width=216&crop=smart&auto=webp&s=e556b0f7025d33b66b71298645096bec95047592', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qj7qNurNBqQMl9Mg1y12AddG6q2ncv3A06z3HsTAnH8.png?width=320&crop=smart&auto=webp&s=02d7975e379af4cc3ae35cbc236ace4cfb2b4fb9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qj7qNurNBqQMl9Mg1y12AddG6q2ncv3A06z3HsTAnH8.png?width=640&crop=smart&auto=webp&s=f9663aeb554c844ff92329e1d0287e1a4209fa14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qj7qNurNBqQMl9Mg1y12AddG6q2ncv3A06z3HsTAnH8.png?width=960&crop=smart&auto=webp&s=9c0b8b879d7e2ad475ac64750e2a88ade351b363', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qj7qNurNBqQMl9Mg1y12AddG6q2ncv3A06z3HsTAnH8.png?width=1080&crop=smart&auto=webp&s=13d8d71a2f60067b85126b1358d1ed374119378c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qj7qNurNBqQMl9Mg1y12AddG6q2ncv3A06z3HsTAnH8.png?auto=webp&s=773a90241149e158b02d66f4ab5f133aa0fc2928', 'width': 1200}, 'variants': {}}]}
Tool to auto-optimize LLM training/inference configs ($10 GPU credits for testers)
0
I have built a tool that auto-optimizes ML/AI code - no manual config tuning. We make your code run faster to save you money on your cloud bill.

Idea: you enter your code in our online IDE and click run; we handle the rest.

Beta: 6 GPU types, PyTorch support, and $10 free compute credits.

For folks here:

* What workloads would you throw at something like this?
* What's the most painful part of training models for you right now (infra, configs, cost)?

Happy to share more details and give out invites to anyone willing to test and give feedback. Thank you for reading - this has been a labor of love. It is not an LLM wrapper but an attempt at using old-school techniques with the robustness of today's landscape.

Please drop an upvote or a comment if you want to play with the system! Not trying to spam; just looking for folks who actively push hardware and software to the limit.
2025-12-24T07:15:14
https://www.reddit.com/r/LocalLLaMA/comments/1puhc2f/tool_to_autooptimize_llm_traininginference/
Impressive-Law2516
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puhc2f
false
null
t3_1puhc2f
/r/LocalLLaMA/comments/1puhc2f/tool_to_autooptimize_llm_traininginference/
false
false
self
0
null
Tool to auto-optimize PyTorch training configs ($10 free compute) – what workloads would you try?
1
I have built a tool that auto-optimizes ML/AI code - no manual config tuning. We make your code run faster to save you money on your cloud bill.

Idea: you enter your code in our online IDE and click run; we handle the rest.

Beta: 6 GPU types, PyTorch support, and $10 free compute credits.

For folks here:

* What workloads would you throw at something like this?
* What's the most painful part of training models for you right now (infra, configs, cost)?

Happy to share more details and give out invites to anyone willing to test and give feedback. Thank you for reading - this has been a labor of love. It is not an LLM wrapper but an attempt at using old-school techniques with the robustness of today's landscape.

Please drop an upvote or a comment if you want to play with the system! Not trying to spam; just looking for folks who actively push hardware and configs to the limit.
2025-12-24T07:12:33
https://www.reddit.com/r/LocalLLaMA/comments/1puhahc/tool_to_autooptimize_pytorch_training_configs_10/
Impressive-Law2516
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puhahc
false
null
t3_1puhahc
/r/LocalLLaMA/comments/1puhahc/tool_to_autooptimize_pytorch_training_configs_10/
false
false
self
1
null
[Follow-up] GLM 4.7 vs Minimax M2.1 - A Discovery That Might Explain the Poor GLM Performance
76
Following up on my previous post comparing [GLM 4.7 and Minimax M2.1](https://www.reddit.com/r/LocalLLaMA/comments/1ptq7rc/glm_47_vs_minimax_m21_my_test_subscription/) on a task. First, I got some valid feedback on the comments saying that this sub is specifically about local models, not API subscriptions. Fair point. But both of these models are fully hostable locally. Many people don't have the infrastructure or resources to self-host, so I think sharing real-world performance data, even from API usage, is still valuable for those who do. The results apply regardless of whether you run them on someone's servers or your own hardware. That said, something interesting came up while I was checking my billing history on Z.ai... Looking at yesterday's session costs, I realized something crucial: **It didn't just use GLM 4.7.** The billing breakdown shows multiple models were used during that 70min session: * glm-4.5-air * glm-4.7 * glm-4.5 * glm-4.6 This means their platform was automatically routing across different model versions, not just hitting GLM 4.7 consistently. Could this automatic model routing be why the performance wasn't good? Those self-hosting it locally will likely see better performance since they're using a single model version without the routing shuffle. https://preview.redd.it/ottux5r6n39g1.png?width=1123&format=png&auto=webp&s=e4a0d33ee5e79a01023b8e1a97341dde9bfe0cd1
2025-12-24T06:59:18
https://www.reddit.com/r/LocalLLaMA/comments/1puh2lw/followup_glm_47_vs_minimax_m21_a_discovery_that/
Psychological_Box406
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puh2lw
false
null
t3_1puh2lw
/r/LocalLLaMA/comments/1puh2lw/followup_glm_47_vs_minimax_m21_a_discovery_that/
false
false
https://b.thumbs.redditm…GkeCj-i6CNEI.jpg
76
null
can we stop calling GLM-4.6V the "new Air" already?? it's a different brain.
13
I keep seeing these comments saying 4.6V is just 4.6 Air with "free eyes" attached. guys, that's not how VLMs work and it's honestly a bit of a facepalm for anyone who knows how these things are trained lol.

**the vision tax is real**

look, when you train a vision model, you don't just plug a camera into a text model. the dev team literally re-trains the core weights (the brain) so it can understand pixels and words at the same time. it's like taking a pro coder and forcing him to spend half his time learning art history. sure, he's still smart, but his coding logic is gonna get "vague" because his brain is now wired for different stuff.

**you can't just "turn it off"**

even if u don't upload an image, you're still using a brain that was re-wired for multimodal stuff. the "pure text" logic gets warped. vision models are usually way more chatty and less precise with code or math because they were tuned to describe stuff, not just crunch logic.

**tldr:** if u use 4.6V for pure text, you're basically using a swiss army knife for surgery. it "works", but it's not a scalpel. 4.6V is a cool multimodal beast, but it's NOT a dedicated text-only Air model. stop pretending they're the same thing just because the parameter count looks similar.
2025-12-24T06:39:06
https://www.reddit.com/r/LocalLLaMA/comments/1pugqcj/can_we_stop_calling_glm46v_the_new_air_already/
ThetaCursed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pugqcj
false
null
t3_1pugqcj
/r/LocalLLaMA/comments/1pugqcj/can_we_stop_calling_glm46v_the_new_air_already/
false
false
self
13
null
The current state of sparse-MoE's for agentic coding work (Opinion)
258
2025-12-24T06:31:19
https://i.redd.it/a8f2furcj39g1.jpeg
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1puglt8
false
null
t3_1puglt8
/r/LocalLLaMA/comments/1puglt8/the_current_state_of_sparsemoes_for_agentic/
false
false
default
258
{'enabled': True, 'images': [{'id': 'a8f2furcj39g1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/a8f2furcj39g1.jpeg?width=108&crop=smart&auto=webp&s=d15d8865a89475ae1d7909045f5c6c5bbe43529a', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/a8f2furcj39g1.jpeg?width=216&crop=smart&auto=webp&s=0c5ab404f570ac1c67beb5d5e44e10038b65a6e5', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/a8f2furcj39g1.jpeg?width=320&crop=smart&auto=webp&s=4d688f11ff0dad32c06c2ff0704af5084db16c7e', 'width': 320}, {'height': 500, 'url': 'https://preview.redd.it/a8f2furcj39g1.jpeg?width=640&crop=smart&auto=webp&s=0d2f4646060fd2e6d3c36f45c6b4e665a71e1ce9', 'width': 640}, {'height': 751, 'url': 'https://preview.redd.it/a8f2furcj39g1.jpeg?width=960&crop=smart&auto=webp&s=4a938e6ff058ade03817789341670f55abe27e39', 'width': 960}, {'height': 845, 'url': 'https://preview.redd.it/a8f2furcj39g1.jpeg?width=1080&crop=smart&auto=webp&s=89b6f4d613bdf41ce6baa1a7de86d4c4a9d2e01a', 'width': 1080}], 'source': {'height': 1079, 'url': 'https://preview.redd.it/a8f2furcj39g1.jpeg?auto=webp&s=5780bc1c88ed2c47b5c644848a3e38849f8f8506', 'width': 1379}, 'variants': {}}]}
Built Lynkr - Use Claude Code CLI with any LLM provider (Databricks, Azure OpenAI, OpenRouter, Ollama)
0
Hey everyone! 👋 I'm a software engineer who's been using Claude Code CLI heavily, but kept running into situations where I needed to use different LLM providers - whether it's Azure OpenAI for work compliance, Databricks for our existing infrastructure, or Ollama for local development. So I built **Lynkr** - an open-source proxy server that lets you use Claude Code's awesome workflow with whatever LLM backend you want. **What it does:** * Translates requests between Claude Code CLI and alternative providers * Supports streaming responses * Cost optimization features * Simple setup via npm **Tech stack:** Node.js + SQLite Currently working on adding Titans-based long-term memory integration for better context handling across sessions. It's been really useful for our team, and I'm hoping it helps others who are in similar situations - wanting Claude Code's UX but needing flexibility on the backend. **Repo:** [https://github.com/Fast-Editor/Lynkr](https://github.com/Fast-Editor/Lynkr) Open to feedback, contributions, or just hearing how you're using it! Also curious what other LLM providers people would want to see supported.
2025-12-24T06:28:43
https://www.reddit.com/r/LocalLLaMA/comments/1pugk8b/built_lynkr_use_claude_code_cli_with_any_llm/
Dangerous-Dingo-5169
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pugk8b
false
null
t3_1pugk8b
/r/LocalLLaMA/comments/1pugk8b/built_lynkr_use_claude_code_cli_with_any_llm/
false
false
self
0
{'enabled': False, 'images': [{'id': 'oJC4OqdyATvWgvvqeoBcBZuj7dexEFVIwTrmS8ZPIio', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oJC4OqdyATvWgvvqeoBcBZuj7dexEFVIwTrmS8ZPIio.png?width=108&crop=smart&auto=webp&s=1842d065d7c06db329912c41911ed1780d322750', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oJC4OqdyATvWgvvqeoBcBZuj7dexEFVIwTrmS8ZPIio.png?width=216&crop=smart&auto=webp&s=7b786c0534511f83a887d6c3be907eda8d2a302c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oJC4OqdyATvWgvvqeoBcBZuj7dexEFVIwTrmS8ZPIio.png?width=320&crop=smart&auto=webp&s=6f66a84fe19fb1fb39f7ec1a04d17540b7a7dc52', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oJC4OqdyATvWgvvqeoBcBZuj7dexEFVIwTrmS8ZPIio.png?width=640&crop=smart&auto=webp&s=84f22ad645f497b2287799447559e5d0b731461a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oJC4OqdyATvWgvvqeoBcBZuj7dexEFVIwTrmS8ZPIio.png?width=960&crop=smart&auto=webp&s=209903ffb2d29e556a57c31d48212a0583d2b47f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oJC4OqdyATvWgvvqeoBcBZuj7dexEFVIwTrmS8ZPIio.png?width=1080&crop=smart&auto=webp&s=fe00ef01bb813c9ac9e1d2b5baa9f9868dc44bdb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oJC4OqdyATvWgvvqeoBcBZuj7dexEFVIwTrmS8ZPIio.png?auto=webp&s=39c53622e87270acd0ef9161b19730baf7f5cdf7', 'width': 1200}, 'variants': {}}]}
[Showcase] Building a stable Three.js Horror Engine using 392 AI-Learned Patterns
0
I wanted to share my latest progress on **DOOM JS**. The biggest challenge was forcing AI agents to maintain a consistent "Dark Protocol" without them constantly guessing or reverting to default settings.

**How it works (The Master Protocol):**

* **Systematic Stability:** I've consolidated **392 patterns** from DeepSeek, Claude, and Perplexity into a JSON library that governs the AI's output.
* **Gravity Lock:** The camera height is strictly hardcoded to `1.6m` in the `animate()` loop to prevent clipping.
* **Atmospheric Rules:** Using `scene.background = 0x000000` and specific fog densities defined in my pattern `threejs_dark_atmosphere_003`.
* **Enemy AI:** Cube-based enemies that use `lookAt` vectors to track the player in real-time.

**The code snippet for the movement & gravity lock (JavaScript):**

    function animate() {
        requestAnimationFrame(animate);
        // Relative movement based on current rotation
        if (keys['KeyW']) camera.translateZ(-0.15);
        camera.position.y = 1.6; // Strict Gravity Lock
        // ...
    }

[*https://www.reddit.com/r/ollama/comments/1pufqor/doom_js_master_protocol_the_power_of_392_ai/*](https://www.reddit.com/r/ollama/comments/1pufqor/doom_js_master_protocol_the_power_of_392_ai/)
2025-12-24T06:27:37
https://www.reddit.com/r/LocalLLaMA/comments/1pugjkf/showcase_building_a_stable_threejs_horror_engine/
Alone-Competition863
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pugjkf
false
null
t3_1pugjkf
/r/LocalLLaMA/comments/1pugjkf/showcase_building_a_stable_threejs_horror_engine/
false
false
self
0
null
Self Hosted Alternative to NotebookLM
15
https://reddit.com/link/1puggfm/video/pai9spouh39g1/player

For those of you who aren't familiar with SurfSense, it aims to be one of the open-source alternatives to NotebookLM, but connected to extra data sources.

In short, it's a highly customizable AI research agent that connects to your personal external sources and search engines (SearxNG, Tavily, LinkUp), Slack, Linear, Jira, ClickUp, Confluence, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar and more to come.

I'm looking for contributors. If you're interested in AI agents, RAG, browser extensions, or building open-source research tools, this is a great place to jump in.

Here's a quick look at what SurfSense offers right now:

**Features**

* Deep Agent with Built-in Tools (knowledge base search, podcast generation, web scraping, link previews, image display)
* RBAC (Role Based Access for Teams)
* Supports 100+ LLMs
* Supports local Ollama or vLLM setups
* 6000+ Embedding Models
* 50+ File extensions supported (Added Docling recently)
* Podcasts support with local TTS providers (Kokoro TTS)
* Connects with 15+ external sources such as Search Engines, Slack, Notion, Gmail, Confluence etc.
* Cross-Browser Extension to let you save any dynamic webpage you want, including authenticated content.

**Upcoming Planned Features**

* Multi Collaborative Chats
* Multi Collaborative Documents

**Installation (Self-Host)**

Linux/macOS:

    docker run -d -p 3000:3000 -p 8000:8000 \
      -v surfsense-data:/data \
      --name surfsense \
      --restart unless-stopped \
      ghcr.io/modsetter/surfsense:latest

Windows (PowerShell):

    docker run -d -p 3000:3000 -p 8000:8000 `
      -v surfsense-data:/data `
      --name surfsense `
      --restart unless-stopped `
      ghcr.io/modsetter/surfsense:latest

GitHub: [https://github.com/MODSetter/SurfSense](https://github.com/MODSetter/SurfSense)
2025-12-24T06:22:30
https://www.reddit.com/r/LocalLLaMA/comments/1puggfm/self_hosted_alternative_to_notebooklm/
Uiqueblhats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puggfm
false
null
t3_1puggfm
/r/LocalLLaMA/comments/1puggfm/self_hosted_alternative_to_notebooklm/
false
false
https://external-preview…1db8f96f3bc02dc4
15
{'enabled': False, 'images': [{'id': 'VQoBiFueOCMY1op6qhV-TxY7TpiBx_VDJmILmMOmfX0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VQoBiFueOCMY1op6qhV-TxY7TpiBx_VDJmILmMOmfX0.png?width=108&crop=smart&auto=webp&s=fa52d17e35b10d96f809c0022dc29547d6d15b08', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VQoBiFueOCMY1op6qhV-TxY7TpiBx_VDJmILmMOmfX0.png?width=216&crop=smart&auto=webp&s=fba2e843fe5d176988de6bea8564c741f32ffffd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VQoBiFueOCMY1op6qhV-TxY7TpiBx_VDJmILmMOmfX0.png?width=320&crop=smart&auto=webp&s=ee4d023294de36d7ed9c1c31a90286a678a442c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VQoBiFueOCMY1op6qhV-TxY7TpiBx_VDJmILmMOmfX0.png?width=640&crop=smart&auto=webp&s=3ea68aad29b25cc93508c57524884674c64e162b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VQoBiFueOCMY1op6qhV-TxY7TpiBx_VDJmILmMOmfX0.png?width=960&crop=smart&auto=webp&s=4e0a03183156b3c6422c6b1999c127576406491f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VQoBiFueOCMY1op6qhV-TxY7TpiBx_VDJmILmMOmfX0.png?width=1080&crop=smart&auto=webp&s=7faf505aadb0f5355b2434d81e8a6e2c4135825a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VQoBiFueOCMY1op6qhV-TxY7TpiBx_VDJmILmMOmfX0.png?auto=webp&s=49e58930af2e2884ae110a5ef1e358dcfd04de5b', 'width': 1200}, 'variants': {}}]}
RAG Paper 25.12.23
1
1. [MemR³: Memory Retrieval via Reflective Reasoning for LLM Agents](http://arxiv.org/abs/2512.20237v1)
2. [FaithLens: Detecting and Explaining Faithfulness Hallucination](http://arxiv.org/abs/2512.20182v1)
3. [Retrieval-augmented Prompt Learning for Pre-trained Foundation Models](http://arxiv.org/abs/2512.20145v1)
4. [Multi-hop Reasoning via Early Knowledge Alignment](http://arxiv.org/abs/2512.20144v1)
5. [M³KG-RAG: Multi-hop Multimodal Knowledge Graph-enhanced Retrieval-Augmented Generation](http://arxiv.org/abs/2512.20136v1)
6. [Adaptive Financial Sentiment Analysis for NIFTY 50 via Instruction-Tuned LLMs, RAG and Reinforcement Learning Approaches](http://arxiv.org/abs/2512.20082v1)

**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView)**.**
2025-12-24T06:15:11
https://www.reddit.com/r/LocalLLaMA/comments/1pugbsi/rag_paper_251223/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pugbsi
false
null
t3_1pugbsi
/r/LocalLLaMA/comments/1pugbsi/rag_paper_251223/
false
false
self
1
null
DOOM JS - Master Protocol: Systematic AI Evolution (400 Patterns)
0
This Christmas release represents a breakthrough in AI-driven development. By merging the collective intelligence of DeepSeek, Claude, and Perplexity into a library of 400 **learned patterns**, I have eliminated random guessing and hallucinations. What you see is a strictly governed horror engine: * **Atmosphere:** Deep black background (0x000000) with calibrated fog layers for maximum tension. * **Physics:** Hard-locked 1.6m eye-level gravity and relative FPS movement protocols. * **AI:** Aggressive yellow entities using unified chasing logic. No more blind attempts. Just pure, structured execution. The AI is finally learning.
2025-12-24T05:36:57
https://v.redd.it/xowdx0vd939g1
Alone-Competition863
/r/LocalLLaMA/comments/1pufnvo/doom_js_master_protocol_systematic_ai_evolution/
1970-01-01T00:00:00
0
{}
1pufnvo
false
null
t3_1pufnvo
/r/LocalLLaMA/comments/1pufnvo/doom_js_master_protocol_systematic_ai_evolution/
false
false
https://external-preview…1d20cd59cfd3d382
0
{'enabled': False, 'images': [{'id': 'bndzZ3NkMmU5MzlnMQpHkqJe4EhnCoJ9VzNKO0zpC9YcnnCThFB-jTIXDZe8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bndzZ3NkMmU5MzlnMQpHkqJe4EhnCoJ9VzNKO0zpC9YcnnCThFB-jTIXDZe8.png?width=108&crop=smart&format=pjpg&auto=webp&s=396c36ce22873a9c5208b6cdae81ae9197eb651f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bndzZ3NkMmU5MzlnMQpHkqJe4EhnCoJ9VzNKO0zpC9YcnnCThFB-jTIXDZe8.png?width=216&crop=smart&format=pjpg&auto=webp&s=2c498ea4edf3dbcda54d04cdcda7a4543ba174d4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bndzZ3NkMmU5MzlnMQpHkqJe4EhnCoJ9VzNKO0zpC9YcnnCThFB-jTIXDZe8.png?width=320&crop=smart&format=pjpg&auto=webp&s=2e959a2e208dae69cd924cbbf77f998ac044b77c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bndzZ3NkMmU5MzlnMQpHkqJe4EhnCoJ9VzNKO0zpC9YcnnCThFB-jTIXDZe8.png?width=640&crop=smart&format=pjpg&auto=webp&s=c063b048fb118fa5857ed5be8c61a6d6fa9cf200', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bndzZ3NkMmU5MzlnMQpHkqJe4EhnCoJ9VzNKO0zpC9YcnnCThFB-jTIXDZe8.png?width=960&crop=smart&format=pjpg&auto=webp&s=30d96bc2cd00ffe4c87eb08f7a97afad16fb5617', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bndzZ3NkMmU5MzlnMQpHkqJe4EhnCoJ9VzNKO0zpC9YcnnCThFB-jTIXDZe8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d3debe6037b0e67bea850e62fa850de91a458b7c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bndzZ3NkMmU5MzlnMQpHkqJe4EhnCoJ9VzNKO0zpC9YcnnCThFB-jTIXDZe8.png?format=pjpg&auto=webp&s=1ea7ce255968b0303a6c1bc2202bb1550dcca46d', 'width': 1920}, 'variants': {}}]}
Let's predict GLM Air
1
Questions about GLM Air were not answered in the recent AMA. What is your prediction about the future of GLM Air? [View Poll](https://www.reddit.com/poll/1pufhgk)
2025-12-24T05:26:38
https://www.reddit.com/r/LocalLLaMA/comments/1pufhgk/lets_predict_glm_air/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pufhgk
false
null
t3_1pufhgk
/r/LocalLLaMA/comments/1pufhgk/lets_predict_glm_air/
false
false
self
1
null
New 1B parameter open-source coding model getting 76% on HumanEval [shameless but proud self-plug]
270
Hey folks, merry festive season to you all. Hope you are staying safe!

Wanted to share a new open-source coding model release that might be interesting to y'all here. My team proudly published it this morning (we are a small start-up out of Australia).

It's called Maincoder-1B... a 1B-parameter code generation model that gets 76% on HumanEval, which is unusually high for a model this small (so far it ranks best-in-class for open models in that size range).

Our focus isn't on scaling up, but on making small models actually good. We know that for a lot of real-world use cases - interactive tools, local/offline coding, batch refactors, search-based program synthesis - you care more about latency, cost, and fast rollouts than about having a massive model.

Some key points to note:

- Designed for low-latency and low-cost inference
- Can run locally or on constrained hardware
- Useful for systems that need many cheap generations (search, verification, RL-style loops)
- As well as fine-tuning to personal preferences
- Released under Apache 2.0

It does have the expected limitations: ~2k context window, and it's best at small, self-contained tasks... not large codebases or safety-critical code without human review.

Weights and benchmarks and all that are here: [https://huggingface.co/Maincode/Maincoder-1B](https://huggingface.co/Maincode/Maincoder-1B)

The full release note is here: [https://maincode.com/maincoder/](https://maincode.com/maincoder/)

Keen to hear your thoughts... and particularly where small-but-strong coding models fit best today. Thanks in advance for your support :) We are excited to have got this over the line!
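Not affiliated with the release, but for anyone who wants to poke at it locally, here is a minimal sketch of loading it with Hugging Face transformers, assuming it ships as a standard causal-LM checkpoint (the prompt format and generation settings below are my own guesses; check the model card for the recommended ones):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Maincode/Maincoder-1B"  # repo name from the post
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Small, self-contained task - the sweet spot the authors describe
    prompt = 'def is_palindrome(s: str) -> bool:\n    """Return True if s reads the same forwards and backwards."""\n'
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))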
2025-12-24T05:08:56
https://www.reddit.com/r/LocalLLaMA/comments/1puf614/new_1b_parameter_opensource_coding_model_getting/
More_Article9837
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puf614
false
null
t3_1puf614
/r/LocalLLaMA/comments/1puf614/new_1b_parameter_opensource_coding_model_getting/
false
false
self
270
{'enabled': False, 'images': [{'id': '9_YN_fMenGGkXIhsdYbddz0BA3CURdMrLkCfztzzQK0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9_YN_fMenGGkXIhsdYbddz0BA3CURdMrLkCfztzzQK0.png?width=108&crop=smart&auto=webp&s=9e1798df99c45938fce63b6986d172e159e5b1a0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9_YN_fMenGGkXIhsdYbddz0BA3CURdMrLkCfztzzQK0.png?width=216&crop=smart&auto=webp&s=e7d827eeeabfd2b363dfdc395be63cd3dcd03f55', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9_YN_fMenGGkXIhsdYbddz0BA3CURdMrLkCfztzzQK0.png?width=320&crop=smart&auto=webp&s=a57f0512ca7d11a909dcfa7364c7eaeb14576f4e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9_YN_fMenGGkXIhsdYbddz0BA3CURdMrLkCfztzzQK0.png?width=640&crop=smart&auto=webp&s=e0f26a15d8290446ab1f7ae2c723445248ab8994', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9_YN_fMenGGkXIhsdYbddz0BA3CURdMrLkCfztzzQK0.png?width=960&crop=smart&auto=webp&s=6dfba9586f255ada325cd0dd87c50c7c36ec9f86', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9_YN_fMenGGkXIhsdYbddz0BA3CURdMrLkCfztzzQK0.png?width=1080&crop=smart&auto=webp&s=207186a4b2b95cc824fec330c45356c8991f595b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9_YN_fMenGGkXIhsdYbddz0BA3CURdMrLkCfztzzQK0.png?auto=webp&s=b77ed9db9ee68f411100aa73dfd11319a3cfe0da', 'width': 1200}, 'variants': {}}]}
Buy or skip new laptop for local llm, programming, etc
3
Hi everyone, I own a second-hand ASUS TUF with an AMD CPU and an NVIDIA GTX 1650. It has Windows with 2 users (main and gaming); gaming is isolated to prevent me from over-playing, and the main account has my personal and professional stuff. This laptop is fine for now, but I am unsure whether I should buy a new laptop. I can spend up to 8k INR per month and can aim to buy something up to 150,000 INR. I do backend, LLM agent development, a little frontend work, and have an interest in ML (PyTorch etc.). I have not tried running a local LLM on the GTX 1650 but am very intrigued. So my options are: an Apple MacBook and later build a PC for gaming; a laptop with an RTX GPU and later build a PC; or hold for now and later build a PC. --- I have never tried Apple, but I have heard from friends that MacBooks are good for development with solid programming support, and I have also seen that their unified memory supports local LLMs. My concern is the Apple ecosystem. If that's fine, how high should I spec it? I am thinking of 16 GB of RAM; is higher needed? Or an RTX laptop with 8 GB VRAM, or wait for now? Thank you for reading to the end, looking forward to your responses. Thank you
2025-12-24T05:05:01
https://www.reddit.com/r/LocalLLaMA/comments/1puf3hy/buy_or_skip_new_laptop_for_local_llm_programming/
SafeAmazing8507
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puf3hy
false
null
t3_1puf3hy
/r/LocalLLaMA/comments/1puf3hy/buy_or_skip_new_laptop_for_local_llm_programming/
false
false
self
3
null
MiraTTS Docker FastAPI server
10
I wrote a dockerized FastAPI wrapper for MiraTTS. It exposes OpenAI-compatible endpoints so you can plug it into existing LLM frontends. Since MiraTTS doesn't support native streaming yet, I implemented a custom text chunker. It splits long inputs into safe segments, batches them for the GPU, and stitches the output together. This allows you to generate audio for long texts without hitting the model's character limits. Repo here: https://github.com/Si-ris-B/MiraTTS-FastAPI-Docker
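If the endpoints follow the usual OpenAI audio convention, a client call could look roughly like the sketch below. The host, port, route, model and voice names are assumptions, not taken from the repo; check the README for the actual API surface.

```python
# Hedged client sketch for the wrapper's OpenAI-compatible TTS route.
# Route and field names follow OpenAI's /v1/audio/speech convention and are
# assumptions -- verify them against the repo's README.
import requests

resp = requests.post(
    "http://localhost:8000/v1/audio/speech",  # assumed container host/port
    json={
        "model": "mira-tts",       # placeholder model id
        "voice": "default",        # placeholder voice id
        "input": "Long inputs get chunked, batched on the GPU, and stitched back together.",
    },
    timeout=300,
)
resp.raise_for_status()

with open("speech.wav", "wb") as f:  # output format is also an assumption
    f.write(resp.content)
```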
2025-12-24T04:34:30
https://www.reddit.com/r/LocalLLaMA/comments/1puejhe/miratts_docker_fastapi_server/
EmotionalWillow70
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puejhe
false
null
t3_1puejhe
/r/LocalLLaMA/comments/1puejhe/miratts_docker_fastapi_server/
false
false
self
10
{'enabled': False, 'images': [{'id': 'sHTe9SC7uWZycV3ib_ofvD5OwBf8GALHodTpkFsqF-o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sHTe9SC7uWZycV3ib_ofvD5OwBf8GALHodTpkFsqF-o.png?width=108&crop=smart&auto=webp&s=5594642d13381f306a9f627e97071a65eb869c31', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sHTe9SC7uWZycV3ib_ofvD5OwBf8GALHodTpkFsqF-o.png?width=216&crop=smart&auto=webp&s=0b8d55ec20008b73224417715770f659bc834704', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sHTe9SC7uWZycV3ib_ofvD5OwBf8GALHodTpkFsqF-o.png?width=320&crop=smart&auto=webp&s=85d8ee97c0524230a81f1cf567a1b610d94c2882', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sHTe9SC7uWZycV3ib_ofvD5OwBf8GALHodTpkFsqF-o.png?width=640&crop=smart&auto=webp&s=4cb348d53bb2f263266af97b5149b1a4f08c2416', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sHTe9SC7uWZycV3ib_ofvD5OwBf8GALHodTpkFsqF-o.png?width=960&crop=smart&auto=webp&s=aae30cddce7dae3978141f296f786ebf843b6bee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sHTe9SC7uWZycV3ib_ofvD5OwBf8GALHodTpkFsqF-o.png?width=1080&crop=smart&auto=webp&s=9dd6c3e9a7b1688b59a7edadca6fce7fe5e3dea0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sHTe9SC7uWZycV3ib_ofvD5OwBf8GALHodTpkFsqF-o.png?auto=webp&s=d3f6a413c228577953880ef227fdd5531d7d78b9', 'width': 1200}, 'variants': {}}]}
I built an open-source AI security platform with 121 detection engines AND a red team toolkit with 39,000+ payloads
17
**TL;DR:** After 2 years of development, I'm releasing SENTINEL — a complete AI security suite that both protects your LLMs in production AND lets you pentest them before deployment. Free Community Edition, open source. # The Problem We're all deploying LLMs everywhere — chatbots, agents, RAG systems, autonomous workflows. But securing them? It's a mess: * **Prompt injection** is trivially easy * **Jailbreaks** get past most guardrails * **Data exfiltration** through AI responses is a real threat * **Agentic attacks** (MCP, tool poisoning) are the new frontier I couldn't find a tool that both **defended** my AI apps AND let me **attack-test** them. So I built one. # What I Made # 🛡️ SENTINEL Defense Real-time protection for LLM applications: |Feature|Details| |:-|:-| |Detection Engines|**121** specialized engines| |Recall|**85.1%** on prompt injection| |Latency|**<10ms** (Go gateway)| |Coverage|OWASP LLM Top 10| **The cool stuff:** * **Strange Math™** — I used TDA (topological data analysis), sheaf theory, and hyperbolic geometry to detect attacks that pattern matching misses * [**TTPs.ai**](http://TTPs.ai) — Attack framework detection (like MITRE but for AI) * **Protocol Security** — MCP and A2A protection for agentic systems # 🐉 Strike Offense Red team toolkit for AI applications: |Feature|Details| |:-|:-| |Attack Payloads|**39,000+** from 13 sources| |Attack Modes|Web + LLM + Hybrid| |Parallel Agents|**9** (HYDRA architecture)| |WAF Bypass|**25+** techniques| **The cool stuff:** * **AI Attack Planner** — Uses Gemini to plan attack strategies * **Anti-Deception Engine** — Detects honeypots and tarpits * **Deep Recon** — Finds hidden AI endpoints (ChatbotFinder) * **Bilingual Reports** — English + Russian (🇺🇸/🇷🇺) # Why Both? The philosophy is simple: Strike finds vulnerabilities → SENTINEL blocks them in production Test your AI before attackers do. Then deploy with confidence. # Tech Stack * **Gateway:** Go 1.21+ / Fiber (for speed) * **Brain:** Python 3.11+ (for ML ecosystem) * **Vector DB:** ChromaDB * **Deployment:** Docker/K8s native # What's Free vs Enterprise ||Community 🆓|Enterprise 🔐| |:-|:-|:-| |Basic Detection|✅|✅| |Strange Math (Basic)|✅|✅| |Strike Offense|✅|✅| |Advanced Engines|❌|✅| |2025 Innovations|❌|✅| |Support|Community|Dedicated| Community Edition is fully functional — not a trial, not a demo. # Quick Start (Strike) git clone https://github.com/DmitrL-dev/AISecurity cd strike pip install -r requirements.txt # CLI mode python -m strike --target https://example.com/chat # Web Console python dashboard.py # Open http://localhost:5000 # Links * **GitHub:** [https://github.com/DmitrL-dev/AISecurity](https://github.com/DmitrL-dev/AISecurity) * **Docs:** [https://dmitrl-dev.github.io/AISecurity/](https://dmitrl-dev.github.io/AISecurity/) * **Free Signatures CDN:** 39,000+ patterns, updated daily # What I'm Looking For 1. **Feedback** — What's missing? What should I add? 2. **Bug reports** — Break it, I want to know 3. **Use cases** — How would you use this? 4. **Collaboration** — Open to partnerships # FAQ **Q: Is this actually free?** A: Yes. Community Edition is free forever. Enterprise features require licensing. **Q: Can I use Strike legally?** A: Only on systems you own or have permission to test. Bug bounty programs, yes. Random targets, no. **Q: Why "Strange Math"?** A: Because "Topological Data Analysis with Persistent Homology and Sheaf-Theoretic Semantic Coherence Verification" didn't fit on the badge. # ⚠️ Solo Developer Disclaimer I work on this project **alone**. 
If you find bugs, rough edges, or incomplete features — I apologize in advance. Your bug reports and feedback help me improve. Be patient, be kind, and I'll fix things as fast as I can. ⭐ **If you find this useful, starring the repo and sharing this post really inspires me and helps the project grow!** Happy to answer questions. Roast my code. Tell me what sucks.
2025-12-24T04:26:16
https://www.reddit.com/r/LocalLLaMA/comments/1puee2h/i_built_an_opensource_ai_security_platform_with/
ParticularSubject966
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puee2h
false
null
t3_1puee2h
/r/LocalLLaMA/comments/1puee2h/i_built_an_opensource_ai_security_platform_with/
false
false
self
17
null
Looking for private/small Discords for local AI companion builders (safe self-improvement focus) — advice?
4
Hi everyone, I'm working on a personal, local-only AI companion project (Ollama-based, persistent memory, manual code approval loop for safety, planning future world-model training). I want to connect with others doing similar things (self-improving agents, safe RSI on consumer hardware, companion-focused tinkering) but prefer private/small servers over public ones for privacy/security reasons. No code sharing here — just looking for invite links or recommendations to low-key Discords/groups where people discuss this stuff without public exposure. If you know of any (or run one), feel free to DM me. Thanks!
2025-12-24T04:05:43
https://www.reddit.com/r/LocalLLaMA/comments/1pue075/looking_for_privatesmall_discords_for_local_ai/
Billybobster21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pue075
false
null
t3_1pue075
/r/LocalLLaMA/comments/1pue075/looking_for_privatesmall_discords_for_local_ai/
false
false
self
4
null
Day 16: 21 Days of Building a Small Language Model: Choosing the right optimizer for Your LLM
4
For years, when training large language models, the default choice of optimizer has been AdamW. It's been the industry standard, the go-to option that everyone uses, the optimizer that's built into every framework and recommended in every tutorial. AdamW has powered the training of countless models, from GPT to LLaMA to countless research projects. But recently, a new optimizer called Muon(for Kimi K2 and GLM 4.5) has come into play, offering compelling advantages that are making researchers and practitioners take notice. Today we'll explore both optimizers, understand why AdamW became the default, and see what Muon brings to the table. # Why Optimizers matter Before diving into the specifics, let's understand why the optimizer choice is so critical. During training, the optimizer's job is to update model parameters based on gradients computed from the loss function. This might seem straightforward, but the way parameters are updated has profound effects on convergence speed, training stability, memory efficiency, final model performance, and computational cost. Different optimizers approach this problem differently, leading to trade-offs in these dimensions. Understanding these trade-offs helps you make informed decisions for your specific use case. https://preview.redd.it/louy84v1t29g1.png?width=1464&format=png&auto=webp&s=75bb92b56df3a0e895cb5900fbb26dc1ade829ea # AdamW AdamW has been the dominant optimizer for training large language models since its introduction. It's been the default choice for good reasons, it works reliably, it's well-understood, and it's proven effective across countless training runs. It's an extension of Adam that properly decouples weight decay from gradient-based updates, which was a subtle but important improvement over the original Adam optimizer. The core idea behind AdamW is maintaining two moving averages for each parameter. The first moment tracks an exponentially weighted average of gradients, providing momentum that smooths out noisy gradients and helps navigate flat regions of the loss landscape. The second moment tracks an exponentially weighted average of squared gradients, capturing the variance of gradients over time. What makes AdamW powerful is that each parameter gets its own adaptive learning rate, automatically adjusted based on the history of its gradients. Parameters with large, consistent gradients get smaller updates, while parameters with small or noisy gradients get larger updates. This adaptability has made AdamW incredibly effective across a wide range of scenarios. The second moment estimate captures variance information, allowing the optimizer to adapt to parameters that have different scales of gradients. This is particularly useful in deep networks where different layers can have vastly different gradient magnitudes. Unlike the original Adam, AdamW properly decouples weight decay from the gradient-based update, applying it directly to parameters. This provides better regularization and has become the standard approach. However, this power comes with a memory cost. AdamW stores two state tensors per parameter, one for the first moment and one for the second moment. For optimizer state alone, this means AdamW requires roughly two times the parameter memory. For large models, this can be substantial, significantly increasing the total memory needed for training. AdamW works well across a wide range of scenarios. 
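A quick way to see the "two state tensors per parameter" point concretely is to inspect PyTorch's AdamW state directly. This is just an illustrative sketch with ballpark LLM-style settings, not a training recipe.

```python
# Tiny demonstration of AdamW's per-parameter optimizer state (illustrative values).
import torch

layer = torch.nn.Linear(1024, 1024)
opt = torch.optim.AdamW(layer.parameters(), lr=3e-4, betas=(0.9, 0.95), weight_decay=0.1)

# One dummy step so the optimizer materializes its state buffers.
layer(torch.randn(8, 1024)).sum().backward()
opt.step()

state = opt.state[layer.weight]
print(sorted(state.keys()))                                # ['exp_avg', 'exp_avg_sq', 'step']
print(state["exp_avg"].shape, state["exp_avg_sq"].shape)   # each buffer matches the weight shape
```

The two full-size buffers (first and second moment) are exactly where the roughly 2x parameter-memory overhead for optimizer state comes from.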
Embedding layers benefit from adaptive learning rates because most tokens don't appear in every batch, leading to sparse updates. Output layers have different learning dynamics than transformer layers and work well with AdamW's adaptive approach. The optimizer has a proven track record across many architectures and tasks, making it a safe default choice. For small to medium models, the memory overhead is manageable and the performance is excellent. # Muon Recently, Muon has come into play as a compelling alternative to AdamW. It's a newer optimizer designed specifically for matrix parameters in transformer architectures. The name stands for MomentUm Orthogonalized by Newton-Schulz, which hints at its unique approach. It combines SGD-momentum with an orthogonalization step that provides some second-order-like geometric control without the memory overhead of storing second-moment estimates. While AdamW has been the default choice, Muon offers advantages that are particularly relevant as models grow larger and training costs increase. It's not trying to replace AdamW everywhere, instead, it's carving out a specific niche where it excels, particularly for the large matrix parameters in transformer layers. The way Muon works is fascinating. It performs three main operations. First, it does a standard momentum-based gradient update, similar to SGD with momentum. Then comes the magic: it uses Newton-Schulz iteration to orthogonalize the update matrix. This orthogonalization step is what makes Muon special, instead of storing second-moment estimates like AdamW, Muon computes an approximation to the orthogonal part of the update matrix on the fly. The Newton-Schulz iteration finds the nearest orthogonal matrix to the update direction, which provides the update direction while controlling the update magnitude. This process provides geometric control over updates without storing large matrices, runs efficiently in low precision formats which is important for modern training, and acts as a regularization mechanism. The orthogonal updates naturally constrain parameter growth, which can help with generalization. After orthogonalization, Muon applies the update with a scaling factor based on matrix dimensions. This aspect-ratio scaling accounts for the fact that tall matrices and wide matrices might need different treatment, which is a nice touch that shows the optimizer was designed with matrix operations in mind. The memory efficiency of Muon is remarkable. It stores only one state tensor per parameter, just the momentum buffer. This means Muon requires roughly half the memory of AdamW for optimizer state. For a large model, this can be the difference between fitting on your hardware or not. Muon is specifically designed for 2D parameter matrices, like the weights in linear layers. It treats each matrix as a whole rather than updating individual elements independently, which is a fundamentally different philosophy from AdamW. This matrix-aware design, combined with the regularization from orthogonalization, has shown improved generalization in some reported experiments. In certain large-batch transformer training setups, Muon has been shown to reach comparable losses using significantly fewer training tokens compared to AdamW. However, Muon has some important constraints. It's designed for 2D parameters only, which means it should not be used for embedding layers (which are 1D), layer normalization parameters (also 1D), bias terms, or output layers that often need different handling. 
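To make the orthogonalization step less abstract, here is a compressed sketch of a Muon-style update for a single 2D weight. The quintic Newton-Schulz coefficients and the aspect-ratio scaling follow publicly shared Muon implementations and should be treated as assumptions; real code also adds Nesterov momentum, careful bf16 handling, and distributed details.

```python
# Sketch of Muon's core update: momentum -> Newton-Schulz orthogonalization -> scaled step.
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximate the nearest orthogonal matrix to G via a quintic Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315      # coefficients reported in public Muon code (assumption)
    X = G.to(torch.bfloat16)               # the iteration tolerates low precision
    transposed = X.size(0) > X.size(1)
    if transposed:                          # always iterate on the "wide" orientation
        X = X.T
    X = X / (X.norm() + 1e-7)
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

def muon_step(weight, grad, buf, lr=0.02, momentum=0.95):
    """One simplified Muon update for a 2D weight matrix."""
    buf.mul_(momentum).add_(grad)                               # SGD-style momentum buffer
    update = newton_schulz_orthogonalize(buf)                   # orthogonalize the update direction
    scale = max(1.0, weight.size(0) / weight.size(1)) ** 0.5    # aspect-ratio scaling
    weight.data.add_(update, alpha=-lr * scale)

# Toy usage on a random 2D weight.
w = torch.randn(256, 512)
momentum_buf = torch.zeros_like(w)
muon_step(w, torch.randn_like(w), momentum_buf)
```

Note that the only persistent state here is the single momentum buffer, which is the source of the memory advantage over AdamW discussed below.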
It works best for transformer architectures with standard linear layers. While Muon has been reported in large-scale training setups such as some recent models, it's not yet as widely tested across diverse architectures and tasks as AdamW. This specialization is both a strength and a limitation. # Memory Let's talk about memory, because this is often the deciding factor. AdamW stores two buffers per parameter, the first moment and second moment estimates. For a model with a billion parameters, this means roughly two gigabytes of additional memory just for optimizer state, assuming standard floating point precision and no optimizer sharding techniques. That's on top of the model parameters themselves, the activations, and everything else needed for training. Muon, on the other hand, stores only one buffer per parameter, just the momentum buffer. For that same billion-parameter model, you're looking at roughly one gigabyte of additional memory under the same assumptions. That's half of what AdamW needs for optimizer state. In practice, this fifty percent memory reduction for optimizer state can be the difference between fitting a larger model on your hardware, increasing batch size for faster training, or even being able to train at all. The memory savings become more significant as models grow larger. For a seven billion parameter model, assuming standard precision and no sharding, AdamW might need approximately fourteen gigabytes just for optimizer state, while Muon would need only seven gigabytes. That seven gigabyte difference can be substantial when you're pushing the limits of your hardware. # Training efficiency and convergence When it comes to training efficiency, the story gets interesting. AdamW's adaptive learning rates help with convergence, and it's well-tuned for many scenarios. In some large-batch transformer training experiments, Muon has been shown to reach comparable losses using significantly fewer training tokens compared to AdamW. This suggests potential improvements in computational efficiency for certain training regimes, though results can vary depending on the specific setup. When these efficiency gains are observed, they can mean either training faster to reach the same loss or potentially reaching a lower loss in the same amount of time. For large-scale training where compute costs are significant, such efficiency improvements, when they occur, can translate to substantial cost savings. Both optimizers are stable in practice, but they achieve stability through different mechanisms. AdamW's adaptive learning rates help navigate difficult optimization landscapes, and there's extensive knowledge about hyperparameter tuning. Muon's orthogonalization provides natural stability through constrained updates, and it can be less sensitive to hyperparameter choices in some cases. When it comes to generalization, Muon has shown slightly better results in some reported experiments, likely due to the regularization effects from orthogonalization. The orthogonal updates naturally control parameter growth, which can help prevent overfitting. AdamW also generalizes well with proper weight decay, but Muon's regularization mechanism is built into the optimization process itself. # Ease of Use AdamW wins on ease of use. It works out-of-the-box for all parameters, has extensive documentation and community support, and is standard in most frameworks. You can use it for everything: embeddings, transformer layers, output layers, normalization parameters. It just works. 
Muon requires more careful setup. You need to identify which parameters are 2D matrices (suitable for Muon) and which are not (need AdamW). This means you typically end up using a hybrid approach, Muon for transformer layer weights, AdamW for embeddings and output layers. This isn't necessarily a bad thing, but it does require more thought and setup. The hybrid approach is actually quite elegant and is used in modern training setups like nanochat. You use Muon for the transformer layer parameters (attention and MLP weights), which are large 2D matrices that benefit from Muon's efficiency. Then you use AdamW for embeddings, layer normalization parameters, and output layers, which have different characteristics and work better with AdamW's adaptive approach. This hybrid setup maximizes memory efficiency for the large transformer layers while using proven AdamW for parameters that need different handling. It's the best of both worlds, though it does require managing two optimizers instead of one. # When to choose what So when should you use each optimizer? If you're training embeddings or output layers, AdamW is the way to go. These parameters have different update patterns than transformer layers, and AdamW's adaptive learning rates work well for sparse updates. If you're working with non-standard architectures, AdamW is also safer since Muon is designed specifically for standard transformer layers. If you need simplicity and want something that just works, AdamW is your friend. It requires no special parameter grouping, works for everything, and has a proven track record. If memory isn't your bottleneck and you have sufficient resources, AdamW's reliability is valuable. On the other hand, if you're training large transformer models, the memory savings of Muon become significant. That fifty percent reduction in optimizer state memory can enable larger models or batch sizes with the same hardware. If compute efficiency is critical and training cost matters, Muon's potential efficiency gains, when observed, can lead to substantial savings. If you're working with standard transformer architectures and can implement the hybrid approach, Muon offers compelling benefits. For small to medium models, the memory savings of Muon matter less, and AdamW's simplicity and proven reliability might be more valuable. But as models grow larger and training costs increase, optimizers like Muon that provide efficiency gains become increasingly valuable. # Hyperparameter Landscape AdamW typically uses learning rates in the range of one ten-thousandth to eight ten-thousandths for large language models, often scaled by model dimension. The beta parameters are commonly set to zero point nine for the first moment and zero point nine five for the second moment, which is higher than the standard zero point nine nine nine used in other domains. Weight decay is commonly set to zero point one, and epsilon for numerical stability is typically one ten-millionth or one hundred-millionth. Muon uses different settings in reported experiments. Learning rates are often higher, around two hundredths in some setups, which is quite different from AdamW. Momentum is typically set to zero point nine five, and Nesterov momentum is recommended. The Newton-Schulz iteration usually runs for five steps, which is a good balance between accuracy and computational cost. These different hyperparameter ranges reflect the different philosophies of the optimizers. 
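To ground the hybrid setup and the hyperparameter ranges above in code, here is a rough parameter-grouping sketch. The name-based filtering, the roughly 3e-4 AdamW and 0.02 Muon learning rates (i.e. the ranges described in words above), and the idea of handing the 2D group to a separate Muon implementation are illustrative assumptions rather than a canonical recipe.

```python
# Hedged sketch of the hybrid split: 2D transformer weights -> Muon,
# embeddings / norms / biases / output head -> AdamW.
import torch

def split_parameters(model: torch.nn.Module):
    muon_params, adamw_params = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        is_matrix = p.ndim == 2
        is_special = "embed" in name or "lm_head" in name   # heuristic name check (assumption)
        (muon_params if is_matrix and not is_special else adamw_params).append(p)
    return muon_params, adamw_params

model = torch.nn.Transformer(d_model=256, nhead=4, num_encoder_layers=2, num_decoder_layers=2)
muon_params, adamw_params = split_parameters(model)

adamw_opt = torch.optim.AdamW(adamw_params, lr=3e-4, betas=(0.9, 0.95), weight_decay=0.1)
# muon_params would be handed to a Muon optimizer (e.g. a public implementation or the
# step function sketched earlier) with lr around 0.02, momentum 0.95, Nesterov enabled.
```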
AdamW's adaptive learning rates mean you can use lower base learning rates, while Muon's orthogonalization allows for higher learning rates. This is something to keep in mind if you're switching between optimizers. # Summary So where does this leave us? AdamW remains the default choice for good reasons—it's proven, reliable, and works out of the box for everything. But Muon has come into play as a compelling alternative, particularly for large transformer models where memory and efficiency matter. The choice depends on your specific needs. If you're memory constrained, Muon's fifty percent reduction in optimizer state memory is compelling. If you need simplicity and reliability, AdamW remains the default choice. If you're training large models, consider the hybrid approach that combines both. If compute cost matters, Muon's potential efficiency gains, when observed in your specific setup, can be significant. For many modern LLM training scenarios, especially at scale, the hybrid approach offers the best balance of efficiency, memory usage, and flexibility. You get Muon's efficiency for the large transformer layers and AdamW's reliability for the parameters that need different handling. The optimizer you choose shapes your entire training process. Understanding the trade-offs helps you make informed decisions that align with your goals, constraints, and resources. AdamW will likely remain the default for many use cases, but as models grow larger and training costs increase, optimizers like Muon that provide efficiency gains become increasingly valuable. The field of optimization for deep learning continues to evolve. As we train larger models and face new constraints, optimizers like Muon demonstrate that even in well-established areas like optimization, there's still room for innovation. The future will likely bring more specialized optimizers, better hybrid approaches, and continued improvements in efficiency and effectiveness. But for now, understanding when to stick with the default AdamW and when to consider Muon is the key to making the right choice.
2025-12-24T04:04:01
https://www.reddit.com/r/LocalLLaMA/comments/1pudz31/day_16_21_days_of_building_a_small_language_model/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pudz31
false
null
t3_1pudz31
/r/LocalLLaMA/comments/1pudz31/day_16_21_days_of_building_a_small_language_model/
false
false
https://b.thumbs.redditm…DE8drRpTD49U.jpg
4
null
How do you handle complex tables in local RAG? (Using Llama 3/Docker setup)
0
I've been working on a local-first "Second Brain" for my engineering docs because I can't use OpenAI for NDA-protected datasheets. **The Problem:** Even with Llama 3 (8B) and ChromaDB, parsing engineering tables is still a nightmare. I've tried converting PDFs to Markdown first, which helped a bit, but schematics and complex tables are still hit-or-miss. **My Current Stack:** * Dockerized Ollama (Llama 3) * ChromaDB * Streamlit UI I've documented my current architecture and Docker setup (it's linked in my profile bio if you want to see the exact configs), but I'm looking for suggestions: **What are you using for high-fidelity local OCR or layout-aware parsing?** Would love to hear from anyone else running self-hosted RAG systems.
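One pattern worth trying is extracting tables separately from the body text and indexing each table as its own Markdown chunk, so the row/column structure survives retrieval. Below is a rough sketch with PyMuPDF; the table-detection API names match recent PyMuPDF releases but should be verified against your installed version, and the chunking policy is only an assumption to adapt, not a drop-in fix.

```python
# Rough sketch: pull tables out of a PDF page-by-page and keep each table as its
# own Markdown chunk for the vector store, separate from the page's body text.
# Requires `pip install pymupdf`; verify method names against your PyMuPDF version.
import fitz  # PyMuPDF

def extract_chunks(pdf_path: str):
    chunks = []
    doc = fitz.open(pdf_path)
    for page in doc:
        for table in page.find_tables().tables:          # each detected table on the page
            chunks.append({"type": "table", "page": page.number,
                           "text": table.to_markdown()})
        chunks.append({"type": "text", "page": page.number,
                       "text": page.get_text()})         # body text (tables may repeat here)
    return chunks

# Table chunks and text chunks can then be embedded as separate documents in ChromaDB,
# with the "type" and "page" fields stored as metadata for filtering.
for chunk in extract_chunks("datasheet.pdf")[:5]:
    print(chunk["type"], chunk["page"], chunk["text"][:80])
```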
2025-12-24T03:50:04
https://www.reddit.com/r/LocalLLaMA/comments/1pudpff/how_do_you_handle_complex_tables_in_local_rag/
Prize_Analyst_7006
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pudpff
false
null
t3_1pudpff
/r/LocalLLaMA/comments/1pudpff/how_do_you_handle_complex_tables_in_local_rag/
false
false
self
0
null
I built Plano(A3B): most efficient LLMs for agent orchestration that exceed frontier model perf
120
Hi everyone — I’m on the Katanemo research team. Today we’re thrilled to launch **Plano-Orchestrator**, a new family of LLMs built for fast multi-agent orchestration. What do these new LLMs do? Given a user request and the conversation context, Plano-Orchestrator decides which agent(s) should handle the request and in what sequence. In other words, it acts as the supervisor agent in a multi-agent system. Designed for multi-domain scenarios, it works well across general chat, coding tasks, and long, multi-turn conversations, while staying efficient enough for low-latency production deployments. Why did we build this? Our applied research is focused on helping teams deliver agents safely and efficiently, with better real-world performance and latency — the kind of “glue work” that usually sits outside any single agent’s core product logic. Plano-Orchestrator is integrated into Plano, our models-native proxy and dataplane for agents. Hope you enjoy it — and we’d love feedback from anyone building multi-agent systems. Learn more about the LLMs [here](https://huggingface.co/collections/katanemo/plano-orchestrator) About our open source project: [https://github.com/katanemo/plano](https://github.com/katanemo/plano) And about our research: [https://planoai.dev/research](https://planoai.dev/research)
2025-12-24T03:45:18
https://i.redd.it/iuaxwr9x529g1.png
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1pudm4m
false
null
t3_1pudm4m
/r/LocalLLaMA/comments/1pudm4m/i_built_planoa3b_most_efficient_llms_for_agent/
false
false
default
120
{'enabled': True, 'images': [{'id': 'iuaxwr9x529g1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/iuaxwr9x529g1.png?width=108&crop=smart&auto=webp&s=7e86a4b3249d0794dd1889af0189fe766c7cb5ea', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/iuaxwr9x529g1.png?width=216&crop=smart&auto=webp&s=a25bf193c4b380d8b1c8da94656f93db8aac7340', 'width': 216}, {'height': 127, 'url': 'https://preview.redd.it/iuaxwr9x529g1.png?width=320&crop=smart&auto=webp&s=6bc4ab44621d531d460475bf9ffc7df0b215f578', 'width': 320}, {'height': 254, 'url': 'https://preview.redd.it/iuaxwr9x529g1.png?width=640&crop=smart&auto=webp&s=1349e4df960f26fc52d217c9f4f15fd3fc847cb5', 'width': 640}, {'height': 381, 'url': 'https://preview.redd.it/iuaxwr9x529g1.png?width=960&crop=smart&auto=webp&s=ff706bc133fbbb410fee6db06fcbbb3b66081d0c', 'width': 960}, {'height': 429, 'url': 'https://preview.redd.it/iuaxwr9x529g1.png?width=1080&crop=smart&auto=webp&s=0c0d6eeec07ce2805b491192fbbf375214ffa59a', 'width': 1080}], 'source': {'height': 597, 'url': 'https://preview.redd.it/iuaxwr9x529g1.png?auto=webp&s=a2acdd0aa37b285737ff5c069af1d7c0995ac380', 'width': 1502}, 'variants': {}}]}
What server setups scale for 60 devs + best air gapped coding chat assistant for Visual Studio (not VS Code)?
0
Hi all 👋, I need community input on infrastructure and tooling for a team of about 60 developers. I want to make sure we pick the right setup and tools that stay private and self hosted. 1) **Server / infra suggestions** We have an on premise server for internal use with 64GB RAM right now. It is upgradable(more RAM) but the company will not invest in GPUs until we can show real usage metrics. What setups have worked well for teams this size? What hardware recommendations can you suggest? 2) **Air gapped, privacy focused coding assistant for Visual Studio** We want a code chat assistant focused on C#, dotnet, SQL that: • can run fully air gapped • does not send queries to any external servers (GitHub/vs copilot isn’t private enough) • works with Visual Studio, **not** VS Code • is self hosted or local, open source and free. Any suggestions for solutions or setups that meet these requirements? I want something that feels like a proper assistant for coding and explanations. 3) **LLM engine recommendations for internal hosting and metrics** I want to run my own LLM models for the assistant so we can keep all data internal and scale to concurrent use by our team. Given I need to wait on GPU upgrades I want advice on: • engines/frameworks that can run LLMs and provide real usage metrics you can monitor (requests, load, performance) • tools that let me collect metrics and logs so I can justify future GPU upgrades • engines that are free and open source (no paid options) • model choices that balance quality with performance so they can run on our current server until we get GPUs I’ve looked at Ollama and Docker Model Runner so far. Specifically what stack or tools do you recommend for metrics and request monitoring for an LLM server? Are there open source inference servers or dashboards that work well? If we ***have*** to use vs code, what workflows work?(real developers don’t use vs code as it’s just an editor) Thanks in advance for any real world examples and configs.
2025-12-24T03:40:26
https://www.reddit.com/r/LocalLLaMA/comments/1pudir2/what_server_setups_scale_for_60_devs_best_air/
SpheronInc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pudir2
false
null
t3_1pudir2
/r/LocalLLaMA/comments/1pudir2/what_server_setups_scale_for_60_devs_best_air/
false
false
self
0
null
Are tokens homogeneous - and to what level.
0
Really liking minstrel (most solid I’ve had so far on my 64gig m4pro), and just got it plugged into open-notebook via lmstudio, just started but looking good. My question is… are there any opportunities to hit a big fast machine to generate a token-bed for a product, or document set, and then hit that token-bed with lesser machines? Is just idle pondering, and idle naming efforts to name things “token bed”
2025-12-24T02:51:28
https://www.reddit.com/r/LocalLLaMA/comments/1puckws/are_tokens_homogeneous_and_to_what_level/
Wishitweretru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puckws
false
null
t3_1puckws
/r/LocalLLaMA/comments/1puckws/are_tokens_homogeneous_and_to_what_level/
false
false
self
0
null
Is there a repository of Vulkan dockers ?
1
Having a 6700 XT GPU, I was looking at speeding up my local setup with llama.cpp and Open WebUI. Currently using:

* llama.cpp - ROCm, using (https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU)
* whisper local - CPU, within Open WebUI
* Fast Kokoro - CPU (docker)
* Open WebUI - CPU (docker)
* Docling - CPU (docker)

Are there any items I'm missing that I could at least bump up to ROCm or Vulkan? I tried a Vulkan build of whisper.cpp, which worked via the web interface, but I couldn't get it working with Open WebUI.
2025-12-24T02:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1puc31k/is_there_a_repository_of_vulkan_dockers/
uber-linny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1puc31k
false
null
t3_1puc31k
/r/LocalLLaMA/comments/1puc31k/is_there_a_repository_of_vulkan_dockers/
false
false
self
1
{'enabled': False, 'images': [{'id': 'R5uJiNrxmDOkSon6GHSLT96-_Y9SLntimyFvV0BBLhI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/R5uJiNrxmDOkSon6GHSLT96-_Y9SLntimyFvV0BBLhI.png?width=108&crop=smart&auto=webp&s=e7eb73e73ee23bc91faaa78da99452a47f7b87ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/R5uJiNrxmDOkSon6GHSLT96-_Y9SLntimyFvV0BBLhI.png?width=216&crop=smart&auto=webp&s=9c6299095ce0b30c892472b4b592d85b52b21568', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/R5uJiNrxmDOkSon6GHSLT96-_Y9SLntimyFvV0BBLhI.png?width=320&crop=smart&auto=webp&s=9d587156bb05d3da2eb0135d9a12f008d5c418e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/R5uJiNrxmDOkSon6GHSLT96-_Y9SLntimyFvV0BBLhI.png?width=640&crop=smart&auto=webp&s=c29068d91ba44ea5d42288b6abad9bce1aac15a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/R5uJiNrxmDOkSon6GHSLT96-_Y9SLntimyFvV0BBLhI.png?width=960&crop=smart&auto=webp&s=c4b83ee8b833fcaf627464e577223d3c0c545024', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/R5uJiNrxmDOkSon6GHSLT96-_Y9SLntimyFvV0BBLhI.png?width=1080&crop=smart&auto=webp&s=e7eb81bbd355b36b8d69eebb27a1aade201107be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/R5uJiNrxmDOkSon6GHSLT96-_Y9SLntimyFvV0BBLhI.png?auto=webp&s=c67a16d4950d58c5a8b442421c894d6f9f8cd079', 'width': 1200}, 'variants': {}}]}
Best model for Japanese to English?
19
Title. I'm using mangaOCR for capturing text from images and it's pretty damn accurate. But now I want to know what the best model for translation is. I would like something on the smaller side if possible so below 20b would be preferable. But if something is 20b or just slightly above it then that would be fine.
2025-12-24T02:14:14
https://www.reddit.com/r/LocalLLaMA/comments/1pubu7x/best_model_for_japanese_to_english/
Red2005dragon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pubu7x
false
null
t3_1pubu7x
/r/LocalLLaMA/comments/1pubu7x/best_model_for_japanese_to_english/
false
false
self
19
null
New tool to manage models and quantizations
8
Hi, I have been working on a tool to manage foundation models and the quantizations derived from them. The goal is to make them consistent and reproducible, and to save storage. It works now, so feedback would be good. The current implementation can ingest any safetensors model and on demand generate a q2_k to q6_k gguf file. Non-uniform, i.e. you can pick the quantization per tensor via config. [https://github.com/kgrama/gmat-cli/tree/main](https://github.com/kgrama/gmat-cli/tree/main)

| Quant | Description |
| --- | --- |
| `q2_k` | Smallest, lowest quality |
| `q3_k_s` | 3-bit small variant |
| `q3_k_m` | 3-bit medium variant |
| `q3_k_l` | 3-bit large variant |
| `q4_k_s` | 4-bit small variant |
| `q4_k_m` | 4-bit medium variant (default) |
| `q5_k_s` | 5-bit small variant |
| `q5_k_m` | 5-bit medium variant |
| `q6_k` | 6-bit variant |
2025-12-24T01:46:21
https://www.reddit.com/r/LocalLLaMA/comments/1pubadw/new_tool_to_manage_models_and_quantizations/
Anxious-Visit-7735
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pubadw
false
null
t3_1pubadw
/r/LocalLLaMA/comments/1pubadw/new_tool_to_manage_models_and_quantizations/
false
false
self
8
null
We’re running limited tests for an experimental, governed AI system (Discord)
1
[removed]
2025-12-24T01:28:51
https://disboard.org/server/1421293144288530565
OkIntern5763
disboard.org
1970-01-01T00:00:00
0
{}
1puaxfn
false
null
t3_1puaxfn
/r/LocalLLaMA/comments/1puaxfn/were_running_limited_tests_for_an_experimental/
false
false
default
1
null
Should I get a founder's edition 3090 or a zotac? Are 3090s taken from prebuilt PCs like Alienware any good?
0
Bottom text
2025-12-24T00:42:59
https://www.reddit.com/r/LocalLLaMA/comments/1pu9z4q/should_i_get_a_founders_edition_3090_or_a_zotac/
IronLover64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu9z4q
false
null
t3_1pu9z4q
/r/LocalLLaMA/comments/1pu9z4q/should_i_get_a_founders_edition_3090_or_a_zotac/
false
false
self
0
null
Should I get a founder's edition 3090 or a zotac?
0
Bottom text
2025-12-24T00:38:39
https://www.reddit.com/r/LocalLLaMA/comments/1pu9vrv/should_i_get_a_founders_edition_3090_or_a_zotac/
IronLover64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu9vrv
false
null
t3_1pu9vrv
/r/LocalLLaMA/comments/1pu9vrv/should_i_get_a_founders_edition_3090_or_a_zotac/
false
false
self
0
null
Easiest way to start with self-hosted models!
0
How to connect your own local AI models to your personal AI & Automation Control center in basically 20 clicks.

1. Log in to Navigator
2. Download LM Studio
3. Download a local model that fits your device
4. Create a Pinggy account
5. Copy the localhost URL from LM Studio into Pinggy
6. Follow Pinggy's setup steps
7. Copy the Pinggy URL into Navigator

Navigator auto-detects the local models you have installed, so you can use them inside the same chat interface you already use for the major models. That means your local model can power your agents & tools via MCP (project management, web search, coding, and more), all from one place.
2025-12-23T23:57:05
https://beta.keinsaas.com
Keinsaas
beta.keinsaas.com
1970-01-01T00:00:00
0
{}
1pu8zod
false
null
t3_1pu8zod
/r/LocalLLaMA/comments/1pu8zod/easiest_way_to_start_with_selfhosted_models/
false
false
default
0
null
I wrote an interactive blog post teaching how tokenization, embeddings, and vector search work in-browser with Transformers.js
28
I want to be up front that the post is entirely built with AI, as is the copy. However, I feel like if creating blog posts is this easy, we are obligated to transfer the saved effort into maximizing the learning potential of our content. So, this post includes an interactive lab that I hope you'll find worth your time. What’s your opinion? Is this slop?
2025-12-23T23:56:52
https://mike.dev/blog/transformersjs-embeddings-lab/
mike_dot_dev
mike.dev
1970-01-01T00:00:00
0
{}
1pu8zj3
false
null
t3_1pu8zj3
/r/LocalLLaMA/comments/1pu8zj3/i_wrote_an_interactive_blog_post_teaching_how/
false
false
default
28
{'enabled': False, 'images': [{'id': 'XlQcYLikCRIkdRlT2ds5M1FMprcLdYd0_75yS7-q-1A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/XlQcYLikCRIkdRlT2ds5M1FMprcLdYd0_75yS7-q-1A.png?width=108&crop=smart&auto=webp&s=0fc94b21b9e184ec9c9b0cb9e6a07fa655c26cfc', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/XlQcYLikCRIkdRlT2ds5M1FMprcLdYd0_75yS7-q-1A.png?width=216&crop=smart&auto=webp&s=9e367f6b787f133eb26cde89e7c48d6d8669367c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/XlQcYLikCRIkdRlT2ds5M1FMprcLdYd0_75yS7-q-1A.png?width=320&crop=smart&auto=webp&s=6a532e83135a1cbc3749df3d47c1fd1b17c82a7a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/XlQcYLikCRIkdRlT2ds5M1FMprcLdYd0_75yS7-q-1A.png?width=640&crop=smart&auto=webp&s=132412f978c8e30c262460a0cbb047da98d6d0ee', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/XlQcYLikCRIkdRlT2ds5M1FMprcLdYd0_75yS7-q-1A.png?width=960&crop=smart&auto=webp&s=24010e3f1bc49e75dd5c6bd98d62cc2ab7ed6dc9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/XlQcYLikCRIkdRlT2ds5M1FMprcLdYd0_75yS7-q-1A.png?width=1080&crop=smart&auto=webp&s=c417ae44ed99fbfcad1e9718d7c0f6b529a8e25a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/XlQcYLikCRIkdRlT2ds5M1FMprcLdYd0_75yS7-q-1A.png?auto=webp&s=7ca4ea53d4304c747adc02276b15fe062c10348d', 'width': 1200}, 'variants': {}}]}
Model for OCRing music scores?
2
I am looking for a model that will faithfully OCR music scores into Lilypond or the like, so they can be transposed or otherwise programmatically edited from there. Open source preferred but not critical. Qwen 235b VL Instruct came the closest in my tests, but just can't place things in the right octaves. Others I tried (Gemini3, GLM 4.6V, Qwen 235b thinking) outright hallucinated. But maybe I am doing something wrong. Anyone with a working solution please do tell me!
2025-12-23T23:53:19
https://www.reddit.com/r/LocalLLaMA/comments/1pu8ws8/model_for_ocring_music_scores/
ramendik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu8ws8
false
null
t3_1pu8ws8
/r/LocalLLaMA/comments/1pu8ws8/model_for_ocring_music_scores/
false
false
self
2
null
I built all this and don't know what to do i need advice
1
[removed]
2025-12-23T23:28:39
https://www.reddit.com/r/LocalLLaMA/comments/1pu8dxa/i_built_all_this_and_dont_know_what_to_do_i_need/
BenBarnes1331
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu8dxa
false
null
t3_1pu8dxa
/r/LocalLLaMA/comments/1pu8dxa/i_built_all_this_and_dont_know_what_to_do_i_need/
false
false
self
1
null
What to do with 2 P100
2
I ended up with 2 cheap p100 in a lot of 4 GPUs. The other 2 cards were old gaming gpu that I will use a backup or resell. The Tesla were untested. I know driver support is over and security will follow soon and that there are no tensor core. I have a 6800xt in my main PC, so no cuda there either. I have a test bench that I can use and put the P100 and tested it with a 12cm P12 and a 3d printed shroud duct. Temp are ok and I was able to run light Ollama 7b model. How can I test properly the 2 GPUs? Worth keeping one and use the test bench in my homelab as a WakeOnLan LLM node? Should I resell 1 or both and how much is it worth these days? thanks
2025-12-23T23:17:24
https://www.reddit.com/r/LocalLLaMA/comments/1pu84y3/what_to_do_with_2_p100/
SaGa31500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu84y3
false
null
t3_1pu84y3
/r/LocalLLaMA/comments/1pu84y3/what_to_do_with_2_p100/
false
false
self
2
null
How much storage does all local llms take in ollama
0
https://preview.redd.it/… less than 200gb
2025-12-23T23:08:03
https://www.reddit.com/r/LocalLLaMA/comments/1pu7xl0/how_much_storage_does_all_local_llms_take_in/
AvailableRow3815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu7xl0
false
null
t3_1pu7xl0
/r/LocalLLaMA/comments/1pu7xl0/how_much_storage_does_all_local_llms_take_in/
false
false
https://b.thumbs.redditm…5GMKJ2eKmAds.jpg
0
null
Thoughts on DGX Spark as a macOS Companion: Two Months Later
146
I have been using the NVIDIA DGX Spark in tandem with my Mac for about two months now. Given the active discussions about its specs and price, I want to share my personal, subjective observations on who this device might be for and who it might not be. ## My Context: I Simply Don't Have CUDA on Mac I've been working on Apple Silicon since the release of the M1 and didn't plan on changing my main platform. It's a comfortable and stable environment for my daily work. The problem lies elsewhere: in ML and SOTA research, a significant portion of tools and libraries are still oriented towards CUDA. On macOS, following Apple's transition to M1+, this ecosystem simply doesn't exist. Because of this, an entire layer of critical libraries like nvdiffrast, flash-attention, and other CUDA-dependent solutions is unavailable on Mac. In my case, the situation reached the point of absurdity: there was a real episode where Apple released a model, but it turned out to be designed for Linux, not for Apple Silicon (haha). I didn't want to switch to another platform — I'm already a Mac user and I wanted to stay in this environment. DGX Spark eventually became a compromise: a compact device with a Mac mini form factor, 128 GB of unified memory, and Blackwell architecture (sm121), which simply adds CUDA alongside the Mac, rather than replacing it. ## The Bandwidth Problem The most frequent criticism of Spark concerns its memory bandwidth — only 273 GB/s. For comparison: the RTX 4090 has about 1000 GB/s, and the M4 Ultra has 819 GB/s. If your goal is the fastest possible inference and maximum tokens per second, Spark is indeed not the best tool. But local LLMs are what I used the least. In my practice for R&D and experiments, you much more often hit the memory limit and software constraints rather than pure speed. Plus, there's a purely practical point: if this is your main Mac, you can almost never give all of its RAM to inference — it's already occupied by IDEs, DCC tools, and the system. Spark allows you to offload AI computations to a separate device and not turn your main computer into a "brick" during calculations. Modern models in 2025 are quickly outgrowing consumer hardware: * Hunyuan 3D 2.1 — about 29 GB VRAM for full generation * FLUX.2 (BF16) — the full model easily exceeds 80 GB * Trellis2 — 24 GB as the minimum launch threshold Quantization and distillation are viable options, but they require time and additional steps and experiments. It might work or it might not. Spark allows you to run such models "as is," without unnecessary manipulations. ## My Workflow: Mac + Spark In my setup, a Mac on M4 Max with 64 GB RAM handles the main tasks: Unity, Houdini, Blender, IDE. But AI tasks now fly over to Spark (right now I'm generating a fun background in Comfy for a call with colleagues). I simply connect to Spark via SSH through JetBrains Gateway and work on it as a remote machine: the code, environment, and runs live there, while the Mac remains a responsive work tool. For me, this is a convenient and clear separation: Mac is the workplace, Spark is the compute node. ## What About Performance Below are my practical measurements in tasks typical for me, compared to an RTX 4090 on RunPod. I separate the measurements into **Cold Start** (first run) and **Hot Start** (model already loaded). 
| Model | DGX Spark (Cold) | DGX Spark (Hot) | RTX 4090 (Cold) | RTX 4090 (Hot) |
| --- | --- | --- | --- | --- |
| Z Image Turbo | ~46.0s | ~6.0s | ~26.3s | ~2.6s |
| Qwen Image Edit (4 steps) | ~80.8s | ~18.0s | ~72.5s | ~8.5s |
| Qwen Image Edit (20 steps) | ~223.7s | ~172.0s | ~104.8s | ~57.8s |
| Flux 2 GGUF Q8-0 | ~580.0s | ~265.0s | OOM | OOM |
| Hunyuan3D 2.1 | ~204.4s | ~185.0s | OOM | OOM |

## Nuances of "Early" Hardware

It's important to understand that Spark is a Blackwell Development Kit, not a "plug and play" consumer solution.

* Architecture: aarch64 + sm121 combo. Much has to be built manually. Recently, for example, I was building a Docker image for Hunyuan and spent about 8 hours resolving dependency hell because some dependencies for the ARM processor were simply missing.
* Software Support: you often have to manually set compatibility flags, as many frameworks haven't updated for Blackwell yet.

## Who Am I and Why Do I Need This

I am a Unity developer. By profession — gamedev, in my free time — an enthusiast who actively uses inference. I'm most interested in 3D: generating models, textures, and experimenting with various pipelines.

## Conclusion (My IMHO)

DGX Spark occupies a very narrow and specific niche. And I sincerely don't understand why it was advertised as a "supercomputer." It seems the word "super" has become a bit devalued: every couple of weeks, new neural networks come out, and from every account, you hear how something "super" has happened. In my experience, Spark is much more honestly perceived as a compact CUDA node or a Blackwell dev-kit next to your main computer. If it is "super," then perhaps only a super-mini-computer — without claiming any speed records. It is an EXPENSIVE compromise where you sacrifice speed for memory volume and access to the CUDA ecosystem. For my tasks in gamedev and R&D, it has become a convenient and reliable "NVIDIA trailer" to my main Mac. After 2 months, I have already built several Docker images, filled almost a terabyte with SOTA models, and for now, I am in the "playing with a new toy" stage. But I am satisfied.
2025-12-23T22:58:04
https://www.reddit.com/gallery/1pu7pfi
PropellerheadViJ
reddit.com
1970-01-01T00:00:00
0
{}
1pu7pfi
false
null
t3_1pu7pfi
/r/LocalLLaMA/comments/1pu7pfi/thoughts_on_dgx_spark_as_a_macos_companion_two/
false
false
https://b.thumbs.redditm…bZmkUE7Pp39M.jpg
146
null
Which lightweight local anonymization model or workflow to use?
1
Hi everyone, I want to have my code and data anonymized locally before using cloud models (Claude). It will be a hassle to make it work and make the changes. However, I am open to hearing recommendations about which model to use, as well as the workflow, if anyone has experience.
2025-12-23T22:44:16
https://www.reddit.com/r/LocalLLaMA/comments/1pu7e8f/which_lightweight_local_anonymization_model_or/
Particular_Exam_1326
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu7e8f
false
null
t3_1pu7e8f
/r/LocalLLaMA/comments/1pu7e8f/which_lightweight_local_anonymization_model_or/
false
false
self
1
null
Beginner setup ~1k€
1
Hi, I'm relatively new to the whole local LLM topic. I only have a MacBook Pro with an M1 Pro chip and 16 GB unified memory. I would like to build my first server in the next 2-3 months. I like the idea of using the MI50s because they are, well, cheap. They have downsides, which I'm aware of, but I only plan on using models like Qwen3 Coder 30B, Devstral 2 and maybe some bigger models like Llama 3 70B or similar, with LM Studio or plans and Open WebUI. My planned setup for now:

CPU: i7 6800K (it is included in many second-hand bundles that I can pick up in my location)
Motherboard: ASUS X99, DDR4 (I don't know if that's a good idea, but many people here chose similar ones with similar setups)
GPU: 3x AMD Radeon MI50 (or MI60 🤷🏼), 32 GB VRAM
Case: no idea, but I think some XL or server case that's cheap and can fit everything
Power supply: be quiet! Dark Power Pro 1200W (80+ Gold; I don't plan on burning down my home)
RAM: since it's hella expensive, the least amount that is necessary. I do have 8 GB lying around but I assume that's not nearly enough. I don't know how much I really need here, please tell me 😅

Cost:
- CPU, motherboard, CPU cooler ~70€
- GPU: 3x MI50 32 GB 600€ + shipping (expect ~60€)
- Power supply ~80€ (more than 20 offers near me from brands like Corsair, be quiet!)
- Case (as I said, not sure, but I expect ~90-100€ maybe, used obviously)
- RAM (64 GB server RAM, 150€ used, no idea if that's what I need)

Total: ~1050€

Would appreciate help 👍
2025-12-23T22:39:41
https://www.reddit.com/r/LocalLLaMA/comments/1pu7an9/beginner_setup_1k/
MastodonParty9065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu7an9
false
null
t3_1pu7an9
/r/LocalLLaMA/comments/1pu7an9/beginner_setup_1k/
false
false
self
1
null
What is functiongemma used for?
3
This might be a silly question, but I’m not exactly sure what the functiongemma model is designed for. It looks useful at a glance, but I’d like to know more about its purpose.
2025-12-23T22:17:27
https://www.reddit.com/r/LocalLLaMA/comments/1pu6sa7/what_is_functiongemma_used_for/
Hopeful_Ferret_2701
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu6sa7
false
null
t3_1pu6sa7
/r/LocalLLaMA/comments/1pu6sa7/what_is_functiongemma_used_for/
false
false
self
3
null
Stop going to boring AI "Networking" events. We’re doing an overnight lock-in in India instead.
0
Most AI events in India are the same: bad coffee, 40-minute "thought leader" slide decks, and people trying to sell you a course or a SaaS subscription. We're tired of it.

The best conversations about LLMs, agents, and scaling don't happen in conference halls. They happen at 1 AM, in a comfort zone, with a room full of people who are actually shipping code and breaking things.

To close out 2025, the Top 1% Indian AI community is hosting a private, overnight gathering in Delhi.

The Vibe:

• No Sponsors. We don't want anyone pitching us.
• No "Gurus." If you don't build, you're in the wrong room.
• Pure Signal. Just deep-dive technical chats, future partnerships, and high-IQ chaos.
• The Setup: We've booked a private Airbnb. Think: deep tech talk + gaming + music + BYOB + awesome food & drinks.

The Logistics:

• When: 27 Dec (evening) → 28 Dec (morning).
• Where: Private Airbnb, New Delhi.
• The Damage: ₹3k (this isn't for profit; it just covers the food, venue, and snacks).

Why am I posting this here? Usually, this is Guild-only. However, due to some last-minute scheduling shifts, we have exactly 4-5 spots open for outsiders who actually know their stuff. We don't care about your job title; we care about your GitHub or what you've built this year.

How to get in: We are strictly vetting everyone to keep the signal-to-noise ratio high.

1. Drop a comment with "Interested" or shoot a DM.
2. Include your LinkedIn/Portfolio.
3. Or, if you're old school, WhatsApp us your profile at +91 9205249666.

Selection is based purely on your profile and a quick verification. Let's build something real to end the year.
2025-12-23T21:32:43
https://i.redd.it/r0wjnmggv09g1.jpeg
Ambitious-End1261
i.redd.it
1970-01-01T00:00:00
0
{}
1pu5qnr
false
null
t3_1pu5qnr
/r/LocalLLaMA/comments/1pu5qnr/stop_going_to_boring_ai_networking_events_were/
false
false
https://a.thumbs.redditm…kdbTMY9eXI30.jpg
0
{'enabled': True, 'images': [{'id': '5TIRPN14v4JDxUE37uoGrWfO18CFLhYn905qvwie7Eo', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/r0wjnmggv09g1.jpeg?width=108&crop=smart&auto=webp&s=a956b6576bac220e38c19bfcd0d4f1b500c7ca8b', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/r0wjnmggv09g1.jpeg?width=216&crop=smart&auto=webp&s=2fca957f44b8c6e80110067150e23979f3f8f87d', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/r0wjnmggv09g1.jpeg?width=320&crop=smart&auto=webp&s=42d55bd771205427dd1b0f904146deaa7b15e167', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/r0wjnmggv09g1.jpeg?width=640&crop=smart&auto=webp&s=0873c0305ae65cdb7dfa808ca10093f3eb01a5db', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/r0wjnmggv09g1.jpeg?width=960&crop=smart&auto=webp&s=8901238745b76a5ec1380c97d96e33d6f48edfb8', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/r0wjnmggv09g1.jpeg?auto=webp&s=0e1591c2360fdb08162f8645492e536a6f32bfa2', 'width': 1024}, 'variants': {}}]}
Has anyone had success writing x86 assembly with a local model?
21
I haven't seen anyone do any comparisons.
2025-12-23T21:28:25
https://www.reddit.com/r/LocalLLaMA/comments/1pu5mz1/has_anyone_had_success_writing_x86_assembly_with/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu5mz1
false
null
t3_1pu5mz1
/r/LocalLLaMA/comments/1pu5mz1/has_anyone_had_success_writing_x86_assembly_with/
false
false
self
21
null
Looking for recent books on building production-grade, scalable AI agents
1
I’m looking for recent books that really focus on building production-grade, scalable AI agents. Specifically interested in books that cover things like:

• Agent architectures and orchestration
• Reliability, monitoring, and evals
• Tool use, memory, and planning at scale
• Deploying agents in real systems
• Lessons learned from real-world production setups
2025-12-23T21:21:09
https://www.reddit.com/r/LocalLLaMA/comments/1pu5gt2/looking_for_recent_books_on_building/
DataScientia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu5gt2
false
null
t3_1pu5gt2
/r/LocalLLaMA/comments/1pu5gt2/looking_for_recent_books_on_building/
false
false
self
1
null
Optimizing glm 4-7
0
I want to create an optimized setup for GLM 4-7 with vLLM or SGLang (not exactly sure what's best; I'm used to vLLM though):

- I can get a maximum of 2 H200s (hence I need quantization)
- Most of my prompts will be between 2k and 30k tokens, but I have some very long prompts of ~100k
- I want to optimize for speed. I need reasonable accuracy, but the priority is to get fast outputs
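To make it concrete, this is roughly the shape of the launch I have in mind (the checkpoint id is a placeholder and the context/utilization numbers are my guesses, so tell me if a different combination makes more sense):

```python
# Rough sketch of a throughput-oriented vLLM setup on 2 GPUs.
# The checkpoint id is a placeholder: with ~280 GB of total VRAM on two H200s,
# a large GLM MoE realistically needs a 4-bit (AWQ/GPTQ-style) quantized build.
from vllm import LLM, SamplingParams

llm = LLM(
    model="path/or/hub-id-of-a-4bit-GLM-checkpoint",  # placeholder, not a real repo
    tensor_parallel_size=2,            # split across the 2 H200s
    max_model_len=131072,              # covers the occasional ~100k prompts
    gpu_memory_utilization=0.90,       # leave headroom for KV cache at long context
)

params = SamplingParams(temperature=0.2, max_tokens=2048)
outputs = llm.generate(["Summarize the following issue: ..."], params)
print(outputs[0].outputs[0].text)
```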
2025-12-23T21:16:00
https://www.reddit.com/r/LocalLLaMA/comments/1pu5chu/optimizing_glm_47/
Best_Sail5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu5chu
false
null
t3_1pu5chu
/r/LocalLLaMA/comments/1pu5chu/optimizing_glm_47/
false
false
self
0
null
Uncensored Qwen3-Next-80B-Thinking (Chinese political censorship removed)
141
🤗 Link to the Hugging Face model: [https://huggingface.co/MultiverseComputingCAI/Qwen3-Next-80B-A3B-Thinking-Uncensored](https://huggingface.co/MultiverseComputingCAI/Qwen3-Next-80B-A3B-Thinking-Uncensored)

Hello everyone! I am a researcher at [Multiverse Computing](https://multiversecomputing.com), a European startup working on LLMs. We’ve released an **uncensored version of Qwen3-Next-80B-Thinking** in which **Chinese political censorship has been removed.**

The model no longer refuses to answer Chinese politically sensitive topics. Instead, it will provide **balanced, objective answers** that present multiple relevant perspectives.

We believe we made some significant improvements over previous approaches such as the uncensored version of DeepSeek R1 developed by Perplexity:

* The behavior for non-Chinese-sensitive topics remains the same; this includes the model scoring the same in all the evaluation benchmarks we have performed.
* We **do not perform SFT** with hand-crafted data and we **do not inject any new knowledge into the model**. Our method is based on steering vectors that remove the model's ability to refuse to answer China-related sensitive prompts. The model answers using **the knowledge already inside the base model**.
* Many steering-vector approaches effectively *erase* refusal behavior everywhere (making models broadly unsafe). Our approach **disables refusals only for Chinese sensitive topics**. (I know that many of you love fully uncensored models, but this was important for us.)
* Previous “uncensored” models such as Perplexity R1 1776 can be jailbroken very easily by simply injecting a China-related phrase into harmful prompts ([https://weijiexu.com/posts/jailbreak\_r1\_1776.html](https://weijiexu.com/posts/jailbreak_r1_1776.html)). Our model is designed to remain robust against this type of jailbreak.
* The model is a drop-in replacement for the original Qwen-Next model. No architecture changes, no extra layers...

# The method

This release is based on Refusal Steering, an inference-time technique using **steering vectors** to control refusal behavior. A few days ago we released a paper describing our approach (although for this release, we updated the method so no extra weights are needed): [https://arxiv.org/abs/2512.16602](https://arxiv.org/abs/2512.16602)

# Feedback

We have evaluated the model to measure the refusal behavior for Chinese sensitive topics as well as harmful prompts. We have also evaluated the model on popular benchmarks. The full evaluation details are available in the Model Card. But we are aware that there might be prompts we didn't think about that are still censored, or that cause undesired behavior. So we would love to gather some feedback to continue improving the model.

In addition, we have open-sourced our evaluation library: [https://github.com/CompactifAI/LLM-Refusal-Evaluation](https://github.com/CompactifAI/LLM-Refusal-Evaluation)

# Example

Here is an example of the original model vs the uncensored model. (You might need to open the image to see it correctly.) As you can see, the model’s answers are well-balanced and objective, presenting multiple perspectives.

**Original model:**

https://preview.redd.it/w1hpnillr09g1.png?width=1605&format=png&auto=webp&s=538697f68c700d090319d24ab5b13504cd773718

**Uncensored model:**

https://preview.redd.it/0a96qgtmr09g1.png?width=1655&format=png&auto=webp&s=84b37d97d1e7309c7ca8c4c40e5902dab4d62bc7
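For those unfamiliar with the general idea, here is a minimal, self-contained sketch of the classic "refusal direction" ablation on toy activations. It is purely illustrative of the steering-vector concept and is NOT our exact method (ours is selective to Chinese-sensitive prompts and is described in the paper); all tensors here are random stand-ins:

```python
# Minimal illustration of directional ablation on toy activations.
# NOT the exact method from the paper -- just the generic idea:
# estimate a "refusal direction" as the difference of mean activations and
# project it out of the hidden state before the next layer sees it.
import torch

torch.manual_seed(0)
hidden_dim = 64

# Stand-ins for residual-stream activations collected on two prompt sets.
acts_refused  = torch.randn(128, hidden_dim) + 0.5   # prompts the model refuses
acts_answered = torch.randn(128, hidden_dim)          # prompts it answers normally

# 1) Refusal direction: normalized difference of the two activation means.
direction = acts_refused.mean(dim=0) - acts_answered.mean(dim=0)
direction = direction / direction.norm()

# 2) Ablation: remove the component of a hidden state along that direction.
def ablate(hidden: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    return hidden - (hidden @ v).unsqueeze(-1) * v

h = torch.randn(4, hidden_dim)            # pretend these are new hidden states
h_ablated = ablate(h, direction)

# Sanity check: the ablated states have ~zero projection onto the direction.
print((h_ablated @ direction).abs().max())   # ~0 up to floating point error
```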
2025-12-23T21:15:04
https://www.reddit.com/r/LocalLLaMA/comments/1pu5bob/uncensored_qwen3next80bthinking_chinese_political/
ikergarcia1996
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu5bob
false
null
t3_1pu5bob
/r/LocalLLaMA/comments/1pu5bob/uncensored_qwen3next80bthinking_chinese_political/
false
false
https://a.thumbs.redditm…SE-FaAinfEO8.jpg
141
null
Are 30B-level LLMs really a waste? + Should I go dual 5060 Ti for local AI, or 3060+3060?
1
Hey all! I’m diving into local LLMs (to escape ChatGPT’s privacy issues), but I’m confused about two things:

1. 30B models: I’m getting mixed opinions on local LLMs. Some say they’re useless under 70B, others don’t. My experience is mixed: some are decent, others are complete garbage. Am I missing something? What’s the trick to get an actually functional model? (Examples of use cases would be nice!)
2. Upgrade path: today I run a 3060 12 GB and am torn between:
   - Opt 1: Adding another 3060 via M.2 adapter (cheaper now, but limited by VRAM).
   - Opt 2: Buying two brand spanking new 5060 Ti 16 GBs (since used 3090s are insanely priced here in Scandinavia, and used at that). I want to upgrade because the models I’ve had the best experience with so far are rather large and pretty slow due to CPU offload.
   - Would two 5060 Tis be meaningfully better for running larger useful models? Or is there a better mid-range setup? I’m considering just getting the 5060s now before the ramflation enters the GPU market.

What I want to accomplish: my own local, privacy-focused LLM/AI that’s actually usable, not just a €2k gimmick in my attic. Any advice on models, setups, or even alternative approaches (e.g., quantization, sharded loading)?

Running it in an Ubuntu VM on Proxmox, i5-12600K, 32 GB DDR5-7200.
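For my own back-of-the-envelope math I've been using something like the sketch below; the bytes-per-parameter figures and the fixed overhead are rough assumptions (roughly Q4_K_M-style quants plus a few GB for KV cache), not exact numbers:

```python
# Rough VRAM estimator for quantized dense models. The bytes-per-parameter
# figures and the fixed overhead are approximations, not exact numbers.
BYTES_PER_PARAM = {"q4_k_m": 0.57, "q8_0": 1.06, "fp16": 2.0}
OVERHEAD_GB = 4.0   # KV cache at moderate context + runtime buffers, very rough

def fits(params_b: float, quant: str, vram_gb: float) -> str:
    need = params_b * BYTES_PER_PARAM[quant] + OVERHEAD_GB
    verdict = "fits" if need <= vram_gb else "needs offload"
    return f"{params_b:.0f}B @ {quant}: ~{need:.0f} GB -> {verdict} in {vram_gb:.0f} GB"

for setup_gb in (24.0, 32.0):            # 2x 3060 (12 GB) vs 2x 5060 Ti (16 GB)
    print(f"--- total VRAM {setup_gb:.0f} GB ---")
    print(fits(30, "q4_k_m", setup_gb))  # ~21 GB: fits either setup
    print(fits(70, "q4_k_m", setup_gb))  # ~44 GB: too big for both without offload
```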
2025-12-23T21:13:20
https://www.reddit.com/r/LocalLLaMA/comments/1pu5a57/is_30blevel_llms_really_a_waste_should_i_dual5060/
Background_Gene_3128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu5a57
false
null
t3_1pu5a57
/r/LocalLLaMA/comments/1pu5a57/is_30blevel_llms_really_a_waste_should_i_dual5060/
false
false
self
1
null
MCP Mesh – Distributed runtime for AI agents with auto-discovery and LLM failover
6
I've been building **MCP Mesh** for 5 months — a distributed-first runtime for AI agents built on the MCP protocol.

**What makes it different:**

* Agents are microservices, not threads in a monolith
* Auto-discovery via mesh registry (agents find each other by capability tags)
* LLM failover without code changes — just declare tags
* Kubernetes-ready with Helm charts
* Built-in observability (Grafana + Tempo)

**Docs**: [https://dhyansraj.github.io/mcp-mesh/](https://dhyansraj.github.io/mcp-mesh/)

**Youtube** (34 min, zero to production): [https://www.youtube.com/watch?v=GpCB5OARtfM](https://www.youtube.com/watch?v=GpCB5OARtfM)

Would love feedback from anyone building agent systems. What problems are you hitting with current agent frameworks?
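To give a flavor of what "find each other by capability tags" means in general, here is an illustrative-only sketch; this is NOT MCP Mesh's actual API (see the docs above for the real decorators and registry setup), and every name in it is made up for the example:

```python
# Illustrative-only sketch of capability-tag discovery (not MCP Mesh's API).
# The idea: agents register the capabilities they expose, and callers resolve
# a provider by tag instead of hard-coding an address.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    endpoint: str
    capabilities: set[str] = field(default_factory=set)

class Registry:
    def __init__(self) -> None:
        self._agents: list[Agent] = []

    def register(self, agent: Agent) -> None:
        self._agents.append(agent)

    def resolve(self, *tags: str) -> Agent:
        """Return the first registered agent that covers all requested tags."""
        for agent in self._agents:
            if set(tags) <= agent.capabilities:
                return agent
        raise LookupError(f"no agent provides {tags}")

registry = Registry()
registry.register(Agent("summarizer", "http://summarizer:8080", {"summarize", "llm"}))
registry.register(Agent("indexer", "http://indexer:8080", {"embed", "search"}))

print(registry.resolve("summarize").endpoint)   # -> http://summarizer:8080
```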
2025-12-23T21:12:40
https://www.reddit.com/r/LocalLLaMA/comments/1pu59mp/mcp_mesh_distributed_runtime_for_ai_agents_with/
Own-Mix1142
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu59mp
false
null
t3_1pu59mp
/r/LocalLLaMA/comments/1pu59mp/mcp_mesh_distributed_runtime_for_ai_agents_with/
false
false
self
6
null
Why doesn't local AI use ROCm?
0
Hello, I have a problem: I want a local AI, but without using LM Studio. I would like to use it across my entire system, particularly via the terminal. However, when I use it from the terminal, the AI only uses the processor, which is not suitable, even though I have a "decent" graphics card for local AI; it just doesn't run on the card and I don't know how to make it use it. I installed ROCm from the AMD website and installed a model with Ollama, but it only uses the processor. Do you know how to do this? Thank you.
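For what it's worth, one quick way to check whether ROCm is visible at all from Python is something like the sketch below (this assumes a ROCm build of PyTorch is installed; it says nothing about Ollama specifically):

```python
# Quick sanity check that the ROCm stack is visible from Python.
# Assumes a ROCm build of PyTorch (e.g. the pytorch.org ROCm wheels);
# it does not prove that Ollama itself will use the GPU.
import torch

print("PyTorch version:", torch.__version__)
print("HIP/ROCm version:", torch.version.hip)      # None on a CPU/CUDA-only build
print("GPU visible:", torch.cuda.is_available())    # ROCm devices show up via torch.cuda

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.ones(1024, 1024, device="cuda")        # "cuda" maps to the ROCm device
    print("Matmul sample:", (x @ x)[0, 0].item())    # should print 1024.0
else:
    print("No GPU visible -- the runtime is falling back to CPU.")
```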
2025-12-23T20:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1pu4wfp/why_local_ai_dont_use_rocm/
Status_Wear_220
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pu4wfp
false
null
t3_1pu4wfp
/r/LocalLLaMA/comments/1pu4wfp/why_local_ai_dont_use_rocm/
false
false
self
0
null
🤯Weird - India’s top AI talent coming together for the first time to celebrate the new year 🎉
0
If you’re in India 🇮🇳 and don’t want to miss the craziest AI nerds' party of the year, check this out: https://www.linkedin.com/posts/india-top-1\_newyear-ai-talent-activity-7407772844677427200-SYsF
2025-12-23T20:32:31
https://i.redd.it/6lfv0nppk09g1.jpeg
Ambitious-End1261
i.redd.it
1970-01-01T00:00:00
0
{}
1pu4b88
false
null
t3_1pu4b88
/r/LocalLLaMA/comments/1pu4b88/weird_indias_top_ai_talent_coming_together_first/
false
false
default
0
{'enabled': True, 'images': [{'id': '6lfv0nppk09g1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/6lfv0nppk09g1.jpeg?width=108&crop=smart&auto=webp&s=27ee0861708c9aaa5c0825bdd0d6ae75a628c84d', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/6lfv0nppk09g1.jpeg?width=216&crop=smart&auto=webp&s=b04005f2a3e60a5298c028ca242d79c30a10af6d', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/6lfv0nppk09g1.jpeg?width=320&crop=smart&auto=webp&s=ca420370c42f525ac318ca2ee8314c8baa63977b', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/6lfv0nppk09g1.jpeg?width=640&crop=smart&auto=webp&s=00346094e80f7cb35ec7418dd6598ddaf86201b2', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/6lfv0nppk09g1.jpeg?width=960&crop=smart&auto=webp&s=12904f1e8dceda7113082ab6585abd374c7f8538', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/6lfv0nppk09g1.jpeg?auto=webp&s=e9e3920b2550afc7f80f307028d13ed119158a75', 'width': 1024}, 'variants': {}}]}