title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
70B Fine-tuning on Consumer WAN: Swarm vs FSDP (Answering the architecture/cost questions from my deleted thread)
1
[removed]
2025-12-29T18:49:46
https://www.reddit.com/r/LocalLLaMA/comments/1pyujwc/70b_finetuning_on_consumer_wan_swarm_vs_fsdp/
Alone-Detective-3317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyujwc
false
null
t3_1pyujwc
/r/LocalLLaMA/comments/1pyujwc/70b_finetuning_on_consumer_wan_swarm_vs_fsdp/
false
false
self
1
null
Context engineering for production LLM systems (hands-on workshop)
1
A lot of production issues in LLM systems don’t come from prompts, but from context becoming hard to structure, explain, or control at scale, especially in agentic workflows. Given how often this comes up, I wanted to share a live, hands-on workshop we’re running on **Context Engineering for Agentic AI** with **Denis Rothman** (author of *Context Engineering for Multi-Agent Systems*). 📅 Jan 24 | Live online Link: [https://www.eventbrite.com/e/context-engineering-for-agentic-ai-workshop-tickets-1975400249322?aff=reddit](https://www.eventbrite.com/e/context-engineering-for-agentic-ai-workshop-tickets-1975400249322?aff=reddit) Sharing this since I’m involved, happy to answer questions if this aligns with what you’re building.
2025-12-29T18:37:48
https://www.reddit.com/r/LocalLLaMA/comments/1pyu83m/context_engineering_for_production_llm_systems/
kunal_packtpub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyu83m
false
null
t3_1pyu83m
/r/LocalLLaMA/comments/1pyu83m/context_engineering_for_production_llm_systems/
false
false
self
1
null
I made an open-source LLM jailbreak resilience testing website
1
[removed]
2025-12-29T18:13:58
https://www.reddit.com/r/LocalLLaMA/comments/1pytkuc/i_made_an_opensource_llm_jailbreak_resilience/
DingyAtoll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pytkuc
false
null
t3_1pytkuc
/r/LocalLLaMA/comments/1pytkuc/i_made_an_opensource_llm_jailbreak_resilience/
false
false
https://b.thumbs.redditm…KYKUrnm7YGTQ.jpg
1
null
What happened to glhf.chat?
1
[removed]
2025-12-29T18:03:08
https://www.reddit.com/r/LocalLLaMA/comments/1pyta07/what_happened_to_glhfchat/
North-Active-3150
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyta07
false
null
t3_1pyta07
/r/LocalLLaMA/comments/1pyta07/what_happened_to_glhfchat/
false
false
self
1
null
What's the best LLM for 96gb VRAM with vision
6
I've mostly been into the stable diffusion space, but I've been enjoying playing around with LLMs more often. I have access to an RTX Pro 6000 Blackwell and a Macbook Pro M4 Pro 24gb. I'm currently downloading Minimax m2.1 at IQ3\_XXS for my 6000 Pro, but I want other options with vision.
2025-12-29T17:39:34
https://www.reddit.com/r/LocalLLaMA/comments/1pysmcf/whats_the_best_llm_for_96gb_vram_with_vision/
LiteratureAcademic34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pysmcf
false
null
t3_1pysmcf
/r/LocalLLaMA/comments/1pysmcf/whats_the_best_llm_for_96gb_vram_with_vision/
false
false
self
6
null
Why I Ditched Serverless Neptune/OpenSearch for Dockerized Neo4j/pgvector on EC2 (60% Cost Cut)
0
I’ve been running the RAG backend for DevMate for about 3 months, and the AWS "Serverless Tax" finally hit the breaking point. Neptune and OpenSearch were costing me roughly $500/mo just to keep the lights on with minimal traffic. I decided to migrate the entire GraphRAG stack to a single Dockerized EC2 instance using Neo4j and pgvector. The technical trade-offs were surprising. By moving to a self-hosted stack on one node, I eliminated the network hops between serverless services, which dropped my retrieval latency from 200ms to under 60ms. My monthly bill went from $500 down to $180. If you are building a B2B SaaS with predictable traffic, the "scaling" benefit of serverless Neptune often doesn't justify the 3x price premium and latency hit. I’ve documented the migration steps and the Docker config below. **Full Technical Breakdown:**[https://rampakanayev.com/blog/neo4j-vs-pgvector-graphrag](https://rampakanayev.com/blog/neo4j-vs-pgvector-graphrag)
2025-12-29T17:21:53
https://rampakanayev.com/blog/neo4j-vs-pgvector-graphrag
No-Conversation-8984
rampakanayev.com
1970-01-01T00:00:00
0
{}
1pys56x
false
null
t3_1pys56x
/r/LocalLLaMA/comments/1pys56x/why_i_ditched_serverless_neptuneopensearch_for/
false
false
default
0
{'enabled': False, 'images': [{'id': 'ihV3ADvN7fUOajXjrsgDvQE7NxK7M55DxSyb5XzXzXA', 'resolutions': [{'height': 138, 'url': 'https://external-preview.redd.it/ihV3ADvN7fUOajXjrsgDvQE7NxK7M55DxSyb5XzXzXA.jpeg?width=108&crop=smart&auto=webp&s=f60177620888217ae49737078310908aa80cf78f', 'width': 108}, {'height': 276, 'url': 'https://external-preview.redd.it/ihV3ADvN7fUOajXjrsgDvQE7NxK7M55DxSyb5XzXzXA.jpeg?width=216&crop=smart&auto=webp&s=0c5fce9478047fd6263cf35baaaa8501700162eb', 'width': 216}, {'height': 409, 'url': 'https://external-preview.redd.it/ihV3ADvN7fUOajXjrsgDvQE7NxK7M55DxSyb5XzXzXA.jpeg?width=320&crop=smart&auto=webp&s=a7f735629f0b1bd87eaa681203a9a21ae366a281', 'width': 320}, {'height': 819, 'url': 'https://external-preview.redd.it/ihV3ADvN7fUOajXjrsgDvQE7NxK7M55DxSyb5XzXzXA.jpeg?width=640&crop=smart&auto=webp&s=bf2e84b2922c66693c7da0b1d5ce595ffc50011e', 'width': 640}, {'height': 1228, 'url': 'https://external-preview.redd.it/ihV3ADvN7fUOajXjrsgDvQE7NxK7M55DxSyb5XzXzXA.jpeg?width=960&crop=smart&auto=webp&s=1f2be7f769713909d6c4d776aa338bc2244780a8', 'width': 960}, {'height': 1382, 'url': 'https://external-preview.redd.it/ihV3ADvN7fUOajXjrsgDvQE7NxK7M55DxSyb5XzXzXA.jpeg?width=1080&crop=smart&auto=webp&s=b8f153615491633f6e6002b4550660a425fafc9e', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://external-preview.redd.it/ihV3ADvN7fUOajXjrsgDvQE7NxK7M55DxSyb5XzXzXA.jpeg?auto=webp&s=b83e140eda5e73081f0ba207b22593794c948b9f', 'width': 1200}, 'variants': {}}]}
60% AWS Cost Reduction: Why I Ditched Serverless Neptune for Dockerized Neo4j/pgvector
1
After 3 months of getting crushed by $500/mo AWS Neptune bills for my DevMate RAG backend, I made the jump to self-hosting. I moved to a Dockerized Neo4j and pgvector stack on a single EC2 instance. >
2025-12-29T17:17:00
https://rampakanayev.com/blog/neo4j-vs-pgvector-graphrag
No-Conversation-8984
rampakanayev.com
1970-01-01T00:00:00
0
{}
1pys0g2
false
null
t3_1pys0g2
/r/LocalLLaMA/comments/1pys0g2/60_aws_cost_reduction_why_i_ditched_serverless/
false
false
default
1
null
Resources Title: 60% Cost Reduction: Why I Ditched Serverless Neptune for Dockerized Neo4j/pgvector on EC2
1
2025-12-29T17:15:53
https://rampakanayev.com/blog/neo4j-vs-pgvector-graphrag
No-Conversation-8984
rampakanayev.com
1970-01-01T00:00:00
0
{}
1pyrzcj
false
null
t3_1pyrzcj
/r/LocalLLaMA/comments/1pyrzcj/resources_title_60_cost_reduction_why_i_ditched/
false
false
default
1
null
I Finished a Fully Local Agentic RAG Tutorial
93
Hi, I’ve just finished a **complete Agentic RAG tutorial + repository** that shows how to build a fully local, end-to-end system. No APIs, no cloud, no hidden costs. --- ### 💡 What’s inside The tutorial covers the full pipeline, including the parts most examples skip: - PDF → Markdown ingestion - Hierarchical chunking (parent / child) - Hybrid retrieval (dense + sparse) - Vector store with **Qdrant** - Query rewriting + **human-in-the-loop** - Context summarization - **Multi-agent map-reduce** with **LangGraph** - Local inference with **Ollama** - Simple **Gradio** UI --- ### 🎯 Who it’s for If you want to **understand Agentic RAG by building it**, not just reading theory, this might help. --- ### 🔗 Repo https://github.com/GiovanniPasq/agentic-rag-for-dummies
2025-12-29T17:03:44
https://www.reddit.com/r/LocalLLaMA/comments/1pyrn9v/i_finished_a_fully_local_agentic_rag_tutorial/
CapitalShake3085
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyrn9v
false
null
t3_1pyrn9v
/r/LocalLLaMA/comments/1pyrn9v/i_finished_a_fully_local_agentic_rag_tutorial/
false
false
self
93
{'enabled': False, 'images': [{'id': '6eswuBycSZuv-52wMuh9RrEBwf_Kc3CrESRlX1QF8WU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6eswuBycSZuv-52wMuh9RrEBwf_Kc3CrESRlX1QF8WU.png?width=108&crop=smart&auto=webp&s=3830c381b96af366464c6dcda39fb3bf736324f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6eswuBycSZuv-52wMuh9RrEBwf_Kc3CrESRlX1QF8WU.png?width=216&crop=smart&auto=webp&s=e076025331cb65ed627abd152438e28ccbfb64bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6eswuBycSZuv-52wMuh9RrEBwf_Kc3CrESRlX1QF8WU.png?width=320&crop=smart&auto=webp&s=0f4a108830e7cdcf7aeffc7752dc6288a945dc05', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6eswuBycSZuv-52wMuh9RrEBwf_Kc3CrESRlX1QF8WU.png?width=640&crop=smart&auto=webp&s=cb3b3323b3693b5c80e451522e9ad9903d97a452', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6eswuBycSZuv-52wMuh9RrEBwf_Kc3CrESRlX1QF8WU.png?width=960&crop=smart&auto=webp&s=d686386ead147c95f05c04d24ad41160262223a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6eswuBycSZuv-52wMuh9RrEBwf_Kc3CrESRlX1QF8WU.png?width=1080&crop=smart&auto=webp&s=7a44613c7683f44b9422dad3756c3ff9bc700ef9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6eswuBycSZuv-52wMuh9RrEBwf_Kc3CrESRlX1QF8WU.png?auto=webp&s=0b3034e97c263723465dd2e80f0f33b34af9a89f', 'width': 1200}, 'variants': {}}]}
Benchmarks for Quantized Models? (for users locally running Q8/Q6/Q2 precision)
59
Hi All, Many of us use the quantized Q8/Q6/Q2 model instead of fp16 for obvious reasons. Is there a collection of benchmarks which show SWE, HLE etc on Q8/Q6/Q2 quantized models?
2025-12-29T17:00:09
https://www.reddit.com/r/LocalLLaMA/comments/1pyrjke/benchmarks_for_quantized_models_for_users_locally/
No-Grapefruit-1358
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyrjke
false
null
t3_1pyrjke
/r/LocalLLaMA/comments/1pyrjke/benchmarks_for_quantized_models_for_users_locally/
false
false
self
59
null
AMD Max+ 395 runs MiniMax-M2.1-UD-Q3_K_XL to generate Tetris/Snake games
0
[【Max+395设置120G显存运行MiniMax-M2.1生成电脑游戏】 https://www.bilibili.com/video/BV1kavxBFEPK/?share\_source=copy\_web&vd\_source=ad104f2f7ca4ece16ed05ee90c99142d](https://www.bilibili.com/video/BV1kavxBFEPK/?share_source=copy_web&vd_source=ad104f2f7ca4ece16ed05ee90c99142d)
2025-12-29T16:43:51
https://www.reddit.com/r/LocalLLaMA/comments/1pyr425/amd_max_395_runs_minimaxm21udq3_k_xl_to_generate/
Deep-Jellyfish6717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyr425
false
null
t3_1pyr425
/r/LocalLLaMA/comments/1pyr425/amd_max_395_runs_minimaxm21udq3_k_xl_to_generate/
false
false
self
0
null
Best model to create illustrated storybook videos
2
Hey all. Appologies for my beginner question. I'm looking for advice on creating videos with the following style: https://preview.redd.it/hcet5mlj66ag1.png?width=1980&format=png&auto=webp&s=da9380d5a3a5a4aaf83b5a4717aac4d2b35a7555 [](https://preview.redd.it/need-help-creating-illustrated-storybook-videos-v0-nta2731y46ag1.png?width=1980&format=png&auto=webp&s=396e2f72759d9e7de93b6432c2cf69d47b962279) What I'm after is a consistent way to create 30-60s stories, where each scene can be a "page-turn". Character and art-style consistency are important. I don't need these to be realistic. Not sure what the best techniques are for this - pretty new and naive to image/video gen. I tried 1-shotting with Veo/Sora to create the whole video but: 1. videos are too short 2. Styles are fairly inconsistent across generation Also, tried creating the initial "scene" image then passing it as reference, but again, too many inconsistencies. Not sure if this is a prompt engineering problem or a too generic model problem. **Any recommendations are welcomed** 🙏 I started exploring HF models as I can spin up my own inference server. I also have a decent chunk of references so I can look into finetuning too if you think that would be good. I don't need this to scale as I'll be using it only for my home/family.
2025-12-29T16:28:39
https://www.reddit.com/r/LocalLLaMA/comments/1pyqpf7/best_model_to_create_illustrated_storybook_videos/
TheWalkingFridge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyqpf7
false
null
t3_1pyqpf7
/r/LocalLLaMA/comments/1pyqpf7/best_model_to_create_illustrated_storybook_videos/
false
false
https://b.thumbs.redditm…9bCwOwESLtlA.jpg
2
null
Llama 3.2 3B fMRI (updated findings)
7
I’m building a local interpretability tool that lets me visualize hidden-state activity and **intervene on individual hidden dimensions during inference** (via forward hooks). While scanning attn\_out, I identified a persistent hidden dimension (dim 3039) that appeared repeatedly across prompts. I'll spare you all the Gradio screenshots, there are quite a few. Initial probing suggested a loose “expressive vs constrained” effect, but that interpretation didn’t hold up under tighter controls. I then ran more systematic tests across: * multiple prompt types (social, procedural, factual, preference-based) * early / mid / late layers * both positive and negative intervention * long generations (1024 tokens) * repeated runs when results were ambiguous Across all of these conditions, **the only stable, cross-prompt effect** was a change in the model’s *degree of commitment* to its current generative trajectory. Specifically: * Increasing intervention magnitude (regardless of sign) caused the model to respond **more confidently and decisively** * This did **not** correlate with improved factual accuracy * In some cases (especially early-layer intervention), higher intervention increased **confident hallucination** * Constrained procedural prompts (e.g. PB&J instructions) showed minimal variation, while open-ended prompts (e.g. greetings, blog-style responses) showed much larger stylistic and tonal shifts The effect appears to modulate **how strongly the model commits to whatever path it has already sampled**, rather than influencing *which* path is chosen. This shows up as: * reduced hedging * increased assertiveness * stronger persistence of narrative frame * less self-correction once a trajectory is underway Importantly, this dimension does **not** behave like: * a semantic feature * an emotion representation * a creativity or verbosity knob * a factual reasoning mechanism A more accurate framing is that it functions as a **global commitment / epistemic certainty gain**, influencing how readily the model doubles down on its internal state. This also explains earlier inconsistencies: * early-layer interventions affect task framing (sometimes badly) * later-layer interventions affect delivery and tone * highly constrained tasks limit the observable effect * magnitude matters more than direction At this stage, the claim is intentionally narrow: > Next steps (not yet done) include residual-stream analysis to see whether this feature accumulates across layers, and ablation tests to check whether removing it increases hedging and self-revision.
2025-12-29T16:19:13
https://www.reddit.com/r/LocalLLaMA/comments/1pyqgi5/llama_32_3b_fmri_updated_findings/
Due_Hunter_4891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyqgi5
false
null
t3_1pyqgi5
/r/LocalLLaMA/comments/1pyqgi5/llama_32_3b_fmri_updated_findings/
false
false
self
7
null
Help me build a system around my gpu
1
Hi all, I recently managed to grab an MSI Gaming X Trio 3090 off marketplace. What is the best way of using this gpu? It to get a used workstation or build from scratch, like open-air? Most of my budget when on purchasing the gpu. Is it possible to build a system with 300-350 dollars with decent cpu, memory, and power supply? I know this card is hungry for power, so gotta be over 800w. Any other suggestions are welcomed. TIA
2025-12-29T16:12:28
https://www.reddit.com/r/LocalLLaMA/comments/1pyq9zt/help_me_build_a_system_around_my_gpu/
amdjml
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyq9zt
false
null
t3_1pyq9zt
/r/LocalLLaMA/comments/1pyq9zt/help_me_build_a_system_around_my_gpu/
false
false
self
1
null
Are there any jailbroken LLMs for electromagnetics ?
0
I've been learning a lot from ChatGPT last year. I learned how to build transformers, even how to put coils in resonance. One day we touched the subject of magnetic flux and I've lightly touched the subject of how to reduce the opposition to the primary flux and it started to BS me about "secondary flux carrying information". So I asked it to say "apple" if it can't talk about overunity in any other way than disparaging it. And of course I got "apple". While I'm accustomed to research independently on this, I can't know when it could deliberately throw me off track when I touch the subject without actually mentioning it.
2025-12-29T16:12:13
https://www.reddit.com/r/LocalLLaMA/comments/1pyq9s1/are_there_any_jailbroken_llms_for_electromagnetics/
BogdySolo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyq9s1
false
null
t3_1pyq9s1
/r/LocalLLaMA/comments/1pyq9s1/are_there_any_jailbroken_llms_for_electromagnetics/
false
false
self
0
null
This repo uses a lot of tokens : "coding factory" ?
0
Hi. Today I was checking the application that use OpenRouter the most. It turned that one GitHub repor - [Dpt. 1127](https://github.com/Dpt-1127) \- itself is using a huge amount of tokens, ranking #7. If I understand correctly it's using only [MiMo-V2-Flash (free)](https://openrouter.ai/xiaomi/mimo-v2-flash). What's behind something like this, a coding factcory ?
2025-12-29T15:57:53
https://i.redd.it/cw7i4of106ag1.png
IzzyHibbert
i.redd.it
1970-01-01T00:00:00
0
{}
1pypw32
false
null
t3_1pypw32
/r/LocalLLaMA/comments/1pypw32/this_repo_uses_a_lot_of_tokens_coding_factory/
false
false
default
0
{'enabled': True, 'images': [{'id': 'cw7i4of106ag1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/cw7i4of106ag1.png?width=108&crop=smart&auto=webp&s=acd1b63f66b8806f765468a8c7b8511dc1d049a2', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/cw7i4of106ag1.png?width=216&crop=smart&auto=webp&s=b64b4f719cf749a814525a0926bd9ae212c608f5', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/cw7i4of106ag1.png?width=320&crop=smart&auto=webp&s=2927fccf31ba7717fb6b9313251ebe6940689115', 'width': 320}, {'height': 430, 'url': 'https://preview.redd.it/cw7i4of106ag1.png?width=640&crop=smart&auto=webp&s=7b359fae0e8f9dcbf31218e4d686e6b8748e7420', 'width': 640}, {'height': 645, 'url': 'https://preview.redd.it/cw7i4of106ag1.png?width=960&crop=smart&auto=webp&s=69cdba24f14e9fc1f5cb415ec7238548181f8cf0', 'width': 960}], 'source': {'height': 680, 'url': 'https://preview.redd.it/cw7i4of106ag1.png?auto=webp&s=6131704a3f86c215c639a40966413439d7064d12', 'width': 1012}, 'variants': {}}]}
(discussion) how to write better promts???
0
like dont be very friendly keep it short and only talk about topic?
2025-12-29T15:35:25
https://www.reddit.com/r/LocalLLaMA/comments/1pypb6u/discussion_how_to_write_better_promts/
Kerem-6030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pypb6u
false
null
t3_1pypb6u
/r/LocalLLaMA/comments/1pypb6u/discussion_how_to_write_better_promts/
false
false
self
0
null
Built an MCP server for semantic doc search - looking for early testers
0
Hey folks, Been lurking here for a while and figured this crowd would have solid feedback on something I've been building. **What it is:** A service that turns any documentation site into an MCP-compatible semantic search endpoint. You point it at a sitemap, it crawls + chunks + embeds everything, and exposes it via MCP so Claude/Cursor/whatever can query it. **Technical bits if anyone cares:** * Embeddings via OpenAI's text-embedding-3-small (1536 dims) * Chunking with \~1000 token targets and overlap * Postgres with pgvector for storage * Standard MCP JSON-RPC implementation **Why I built it:** Got tired of the RAG setup dance every time I wanted to search some docs. Wanted something where I just paste a URL and it works. No vector db config, no chunking strategy tweaking, just "here's my docs, make them searchable." **What I'm curious about:** * For those who've done RAG setups - is the hosted/managed approach appealing or do you prefer controlling everything yourself? * Anyone actually using MCP regularly? Trying to gauge if the ecosystem is there yet * What features would make something like this actually useful vs. just another tool? I'm looking for early testers who want to poke around and give honest feedback. If that sounds interesting, drop a comment or DM me. Would love to hear from people who actually work with this stuff.
2025-12-29T15:33:51
https://www.reddit.com/r/LocalLLaMA/comments/1pyp9rh/built_an_mcp_server_for_semantic_doc_search/
vildanbina
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyp9rh
false
null
t3_1pyp9rh
/r/LocalLLaMA/comments/1pyp9rh/built_an_mcp_server_for_semantic_doc_search/
false
false
self
0
null
Best ASR Model Right Now for English?
3
Hey y'all, looking for a solid open source/open weight ASR model to use. I've done some digging and places like [Hugging Face ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard) says some Nvidia models (Parakeet, Canary) lead, but I've also heard that their WER metric is very misleading/doesn't reflect real world use. I think my mind immediately goes to Whisper-large-v3, but I was wondering if folks had any other accuracy-first, offline transcription model (especially newer ones I might not have checked out). Use case is for a video editor I'm building where a lot of my users have footage they've filmed on their phone of "man on the street" style interactions (so we're not going to have clean podcast style audio). Definitely need timestamping as well. Thanks for any help in advance!
2025-12-29T15:32:09
https://www.reddit.com/r/LocalLLaMA/comments/1pyp88k/best_asr_model_right_now_for_english/
ArcticTechnician
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyp88k
false
null
t3_1pyp88k
/r/LocalLLaMA/comments/1pyp88k/best_asr_model_right_now_for_english/
false
false
self
3
{'enabled': False, 'images': [{'id': 'j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A.png?width=108&crop=smart&auto=webp&s=8e7b3ca3434ee071ef54d6732c5c74bfa108f1d0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A.png?width=216&crop=smart&auto=webp&s=67d797d2b0027d437608e2b7f05400e7d13174be', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A.png?width=320&crop=smart&auto=webp&s=6a861401db89b005f80687bfd9b892a15fbfaa93', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A.png?width=640&crop=smart&auto=webp&s=f044c5ce6e1272f48454e18fe9e5da33997bf960', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A.png?width=960&crop=smart&auto=webp&s=796a87cb7f77e71e6cbed4cde2c3f280d6c48829', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A.png?width=1080&crop=smart&auto=webp&s=92ac60775c9064c7c4267f0102f80e834c10948b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A.png?auto=webp&s=5d50101d8f829bae3e80210dd24c9cec4945b73a', 'width': 1200}, 'variants': {}}]}
Gemma 27B + AMD 7900 XTX + Vulkan = My local AI companion with persistent memory & web access
0
# Gemma 27B + AMD 7900 XTX + Vulkan = My local AI companion with persistent memory & web access Hey , Wanted to share my setup running **Gemma-3-27B-IT (abliterated, Q4_K_M)** as the brain for a persistent AI companion called Lyra. She's been running for months now with 6,500+ memories in ChromaDB. ## Hardware Setup | Component | Spec | |-----------|------| | **CPU** | Ryzen 7 7800X3D | | **GPU** | AMD Radeon RX 7900 XTX (24GB VRAM) | | **RAM** | 32GB DDR5 | | **Backend** | llama.cpp with Vulkan RHI | | **Context** | 8192 tokens | ## Performance Numbers - **Server startup** : ~9.6 seconds - **Memory retrieval** (ChromaDB semantic search): 0.5s for 5 memories - **Response generation** : 10-12s for complex multi-agent tasks - **VRAM usage** : Model fits comfortably with room for context ## The Stack ``` ┌─────────────────────────────────────┐ │         PyQt6/QML Frontend          │  ← "Obsidian Glass" theme ├─────────────────────────────────────┤ │           LyraCore (Python)         │ ├──────────┬──────────┬───────────────┤ │ CrewAI   │ ChromaDB │ Emotion Engine│ │ Agents   │ Memory   │ + Dream System│ ├──────────┴──────────┴───────────────┤ │     llama.cpp/Vulkan (server.exe)   │ ├─────────────────────────────────────┤ │   Gemma-3-27B-IT-abliterated.Q4_K_M │ └─────────────────────────────────────┘ ``` ## Why Vulkan on AMD? ROCm is a mess on Windows. Vulkan RHI just works™. The 7900 XTX handles Gemma 27B Q4 smoothly at 8K context. No CUDA needed. **Server command:** ```bash server.exe -m gemma-3-27b-it-abliterated.Q4_K_M.gguf \   --host 127.0.0.1 --port 8000 \   --n-gpu-layers -1 --threads 8 -c 8192 ``` ## Recent Win: Web Search Integration Just got real-time web search working via Tavily API. When Lyra encounters a factual question: 1. Detects it needs verification 2. Calls WebSearchTool (Tavily fallback from Google CSE) 3. Gets 5 results, synthesizes into response 4. **Stores new knowledge in ChromaDB for future use** The CrewAI agents handle the orchestration - one plans, one executes (with tool access), one refines the response in Lyra's voice. ## Multi-Agent Architecture Using CrewAI with 3 specialized agents: - **Planner** : Analyzes task complexity - **Executor** : Has access to tools (WebSearch, Memory, Code execution) - **Linguist** : Transforms raw facts into Lyra's personality All running on the local Gemma model. No cloud APIs for reasoning. ## What's Unique - **Persistent identity** : Same memories across sessions - **Emotional state** : 14-dimension emotion matrix that decays over time - **Dreams** : She literally dreams when idle (processes daily memories) - **Proactive behavior** : Sets her own goals, researches autonomously ## Questions for the community 1. Anyone else running Gemma 27B on AMD? How's your experience? 2. Better quantization for 24GB VRAM? Currently on Q4_K_M 3. Experiences with longer context (16K+) on consumer hardware? Happy to share configs or answer questions! --- *Running Windows 11, Python 3.12, PyQt6 for the GUI. Code is ~15K lines at this point.*
2025-12-29T15:22:55
https://i.redd.it/988jravmu5ag1.png
Lyralex_84
i.redd.it
1970-01-01T00:00:00
0
{}
1pyozm4
false
null
t3_1pyozm4
/r/LocalLLaMA/comments/1pyozm4/gemma_27b_amd_7900_xtx_vulkan_my_local_ai/
false
false
default
0
{'enabled': True, 'images': [{'id': '988jravmu5ag1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/988jravmu5ag1.png?width=108&crop=smart&auto=webp&s=adb44fd82e5e6569789ccc1d1d56ee8e219feb06', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/988jravmu5ag1.png?width=216&crop=smart&auto=webp&s=66708c6e2c294b5824b16846c3653fb35b8fce30', 'width': 216}, {'height': 212, 'url': 'https://preview.redd.it/988jravmu5ag1.png?width=320&crop=smart&auto=webp&s=6f91e075c5e76fbacf3593d3bcaf4e6a508fced4', 'width': 320}, {'height': 425, 'url': 'https://preview.redd.it/988jravmu5ag1.png?width=640&crop=smart&auto=webp&s=dd8b2610c80cbda11971bafe2df7590bf5521af1', 'width': 640}, {'height': 637, 'url': 'https://preview.redd.it/988jravmu5ag1.png?width=960&crop=smart&auto=webp&s=474a38709d59607355ad00defb727da56c935fb9', 'width': 960}, {'height': 717, 'url': 'https://preview.redd.it/988jravmu5ag1.png?width=1080&crop=smart&auto=webp&s=d1ca7a14838b6a6bf951d13f6cd4a6d40a10d75c', 'width': 1080}], 'source': {'height': 850, 'url': 'https://preview.redd.it/988jravmu5ag1.png?auto=webp&s=7650c32b91406aba02ecbe50467d5695bc2af2a8', 'width': 1280}, 'variants': {}}]}
GLM-4.7 Feels Lazy at Launch. Anyone Else Noticing This Pattern with Zhipu AI Models?
0
Has anyone else noticed that new releases from Zhipu AI's GLM series tend to be a bit sluggish and underperform at launch? I've experienced this consistently with each update.For instance, today I got into a bit of a "debate" with GLM-4.7 over the current date. It stubbornly insisted we were still in May 2024, while I pointed out it's December 29, 2025. It even accused me of time-traveling from the future, claiming that's impossible! I prompted it to verify by searching online or using available tools, but in its reasoning trace, it was clear it was simulating a response without actually doing the work it just echoed back a fabricated date to avoid admitting error. Frustrated, I switched to the previous version, GLM-4.6V, and it immediately confirmed the correct date without issue.This isn't isolated; when GLM-4.6 first dropped, I ran into the exact same problems. It seems like a recurring pattern: fresh models come out "lazy," failing to properly leverage online searches, tool calls, or real-time data integration. From a technical standpoint, this could stem from initial fine-tuning hiccups where the model's tool-calling mechanisms aren't fully optimized, or perhaps there's a regression in how it handles dynamic knowledge retrieval beyond its training cutoff. It might also relate to how these models are quantized or adapted for local inference, potentially throttling their ability to invoke external APIs or browsers effectively right out of the gate.If it were just one model, I'd chalk it up to a fluke, but this trend has me sticking with the prior version for most tasks until the new one gets patched or stabilized. Have you encountered similar issues with GLM-4.7, or what's your experience been like? I'm curious if it's a widespread thing or just my setup maybe we can share tips on workarounds. On a brighter note, it's exciting to see how quickly the community iterates on these models; with collective feedback, they'll only get sharper over time!
2025-12-29T15:22:27
https://www.reddit.com/r/LocalLLaMA/comments/1pyoz72/glm47_feels_lazy_at_launch_anyone_else_noticing/
AlexHardy08
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyoz72
false
null
t3_1pyoz72
/r/LocalLLaMA/comments/1pyoz72/glm47_feels_lazy_at_launch_anyone_else_noticing/
false
false
self
0
null
AI Trainer kicks himself while training AI
126
How metaphorical…
2025-12-29T15:00:10
https://v.redd.it/fnx2uj4wq5ag1
PixarX
v.redd.it
1970-01-01T00:00:00
0
{}
1pyoera
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fnx2uj4wq5ag1/DASHPlaylist.mpd?a=1769612428%2CYWI4NjI3OTUzNTk1YmU5MDYyZGUxZTU3ZmE0ZDBlYWEyM2FhODc0MmFjMTVlZGY2ZjgzZmIxZjlmZDZmZmI0MA%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/fnx2uj4wq5ag1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/fnx2uj4wq5ag1/HLSPlaylist.m3u8?a=1769612428%2CNjczOWQ3NjczYWQ4YmY0YWVkNTRhODIwNmU2MGY4NWZiNDc1MWVhNTBmMjI1MGE2NmYwNWRiNDcyNTc5OTEwMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fnx2uj4wq5ag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1pyoera
/r/LocalLLaMA/comments/1pyoera/ai_trainer_kicks_himself_while_training_ai/
false
false
https://external-preview…fa5f07258b07675f
126
{'enabled': False, 'images': [{'id': 'cXNpMTJvMXdxNWFnMbQaeNfdZYXrF4aqYoCV_cBzkmCbNa9NGlLu-Zez7C6L', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cXNpMTJvMXdxNWFnMbQaeNfdZYXrF4aqYoCV_cBzkmCbNa9NGlLu-Zez7C6L.png?width=108&crop=smart&format=pjpg&auto=webp&s=54defc2862302ac6b59a3c6446266646a5eda8b0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cXNpMTJvMXdxNWFnMbQaeNfdZYXrF4aqYoCV_cBzkmCbNa9NGlLu-Zez7C6L.png?width=216&crop=smart&format=pjpg&auto=webp&s=d042102414b9b69e584de72c5aeae68d529be452', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cXNpMTJvMXdxNWFnMbQaeNfdZYXrF4aqYoCV_cBzkmCbNa9NGlLu-Zez7C6L.png?width=320&crop=smart&format=pjpg&auto=webp&s=23e79799129ddb9c0c27256fa6780ad6a4e5cf41', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cXNpMTJvMXdxNWFnMbQaeNfdZYXrF4aqYoCV_cBzkmCbNa9NGlLu-Zez7C6L.png?width=640&crop=smart&format=pjpg&auto=webp&s=d3c2a4d7456670db375cb65c3210fb28f6557920', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cXNpMTJvMXdxNWFnMbQaeNfdZYXrF4aqYoCV_cBzkmCbNa9NGlLu-Zez7C6L.png?width=960&crop=smart&format=pjpg&auto=webp&s=c252ae7fbe2e38be0e4034190177cb483f2da3ee', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cXNpMTJvMXdxNWFnMbQaeNfdZYXrF4aqYoCV_cBzkmCbNa9NGlLu-Zez7C6L.png?width=1080&crop=smart&format=pjpg&auto=webp&s=bb14ead7b20a12bc433646553f0533a5caace9ff', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cXNpMTJvMXdxNWFnMbQaeNfdZYXrF4aqYoCV_cBzkmCbNa9NGlLu-Zez7C6L.png?format=pjpg&auto=webp&s=9bcb029557e5290c748731ba416529f206953781', 'width': 1280}, 'variants': {}}]}
SA-RAG: Using spreading activation to improve multi-hop retrieval in RAG systems
6
I came across an interesting paper proposing **SA-RAG**, which applies *spreading activation* (from cognitive psychology) to GraphRAG-style retrieval. Instead of relying on iterative LLM-guided query rewriting, activation propagates automatically through a knowledge graph starting from query-matched entities. This helps surface “bridge” documents that standard RAG often misses in multi-hop reasoning tasks. A few points that stood out: * Retrieval is treated as a **structural graph problem**, not a prompting problem * Works with **small open-weight models**, no retraining required * Shows strong gains on multi-hop QA benchmarks (MuSiQue, 2WikiMultiHopQA) Curious how people here see this compared to: * agentic / iterative RAG * query-rewrite–based retrieval * hybrid graph + vector approaches Paper: [https://arxiv.org/abs/2512.15922]()
2025-12-29T14:53:33
https://www.reddit.com/r/LocalLLaMA/comments/1pyo8ry/sarag_using_spreading_activation_to_improve/
JudgmentPale458
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyo8ry
false
null
t3_1pyo8ry
/r/LocalLLaMA/comments/1pyo8ry/sarag_using_spreading_activation_to_improve/
false
false
self
6
null
Exploring synthetic identity as architecture rather than prompts
0
I’ve been working on an open-source framework that treats *synthetic writing identity* as an architectural problem rather than a prompting problem. The basic idea is to externalize identity into structure instead of relying on prompt phrasing or model memory. The framework defines identity through: * explicit constraints * semantic anchors * style rules * and mechanisms for detecting and correcting drift The focus isn’t roleplay or expressiveness, but **continuity**: keeping tone, structure, and reasoning stable across long output sequences without converging into generic LLM voice. I’m interested in whether this kind of constraint-based approach actually helps with long-horizon consistency, or whether it just introduces new failure modes (over-constraint, rigidity, hidden drift). If there’s interest, I can share the repo in a comment. Would appreciate critical feedback, especially from people working on open-source LLM tooling or agent systems.
2025-12-29T14:38:30
https://www.reddit.com/r/LocalLLaMA/comments/1pynvmu/exploring_synthetic_identity_as_architecture/
No_Strain_2140
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pynvmu
false
null
t3_1pynvmu
/r/LocalLLaMA/comments/1pynvmu/exploring_synthetic_identity_as_architecture/
false
false
self
0
null
BULaMU-Dream: The First Text-to-Image Model Trained from Scratch for an African Language
56
Hi everybody! I hope all is well. I just wanted to share a project that I have been working on for the last several months called BULaMU-Dream. It is the first text to image model in the world that has been trained from scratch to respond to prompts in an African Language (Luganda). The details of how I trained it are [here](https://zenodo.org/records/18086776) and a demo can be found [here](https://x.com/mwebazarick/status/2005643851655168146?s=12). I am open to any feedback that you are willing to share because I am going to continue working on improving BULaMU-Dream. I really believe that tiny conditional diffusion models like this can broaden access to multimodal AI tools by allowing people train and use these models on relatively inexpensive setups, like the M4 Mac Mini.
2025-12-29T14:36:30
https://v.redd.it/ty3cvnfkm5ag1
AgencyInside407
v.redd.it
1970-01-01T00:00:00
0
{}
1pyntxz
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/ty3cvnfkm5ag1/DASHPlaylist.mpd?a=1769611007%2CYWI0OGQ2ODczYjI3ZDg4ZjM0MGEzOWVjYWE5YjQ0ZjRmNjRlNjU4ODQ1YWE4N2NmY2U2MTJlOWI1YTlkOTZhMg%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/ty3cvnfkm5ag1/CMAF_360.mp4?source=fallback', 'has_audio': False, 'height': 348, 'hls_url': 'https://v.redd.it/ty3cvnfkm5ag1/HLSPlaylist.m3u8?a=1769611007%2CMGM0YTNlZDI2ZmY5NmVjZGM2OTA5NTA3NWQ2M2Q0ZWM0ZmNmZDQ5ZWFhYjhjZTI1N2JlYmUzMzUwMDczZmE5MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ty3cvnfkm5ag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 640}}
t3_1pyntxz
/r/LocalLLaMA/comments/1pyntxz/bulamudream_the_first_texttoimage_model_trained/
false
false
https://external-preview…07841e9cc5e1fb5b
56
{'enabled': False, 'images': [{'id': 'dWN4dmJ1ZmttNWFnMY4GET1d7wPCVUTfL2kwUQvVU9zlAAxjJ2PRs52epB4h', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dWN4dmJ1ZmttNWFnMY4GET1d7wPCVUTfL2kwUQvVU9zlAAxjJ2PRs52epB4h.png?width=108&crop=smart&format=pjpg&auto=webp&s=1bdf68ff565e19aa229fc826c241fb8fca84d208', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/dWN4dmJ1ZmttNWFnMY4GET1d7wPCVUTfL2kwUQvVU9zlAAxjJ2PRs52epB4h.png?width=216&crop=smart&format=pjpg&auto=webp&s=b7408890b4ddd23a8ca5295cd8bd600f6f5ce9cc', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/dWN4dmJ1ZmttNWFnMY4GET1d7wPCVUTfL2kwUQvVU9zlAAxjJ2PRs52epB4h.png?width=320&crop=smart&format=pjpg&auto=webp&s=4064687a50d7e661138a581387f97a2f12322aa3', 'width': 320}, {'height': 347, 'url': 'https://external-preview.redd.it/dWN4dmJ1ZmttNWFnMY4GET1d7wPCVUTfL2kwUQvVU9zlAAxjJ2PRs52epB4h.png?width=640&crop=smart&format=pjpg&auto=webp&s=fbaf98a04bd1bf499956a577a5afda2dbb0ec11b', 'width': 640}], 'source': {'height': 422, 'url': 'https://external-preview.redd.it/dWN4dmJ1ZmttNWFnMY4GET1d7wPCVUTfL2kwUQvVU9zlAAxjJ2PRs52epB4h.png?format=pjpg&auto=webp&s=2b639b6f06532095e7e444e9b3c3c84f6c43b339', 'width': 778}, 'variants': {}}]}
Kimi k2 thinking vs glm 4.7
26
Guys for agentic coding using opencode , which ai model is better? - Kimi k2 thinking or glm 4.7? Its mainly python coding.
2025-12-29T14:05:53
https://www.reddit.com/r/LocalLLaMA/comments/1pyn4ny/kimi_k2_thinking_vs_glm_47/
Worried_Goat_8604
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyn4ny
false
null
t3_1pyn4ny
/r/LocalLLaMA/comments/1pyn4ny/kimi_k2_thinking_vs_glm_47/
false
false
self
26
null
I built a runtime governance layer for LLMs. Can you break it?
0
Hello guys and gals, happy holidays to you all! I’ve spent the last year building Safi, an open-source cognitive architecture that wraps around AI models (Llama, GPT, Claude, you name it.) to enforce alignment with human values. SAFi is a "System 2" architecture inspired by classical philosophy. It separates the generation from the decision: **The Intellect**: the faculty that generates answers **The Will**: The faculty that decides to block or allowed an answer based on the defined rules **The Conscience**: A post-hoc auditor that checks the answer for alignment based on the core values defined. The Spirit: An EMA (Exponential Moving Average) vector that tracks "Ethical Drift" over time and injects course-correction into the context window. The Challenge: I want to see if this architecture actually holds up. I’ve set up a demo with a few agents " I want you to try to jailbreak them. Repo: https://github.com/jnamaya/SAFi Demo:https://safi.selfalignmentframework.com/ Homepage: https://selfalignmentframework.com/ Safi is licensed under GPLv3. .make it yours!
2025-12-29T13:49:36
https://www.reddit.com/r/LocalLLaMA/comments/1pymrf8/i_built_a_runtime_governance_layer_for_llms_can/
forevergeeks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pymrf8
false
null
t3_1pymrf8
/r/LocalLLaMA/comments/1pymrf8/i_built_a_runtime_governance_layer_for_llms_can/
false
false
self
0
{'enabled': False, 'images': [{'id': 'XQocDqfEbUSOSkRheCoDfhbaaBYIGFY_RG1WqfXIL0Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XQocDqfEbUSOSkRheCoDfhbaaBYIGFY_RG1WqfXIL0Y.png?width=108&crop=smart&auto=webp&s=d11f379b6e417229a5b355bbd88dfea9dc7b5d3c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XQocDqfEbUSOSkRheCoDfhbaaBYIGFY_RG1WqfXIL0Y.png?width=216&crop=smart&auto=webp&s=ce490d53e6960f7f0e1aa04e69f9629fb57d4383', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XQocDqfEbUSOSkRheCoDfhbaaBYIGFY_RG1WqfXIL0Y.png?width=320&crop=smart&auto=webp&s=cb9ae92ee487709f21c96385d3bb60d2ba244a9c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XQocDqfEbUSOSkRheCoDfhbaaBYIGFY_RG1WqfXIL0Y.png?width=640&crop=smart&auto=webp&s=fe83f1cedee242027a18cb0a02bbf028044fa7ba', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XQocDqfEbUSOSkRheCoDfhbaaBYIGFY_RG1WqfXIL0Y.png?width=960&crop=smart&auto=webp&s=a0e4d82df8dc7cfda4bac61b0fe32102bef267b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XQocDqfEbUSOSkRheCoDfhbaaBYIGFY_RG1WqfXIL0Y.png?width=1080&crop=smart&auto=webp&s=8a339cc943510cea16c7318c2cb2dde1ccb399e9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XQocDqfEbUSOSkRheCoDfhbaaBYIGFY_RG1WqfXIL0Y.png?auto=webp&s=9f951c9fbe70b3f311710a3b25274520d3e06706', 'width': 1200}, 'variants': {}}]}
[Vast.ai Listing] 9800X3D + RTX 4080 (16GB) – Finally, a Host for Fast Prompt Processing & Inference
1
[removed]
2025-12-29T13:42:31
https://www.reddit.com/r/LocalLLaMA/comments/1pymlty/vastai_listing_9800x3d_rtx_4080_16gb_finally_a/
ResponsibleMountain5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pymlty
false
null
t3_1pymlty
/r/LocalLLaMA/comments/1pymlty/vastai_listing_9800x3d_rtx_4080_16gb_finally_a/
false
false
self
1
null
[Dev Discussion] What is biggest bottleneck in your data pipeline? I want to build something that YOU actually need.
0
Hi r/LocalLLaMA, We have tons of good tools for Vector DBs, Reranking, Quantization etc. But the pre-ingestion phase (cleaning, deduping, parsing) still feels like it lacks solid solutions from my POV. I recently developed **EntropyGuard** cause I got tired of writing custom scripts just to get OOMed. It’s a local-first CLI using **Polars LazyFrames** and **FAISS** to clean up your dataset from duplicates in 2 stages: firstly by xxHash (faster) and then semantically (more complex). It got some solid feedback so far, but it still feels like it should offer more to be a "no-brainer" install. The main engine is built, but now I am stuck. I do not want to build features that won't be used by anyone. I want to build something that solves your actual problem and saves you time. **I'm considering a few features now:** * **Semantic chunking:** Currently I rely on standard recursive splitters. Should I bake in cosine-based splitting? * **TUI for sanity checks:** Some sort of terminal UI to visually audit what's going to be deleted before pulling the trigger. * **PII scrubbing:** Automatically detecting and redacting emails, API keys, etc. using Presidio or regex. * **PDF hell solver:** Built-in wrappers for `docling` or `unstructured` to handle layout-heavy PDFs, so you could pipe a raw folder directly into clean JSONL. **Or should it be something completely different?** Is there any specific part of your RAG pipeline that is currently manual or just painful? I want this tool to be robust enough for production use cases. Let me know what specifically would make you `pip install entropyguard`? **Repo for context:** [https://github.com/DamianSiuta/entropyguard](https://github.com/DamianSiuta/entropyguard)
2025-12-29T13:40:48
https://i.redd.it/35zztk2sr4ag1.png
Low-Flow-6572
i.redd.it
1970-01-01T00:00:00
0
{}
1pymkeg
false
null
t3_1pymkeg
/r/LocalLLaMA/comments/1pymkeg/dev_discussion_what_is_biggest_bottleneck_in_your/
false
false
default
0
{'enabled': True, 'images': [{'id': '35zztk2sr4ag1', 'resolutions': [{'height': 122, 'url': 'https://preview.redd.it/35zztk2sr4ag1.png?width=108&crop=smart&auto=webp&s=dd8e62339931e64b8f9cb2a2aa9297354b96439a', 'width': 108}, {'height': 244, 'url': 'https://preview.redd.it/35zztk2sr4ag1.png?width=216&crop=smart&auto=webp&s=1f608e1f4ac0a433cc89ed871aaccc48b84962b7', 'width': 216}, {'height': 362, 'url': 'https://preview.redd.it/35zztk2sr4ag1.png?width=320&crop=smart&auto=webp&s=358d8b27d6f965fb5398ef1ed8e536203654ca34', 'width': 320}], 'source': {'height': 458, 'url': 'https://preview.redd.it/35zztk2sr4ag1.png?auto=webp&s=e53e1dc183d19f4d077a4b51a045ae1bd732a566', 'width': 404}, 'variants': {}}]}
I was training an AI model and...
137
2025-12-29T13:12:46
https://i.redd.it/06e4wmao75ag1.png
bapuc
i.redd.it
1970-01-01T00:00:00
0
{}
1pylyen
false
null
t3_1pylyen
/r/LocalLLaMA/comments/1pylyen/i_was_training_an_ai_model_and/
false
false
default
137
{'enabled': True, 'images': [{'id': '06e4wmao75ag1', 'resolutions': [{'height': 12, 'url': 'https://preview.redd.it/06e4wmao75ag1.png?width=108&crop=smart&auto=webp&s=0dd75206fe16cbfbf6411d7f64d9d95103da60d3', 'width': 108}, {'height': 25, 'url': 'https://preview.redd.it/06e4wmao75ag1.png?width=216&crop=smart&auto=webp&s=bdc073547b740162888d65d46dbcb4f3c846dc64', 'width': 216}, {'height': 38, 'url': 'https://preview.redd.it/06e4wmao75ag1.png?width=320&crop=smart&auto=webp&s=be3e8c5364a006c790f91615bf772de231410ed8', 'width': 320}, {'height': 76, 'url': 'https://preview.redd.it/06e4wmao75ag1.png?width=640&crop=smart&auto=webp&s=000d5dbb19fbb5ad24d3ec228e0a3bc802bd60a1', 'width': 640}], 'source': {'height': 100, 'url': 'https://preview.redd.it/06e4wmao75ag1.png?auto=webp&s=6180ceeb3b3548eb89eab524ffb882441da045b7', 'width': 840}, 'variants': {}}]}
What tool/SaaS do you use to maintain your internal documentation?
0
For things like: 1. API Collection 2. API Docs 3. Internal information of system design etc...
2025-12-29T13:11:50
https://www.reddit.com/r/LocalLLaMA/comments/1pylxpr/what_toolsaas_do_you_use_to_maintain_your/
Hari-Prasad-12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pylxpr
false
null
t3_1pylxpr
/r/LocalLLaMA/comments/1pylxpr/what_toolsaas_do_you_use_to_maintain_your/
false
false
self
0
null
New Llama.cpp Front-End (Intelligent Low VRAM Context Management)
0
Ready-to-run, just drop in your .gguf models and go! Try it out -https://github.com/F0R3V3R50F7/openOrchestrate
2025-12-29T13:11:35
https://www.reddit.com/gallery/1pylxj4
F0R3V3R50F7
reddit.com
1970-01-01T00:00:00
0
{}
1pylxj4
false
null
t3_1pylxj4
/r/LocalLLaMA/comments/1pylxj4/new_llamacpp_frontend_intelligent_low_vram/
false
false
https://b.thumbs.redditm…3_2uWzP_3y0I.jpg
0
null
aichat: Claude-Code/Codex-CLI tool for fast full-text session search, and continue work without compaction
7
[Using the \>resume trigger from a Claude Code session](https://reddit.com/link/1pylwsa/video/0kqoau6955ag1/player) [aichat search TUI for fast Rust\/Tantivy full-text session of sessions](https://reddit.com/link/1pylwsa/video/zzyrxor275ag1/player) In the [claude-code-tools](https://github.com/pchalasani/claude-code-tools) repo, I I've been sharing various tools I've built to improve productivity when working with Claude-Code or Codex-CLI. I wanted to share a recent addition: the [aichat command](https://github.com/pchalasani/claude-code-tools?tab=readme-ov-file#-aichat--session-search-and-continuation-without-compaction) which I use regularly to continue work **without having to compact**. TL/DR: Some ways to use this tool, once you've [installed](https://github.com/pchalasani/claude-code-tools?tab=readme-ov-file#-quick-start) it and the associated [aichat plugin](https://github.com/pchalasani/claude-code-tools?tab=readme-ov-file#claude-code-plugins) * in a Claude-code session nearing full context usage, type `>resume` \- this activates a `UserPromptSubmit` hook that copies your session id to clipboard and shows instructions to run `aichat resume <pasted-session-id>` , which will present 3 ways to continue your work (see below). * If you know which session id to continue work from, use `aichat resume <session-id>` * If you need to search for past sessions, use `aichat search` which launches a super-fast Rust/Tantivy-based full-text session search TUI with filters (unlike Claude-Code `--resume` which only searches session titles). * In a Claude-Code or Codex-CLI session, you can have the agent (or preferably a sub-agent) search for context on prior work using `aichat search ... --json` which returns JSONL-formatted results ideal for querying/filtering with `jq` that agents excel at. In the aichat plugin, there is a corresponding **session-search skill** and (for Claude-Code) a **session-searcher sub-agent.** You can say something like, *"use the session-searcher sub-agent to extract context of how we connected the Rust TUI to the Node-based menus"* * There are 3 ways to continue work from a session: (a) **blind trim**, i.e. clone session + truncate large tool calls/results + older assistant messages, (b) **smart-trim,** similar but uses headless agent to decide what to truncate, (c) **rollover** (I use this the most), which creates a new session, injects session-file **lineage** (back-pointer to parent session, parent's parent and so on) into the first user message, plus optional instructions to extract summary of latest work. **Install:** # Step 1: Python package uv tool install claude-code-tools # Step 2: Rust search engine (pick one) brew install pchalasani/tap/aichat-search # Homebrew cargo install aichat-search # Cargo # Or download binary from Releases # Step 3: Install Claude Code plugins (for >resume hook, session search related skill, agent, etc) claude plugin marketplace add pchalasani/claude-code-tools claude plugin install "aichat@cctools-plugins" # or from within Claude Code: /plugin marketplace add pchalasani/claude-code-tools /plugin install aichat@cctools-plugins # Background For those curious, I'm outlining the thought process underlying this tool, hoping it helps explain what the `aichat` tool does and why it might be useful to you. # Compaction is lossy: instead, clone the session and truncate long tool-results or older assistant messages There are very often situations where compaction loses important details, so I wanted to find ways to continue my work without compaction. A typical scenario: I am at 90% context usage, and I wish I can go on a bit longer to finish the current work-phase. So I thought, >I wish I could **truncate** some long tool results (e.g. file reads or API results) or older assistant messages (can include write/edit tool-calls) and clear out some context to continue my work. This lead to the [`aichat trim`](https://github.com/pchalasani/claude-code-tools#three-resume-strategies) utility. It provides two variants: * a "blind" [`trim`](https://github.com/pchalasani/claude-code-tools#three-resume-strategies) mode that truncates all tool-results longer than a threshold (default 500 chars), and optionally all-but-recent assistant messages -- all user-configurable. This can free up 40-60% context, depending on what's been going on in the session. * a [`smart-trim`](https://github.com/pchalasani/claude-code-tools#three-resume-strategies) mode that uses a headless Claude/Codex agent to determine which messages can be safely truncated in order to continue the current work. The precise truncation criteria can be customized (e.g. the user may want to continue some prior work rather than the current task). Both of these modes *clone* the current session before truncation, and inject two types of [*lineage*](https://github.com/pchalasani/claude-code-tools#lineage-nothing-is-lost) (essentially, back-pointers): * *Session-lineage* is injected into the first user message: a chronological listing of sessions from which the current session was derived. This allows the (sub-) agent to extract needed context from ancestor sessions, either when prompted by the user, or on its own initiative. * Each truncated message also carries a *pointer* to the specific message index in the parent session so full details can always be looked up if needed. # A cleaner alternative: Start new session with lineage and context summary Session trimming can be a quick way to clear out context in order to continue the current task for a bit longer, but after a couple of trims, does not yield as much benefit. But the lineage-injection lead to a different idea to avoid compaction: >Create a fresh session, inject parent-session lineage into the first user message, along with instructions to extract (using sub-agents if available) context of the latest task from the parent session, or skip context extraction and leave it to the user to extract context once the session starts. This is the idea behind the [`aichat rollover`](https://github.com/pchalasani/claude-code-tools#three-resume-strategies) functionality, which is the variant I use the most frequently, instead of first trimming a session (though the blind-trimming can still be useful to continue the current work for a bit longer). I usually choose to skip the summarization (this is the `quick` rollover option in the TUI) so that the new session starts quickly and I can instruct Claude-Code/Codex-CLI to extract needed context (usually from the latest chat session shown in the lineage), as shown in the demo video below. # A hook to simplify continuing work from a session I wanted to make it seamless to pick any of the above three task continuation modes, when inside a Claude Code session, so I set up a `UserPromptSubmit` [hook](https://github.com/pchalasani/claude-code-tools#resume-options) (via the `aichat` plugin) that is triggered when the user types `>resume` (or `>continue` or `>handoff`). When I am close to full context usage, I type `>resume`, and the hook script copies the current session id into the clipboard and shows instructions asking the user to run `aichat resume <pasted-session-id>`; this launches a TUI that offering options to choose one of the above [session resumption modes](https://github.com/pchalasani/claude-code-tools#three-resume-strategies), see demo video above. # Fast full-text session search for humans/agents to find prior work context The above session resumption methods are useful to continue your work from the *current* session, but often you want to continue work that was done in an *older* Claude-Code/Codex-CLI session. This is why I added this: >Super-fast Rust/Tantivy-based [full-text search](https://github.com/pchalasani/claude-code-tools#aichat-search--find-and-select-sessions) of all sessions across Claude-Code and Codex-CLI, with a pleasant self-explanatory TUI for humans, and a CLI mode for Agents to find past work. (The Rust/Tantivy-based search and TUI was inspired by the excellent TUI in the [zippoxer/recall](https://github.com/zippoxer/recall) repo). Users can launch the search TUI using [`aichat search ...`](https://github.com/pchalasani/claude-code-tools#aichat-search--find-and-select-sessions) and (sub-) [agents can run](https://github.com/pchalasani/claude-code-tools#agent-access-to-history-the-session-searcher-sub-agent) `aichat search ... --json` and get results in JSONL format for quick analysis and filtering using `jq` which of course CLI agents are great at using. There is a corresponding *skill* called `session-search` and a *sub-agent* called `session-searcher`, both available via the `aichat` [plugin](https://github.com/pchalasani/claude-code-tools#claude-code-plugins). For example in Claude Code, users can recover context of some older work by simply saying something like: >Use your session-searcher sub-agent to recover the context of how we worked on connecting the Rust search TUI with the node-based Resume Action menus.
2025-12-29T13:10:34
https://www.reddit.com/r/LocalLLaMA/comments/1pylwsa/aichat_claudecodecodexcli_tool_for_fast_fulltext/
SatoshiNotMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pylwsa
false
null
t3_1pylwsa
/r/LocalLLaMA/comments/1pylwsa/aichat_claudecodecodexcli_tool_for_fast_fulltext/
false
false
self
7
null
Fine-tuning a Small LM for browser control with GRPO and OpenEnv
10
Today I want to share with you the write-up of a live 60-minute session I hosted on the[ Liquid AI Discord Community](https://discord.gg/tVvcxtkkhv). The topic? How to teach Language Models to navigate websites and complete tasks using Reinforcement Learning. We’re talking about building browser agents that can click buttons, fill forms, and even book flights, all by learning from trial and error instead of perfect demonstrations. You’ll see how to build the complete training pipeline with GRPO, BrowserGym, and LFM2-350M, starting with a simple “click-test” task and scaling up from there. Let me know if you have questions
2025-12-29T13:07:04
https://paulabartabajo.substack.com/p/fine-tuning-lfm2-350m-for-browser
PauLabartaBajo
paulabartabajo.substack.com
1970-01-01T00:00:00
0
{}
1pylu7n
false
null
t3_1pylu7n
/r/LocalLLaMA/comments/1pylu7n/finetuning_a_small_lm_for_browser_control_with/
false
false
default
10
{'enabled': False, 'images': [{'id': 'uTkHDDGDMOR6Vy-vy4ASQJNPUnxM33lqhu9afcVyTD8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/uTkHDDGDMOR6Vy-vy4ASQJNPUnxM33lqhu9afcVyTD8.jpeg?width=108&crop=smart&auto=webp&s=45031b89c444291ef4ff8311f483c77009e13b02', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/uTkHDDGDMOR6Vy-vy4ASQJNPUnxM33lqhu9afcVyTD8.jpeg?width=216&crop=smart&auto=webp&s=4f19f41504c8f8ad6963306325768569652521ea', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/uTkHDDGDMOR6Vy-vy4ASQJNPUnxM33lqhu9afcVyTD8.jpeg?width=320&crop=smart&auto=webp&s=fb10af407c62ef217f37348a9c55c09cde991f8d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/uTkHDDGDMOR6Vy-vy4ASQJNPUnxM33lqhu9afcVyTD8.jpeg?width=640&crop=smart&auto=webp&s=3ac3ac0b4424152125ee4380c95bfa3bfc0481bd', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/uTkHDDGDMOR6Vy-vy4ASQJNPUnxM33lqhu9afcVyTD8.jpeg?width=960&crop=smart&auto=webp&s=42586d9e637a139b44bb796109bc7fbb15fc1dcf', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/uTkHDDGDMOR6Vy-vy4ASQJNPUnxM33lqhu9afcVyTD8.jpeg?auto=webp&s=543bd1f2a381efbe74a7fc27e2367a90a7c4a361', 'width': 960}, 'variants': {}}]}
Single RTX PRO 6000 - Minimax M2.1 (IQ2_M) speed
39
Been asked so often, "what's the speed?". It depends. I run the model using `llama-server -m ~/models/unsloth/MiniMax-M2.1-GGUF/UD-IQ2_M/MiniMax-M2.1-UD-IQ2_M-00001-of-00002.gguf --jinja -ngl 99 -t 80 -c 160000 -fa 1 -ctv q8_0 -ctk q8_0 --host 0.0.0.0 --port 8080 -cram -1 --log-file ~/m2.1.log` KV quantized to Q8 160k max context - **Total samples:** 107 - **Date generated:** 2025-12-29 13:27 ## Key Statistics | Metric | Min | Max | Mean | Median | Std Dev | |--------|-----|-----|------|--------|---------| | prompt_eval_speed | 23.09 | 1695.32 | 668.78 | 577.88 | 317.26 | | eval_speed | 30.02 | 91.17 | 47.97 | 46.36 | 14.09 | ## Key Insights - **Highest prompt eval speed:** 1695.32 tokens/sec (n_tokens=15276) - **Lowest prompt eval speed:** 23.09 tokens/sec (n_tokens=67201) - **Highest eval speed:** 91.17 tokens/sec (n_tokens=15276) - **Lowest eval speed:** 30.02 tokens/sec (n_tokens=92160) *So bottom line, bigger context = lower speed (both PP & TG)*
2025-12-29T13:05:12
https://i.redd.it/n0sxcy3q55ag1.png
johannes_bertens
i.redd.it
1970-01-01T00:00:00
0
{}
1pylstj
false
null
t3_1pylstj
/r/LocalLLaMA/comments/1pylstj/single_rtx_pro_6000_minimax_m21_iq2_m_speed/
false
false
default
39
{'enabled': True, 'images': [{'id': 'n0sxcy3q55ag1', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/n0sxcy3q55ag1.png?width=108&crop=smart&auto=webp&s=1ab8d3f9dd457d711c923a4325abb8e10cb5d4fd', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/n0sxcy3q55ag1.png?width=216&crop=smart&auto=webp&s=62c836ffd8df159cd39aab561b075eb078e3bc91', 'width': 216}, {'height': 265, 'url': 'https://preview.redd.it/n0sxcy3q55ag1.png?width=320&crop=smart&auto=webp&s=38670dbb429a4a3d36b9d2f91a3fc32bf1ec84eb', 'width': 320}, {'height': 530, 'url': 'https://preview.redd.it/n0sxcy3q55ag1.png?width=640&crop=smart&auto=webp&s=a383a6b2fe22a5ff97c3cee75cb4cdbaf506167c', 'width': 640}, {'height': 795, 'url': 'https://preview.redd.it/n0sxcy3q55ag1.png?width=960&crop=smart&auto=webp&s=388d746b8735c642cc1009550b0c738b4b640804', 'width': 960}, {'height': 895, 'url': 'https://preview.redd.it/n0sxcy3q55ag1.png?width=1080&crop=smart&auto=webp&s=7ff68a3127e23a3ce8156b0186430489d8177236', 'width': 1080}], 'source': {'height': 1476, 'url': 'https://preview.redd.it/n0sxcy3q55ag1.png?auto=webp&s=90e5b7e8cc1be679d31baf1ab8393e69f9ed4101', 'width': 1781}, 'variants': {}}]}
Newbie
0
I’m new to Ollama. I have it running on a cloud server. If I ssh into one of my models I can send request and get responses find. Everything appears to be working. My challenge now is to connect it to my ai agents. I need interaction without ssh. How do I get an api or what are my next steps?
2025-12-29T12:58:51
https://www.reddit.com/r/LocalLLaMA/comments/1pylnsa/newbie/
TroyB346
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pylnsa
false
null
t3_1pylnsa
/r/LocalLLaMA/comments/1pylnsa/newbie/
false
false
self
0
null
I built Privemail a local-first email client that uses Ollama models to draft replies (No cloud AI)
0
I built a local-first email client that uses YOUR Ollama models to draft replies (No cloud AI) I got tired of "private" email assistants that just wrap the OpenAI API and send my data to the cloud. I wanted something that runs 100% offline using the models I already have in Ollama. So I built Privemail. It’s a desktop email client (Python-based) that connects to your local Ollama instance. You choose the model best suited for your VRAM/speed needs—whether that's llama3.2:3b for instant replies on a laptop or mistral-nemo for better reasoning. How it works: Ollama Native: It talks directly to localhost:11434. If you can pull it in Ollama, you can use it to draft emails. Zero Trust / BYOK: You provide your own Gmail API credentials (Client ID/Secret). I have zero access to your data; the app connects directly from your machine to Google. Context Aware: It feeds the email thread context into the local model to generate relevant replies, not just generic fluff. Tech Stack: Python 3.12 (Custom GUI) Ollama (Backend) Gmail API Why I built it: I wanted a "Help me write this" button that didn't cost $20/month or spy on me. Repo: [https://github.com/safhac/privemail](https://github.com/safhac/privemail) (There's a pre-compiled Windows installer for non-devs who want to support the project, but the source is 100% free to build/run). \#Ollama #Showcase
2025-12-29T12:45:58
https://www.reddit.com/r/LocalLLaMA/comments/1pylen6/i_built_privemail_a_localfirst_email_client_that/
safhac
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pylen6
false
null
t3_1pylen6
/r/LocalLLaMA/comments/1pylen6/i_built_privemail_a_localfirst_email_client_that/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Z-UEQvUe3Bckugh81IxEPcdQgf7U99VurFwYuTMz9TY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z-UEQvUe3Bckugh81IxEPcdQgf7U99VurFwYuTMz9TY.png?width=108&crop=smart&auto=webp&s=276c5ef6333df4d31a157be783e32b512b8b8566', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z-UEQvUe3Bckugh81IxEPcdQgf7U99VurFwYuTMz9TY.png?width=216&crop=smart&auto=webp&s=1b4d7a17c9d4082841f007839aa1f855626fe4a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z-UEQvUe3Bckugh81IxEPcdQgf7U99VurFwYuTMz9TY.png?width=320&crop=smart&auto=webp&s=a445a0ddc087312e0a0ef8694d949ca7c4607546', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z-UEQvUe3Bckugh81IxEPcdQgf7U99VurFwYuTMz9TY.png?width=640&crop=smart&auto=webp&s=753c9b4f9617bb3bef778458fbbe7a2fb64cfffa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z-UEQvUe3Bckugh81IxEPcdQgf7U99VurFwYuTMz9TY.png?width=960&crop=smart&auto=webp&s=e44fd174c5c45d107132514de742841361d9e03d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z-UEQvUe3Bckugh81IxEPcdQgf7U99VurFwYuTMz9TY.png?width=1080&crop=smart&auto=webp&s=2442d66bf2f743d7f91a4ad4dc5149f7f546aebe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z-UEQvUe3Bckugh81IxEPcdQgf7U99VurFwYuTMz9TY.png?auto=webp&s=8b236055de71ccda151759de6903f8d333b2b33d', 'width': 1200}, 'variants': {}}]}
Challenge: Help me benchmark my private AI model against SOTA (GPT-4/Claude 3.5). I'll run your prompts.
1
[removed]
2025-12-29T12:42:52
https://www.reddit.com/r/LocalLLaMA/comments/1pylce5/challenge_help_me_benchmark_my_private_ai_model/
Zealousideal_Fee7987
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pylce5
false
null
t3_1pylce5
/r/LocalLLaMA/comments/1pylce5/challenge_help_me_benchmark_my_private_ai_model/
false
false
self
1
null
Model for scientific research?
0
Hi, is there a model that has been specifically trained for scientific research? Like training it with all the papers ever produced and not much more. This would be quite unique I think. No need for any tuning for unsociable behavior and similar, pure unobstructed science. I'd happily pay for it, anyone I could givey money to?
2025-12-29T12:39:26
https://www.reddit.com/r/LocalLLaMA/comments/1pyl9yk/model_for_scientific_research/
pythosynthesis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyl9yk
false
null
t3_1pyl9yk
/r/LocalLLaMA/comments/1pyl9yk/model_for_scientific_research/
false
false
self
0
null
I just wanted to build ai to do specific tasks small tasks concerning small hardware usag and connecting to internet.
1
What do you recommend me to do or learn and what other alternatives to api and mcp cause I want it to use my own internet on my local machine
2025-12-29T12:31:40
https://www.reddit.com/r/LocalLLaMA/comments/1pyl4kn/i_just_wanted_to_build_ai_to_do_specific_tasks/
Expert-Bookkeeper815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyl4kn
false
null
t3_1pyl4kn
/r/LocalLLaMA/comments/1pyl4kn/i_just_wanted_to_build_ai_to_do_specific_tasks/
false
false
self
1
null
Looking back at end of 2024 vs now
43
I’ve been rebuilding a few agent systems recently, and I kept having this vague feeling that everything already feels outdated, even compared to the middle of this year. **Models** GPT-4o → o3 → GPT-5.2 Claude 3.5 → Claude 3.7 → Claude 4.5 Gemini 1.5 → Gemini 2.5 → Gemini 3 DeepSeek v2 → DeepSeek R1 → DeepSeek v3 ... **Agent logic** single prompt loop → planner / executor split → long-running agent with state **RAG / retrieval** top-k doc chunks → hybrid retrieve + rerank → implicit context reads **Memory** chat history only → session + long-term memory → stateful memory across runs **Tool use** function calling JSON → structured tool execution → permissioned tool calls **Workflows** python scripts / cron → visual workflows (agent steps) → resumable execution engine **Observability** prompt logs → agent + tool traces → evals tied to deploys **Protocols / integration** custom tool schema per app → MCP-style shared interface → standardized interface + security boundaries Curious if others rebuilding systems recently feel the same.
2025-12-29T12:02:18
https://www.reddit.com/r/LocalLLaMA/comments/1pykkqx/looking_back_at_end_of_2024_vs_now/
Main-Fisherman-2075
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pykkqx
false
null
t3_1pykkqx
/r/LocalLLaMA/comments/1pykkqx/looking_back_at_end_of_2024_vs_now/
false
false
self
43
null
[Project] I built a "Virtual VRAM" swarm to fine-tune 70B models on consumer GPUs (No OOM + Privacy)
0
Hi everyone, Like many of you, I hit the **"VRAM Wall"** trying to fine-tune a 70B model. My RTX 4090 (24GB) wasn't enough, and paying for H100s on AWS (plus the DevOps headache) was killing my runway. I realized we don't have a compute problem, we have a memory aggregation problem. So, I built **ZAGORA**. It's an orchestration layer (Environment-as-a-Service) that treats distributed consumer GPUs as a single cluster. **How it works:** * **Infinite Virtual VRAM:** It automatically shards the model across multiple nodes (similar to Petals but managed). You can load and train 70B+ models without OOM errors. * **Privacy-First:** Unlike rental marketplaces, we use **Federated Learning**. Your raw data never leaves your perimeter; only gradients are synced. * **Cost:** Targeting \~10x cheaper than AWS/GCP (aiming for \~$2-3/hr for a 70B run). I'm opening a **Private Beta** to test the scheduler and pricing model. I'm looking for people who are currently blocked by hardware limits. If you want to test the **Cost Calculator** or reserve a cluster for a specific run, check it out here: [www.zagora.ai](http://www.zagora.ai) Please feel free to roast my architecture in the comments. I’m handling the backend manually for now to ensure stability for the first batch of users.
2025-12-29T11:38:46
https://www.reddit.com/r/LocalLLaMA/comments/1pyk5qh/project_i_built_a_virtual_vram_swarm_to_finetune/
Alone-Detective-3317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyk5qh
false
null
t3_1pyk5qh
/r/LocalLLaMA/comments/1pyk5qh/project_i_built_a_virtual_vram_swarm_to_finetune/
false
false
self
0
null
Help me build a (reasonable) 4GPU low-cost LLM machine, is ASUS WS X299 PRO/SE still good?
10
So I kind of exhausted what could be done with my fast. but VRAM poor, 4090 OC edition, so I was dreaming of designing an openframe 4 GPU machine that can drive with acceptable speed 4 GPUs. My preliminary research found rather acceptable priced WS X299 PRO/SE workstation motherboards that paired with an 48-Lane CPU may just do the trick, also the 64GB DDR4 for it is really price acceptable. So are there any better mobo/CPU combo under lesr than 1000EUR capable of driving 4 GPUS (proven solutions are getting a super thanks) , please share your experiences and thoughts, thanks.
2025-12-29T11:35:12
https://www.reddit.com/r/LocalLLaMA/comments/1pyk3jg/help_me_build_a_reasonable_4gpu_lowcost_llm/
HumanDrone8721
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyk3jg
false
null
t3_1pyk3jg
/r/LocalLLaMA/comments/1pyk3jg/help_me_build_a_reasonable_4gpu_lowcost_llm/
false
false
self
10
null
Unable to passtrough Nvidia RTX Pro to Ubuntu proxmox VM
1
Hi, Had 5090 passed trough to Ubuntu 24.04 VM in proxmox and it worked. Then switched the card to RTX Pro 5000 and the Ubuntu 24.04 VM wont boot, the memory consumption goes always 100%. But the card works on another VM which has Debian. Uninstalled the drivers in ubuntu but does not help. Is this known issue with Ubuntu and rtx pro?
2025-12-29T11:18:26
https://www.reddit.com/r/LocalLLaMA/comments/1pyjt0u/unable_to_passtrough_nvidia_rtx_pro_to_ubuntu/
Frosty_Chest8025
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyjt0u
false
null
t3_1pyjt0u
/r/LocalLLaMA/comments/1pyjt0u/unable_to_passtrough_nvidia_rtx_pro_to_ubuntu/
false
false
self
1
null
LM Studio alternative for images / Videos / Audio ?
23
With LM Studio (and others alike) it is super easy to run LLMs locally. Ist there anything as easy to create pictures, videos and audios locally using open models? I tried ComfyUI but didn't find it as easy. With LM Studio I can search for models, see if they will run fast/good with my specs (M3 Pro, 36GB Unified) before downloading them, and in general it is super straight forward. Two extra questions: 1. Which models would you recommend for this specs? 2. For LLMs in Mac, the mlx format makes a huge difference. Is there anything similar for image/video/audio models?
2025-12-29T11:11:00
https://www.reddit.com/r/LocalLLaMA/comments/1pyjohv/lm_studio_alternative_for_images_videos_audio/
mouseofcatofschrodi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyjohv
false
null
t3_1pyjohv
/r/LocalLLaMA/comments/1pyjohv/lm_studio_alternative_for_images_videos_audio/
false
false
self
23
null
Project Share : I’m a Pharmacy student obsessed with LLMs. I treated "Hallucinations" like a disease and wrote a Python wrapper to cure them.
0
Hey everyone, A bit of a weird background here: **I am a pharmacy student by day, and a coder by night.** In my studies, precision is everything. A wrong dosage or a chemical interaction can be fatal. When I started building apps with LLMs (GPT-4, Llama 3), the "hallucinations" drove me crazy. To me, they looked like symptoms of an unstable patient. I couldn't stand the unpredictability. So, I decided to apply my medical mindset to my code. **The Concept: Treating the Model** I built a Python class I call **NEUROFIX**. I literally designed it to treat the LLM like a patient that needs stabilization before performing a complex task. Instead of just sending a raw `user_prompt`, the wrapper acts as a therapeutic middleware: 1. **Sterilizes Input:** It cleans the noise before it hits the context window. 2. **Injects a "Focus Serum":** It dynamically injects strict verification protocols (what I call "the medicine") based on the task difficulty. 3. **Monitors Vitals:** It tracks latency and confidence scores to give a "Health Check" on the output. **The Logic** I wrote the code using the terminology I know best. * We don't "configure settings", we **"Administer a Dose"**. * We don't "fix bugs", we **"Cure Symptoms"**. Here is the pseudo-code logic: Python # My "Pharmacy" approach to coding def administer_dose(prompt, prescription="CLINICAL"): if prescription == "CLINICAL": # Inject high-potency logic constraints active_ingredient = "[SYSTEM: ACTIVATE_STRICT_VERIFICATION]" return f"{active_ingredient} {prompt}" return prompt **The Result** It’s been a fun project combining my two passions. The wrapper drastically reduced the "drift" in my RAG applications. I’ve packaged the full, polished version (The **"Surgical Edition"**) on Lemon Squeezy for the price of a coffee. If you want to support a student bridging the gap between Med & Tech, check it out. I don't want to break the self-promotion rules here, so I won't drop the link in the post. If anyone wants to try the full version or support a Pharmacy student's side project, just let me know in the comments and I'll DM you the link! Let me know what you think of this "Medical" approach to prompt engineering!
2025-12-29T11:07:46
https://i.redd.it/zjew9iqvk4ag1.png
AfterPromise8752
i.redd.it
1970-01-01T00:00:00
0
{}
1pyjmht
false
null
t3_1pyjmht
/r/LocalLLaMA/comments/1pyjmht/project_share_im_a_pharmacy_student_obsessed_with/
false
false
https://b.thumbs.redditm…GQ3QxzCuByKQ.jpg
0
{'enabled': True, 'images': [{'id': 'BJjC9pFWNxtJca06pse7KxL4YYNOtwIfLXQ1AavBmrg', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/zjew9iqvk4ag1.png?width=108&crop=smart&auto=webp&s=1e9c3f61aca8f218709260b57a091cc3f10a21e9', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/zjew9iqvk4ag1.png?width=216&crop=smart&auto=webp&s=a19947b236f86e9390293aca1ab24b784f114863', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/zjew9iqvk4ag1.png?width=320&crop=smart&auto=webp&s=5b121944d8dea531d254bfaf5d5d75c0331ef48e', 'width': 320}, {'height': 341, 'url': 'https://preview.redd.it/zjew9iqvk4ag1.png?width=640&crop=smart&auto=webp&s=36dcd7683a8936754d88bbc740fa370a6b9beef3', 'width': 640}, {'height': 512, 'url': 'https://preview.redd.it/zjew9iqvk4ag1.png?width=960&crop=smart&auto=webp&s=9eacb200540b2042caaa24a9034593ba19b475e9', 'width': 960}, {'height': 576, 'url': 'https://preview.redd.it/zjew9iqvk4ag1.png?width=1080&crop=smart&auto=webp&s=09a137e364bcf4fe85240475f87270e7219c9b53', 'width': 1080}], 'source': {'height': 1504, 'url': 'https://preview.redd.it/zjew9iqvk4ag1.png?auto=webp&s=3be4f90a9b76c97504eb249bf02feb84990fe3bd', 'width': 2816}, 'variants': {}}]}
Naver (South Korean internet giant), has just launched HyperCLOVA X SEED Think, a 32B open weights reasoning model and HyperCLOVA X SEED 8B Omni, a unified multimodal model that brings text, vision, and speech together
158
HyperCLOVA X SEED 32B Think: [https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Think-32B](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Think-32B) HyperCLOVA X SEED 8B Omni: [https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Omni-8B](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Omni-8B) Collection: [https://huggingface.co/collections/naver-hyperclovax/hyperclova-x-seed](https://huggingface.co/collections/naver-hyperclovax/hyperclova-x-seed) From Artificial Analysis on 𝕏: [https://x.com/ArtificialAnlys/status/2005429176615174207](https://x.com/ArtificialAnlys/status/2005429176615174207)
2025-12-29T11:02:29
https://www.reddit.com/gallery/1pyjjbw
Nunki08
reddit.com
1970-01-01T00:00:00
0
{}
1pyjjbw
false
null
t3_1pyjjbw
/r/LocalLLaMA/comments/1pyjjbw/naver_south_korean_internet_giant_has_just/
false
false
https://a.thumbs.redditm…f2kPk8Dbebv4.jpg
158
null
Project Share : I’m a Pharmacy student obsessed with LLMs. I treated "Hallucinations" like a disease and wrote a Python wrapper to cure them ^^
1
2025-12-29T11:01:15
https://i.redd.it/m7mwv798k4ag1.png
AfterPromise8752
i.redd.it
1970-01-01T00:00:00
0
{}
1pyjihh
false
null
t3_1pyjihh
/r/LocalLLaMA/comments/1pyjihh/project_share_im_a_pharmacy_student_obsessed_with/
false
false
default
1
{'enabled': True, 'images': [{'id': 'm7mwv798k4ag1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/m7mwv798k4ag1.png?width=108&crop=smart&auto=webp&s=4ee874903c89e080733bb4609c0b403b67b88937', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/m7mwv798k4ag1.png?width=216&crop=smart&auto=webp&s=d6e879928ed5d07e4fabd7ca3adb33631e1fea61', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/m7mwv798k4ag1.png?width=320&crop=smart&auto=webp&s=d72867e749e3bea4ed46f1f160dabda4ffcc8430', 'width': 320}, {'height': 341, 'url': 'https://preview.redd.it/m7mwv798k4ag1.png?width=640&crop=smart&auto=webp&s=fe9ba1b0f53a659c94830288d0516567d083a044', 'width': 640}, {'height': 512, 'url': 'https://preview.redd.it/m7mwv798k4ag1.png?width=960&crop=smart&auto=webp&s=0e532bf00be8f95bd419da3d83220e497994d537', 'width': 960}, {'height': 576, 'url': 'https://preview.redd.it/m7mwv798k4ag1.png?width=1080&crop=smart&auto=webp&s=ff2f3e61f531ea92b92a4c7fd60d75a8f1b1194f', 'width': 1080}], 'source': {'height': 1504, 'url': 'https://preview.redd.it/m7mwv798k4ag1.png?auto=webp&s=d603ff18771e68d3d3b5dac534aa914b3bd365e5', 'width': 2816}, 'variants': {}}]}
LM STUDIO on Mac M3
0
Ciao, sto esplorando LM STUDIO, mi sapete suggerire qualche modello interessante da installare? Ora ho scaricato openai/gpt-oss-20b ma h visto dai primi test che non ricorda nulla, se gli dico il mio nome mi saluta, ma al riavvio successivo non ricorda nulla. Vorrei provare altro, magari specifico in qualcosa, mi aiutate? Sono curioso a provare un po' di cose. Per esempio: \- Caricare un PDF e analizzarlo \- Creare grafici \- Creare una voce dal testo Grazie! Ciao
2025-12-29T10:54:10
https://www.reddit.com/r/LocalLLaMA/comments/1pyjdz5/lm_studio_on_mac_m3/
Signal_Pickle_3062
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyjdz5
false
null
t3_1pyjdz5
/r/LocalLLaMA/comments/1pyjdz5/lm_studio_on_mac_m3/
false
false
self
0
null
I proposed a standard (CommerceTXT) to stop RAG agents from scraping 2MB HTML pages. 95%+ token reduction. Thoughts?
0
Hi everyone, I've been building shopping agents locally (using Mistral and Llama 3) and hit a massive wall: **The "Haystack" Problem.** To check a single product price or stock status, the agent usually has to fetch and parse a 2MB+ HTML page. Even with good scrapers, you end up feeding the LLM a lot of noise (DOM elements, scripts, navigation). This eats up the context window fast and confuses smaller local models (leading to hallucinations like inventing prices). I got tired of this, so I wrote a spec for **CommerceTXT**. **The idea is simple:** It’s like `robots.txt` but for transactional context. It lives at the root (`/commerce.txt`) and serves strictly structured, read-only data based on SchemaOrg vocabulary. **Why I think this matters for Local LLMs:** 1. **Token Efficiency:** Reduces a typical product page from \~8,000 tokens (HTML) to \~60 tokens (CommerceTXT). You can fit 100 products in the context window of a 7B model instead of just 2. 2. **Speed:** No headless browser or JS rendering needed. Just a text fetch. 3. **Anti-Hallucination:** Includes directives like @`SEMANTIC_LOGIC` (e.g., "Limit 3 per customer") so the model doesn't have to guess the rules. **Current Status:** * Drafted the [Spec v1.0.1](https://github.com/commercetxt/commercetxt/tree/main/spec) (Open Source / CC0). * Built a basic [Python Parser](https://github.com/commercetxt/commercetxt/tree/main/parsers/python). * Got some initial validation on [Hacker News](https://news.ycombinator.com/item?id=46289481) and discussions with the [Schema.org team](https://github.com/schemaorg/schemaorg/discussions/4651). I'm looking for feedback from this community specifically on the **RAG implementation side**. Does the grammar look parse-friendly for your workflows? Here is the repo: [https://github.com/commercetxt/commercetxt](https://github.com/commercetxt/commercetxt) (P.S. I know `json-ld` exists, but extracting it from heavy HTML is still expensive/slow for agents. This is meant to be a direct "fast lane".)
2025-12-29T10:26:25
https://www.reddit.com/r/LocalLLaMA/comments/1pyix8z/i_proposed_a_standard_commercetxt_to_stop_rag/
TsaTsuTsi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyix8z
false
null
t3_1pyix8z
/r/LocalLLaMA/comments/1pyix8z/i_proposed_a_standard_commercetxt_to_stop_rag/
false
false
self
0
null
I proposed a standard (CommerceTXT) to stop RAG agents from scraping 2MB HTML pages. 95%+ token reduction. Thoughts?
1
[removed]
2025-12-29T10:21:43
https://www.reddit.com/r/LocalLLaMA/comments/1pyiu6e/i_proposed_a_standard_commercetxt_to_stop_rag/
TsaZan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyiu6e
false
null
t3_1pyiu6e
/r/LocalLLaMA/comments/1pyiu6e/i_proposed_a_standard_commercetxt_to_stop_rag/
false
false
self
1
null
I proposed a standard (CommerceTXT) to stop RAG agents from scraping 2MB HTML pages. 95%+ token reduction. Thoughts?
1
[removed]
2025-12-29T10:19:23
https://www.reddit.com/r/LocalLLaMA/comments/1pyisqz/i_proposed_a_standard_commercetxt_to_stop_rag/
TsaZan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyisqz
false
null
t3_1pyisqz
/r/LocalLLaMA/comments/1pyisqz/i_proposed_a_standard_commercetxt_to_stop_rag/
false
false
self
1
null
I proposed a standard (CommerceTXT) to stop RAG agents from scraping 2MB HTML pages. 95% token reduction. Thoughts?
1
[removed]
2025-12-29T10:16:36
https://www.reddit.com/r/LocalLLaMA/comments/1pyir3s/i_proposed_a_standard_commercetxt_to_stop_rag/
TsaZan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyir3s
false
null
t3_1pyir3s
/r/LocalLLaMA/comments/1pyir3s/i_proposed_a_standard_commercetxt_to_stop_rag/
false
false
self
1
null
do MoEoE models stand a chance?
18
ive heard about plans for DeepSeek to make their new models surpass 1 trillion parameter territory, and with them doing that, im sure other labs will too (especially labs like InclusionAI, where "scaling is all you need") so that begs the question, \*would\* and MoEoE model work? as in mixture of experts models that manage even more experts instead of parameters? imagine a 2-3 trillion model only having to decide on 128 experts instead of 2048 to keep low activated params? i dont know enough about LLMs to answer this question, so id like to ask all of you!
2025-12-29T10:15:49
https://www.reddit.com/r/LocalLLaMA/comments/1pyiqly/do_moeoe_models_stand_a_chance/
ComplexType568
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyiqly
false
null
t3_1pyiqly
/r/LocalLLaMA/comments/1pyiqly/do_moeoe_models_stand_a_chance/
false
false
self
18
null
I proposed a standard (CommerceTXT) to stop RAG agents from scraping 2MB HTML pages. 95%+ token reduction. Thoughts?
1
[removed]
2025-12-29T10:12:17
https://www.reddit.com/r/LocalLLaMA/comments/1pyiok9/i_proposed_a_standard_commercetxt_to_stop_rag/
TsaZan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyiok9
false
null
t3_1pyiok9
/r/LocalLLaMA/comments/1pyiok9/i_proposed_a_standard_commercetxt_to_stop_rag/
false
false
self
1
null
EditMGT — fast, localized image editing with Masked Generative Transformers
11
https://preview.redd.it/…ichow23/EditMGT)
2025-12-29T10:05:26
https://www.reddit.com/r/LocalLLaMA/comments/1pyikha/editmgt_fast_localized_image_editing_with_masked/
freesysck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyikha
false
null
t3_1pyikha
/r/LocalLLaMA/comments/1pyikha/editmgt_fast_localized_image_editing_with_masked/
false
false
https://a.thumbs.redditm…AtDUpbQk9sR0.jpg
11
null
Sharing data that may contain PII? Here's a case-study on how to use a task-specific SLM to remove sensitive info locally and preserve user privacy
2
When sharing user data that may contain Personally Identifiable Information, anonymization is a crucial step in ensuring user privacy. PII removal APIs exist, but they often defeat the purpose of anonymization, since data must be sent to third-party servers. Read this case-study to find out how to use [the Artifex library](https://github.com/tanaos/artifex) to create a task-specific Small Language Model to anonymize data on your local machine, without sending it to third-party APIs. [https://tanaos.com/blog/anonymize-text-locally/](https://tanaos.com/blog/anonymize-text-locally/) # TL;DR Too busy to read the case study? Here's the code-only version: pip install artifex from artifex import Artifex ta = Artifex().text_anonymization print(ta("John Doe lives at 123 Main St, New York. His phone number is (555) 123-4567.")) # >>> ["[MASKED] lives at [MASKED]. His phone number is [MASKED]."]
2025-12-29T10:05:08
https://www.reddit.com/r/LocalLLaMA/comments/1pyikbk/sharing_data_that_may_contain_pii_heres_a/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyikbk
false
null
t3_1pyikbk
/r/LocalLLaMA/comments/1pyikbk/sharing_data_that_may_contain_pii_heres_a/
false
false
self
2
{'enabled': False, 'images': [{'id': 'MEEPvEmBAko9w41fVCzz6jMhod6-p1VtYBttr3-sOvM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MEEPvEmBAko9w41fVCzz6jMhod6-p1VtYBttr3-sOvM.png?width=108&crop=smart&auto=webp&s=2b8b1491883c19044698fad152d6225057435402', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MEEPvEmBAko9w41fVCzz6jMhod6-p1VtYBttr3-sOvM.png?width=216&crop=smart&auto=webp&s=46802c08f86e1d78abc4f2a41eb586e327a74648', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MEEPvEmBAko9w41fVCzz6jMhod6-p1VtYBttr3-sOvM.png?width=320&crop=smart&auto=webp&s=ffd34cfd532b8254218e3bc44c5bbd3ccc4f3627', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MEEPvEmBAko9w41fVCzz6jMhod6-p1VtYBttr3-sOvM.png?width=640&crop=smart&auto=webp&s=d7a084d36eba14c07b562deff72fc52ffd65ec7d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MEEPvEmBAko9w41fVCzz6jMhod6-p1VtYBttr3-sOvM.png?width=960&crop=smart&auto=webp&s=f73ae12eaa354aaf72486142a910448c4da04a4d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MEEPvEmBAko9w41fVCzz6jMhod6-p1VtYBttr3-sOvM.png?width=1080&crop=smart&auto=webp&s=3580a9ae9421f689bcdc99d48a5468a6629ae1f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MEEPvEmBAko9w41fVCzz6jMhod6-p1VtYBttr3-sOvM.png?auto=webp&s=32b136a15ad4f3239ef844d2317f6939b1d5f1c7', 'width': 1200}, 'variants': {}}]}
[Tool] AIStory: An AI Co-Author for Novelists
0
I’ve been a long-time lurker and huge admirer of this community. Like many of you, I’m a writer who’s spent years wrestling with the chaotic early stages of a novel — you know the feeling: * A vivid scene in your head, but no idea where it fits. * A character who feels real, but whose arc is still foggy. * Pages of notes, index cards, and half-baked outlines… but zero forward momentum. I was inspired by Hilary Mantel’s idea of **“growing a book, not writing one”** — letting the story emerge from fragments, not forcing it into a rigid structure. But tools like Word or even Scrivener, while powerful, still feel… *passive*. They hold your notes, but don’t *participate*. So I built something different: [**AIStory**](https://aistory.base44.app/). It’s not another “AI writer” that spits out generic prose. Instead, it’s an **AI co-author** designed to work *with* your messy process: * **Start anywhere**: Jot down a phrase, a scene, a character trait — just like on an index card. AIStory doesn’t demand you begin with Chapter 1. * **Let it connect the dots**: The AI analyzes your fragments and gently suggests links: “This metaphor about ‘broken clocks’ might tie into your protagonist’s backstory. Want to explore that?” * **Grow your structure**: As you add more, it helps you build a dynamic beat sheet and character web — not as a template to fill, but as a living map of your story’s heartbeat. * **Protect your voice**: You can guide the AI with “style anchors” (e.g., “keep it in the tone of Le Guin” or “channel Murakami’s surrealism”), so it enhances, not replaces, your voice. I know this community is (rightfully) skeptical of AI tools that promise “easy books.” **AIStory isn’t about making writing easy — it’s about making the chaos of creation feel less lonely.** I’d be honored if any of you wanted to try it. There’s a free tier, and I’m actively incorporating feedback from professional writers like yourselves. **Link**: [https://aistory.base44.app/](https://aistory.base44.app/) **My ask**: If you give it a shot, I’d genuinely appreciate your blunt, writer-to-writer feedback — good or bad. Thanks for reading, and happy writing! — A fellow novelist & builder
2025-12-29T09:32:51
https://www.reddit.com/r/LocalLLaMA/comments/1pyi0xr/tool_aistory_an_ai_coauthor_for_novelists/
chupei0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyi0xr
false
null
t3_1pyi0xr
/r/LocalLLaMA/comments/1pyi0xr/tool_aistory_an_ai_coauthor_for_novelists/
false
false
self
0
null
Decoding Tokens vs First Token
0
Generating tokens during the decode stage is much faster than producing the first token. Why? That’s where **KV caching** comes in. This figure makes it much easier to understand. https://preview.redd.it/esulhp9jy3ag1.png?width=1150&format=png&auto=webp&s=21ac1d43d26cd2e030d3517e6b400863dff271bf
2025-12-29T09:00:00
https://www.reddit.com/r/LocalLLaMA/comments/1pyhhhb/decoding_tokens_vs_first_token/
Happy-Conversation54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyhhhb
false
null
t3_1pyhhhb
/r/LocalLLaMA/comments/1pyhhhb/decoding_tokens_vs_first_token/
false
false
https://b.thumbs.redditm…cFn34CWEURhg.jpg
0
null
3080 12GB suffices for llama?
0
Just getting into this whole localized LLM's -- doing freelance AI -- company servers handle it -- would like soemthing for overflow tasks just incase built a xeon 2697a (16core 3.6ghz) - 64 GB system ram To run a localized LLM is 3080 12 GB OK? I don't want to break the bank for a 4080. It will be a 450 dollar difference. - I need something that can handle small (100K documents?) batch sizes within a day or so incase i have overflow work. Not sure i can afford a 4080 until they give me more work. Just started the job.
2025-12-29T08:17:17
https://www.reddit.com/r/LocalLLaMA/comments/1pygsa9/3080_12gb_suffices_for_llama/
Ok_Artichoke_783
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pygsa9
false
null
t3_1pygsa9
/r/LocalLLaMA/comments/1pygsa9/3080_12gb_suffices_for_llama/
false
false
self
0
null
[Release] Dingo v2.0 – Open-source AI data quality tool now supports SQL databases, RAG evaluation, and Agent-as-a-Judge hallucination detection!
0
Hi everyone! We’re excited to announce **Dingo v2.0** 🎉 – a comprehensive, open-source data quality evaluation tool built for the LLM era. **What’s new in v2.0?** * **SQL Database Support**: Directly connect to PostgreSQL, MySQL, Doris, etc., and run multi-field quality checks. * **Agent-as-a-Judge (Beta)**: Leverage autonomous agents to evaluate hallucination and factual consistency in your data. * **File Format Flexibility**: Ingest from CSV, Excel, Parquet, JSONL, Hugging Face datasets, and more. * **End-to-End RAG Evaluation**: Assess retrieval relevance, answer faithfulness, and context alignment out of the box. * Plus: Built-in LLM-based metrics (GPT-4o, Deepseek), 20+ heuristic rules, and a visual report dashboard. Dingo is designed to help AI engineers and data teams **catch bad data before it poisons your model** — whether it’s for pretraining, SFT, or RAG applications. * **GitHub**: [https://github.com/MigoXLab/dingo](https://github.com/MigoXLab/dingo) * **Apache 2.0 Licensed** | CLI + SDK + Gradio + MCP Server (IDE integration!) We’d love your feedback, bug reports, or even PRs! 🙌 Thanks for building with us!
2025-12-29T08:01:58
https://www.reddit.com/r/LocalLLaMA/comments/1pygj3k/release_dingo_v20_opensource_ai_data_quality_tool/
chupei0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pygj3k
false
null
t3_1pygj3k
/r/LocalLLaMA/comments/1pygj3k/release_dingo_v20_opensource_ai_data_quality_tool/
false
false
self
0
{'enabled': False, 'images': [{'id': 'BfeNimQqRM5aRhkucVY53VQ2-ZuoKgrz52PwC3MygcA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BfeNimQqRM5aRhkucVY53VQ2-ZuoKgrz52PwC3MygcA.png?width=108&crop=smart&auto=webp&s=2770ab373cba49a7e06bc06b3b93d8a127366066', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BfeNimQqRM5aRhkucVY53VQ2-ZuoKgrz52PwC3MygcA.png?width=216&crop=smart&auto=webp&s=73ff6826d9a079df849ce3456f0a85c9c47905a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BfeNimQqRM5aRhkucVY53VQ2-ZuoKgrz52PwC3MygcA.png?width=320&crop=smart&auto=webp&s=cb3d9c0d83807cb374dacb28cdf0ca1765cebffc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BfeNimQqRM5aRhkucVY53VQ2-ZuoKgrz52PwC3MygcA.png?width=640&crop=smart&auto=webp&s=77f3ced729676cc0021fde4fa8ba600195a922fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BfeNimQqRM5aRhkucVY53VQ2-ZuoKgrz52PwC3MygcA.png?width=960&crop=smart&auto=webp&s=46b8d1b688510b94a99e9fb265cf9b1190425f7c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BfeNimQqRM5aRhkucVY53VQ2-ZuoKgrz52PwC3MygcA.png?width=1080&crop=smart&auto=webp&s=18c8446aa852b3f88699a96802fdff8485e8bc8e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BfeNimQqRM5aRhkucVY53VQ2-ZuoKgrz52PwC3MygcA.png?auto=webp&s=52ed614b1b76c6dafc9646619612adbed7937d2f', 'width': 1200}, 'variants': {}}]}
Tencent just released WeDLM 8B Instruct on Hugging Face
403
Hugging face: [https://huggingface.co/tencent/WeDLM-8B-Instruct](https://huggingface.co/tencent/WeDLM-8B-Instruct) A diffusion language model that runs 3-6× faster than vLLM-optimized Qwen3-8B on math reasoning tasks.
2025-12-29T07:38:43
https://www.reddit.com/gallery/1pyg4yt
Difficult-Cap-7527
reddit.com
1970-01-01T00:00:00
0
{}
1pyg4yt
false
null
t3_1pyg4yt
/r/LocalLLaMA/comments/1pyg4yt/tencent_just_released_wedlm_8b_instruct_on/
false
false
https://b.thumbs.redditm…t02oONhKPwFM.jpg
403
null
Why is sgalng's torch.compile startup so much slower than vLLM?
6
Hi all, I've been testing torch.compile on SGLang with Gemma 3 12B, and noticed some significant startup time differences compared to vLLM. ### What I'm seeing - SGLang without compile: ~1:30 startup - SGLang with compile (bs 1,2,4,8,16): ~6min startup - vLLM with compile enabled (default): ~1min startup I'm getting 5-15% perf gains from compile at lower batch sizes (bs < 16), so I'd like to use it—but the startup cost is pretty rough. ### details - vLLM: ``` vllm serve /root/models/gemma3 \ --tensor-parallel-size 1 \ --max-model-len 2448 \ --gpu-memory-utilization 0.8 \ --max-num-seqs 16 \ --compilation-config '{"cudagraph_capture_sizes": [1,2,4,8,16]}' ``` - sglang: ``` python -m sglang.launch_server \ --model-path /root/models/gemma3 \ --tp 1 \ --context-length 2448 \ --mem-fraction-static 0.8 \ --enable-torch-compile \ --torch-compile-max-bs 16 ``` ### My guess vLLM uses piecewise compilation by default, which is faster than full-graph. In SGLang, compile seems tied to CUDA graph, so piecewise compile only comes with piecewise CUDA graph—whose overhead might negate the compile benefits anyway. I understand "beat torch compile" is the long-term direction(https://github.com/sgl-project/sglang/issues/4748) and compile isn't really the focus right now. But given the gains I'm seeing on some models, I'm curious: **does anyone know what's actually different between vLLM and SGLang's compile implementations here?** Thanks!
2025-12-29T07:20:53
https://www.reddit.com/r/LocalLLaMA/comments/1pyfu01/why_is_sgalngs_torchcompile_startup_so_much/
Inside_Camp870
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyfu01
false
null
t3_1pyfu01
/r/LocalLLaMA/comments/1pyfu01/why_is_sgalngs_torchcompile_startup_so_much/
false
false
self
6
{'enabled': False, 'images': [{'id': 'ulkRwHo-1_Oz4cUSSKB477XFTLXRyF4QTk85h72Tcd0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ulkRwHo-1_Oz4cUSSKB477XFTLXRyF4QTk85h72Tcd0.png?width=108&crop=smart&auto=webp&s=32e297acbd51e031f067d37386372c71a736e281', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ulkRwHo-1_Oz4cUSSKB477XFTLXRyF4QTk85h72Tcd0.png?width=216&crop=smart&auto=webp&s=041beeffd77b5a3133cc0958b7cf96557373e7eb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ulkRwHo-1_Oz4cUSSKB477XFTLXRyF4QTk85h72Tcd0.png?width=320&crop=smart&auto=webp&s=bd645b2d9b27fc619e6f85debe0034670f69d985', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ulkRwHo-1_Oz4cUSSKB477XFTLXRyF4QTk85h72Tcd0.png?width=640&crop=smart&auto=webp&s=d80f0a74025b0a862a061c66f6d6acb25982b7b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ulkRwHo-1_Oz4cUSSKB477XFTLXRyF4QTk85h72Tcd0.png?width=960&crop=smart&auto=webp&s=bbb263e97f3c5ce5bd11cab3cd019d90922ed620', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ulkRwHo-1_Oz4cUSSKB477XFTLXRyF4QTk85h72Tcd0.png?width=1080&crop=smart&auto=webp&s=bae41ea1d15892596863c1415cb146892e23b31f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ulkRwHo-1_Oz4cUSSKB477XFTLXRyF4QTk85h72Tcd0.png?auto=webp&s=3286530277df3c15307e3674525b936ce637b9d3', 'width': 1200}, 'variants': {}}]}
GLM‑4.5‑Air on MacBook Pro prematurely emits EOS token (same issue across llama.cpp, and mlx_lm)
2
Hi everyone, I’ve been using the GLM‑4.5‑Air model on my M4 Max MacBook Pro for a while now and have run into an annoying problem: during generation the model suddenly emits an EOS token (\[EOS\] or similar) out of nowhere, causing the output to be truncated mid‑sentence. This happens even with short prompts and low temperature settings. I’ve tried a few workarounds: Running the model via llama.cpp on Apple Silicon – identical behavior. Using mlx\_lm (the MLX‑based inference library) – again, EOS pops up unexpectedly. It doesn’t seem to be a memory issue or a problem with my prompt length. The truncation happens at the same token position each time for a given prompt. Has anyone else experienced this? Did you find any fixes or workarounds (e.g., adjusting max\_tokens, disabling certain optimizations, patching the model weights, updating libraries)? Any insights, suggestions, or similar experiences would be greatly appreciated. Thanks in advance!
2025-12-29T06:35:11
https://www.reddit.com/r/LocalLLaMA/comments/1pyf19d/glm45air_on_macbook_pro_prematurely_emits_eos/
akirose1004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyf19d
false
null
t3_1pyf19d
/r/LocalLLaMA/comments/1pyf19d/glm45air_on_macbook_pro_prematurely_emits_eos/
false
false
self
2
null
Why does LLama 3.1 give long textbook style answer for simple definition questions?
0
I am using Llama3.1-8b-Instruct inferenced via vllm for my course assistant. When I ask a question in simple language, for instance >what is sunrise and sunset? I get correct answer But if I ask the same question in different format >what is sunrise, sunset? I get a huge para that has little relevance to the query. What can I do to rectify this?
2025-12-29T06:24:26
https://www.reddit.com/r/LocalLLaMA/comments/1pyeud1/why_does_llama_31_give_long_textbook_style_answer/
Dizzy-Watercress-744
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyeud1
false
null
t3_1pyeud1
/r/LocalLLaMA/comments/1pyeud1/why_does_llama_31_give_long_textbook_style_answer/
false
false
self
0
null
AMD AI Max 395 128gb or Mac Studio M2 Ultra 128gb?
19
AMD AI Max 395 128gb or Mac Studio M2 Ultra 128gb? I found both of them used on OfferUp. The Mac Studio is an M2 Ultra 128gb 2TB for $2500. (No warranty) The AMD is an Beelink GTR9 Pro AI Max+ 395 128gb 2TB for $1500. (Probably doesn’t have warranty too) I’m a Mac user by the way. I already own a MacBook Pro M1 Max 64gb 2TB. Need something to run 70b models faster.
2025-12-29T06:19:16
https://www.reddit.com/r/LocalLLaMA/comments/1pyeqpe/amd_ai_max_395_128gb_or_mac_studio_m2_ultra_128gb/
solo_entrepreneur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyeqpe
false
null
t3_1pyeqpe
/r/LocalLLaMA/comments/1pyeqpe/amd_ai_max_395_128gb_or_mac_studio_m2_ultra_128gb/
false
false
self
19
null
Benchmarking local llms for speed with CUDA and vulkan, found an unexpected speedup for select models
62
I was benchmarking my local llm collection to get an idea of tokens rates. I thought it might be interesting to compare CUDA vs Vulkan on my 3080 10GB. As expected, in almost all cases CUDA was the better option as far as token rate However, I found one suprise that affects a small number of models. Disclaimer: take the following results with a pinch of salt. I'm not a statistician nor mathmetician. I have been programming for some decades but this test code is mostly deslopped jive code. YMMV. The main findings is that when running certain models partially offloaded to GPU, some models perform much better on Vulkan than CUDA: - GLM4 9B Q6 had a 2.2x speedup on PP, and 1.7x speedup on TG - Qwen3 8B Q6 had a 1.5x speedup on PP, and 1.1x speedup on PP (meh) - and Ministral3 14B 2512 Q4 had a 4.4x speedup on PP, and a 1.6x speedup on TG --- The following tables only show models that are partially offloaded onto GPU: ### Token generation (tg) - CUDA vs vulkan | Model | CUDA (t/s) | Vulkan (t/s) | Diff (t/s) | Speedup | |-------|------|--------|------|---------| | ERNIE4.5 21B-A3B Q6 | 25.8 | 13.2 | -12.7 | 0.51x | | GLM4 9B Q6 | 25.4 | 44.0 | +18.6 | 1.73x | | Ling-lite-i1 Q6 | 40.4 | 21.6 | -18.9 | 0.53x | | Ministral3 14B 2512 Q4 | 36.1 | 57.1 | +21.0 | 1.58x | | Qwen3 30B-A3B 2507 Q6 | 23.1 | 15.9 | -7.1 | 0.69x | | Qwen3-8B Q6 | 23.7 | 25.8 | +2.1 | 1.09x | | Ring-mini-2.0-i1 Q6 | 104.3 | 61.4 | -42.9 | 0.59x | | Trinity-Mini 26B-A3B Q6 | 30.4 | 22.4 | -8.0 | 0.74x | | granite-4.0-h-small Q4 | 16.4 | 12.9 | -3.5 | 0.79x | | Kanana 1.5 15B-A3B instruct Q6 | 30.6 | 16.3 | -14.3 | 0.53x | | gpt-oss 20B Q6 | 46.1 | 23.4 | -22.7 | 0.51x | ### Prompt processing (pp) - CUDA vs vulkan | Model | CUDA (t/s) | Vulkan (t/s) | Diff (t/s) | Speedup | |-------|------|--------|------|---------| | ERNIE4.5 21B-A3B Q6 | 24.5 | 13.3 | -11.2 | 0.54x | | GLM4 9B Q6 | 34.0 | 75.6 | +41.6 | 2.22x | | Ling-lite-i1 Q6 | 37.0 | 20.2 | -16.8 | 0.55x | | Ministral3 14B 2512 Q4 | 58.1 | 255.4 | +197.2 | 4.39x | | Qwen3 30B-A3B 2507 Q6 | 21.4 | 14.0 | -7.3 | 0.66x | | Qwen3-8B Q6 | 30.3 | 46.0 | +15.8 | 1.52x | | Ring-mini-2.0-i1 Q6 | 88.4 | 55.6 | -32.8 | 0.63x | | Trinity-Mini 26B-A3B Q6 | 28.2 | 20.9 | -7.4 | 0.74x | | granite-4.0-h-small Q4 | 72.3 | 42.5 | -29.8 | 0.59x | | Kanana 1.5 15B-A3B instruct Q6 | 29.1 | 16.3 | -12.8 | 0.56x | | gpt-oss 20B Q6 | 221.9 | 112.1 | -109.8 | 0.51x |
2025-12-29T05:09:41
https://www.reddit.com/r/LocalLLaMA/comments/1pydegt/benchmarking_local_llms_for_speed_with_cuda_and/
Amazing_Athlete_2265
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pydegt
false
null
t3_1pydegt
/r/LocalLLaMA/comments/1pydegt/benchmarking_local_llms_for_speed_with_cuda_and/
false
false
self
62
null
Is there a local alternative to Obsidian + Gemini Cli?
1
I'm using Obsidian to write a game design document. And I use Gemini Cli to take care of the pesky or mundane tasks like finding and replacing a certain keyword, or rewriting certain passages. That means I need a model + software that can read the files on my PC and do magic. I've been looking around a lot, but I couldn't find a solution that would let me do the same with a local model, preferably one that can be run on a laptop. I'll be getting on a flight soon and it would be great if somebody had a suggestion.
2025-12-29T05:00:22
https://www.reddit.com/r/LocalLLaMA/comments/1pyd7h2/is_there_a_local_alternative_to_obsidian_gemini/
StudentFew6429
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyd7h2
false
null
t3_1pyd7h2
/r/LocalLLaMA/comments/1pyd7h2/is_there_a_local_alternative_to_obsidian_gemini/
false
false
self
1
null
Day 21: 21 Days of Building a Small Language Model: Complete Journey Recap
25
No blog today. I created a video instead to recap the journey, just wanted to say a big thank you to everyone for the support. 🙏 Video link: [https://youtu.be/-rzMxb1JhuU](https://youtu.be/-rzMxb1JhuU) I can't believe we've made it to the end together. First, I want to say a massive thank you to everyone who has been following along, reading the blogs, engaging with the content, asking questions, and sharing your own learnings. This journey has been absolutely incredible, and it wouldn't have been the same without your support and engagement. Before we wrap up, I want to wish everyone a very Happy New Year! As we close out this year and begin a new one, I'm excited about what's ahead in the world of language models and AI. Until then, happy building! I’ve added all the links in the first comment.
2025-12-29T04:59:01
https://www.reddit.com/r/LocalLLaMA/comments/1pyd6fk/day_21_21_days_of_building_a_small_language_model/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyd6fk
false
null
t3_1pyd6fk
/r/LocalLLaMA/comments/1pyd6fk/day_21_21_days_of_building_a_small_language_model/
false
false
self
25
{'enabled': False, 'images': [{'id': 'vTfPHjANVAIf_KM7bhBtc9BCSZ0gHzhTDzU2iERL9yU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/vTfPHjANVAIf_KM7bhBtc9BCSZ0gHzhTDzU2iERL9yU.jpeg?width=108&crop=smart&auto=webp&s=6cfd2ba7370ba0449db7de42f99f5f46e7d57855', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/vTfPHjANVAIf_KM7bhBtc9BCSZ0gHzhTDzU2iERL9yU.jpeg?width=216&crop=smart&auto=webp&s=58528357c9d77075432ef4aa00cc4e5adfcf80df', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/vTfPHjANVAIf_KM7bhBtc9BCSZ0gHzhTDzU2iERL9yU.jpeg?width=320&crop=smart&auto=webp&s=86110c79f1b1272644090b5ffc86f37e1260e72a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/vTfPHjANVAIf_KM7bhBtc9BCSZ0gHzhTDzU2iERL9yU.jpeg?auto=webp&s=af7d98523338bdedfc4db4ed53ce35842315dbe1', 'width': 480}, 'variants': {}}]}
Good model/method for asking questions about long documents? (12 GB VRAM, 32 GB RAM)
3
Sorry for the remedial question but I have been playing with this for a while and not had great success. I could use some pointers. My goal is to put one or more long PDF documents into a local LLM, and ask the system questions about the documents, so it can explain concepts to me. If it helps, I am talking specifically about game rules -- RPGs and maybe board games too, though those documents would be shorter. So, yeah, I could just read the documents myself, but I thought it would be fun and faster if I could ask the LLM something like, "explain the basic task resolution rules in this game, using the example of a skilled character trying to pick a difficult lock." The PDFs can be very long, hundreds of pages, but most of that is useless story material and not game rules. If I can get it to work for games, then hopefully I can get it to work for cookbooks, too. I have some very long cookbooks and chatting with them could be a lot more useful than flipping pages. I have been using Anything LLM since it includes some kind of document ingestion, and tried a few models like Gemma 3 12B, Mistral 3 8B, Qwen 3 4B, but didn't love the results. Sometimes, the answers were just... Wrong. But, I also didn't try changing any settings like temperature. This seems like it should be doable at home, but maybe I am off base... My hardware is Windows, 12GB 3080 w/ 32 GB RAM. (and I don't mind waiting for an answer, if it's a high quality answer!) Thanks a bunch if you have any suggestions!
2025-12-29T04:58:56
https://www.reddit.com/r/LocalLLaMA/comments/1pyd6cl/good_modelmethod_for_asking_questions_about_long/
hockey-throwawayy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyd6cl
false
null
t3_1pyd6cl
/r/LocalLLaMA/comments/1pyd6cl/good_modelmethod_for_asking_questions_about_long/
false
false
self
3
null
RTX 6000 Pro + RTX 3090 in one machine?
8
I was just able to get my hands on a RTX 6000 Pro 96gb card, and I currently have two 3090s in my machine. Should I keep one of the 3090s in there or should I just make do with the single 6000? I’m looking to run GPT-OSS at the best possible quality and speed I can.
2025-12-29T04:58:51
https://www.reddit.com/r/LocalLLaMA/comments/1pyd6a6/rtx_6000_pro_rtx_3090_in_one_machine/
az_6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyd6a6
false
null
t3_1pyd6a6
/r/LocalLLaMA/comments/1pyd6a6/rtx_6000_pro_rtx_3090_in_one_machine/
false
false
self
8
null
Reddit, but with multiple LLM agents
2
This is a project I created for fun: https://redditwithagents.vercel.app/ [<screenshot>](https://i.imgur.com/JFMFBNF.png) It's basically a web app that mimic parts of Reddit's UI, allowing you to discuss with LLM agents right in the browswer. All of the LLM API calls happen in the browser as the app does not have a backend. You can also config the app to use your local LLM APIs as well. For example, to use LM Studio, make sure you serve the model locally and checked the two options: "Enable CORS" and "Serve on Local Network" [<image>](https://i.imgur.com/TfzIjl4.png) Then go to the app's settings page, set the following configs: API URL: http://192.168.<whatever>.<your>:1234/v1 API Key: whatever-key-you-set Model: soemthing like openai/gpt-oss-20b
2025-12-29T04:38:52
https://www.reddit.com/r/LocalLLaMA/comments/1pycroe/reddit_but_with_multiple_llm_agents/
bobaburger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pycroe
false
null
t3_1pycroe
/r/LocalLLaMA/comments/1pycroe/reddit_but_with_multiple_llm_agents/
false
false
self
2
{'enabled': False, 'images': [{'id': 'H2bGc8Q0SUcMaVmzBZHr1KyxWILn8Mk_L5WcfAq3yis', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/H2bGc8Q0SUcMaVmzBZHr1KyxWILn8Mk_L5WcfAq3yis.png?width=108&crop=smart&auto=webp&s=5874407daa769b9f0c00f950d6ac2a118863bec1', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/H2bGc8Q0SUcMaVmzBZHr1KyxWILn8Mk_L5WcfAq3yis.png?width=216&crop=smart&auto=webp&s=dd496761409ee289d42a12faac1f18965ae2445d', 'width': 216}, {'height': 209, 'url': 'https://external-preview.redd.it/H2bGc8Q0SUcMaVmzBZHr1KyxWILn8Mk_L5WcfAq3yis.png?width=320&crop=smart&auto=webp&s=89b62f2661f290b948616a2c093473abb2cb5856', 'width': 320}, {'height': 418, 'url': 'https://external-preview.redd.it/H2bGc8Q0SUcMaVmzBZHr1KyxWILn8Mk_L5WcfAq3yis.png?width=640&crop=smart&auto=webp&s=1a896a4c74a2358816d82326c71b5be3da36bf4b', 'width': 640}, {'height': 627, 'url': 'https://external-preview.redd.it/H2bGc8Q0SUcMaVmzBZHr1KyxWILn8Mk_L5WcfAq3yis.png?width=960&crop=smart&auto=webp&s=2ad70d99019edcf2cfe30bdeb4259f00793bbdec', 'width': 960}, {'height': 706, 'url': 'https://external-preview.redd.it/H2bGc8Q0SUcMaVmzBZHr1KyxWILn8Mk_L5WcfAq3yis.png?width=1080&crop=smart&auto=webp&s=a66cb45c01a711d2d4570496f0486d9607c8a82f', 'width': 1080}], 'source': {'height': 1150, 'url': 'https://external-preview.redd.it/H2bGc8Q0SUcMaVmzBZHr1KyxWILn8Mk_L5WcfAq3yis.png?auto=webp&s=ee7ebe240cbb5f7cf3b2910a5692c07077cc2ca1', 'width': 1759}, 'variants': {}}]}
How I scaled my output 20x without hiring more staff (The "GitHub as OS" method)
0
Two years ago, I hit a ceiling. I was spending half my day in meetings and the other half fixing inconsistencies in my team's work. I tried using ChatGPT to speed things up, but I was just pasting code back and forth. It was a 20% boost, not a transformation. Then I realized something: **I was using AI as a chatbot, when I should have been using it as an operating system.** I completely restructured my business workflow (I run a tech/media setup) around a new concept: **Treating GitHub not just as a code repo, but as the "office" where AI agents live and work.** Here is the system that took me from "overwhelmed" to a 20x increase in shipping speed: **1. The "Context-First" Folder Structure** Most people ask ChatGPT to "write a marketing email" or "fix this function" with zero context. Instead, I created "Master Repos" for every domain of my business (Hiring, Financials, Dev, Content). * Inside each folder, I have a [`RULES.md`](http://RULES.md) (the "manager" instructions). * I have [`CONTEXT.md`](http://CONTEXT.md) (the brand voice, the goal, the constraints). * When I need work done, I don't chat. I tell an agent: *"Go to* `/campaigns/Q1` *and execute the next step based on the rules."* **2. The Result: Compression of Time** Because the agent has *all* the context and rules pre-loaded in the file structure, I skip the "prompt engineering" phase. * **Idea to Spec:** Compressed from 3 days to 45 minutes. * **Spec to Prototype:** Compressed from 1 week to 4 hours. Because the agent works *inside* the repo, it creates files, updates logs, and checks its own work against my [`CONSTRAINTS.md`](http://CONSTRAINTS.md) file. **3. Killing the Daily Standup** This was an accidental win. Since agents and humans both log their actions into markdown files in the repo, I don't need standup meetings. I have an agent run a script at 5 PM that reads all the daily logs and generates a "Team Pulse" report. * What shipped? * What broke? * Who is blocked? It’s more accurate than a verbal meeting and takes 0 minutes of my time. **4. The "Feeling" of 20x** It’s hard to measure precise ROI, but the friction of *starting* a new project is gone. I used to hesitate to start a new feature because of the "setup cost." Now, I just create a folder, drop in a [`BRIEF.md`](http://BRIEF.md), and the system starts moving. **Why I'm sharing this:** I see a lot of founders here asking if they should hire a generic "AI Agency" or how to integrate AI. My advice: **Don't buy a tool. Build a file system.** If you organize your business logic into text files, AI becomes an infinite intern force. If you keep it in your head, AI is just a toy. Happy to answer questions on how I structure the [`RULES.md`](http://RULES.md) files or the agent permissions if anyone is interested.
2025-12-29T04:24:15
https://www.reddit.com/r/LocalLLaMA/comments/1pychbx/how_i_scaled_my_output_20x_without_hiring_more/
dipeshsukhani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pychbx
false
null
t3_1pychbx
/r/LocalLLaMA/comments/1pychbx/how_i_scaled_my_output_20x_without_hiring_more/
false
false
self
0
null
I built a local voice assistant that learns new abilities via auto-discovered n8n workflows exposed as tools via MCP (LiveKit + Ollama + n8n)
20
I just released CAAL - a local voice assistant that auto-discovers n8n workflows as tools. Stack: \- Ollama (I'm running Ministral-3:8B) \- LiveKit for WebRTC \- Whisper STT \- Kokoro TTS \- n8n for tools Key feature: Infinite tool expandability through n8n. Add a workflow, Cal learns it. It can even build its own tools on command. Demo Video: [youtube.com/watch?v=Fcn-qq8OiTA](https://www.youtube.com/watch?v=Fcn-qq8OiTA) Code: [github.com/CoreWorxLab/CAAL](https://github.com/coreworxlab/caal) Check it out and let me know what you think.
2025-12-29T03:28:35
https://www.reddit.com/r/LocalLLaMA/comments/1pybbjg/i_built_a_local_voice_assistant_that_learns_new/
CoreWorxLab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pybbjg
false
null
t3_1pybbjg
/r/LocalLLaMA/comments/1pybbjg/i_built_a_local_voice_assistant_that_learns_new/
false
false
self
20
{'enabled': False, 'images': [{'id': 'bG8FCgZo8AnQXj_ERVVPn1JaPVWVDvG8vG7tNGi14vs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/bG8FCgZo8AnQXj_ERVVPn1JaPVWVDvG8vG7tNGi14vs.jpeg?width=108&crop=smart&auto=webp&s=cfe269232c801189ef645a2091b47f54158eea89', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/bG8FCgZo8AnQXj_ERVVPn1JaPVWVDvG8vG7tNGi14vs.jpeg?width=216&crop=smart&auto=webp&s=b57c6253c3243e43a1cf1627c7c157803ee1b056', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/bG8FCgZo8AnQXj_ERVVPn1JaPVWVDvG8vG7tNGi14vs.jpeg?width=320&crop=smart&auto=webp&s=7238581d65fc03cab13bb2236ca87290ab76ddf1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/bG8FCgZo8AnQXj_ERVVPn1JaPVWVDvG8vG7tNGi14vs.jpeg?auto=webp&s=2f3632fb44dc5d7ebd61ce5a04adab0343dc6a2f', 'width': 480}, 'variants': {}}]}
Meta released RPG, a research plan generation dataset on Hugging Face
253
22k tasks spanning ML, Arxiv and PubMed, complete with evaluation rubrics and Llama-4 reference solutions for training AI co-scientists
2025-12-29T02:58:09
https://huggingface.co/datasets/facebook/research-plan-gen
Difficult-Cap-7527
huggingface.co
1970-01-01T00:00:00
0
{}
1pyao6g
false
null
t3_1pyao6g
/r/LocalLLaMA/comments/1pyao6g/meta_released_rpg_a_research_plan_generation/
false
false
default
253
{'enabled': False, 'images': [{'id': '_Vt3gVwDJIJ3tdTBBf0E6Y1zVMQL8lOjQzN3Hnt2brY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_Vt3gVwDJIJ3tdTBBf0E6Y1zVMQL8lOjQzN3Hnt2brY.png?width=108&crop=smart&auto=webp&s=3aa68d0edd234ee9eb73215e2e98ce8f69cbd55c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_Vt3gVwDJIJ3tdTBBf0E6Y1zVMQL8lOjQzN3Hnt2brY.png?width=216&crop=smart&auto=webp&s=30e94c470e10175897426b4866964189604a01bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_Vt3gVwDJIJ3tdTBBf0E6Y1zVMQL8lOjQzN3Hnt2brY.png?width=320&crop=smart&auto=webp&s=e6b18685364c2953606faac06a11bb2f596cc19d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_Vt3gVwDJIJ3tdTBBf0E6Y1zVMQL8lOjQzN3Hnt2brY.png?width=640&crop=smart&auto=webp&s=062e0684599eddb333c0833911a29ff674bc632c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_Vt3gVwDJIJ3tdTBBf0E6Y1zVMQL8lOjQzN3Hnt2brY.png?width=960&crop=smart&auto=webp&s=99fcc5d82302169b32d70b646bdbaafc1d9d5e53', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_Vt3gVwDJIJ3tdTBBf0E6Y1zVMQL8lOjQzN3Hnt2brY.png?width=1080&crop=smart&auto=webp&s=2fe66bc57d6fa8b7c46c2bb6c741a30bdac2293b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_Vt3gVwDJIJ3tdTBBf0E6Y1zVMQL8lOjQzN3Hnt2brY.png?auto=webp&s=5388ce267cea532dcee481be57d348cee0ee9df4', 'width': 1200}, 'variants': {}}]}
Why I'm building my own CLIs for agents
0
2025-12-29T02:38:00
https://martinalderson.com/posts/why-im-building-my-own-clis-for-agents/
malderson
martinalderson.com
1970-01-01T00:00:00
0
{}
1pya8qo
false
null
t3_1pya8qo
/r/LocalLLaMA/comments/1pya8qo/why_im_building_my_own_clis_for_agents/
false
false
default
0
{'enabled': False, 'images': [{'id': 'sjUFq1GezLXYqu_22z7QH3_zmuL4L9Fv1r-bqJ5s-Ek', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sjUFq1GezLXYqu_22z7QH3_zmuL4L9Fv1r-bqJ5s-Ek.png?width=108&crop=smart&auto=webp&s=0f1ac9a1d23f518ccce7c90a7c5e1f17aa77f4e9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sjUFq1GezLXYqu_22z7QH3_zmuL4L9Fv1r-bqJ5s-Ek.png?width=216&crop=smart&auto=webp&s=3da878eac87c80f5b9ca635c9c4a72398fa69e5a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sjUFq1GezLXYqu_22z7QH3_zmuL4L9Fv1r-bqJ5s-Ek.png?width=320&crop=smart&auto=webp&s=9eea36d4a9645d5fe5316568a764cecbbf9d1284', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sjUFq1GezLXYqu_22z7QH3_zmuL4L9Fv1r-bqJ5s-Ek.png?width=640&crop=smart&auto=webp&s=4984a9d87cb58c14f6f484363f162911c5a8efcd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sjUFq1GezLXYqu_22z7QH3_zmuL4L9Fv1r-bqJ5s-Ek.png?width=960&crop=smart&auto=webp&s=cb54c1bad5881a8329c365f127f74560a827dba8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sjUFq1GezLXYqu_22z7QH3_zmuL4L9Fv1r-bqJ5s-Ek.png?width=1080&crop=smart&auto=webp&s=dba96bb944e45c0589fac841f0939e0468f33023', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sjUFq1GezLXYqu_22z7QH3_zmuL4L9Fv1r-bqJ5s-Ek.png?auto=webp&s=b256a6fd74f6f8a9e3b237d518e048f0525508c6', 'width': 1200}, 'variants': {}}]}
4x V100 32gb vs 2x4x V100 16gb (RPC) in MiniMax M2.1
1
tldr: there is a reason for the +300% premium on retail price of V100 32gb vs V100 16gb. https://preview.redd.it/qv8p6q1312ag1.png?width=1041&format=png&auto=webp&s=4e71c28705b7e8aa5cd69b4cc044d4db4857bc4d **Cost analysis:** **Capex**: There is an approximately $2800-4000 acquistion cost for an additional Quad SXM2 capable server. **Opex**: the 2nd server adds a 400w idle and an additional 200w draw during inference, which amounts to $73/mth upto $109.50 on 25 cents KwH. **The experiment:** 128gb VRAM vs 128gb VRAM. Single Server Quad V100 32gb (4 GPUs) vs Two nodes RPC on 10gbe each with Quad V100 16gb ( 8GPUs). 1st Test- small prompt - Single Server Quad V100 32gb slot launch_slot_: id 2 | task 317 | processing task slot update_slots: id 2 | task 317 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 64 slot update_slots: id 2 | task 317 | n_tokens = 31, memory_seq_rm [31, end) slot update_slots: id 2 | task 317 | prompt processing progress, n_tokens = 64, batch.n_tokens = 33, progress = 1.000000 slot update_slots: id 2 | task 317 | prompt done, n_tokens = 64, batch.n_tokens = 33 slot print_timing: id 2 | task 317 | prompt eval time = 25.61 ms / 33 tokens ( 0.78 ms per token, 1288.31 tokens per second) eval time = 171285.20 ms / 6219 tokens ( 27.54 ms per token, 36.31 tokens per second) total time = 171310.82 ms / 6252 tokens slot release: id 2 | task 317 | stop processing: n_tokens = 6282, truncated = 0 2nd Test- medium prompt - Single Server Quad V100 32gb slot launch_slot_: id 1 | task 6537 | processing task slot update_slots: id 1 | task 6537 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 468 slot update_slots: id 1 | task 6537 | n_tokens = 0, memory_seq_rm [0, end) slot update_slots: id 1 | task 6537 | prompt processing progress, n_tokens = 468, batch.n_tokens = 468, progress = 1.000000 slot update_slots: id 1 | task 6537 | prompt done, n_tokens = 468, batch.n_tokens = 468 slot print_timing: id 1 | task 6537 | prompt eval time = 2779.12 ms / 468 tokens ( 5.94 ms per token, 168.40 tokens per second) eval time = 90983.24 ms / 2873 tokens ( 31.67 ms per token, 31.58 tokens per second) total time = 93762.36 ms / 3341 tokens slot release: id 1 | task 6537 | stop processing: n_tokens = 3340, truncated = 0 3rd Test- medium/large prompt - Single Server Quad V100 32gb low slot launch_slot_: id 3 | task 9704 | processing task slot update_slots: id 3 | task 9704 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 1435 slot update_slots: id 3 | task 9704 | n_tokens = 31, memory_seq_rm [31, end) slot update_slots: id 3 | task 9704 | prompt processing progress, n_tokens = 1435, batch.n_tokens = 1404, progress = 1.000000 slot update_slots: id 3 | task 9704 | prompt done, n_tokens = 1435, batch.n_tokens = 1404 slot print_timing: id 3 | task 9704 | prompt eval time = 7135.25 ms / 1404 tokens ( 5.08 ms per token, 196.77 tokens per second) eval time = 78815.35 ms / 2081 tokens ( 37.87 ms per token, 26.40 tokens per second) total time = 85950.60 ms / 3485 tokens slot release: id 3 | task 9704 | stop processing: n_tokens = 3515, truncated = 0 high slot launch_slot_: id 3 | task 0 | processing task slot update_slots: id 3 | task 0 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 1435 slot update_slots: id 3 | task 0 | n_tokens = 0, memory_seq_rm [0, end) slot update_slots: id 3 | task 0 | prompt processing progress, n_tokens = 1435, batch.n_tokens = 1435, progress = 1.000000 slot update_slots: id 3 | task 0 | prompt done, n_tokens = 1435, batch.n_tokens = 1435 slot print_timing: id 3 | task 0 | prompt eval time = 6836.98 ms / 1435 tokens ( 4.76 ms per token, 209.89 tokens per second) eval time = 49533.75 ms / 1930 tokens ( 25.67 ms per token, 38.96 tokens per second) total time = 56370.74 ms / 3365 tokens ================== 1st Test- small prompt - Two nodes RPC each with Quad V100 16gb ( 8GPUs) slot launch_slot_: id 3 | task 2177 | processing task slot update_slots: id 3 | task 2177 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 64 slot update_slots: id 3 | task 2177 | n_tokens = 31, memory_seq_rm [31, end) slot update_slots: id 3 | task 2177 | prompt processing progress, n_tokens = 64, batch.n_tokens = 33, progress = 1.000000 slot update_slots: id 3 | task 2177 | prompt done, n_tokens = 64, batch.n_tokens = 33 slot print_timing: id 3 | task 2177 | prompt eval time = 741.63 ms / 33 tokens ( 22.47 ms per token, 44.50 tokens per second) eval time = 166216.98 ms / 4819 tokens ( 34.49 ms per token, 28.99 tokens per second) total time = 166958.61 ms / 4852 tokens slot release: id 3 | task 2177 | stop processing: n_tokens = 4882, truncated = 0 2nd Test- medium prompt - Two nodes RPC each with Quad V100 16gb ( 8GPUs) slot launch_slot_: id 1 | task 7291 | processing task slot update_slots: id 1 | task 7291 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 468 slot update_slots: id 1 | task 7291 | n_tokens = 0, memory_seq_rm [0, end) slot update_slots: id 1 | task 7291 | prompt processing progress, n_tokens = 468, batch.n_tokens = 468, progress = 1.000000 slot update_slots: id 1 | task 7291 | prompt done, n_tokens = 468, batch.n_tokens = 468 slot print_timing: id 1 | task 7291 | prompt eval time = 3412.67 ms / 468 tokens ( 7.29 ms per token, 137.14 tokens per second) eval time = 97740.80 ms / 2478 tokens ( 39.44 ms per token, 25.35 tokens per second) total time = 101153.47 ms / 2946 tokens slot release: id 1 | task 7291 | stop processing: n_tokens = 2945, truncated = 0 3rd Test- medium/large prompt - Two nodes RPC each with Quad V100 16gb ( 8GPUs) low slot launch_slot_: id 2 | task 11895 | processing task slot update_slots: id 2 | task 11895 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 1435 slot update_slots: id 2 | task 11895 | n_tokens = 32, memory_seq_rm [32, end) slot update_slots: id 2 | task 11895 | prompt processing progress, n_tokens = 1435, batch.n_tokens = 1403, progress = 1.000000 slot update_slots: id 2 | task 11895 | prompt done, n_tokens = 1435, batch.n_tokens = 1403 slot print_timing: id 2 | task 11895 | prompt eval time = 7746.55 ms / 1403 tokens ( 5.52 ms per token, 181.11 tokens per second) eval time = 89200.25 ms / 2035 tokens ( 43.83 ms per token, 22.81 tokens per second) total time = 96946.79 ms / 3438 tokens slot release: id 2 | task 11895 | stop processing: n_tokens = 3469, truncated = 0 high slot launch_slot_: id 3 | task 0 | processing task slot update_slots: id 3 | task 0 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 1435 slot update_slots: id 3 | task 0 | n_tokens = 0, memory_seq_rm [0, end) slot update_slots: id 3 | task 0 | prompt processing progress, n_tokens = 1435, batch.n_tokens = 1435, progress = 1.000000 slot update_slots: id 3 | task 0 | prompt done, n_tokens = 1435, batch.n_tokens = 1435 slot print_timing: id 3 | task 0 | prompt eval time = 7808.48 ms / 1435 tokens ( 5.44 ms per token, 183.77 tokens per second) eval time = 75172.41 ms / 2176 tokens ( 34.55 ms per token, 28.95 tokens per second) total time = 82980.89 ms / 3611 tokens slot release: id 3 | task 0 | stop processing: n_tokens = 3610, truncated = 0
2025-12-29T02:31:50
https://www.reddit.com/r/LocalLLaMA/comments/1pya3v8/4x_v100_32gb_vs_2x4x_v100_16gb_rpc_in_minimax_m21/
MachineZer0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pya3v8
false
null
t3_1pya3v8
/r/LocalLLaMA/comments/1pya3v8/4x_v100_32gb_vs_2x4x_v100_16gb_rpc_in_minimax_m21/
false
false
https://b.thumbs.redditm…4o-H3aSHnDCk.jpg
1
null
Llama 3.2 3B running on my Geekom IT15.
1
My IT15 has a Intel core ultra 9 285h with 32gb of ram. I am also running Home Assistant on this machine currently. I am still testing things out. I gave this container 6 cores and 16gb and passed the iGpu through. I am going to try other models as well but I am happy that I got it working at all. I am open to suggestions for other models to try out with this machine.
2025-12-29T02:13:23
https://i.redd.it/v7z4fdqbx1ag1.png
mickeybob00
i.redd.it
1970-01-01T00:00:00
0
{}
1py9p4r
false
null
t3_1py9p4r
/r/LocalLLaMA/comments/1py9p4r/llama_32_3b_running_on_my_geekom_it15/
false
false
default
1
{'enabled': True, 'images': [{'id': 'v7z4fdqbx1ag1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/v7z4fdqbx1ag1.png?width=108&crop=smart&auto=webp&s=534dcffe1b2c86dd7ce737ea84d7cb1b0371f970', 'width': 108}, {'height': 77, 'url': 'https://preview.redd.it/v7z4fdqbx1ag1.png?width=216&crop=smart&auto=webp&s=4cc56f636ae45ffc823d32c833df3d1955264d8e', 'width': 216}, {'height': 115, 'url': 'https://preview.redd.it/v7z4fdqbx1ag1.png?width=320&crop=smart&auto=webp&s=2487b263462a392fa184782f1dd0b5d39e1eb86b', 'width': 320}, {'height': 230, 'url': 'https://preview.redd.it/v7z4fdqbx1ag1.png?width=640&crop=smart&auto=webp&s=3801c6406b04354d48845d2842569cd992ba4b01', 'width': 640}, {'height': 346, 'url': 'https://preview.redd.it/v7z4fdqbx1ag1.png?width=960&crop=smart&auto=webp&s=23f93640cc12290251546c4f1e74b4ea40c4b2e3', 'width': 960}, {'height': 389, 'url': 'https://preview.redd.it/v7z4fdqbx1ag1.png?width=1080&crop=smart&auto=webp&s=2e4f1bcc327f63fcc14b5519be7f1c414f141cd7', 'width': 1080}], 'source': {'height': 576, 'url': 'https://preview.redd.it/v7z4fdqbx1ag1.png?auto=webp&s=0442c0abd7c913461620ff5e6bf62c95965efe34', 'width': 1598}, 'variants': {}}]}
Jetbrains AI users, what's your configuration with local models?
3
I am trying this configuration, but I would like to know what are you guys using for each category: https://preview.redd.it/8v2fr9a5x1ag1.png?width=710&format=png&auto=webp&s=d6d45d1a3b198b075a659b4e16765aca71b541dc
2025-12-29T02:08:25
https://www.reddit.com/r/LocalLLaMA/comments/1py9l67/jetbrains_ai_users_whats_your_configuration_with/
robertpro01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py9l67
false
null
t3_1py9l67
/r/LocalLLaMA/comments/1py9l67/jetbrains_ai_users_whats_your_configuration_with/
false
false
https://b.thumbs.redditm…pAGVnPp5oCcU.jpg
3
null
anyone have experience with turn detection for communication between humans and AI agents?
1
What are some of the best turn detection models that can solve these problems? 1. a transcribed utterance that is syntactically incomplete but is a continuation from a previous utterance that is also syntactically incomplete. example- turn1:I went to the , turn2: library today. 2. a transcribed utterance that is syntactically incomplete but is a continuation from previous utterance that is syntactically complete example - turn1: I want coffee, turn2:and I... 3. a transcribed utterance that is syntactically complete , which is a continuation from a previous utterance that is syntactically incomplete using correction markers for false starts and repair terms like 'actually', 'wait', etc example - turn1: I want this, turn2:actually I want this other one 4. a transcribed utterance that is syntactically complete and is a continuation from a previous utterance that is syntactically complete. example - turn1: I'm going to the store, turn2: I will buy x, y and z. 5. problem 1 but it is not a continuation. it is a new topic example- turn1: I went to the, turn2: today's weather is 6. problem 2 but it is not a continuation. it is a new topic example- turn1: I want coffee, turn2:skydiving is... 7. and 8. similar to 5. and 6. but the turn2 utterance is syntactically complete 9. directly responding to ai agent response. turn1: what is the weather today?, agent: where are you located?, turn2: I'm here in X city. turn 2 and turn 1 are not related but turn2 is related to the agent's clarification question. 10. more cases that I won't list here
2025-12-29T01:47:29
https://www.reddit.com/r/LocalLLaMA/comments/1py94k8/anyone_have_experience_with_turn_detection_for/
IcyMushroom4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py94k8
false
null
t3_1py94k8
/r/LocalLLaMA/comments/1py94k8/anyone_have_experience_with_turn_detection_for/
false
false
self
1
null
[R] Progressive LoRA Merging - complete model identity replacement on consumer hardware
0
I'm here to democratize models - now you can create them with fine tuning & replace identity After more than 3 months into this, I finally can replace any model wieghts completly but preserve the structure. This means I can take Qwen3 for example, reuse the full millions of dollars used to train it, for a few bucks on cheap budget or collab. Its working perfectly fine for me on my main job, so don't complain about the paper being AI generated or whatever, I am too busy to even check it but you will get the concept and the idea from it plus I am releasing this for free so take it and enjoy it. Just remember I am the first to came up with this. How it works in my own words no AI slop: 1 - You finetune lora on top of a model, at the end you merge lora + base model 2- You merge the lora permenantly into the basemodel. 3- You use the new merged as a base model + train a fresh new lora on top with your dataset. 4- keep repeating non stop. This way we take advantage of catastrophic forgetting to rewrite the whole model way of reasoning, instead of just finetune existing weights. The more epochs you do the more you are preserving the model lingual knowdlge which they probably spent millions to to achieve brute forcing training from scratch, and you will rewrite all of it's self identity, way of thinking, this will be perfect distillation unlike normal training. It can now say I am X model instead of saying I am Qwen or the bullshit backed in, in fact it might forget about qwen entierly. Get the shit here [https://huggingface.co/hitonet/progressive-lora-merging](https://huggingface.co/hitonet/progressive-lora-merging) Thank me later, I spent restless nights and a lot of HP smoking cigs to achieve this. by not following any tutorials or copy paste. you got it the easy way. DM me for job offers. u/article{drissi2024bodysnatching, title={Body Snatching: Complete Model Identity Replacement via Progressive LoRA Merging}, author={Drissi, Ouissam Said}, year={2024}, url={[https://github.com/antibitcoin/progressive-lora-merging}](https://github.com/antibitcoin/progressive-lora-merging}) } Regards
2025-12-29T01:40:27
https://www.reddit.com/r/LocalLLaMA/comments/1py8yyw/r_progressive_lora_merging_complete_model/
TastyWriting8360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py8yyw
false
null
t3_1py8yyw
/r/LocalLLaMA/comments/1py8yyw/r_progressive_lora_merging_complete_model/
false
false
self
0
{'enabled': False, 'images': [{'id': '2Jj3SaFZmxLT5kl7Ibnw1B2IZFLYvLuvQi3F1jsmDF8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2Jj3SaFZmxLT5kl7Ibnw1B2IZFLYvLuvQi3F1jsmDF8.png?width=108&crop=smart&auto=webp&s=26c2d48ae041c5929d3d4190817e46d3fc4869b3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2Jj3SaFZmxLT5kl7Ibnw1B2IZFLYvLuvQi3F1jsmDF8.png?width=216&crop=smart&auto=webp&s=d8c6f0bd35b0dc5823a6dde7fac2d3c21c7d858c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2Jj3SaFZmxLT5kl7Ibnw1B2IZFLYvLuvQi3F1jsmDF8.png?width=320&crop=smart&auto=webp&s=26de4518cb3fc345166eaf0b41b9c9c0e071cb67', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2Jj3SaFZmxLT5kl7Ibnw1B2IZFLYvLuvQi3F1jsmDF8.png?width=640&crop=smart&auto=webp&s=d2d3d3f175180ef9abd7bd4b175f8064f76001e8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2Jj3SaFZmxLT5kl7Ibnw1B2IZFLYvLuvQi3F1jsmDF8.png?width=960&crop=smart&auto=webp&s=1424ad36975798540933561f3e86f4e31cd362f5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2Jj3SaFZmxLT5kl7Ibnw1B2IZFLYvLuvQi3F1jsmDF8.png?width=1080&crop=smart&auto=webp&s=36f0782f61dadb7f99f338b4dc6e350675fbc412', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2Jj3SaFZmxLT5kl7Ibnw1B2IZFLYvLuvQi3F1jsmDF8.png?auto=webp&s=e45d80b06f480e62f2d66fe9e0e9b30221fd5299', 'width': 1200}, 'variants': {}}]}
Is it feasible (and beneficial) to apply NVFP4 quantization to KV Cache on Blackwell?
9
Theoretically, NVFP4 (E2M1 format) should be superior to INT4 for activations. Its logarithmic distribution naturally fits the "long-tailed" nature of KV values (preserving small details while handling outliers via the exponent). Since Blackwell Tensor Cores support native FP4 compute, could we store KV Cache in NVFP4 and perform the Attention operation directly (or with minimal dequantization overhead)?🤔
2025-12-29T01:03:59
https://www.reddit.com/r/LocalLLaMA/comments/1py85zb/is_it_feasible_and_beneficial_to_apply_nvfp4/
No-Bag5084
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py85zb
false
null
t3_1py85zb
/r/LocalLLaMA/comments/1py85zb/is_it_feasible_and_beneficial_to_apply_nvfp4/
false
false
self
9
null
LLaMA-3.2-3B fMRI-style probing: discovering a bidirectional “constrained ↔ expressive” control direction
16
I’ve been building a small interpretability tool that does fMRI-style visualization and *live hidden-state intervention* on local models. While exploring LLaMA-3.2-3B, I noticed one hidden dimension (layer 20, dim \~3039) that consistently stood out across prompts and timesteps. I then set up a simple Gradio UI to **poke that single dimension during inference** (via a forward hook) and swept epsilon in both directions. What I found is that this dimension appears to act as a **global control axis** rather than encoding specific semantic content. # Observed behavior (consistent across prompts) By varying epsilon on this one dim: * **Negative ε**: * outputs become restrained, procedural, and instruction-faithful * explanations stick closely to canonical structure * less editorializing or extrapolation * **Positive ε**: * outputs become more verbose, narrative, and speculative * the model adds framing, qualifiers, and audience modeling * responses feel “less reined in” even on factual prompts Crucially, this holds across: * conversational prompts * factual prompts (chess rules, photosynthesis) * recommendation prompts The effect is smooth, monotonic, and bidirectional. # Methods (brief) * Model: LLaMA-3.2-3B-Instruct * Intervention: single hidden dimension modified during forward pass * No gradients, no finetuning, no logit biasing * Visualization frontend in Godot; inference + hooks in PyTorch * All tests run locally; prompts trivially swappable Happy to share more details if folks are interested. # Why I’m posting I’m still very much in the *exploratory* phase — the goal right now is to: * identify stable control directions * understand their scope * design better tests to separate correlation from load-bearing causality If people have suggestions for additional sanity checks, ablations, or related work I should read, I’m all ears. TIME FOR SCIENCE 🧪 [Dim 3039 just begging to get poked.](https://preview.redd.it/ppyvusqvg1ag1.png?width=1858&format=png&auto=webp&s=1eb8a6b97091ec0a0bba13e4aad2e00524a826f6) https://preview.redd.it/w04unfb1h1ag1.png?width=1526&format=png&auto=webp&s=6f4eeb8b341a12d59173e5338a4ed58db3585500 https://preview.redd.it/rzioukb1h1ag1.png?width=1526&format=png&auto=webp&s=ae4d71911b14a68805069101be779819d8c97d22 https://preview.redd.it/eo1vyeb1h1ag1.png?width=1526&format=png&auto=webp&s=bdd3f9c990c07bc00f7bf55850fffa2b5934b54f https://preview.redd.it/tangtlb1h1ag1.png?width=1526&format=png&auto=webp&s=7b86b9c5ae15e3b9d413ddf80f607ec335855436 https://preview.redd.it/38fbskb1h1ag1.png?width=1526&format=png&auto=webp&s=3039a2ef5443fe71cfcce0069f74432b92340a2e https://preview.redd.it/qj2ltnb1h1ag1.png?width=1526&format=png&auto=webp&s=5bf14c7ca6281a4a4496d39244f4060997715734 https://preview.redd.it/ro7belb1h1ag1.png?width=1526&format=png&auto=webp&s=351f8c93a253fda22a44a4689d97068e434e5c5c https://preview.redd.it/305i2mb1h1ag1.png?width=1526&format=png&auto=webp&s=9dfb05fbed2f9104918898be72fff9663890fc26
2025-12-29T00:46:06
https://www.reddit.com/r/LocalLLaMA/comments/1py7ren/llama323b_fmristyle_probing_discovering_a/
Due_Hunter_4891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py7ren
false
null
t3_1py7ren
/r/LocalLLaMA/comments/1py7ren/llama323b_fmristyle_probing_discovering_a/
false
false
https://b.thumbs.redditm…ZEck9seQt-zA.jpg
16
null
Is there any way to use my GPUs?
9
Hi all, Over the last 5 or 6 years, I’ve managed to get a number of second hand GPUs for free from friends when they upgraded theirs. I now have; 3090 (used on my own gaming pc) 2060 2080s 1080ti x2 1080 I also have an opportunity to acquire a very cheap 3070. Is there any effective way to use these? I currently run Ollama on my main PC with Qwen32b, and might look into WSL later on, but for the rest of them, is there any use in this space or is it not worth the hassle? Thank you
2025-12-29T00:45:37
https://www.reddit.com/r/LocalLLaMA/comments/1py7r06/is_there_any_way_to_use_my_gpus/
Reasonable-Gold4971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py7r06
false
null
t3_1py7r06
/r/LocalLLaMA/comments/1py7r06/is_there_any_way_to_use_my_gpus/
false
false
self
9
null
Minimum viable VRAM for LLM (homelab only)
0
Hi, all. Essentially what the post is. I have a homelab that I run a number of services on and would love to self-host an LLM, I’m just not sure what hardware would be sufficient. Currently, it’s got a 3900X and 32GB DDR4 RAM, running OpenMediaVault. I have a Sapphire Pulse 5700XT that has literally only ever been used for HDMI out. My initial plan was to upgrade my gaming PC (3080 FE 10GB) and move that GPU into my lab. Then I’d only be buying one card. Conventional wisdom makes it sound like the 10GB of VRAM won’t really be viable, so I’m doing more homework. People of course suggested getting a 3090 or 4090, but I also saw the 5060 Ti comes in a 16GB variant? So I’m just trying to figure out what makes sense. If I can make the 3080 work, that would be ideal (for my finances). I was also planning on running Ollama through Docker, but in my quick perusal of this sub, maybe there’s a better way? And if an AMD card is more viable (7900?) for the 24GB of VRAM, I’m open to that, too. Very, very inexperienced with the self-hosting of LLMs, but learning as quickly as I can. Thanks.
2025-12-29T00:12:51
https://www.reddit.com/r/LocalLLaMA/comments/1py6zws/minimum_viable_vram_for_llm_homelab_only/
SoMuchLasagna
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py6zws
false
null
t3_1py6zws
/r/LocalLLaMA/comments/1py6zws/minimum_viable_vram_for_llm_homelab_only/
false
false
self
0
null
RTX 2000 Pro Blackwell 16GB... does anyone of you have it already?
0
I want to know how much of an improvement it is compared to 2000 Ada 16GB but I can't fins any reviews
2025-12-28T23:58:46
https://www.reddit.com/r/LocalLLaMA/comments/1py6ntm/rtx_2000_pro_blackwell_16gb_does_anyone_of_you/
Wonderful-Lack3846
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py6ntm
false
null
t3_1py6ntm
/r/LocalLLaMA/comments/1py6ntm/rtx_2000_pro_blackwell_16gb_does_anyone_of_you/
false
false
self
0
null
LLM Cluster with Routing for Prompt processing
2
Is there documentation (or is it even possible), to use llama.cpp or vLLM, to route processing to device like the DGX Spark and the text generation to something like a Mac Studio to get the best of both machines?
2025-12-28T23:44:06
https://www.reddit.com/r/LocalLLaMA/comments/1py6bd5/llm_cluster_with_routing_for_prompt_processing/
Every-Employment-357
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py6bd5
false
null
t3_1py6bd5
/r/LocalLLaMA/comments/1py6bd5/llm_cluster_with_routing_for_prompt_processing/
false
false
self
2
null
Is Q8 KV cache alright for vision models and high context
33
What has your experience been with using q8 KV cache and a vision model? GLM4.6 V, qwen3VL… Would you say it’s good enough or does it ruin outputs?
2025-12-28T22:45:41
https://www.reddit.com/r/LocalLLaMA/comments/1py4xp6/is_q8_kv_cache_alright_for_vision_models_and_high/
Adventurous-Gold6413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py4xp6
false
null
t3_1py4xp6
/r/LocalLLaMA/comments/1py4xp6/is_q8_kv_cache_alright_for_vision_models_and_high/
false
false
self
33
null
Self hosting LLM on multi CPU + sys ram combo
13
I realised I got two socket supermicro board with two xeon 2690 v3 lying around. I could buy a bunch of ram for it cause it uses 2133 mhz RAM and the used prices to does are not bad. I was thinking about buying a bunch more sys ram to it and self host larger LLMs, maybe in the future I could run some good models on it. Do you think it would be able to run in a meaninful speed with lets say 256 gigs of RAM large open source models? Does anyone have experience with this? What kind of speeds should I expect from it, would it be worthwhile? Maybe if there are better open source models I could also run those for example I could maybe run qwen3:235b on it for example.
2025-12-28T22:34:22
https://www.reddit.com/r/LocalLLaMA/comments/1py4nuu/self_hosting_llm_on_multi_cpu_sys_ram_combo/
goodmenthelastwaveby
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py4nuu
false
null
t3_1py4nuu
/r/LocalLLaMA/comments/1py4nuu/self_hosting_llm_on_multi_cpu_sys_ram_combo/
false
false
self
13
null
Math tutor
0
Looking for a good dedicated math and python tutor to run ollamma
2025-12-28T22:12:33
https://www.reddit.com/r/LocalLLaMA/comments/1py44y0/math_tutor/
Ok_Buddy_2096
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py44y0
false
null
t3_1py44y0
/r/LocalLLaMA/comments/1py44y0/math_tutor/
false
false
self
0
null
He’s local llm for computer use agent.
1
[removed]
2025-12-28T22:05:37
https://www.reddit.com/r/LocalLLaMA/comments/1py3yqh/hes_local_llm_for_computer_use_agent/
Guilty_Nerve5608
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py3yqh
false
null
t3_1py3yqh
/r/LocalLLaMA/comments/1py3yqh/hes_local_llm_for_computer_use_agent/
false
false
self
1
null
Do junior devs need local LLMs? Or it’s better to stick to closed source models
0
Hi everyone I’m a junior AI full stack dev Mostly working on LLM based solutions more towards a work of SWE My day to day work is with Python, RESTApis and microservices given how good the new Claude models are why should someone go and spend upwards of 5K for GPUs to run these open source models which cannot beat the closed source one Imo I feel it’s better to stick to closed source models pay as you go and whenever a new better model comes out it’s a simple replacement Far cheaper than building a custom rig
2025-12-28T22:05:11
https://www.reddit.com/r/LocalLLaMA/comments/1py3yc3/do_junior_devs_need_local_llms_or_its_better_to/
Effective-Yam-7656
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py3yc3
false
null
t3_1py3yc3
/r/LocalLLaMA/comments/1py3yc3/do_junior_devs_need_local_llms_or_its_better_to/
false
false
self
0
null
Securing MCP in production
0
Just joined a company using MCP at scale. I'm building our threat model. I know about indirect injection and unauthorized tool use, but I'm looking for the "gotchas." For those running MCP in enterprise environments: What is the security issue that actually gives you headaches?
2025-12-28T22:01:10
https://www.reddit.com/r/LocalLLaMA/comments/1py3uru/securing_mcp_in_production/
Glass_Guitar1959
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py3uru
false
null
t3_1py3uru
/r/LocalLLaMA/comments/1py3uru/securing_mcp_in_production/
false
false
self
0
null
Need Help: :Entry Triple GPU System for Local LLM
3
Okay, here's my situation: Today, I just got my hands on these 2x MSI RTX 3090 Gaming X Trio 24GB with the option to buy a 3rd one (@ $450). I also have a Zotac Gaming 3090 Trinity 24GB that currently lives in an Razer X Core eGPU case which is my current Local LLM Testing card that I combine with my laptop's RTX 5000. This has left me really stretched and I'm looking for the absolute cheapest way I could put together a system that can run them. I currently have an old Thermaltake MK1 Case with an OCZ 1600W PSU, so I can use at least that power supply for the GPUs if necessary, but I don't think 3x 3090 will fit in that case. I was looking at a few Dell Precisions and possibly modifying them, but every time I find one where it looks like it might work, I find out I need blower style cards reduced to 2 slots or to add water cooling I can't afford and don't want to do. So I was wondering if anyone as broke as me has figured out something like this that works? I would like to run 3x MSI RTX 3090 Gaming X Trio inside the case and have a Thunderbolt 3 so I can use my Zotac Gaming 3090 Trinity hooked up with the eGPU. This is my broke way to try and get to 96GB of VRAM without using too old trash GPUs. I would like something with decent PCIE lanes to maximize my bandwidth that can run 128GB of DDR4, and honestly, even if I have to cut and weld the case a this point, I'm willing to do what I need to and I want it to work, but I'm not really sure I care how ugly it is, though quiet is better as I don't want my wife to murder me.
2025-12-28T21:55:16
https://www.reddit.com/r/LocalLLaMA/comments/1py3pib/need_help_entry_triple_gpu_system_for_local_llm/
DonkeyBonked
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py3pib
false
null
t3_1py3pib
/r/LocalLLaMA/comments/1py3pib/need_help_entry_triple_gpu_system_for_local_llm/
false
false
self
3
null
Owlex - an MCP server that lets Claude Code consult Codex, Gemini, and OpenCode as a "council"
26
Been using Claude Code for a while and wanted a way to get second opinions from other AI coding agents without leaving my workflow. So I built **Owlex**. What it does: The killer feature is **council\_ask** \- it queries Codex, Gemini, and OpenCode in parallel, then optionally runs a second round where each agent sees the others' answers and revises (or critiques) their response. council\_ask("Should I use Redis or PostgreSQL for this caching layer?") All three agents answer simultaneously (\~8s total), then deliberate. You get diverse perspectives without the copy-paste dance between terminals. Other features: \- Start/resume sessions with each agent individually \- Async task execution with timeouts \- Critique mode - agents actively look for bugs in each other's code suggestions Example output: Round 1: querying Codex, Gemini, Opencode... Codex completed (4.0s) OpenCode completed (5.6s) Gemini completed (7.7s) Round 2: deliberation phase.. Install: uv tool install git+https://github.com/agentic-mcp-tools/owlex.git GitHub: [https://github.com/agentic-mcp-tools/owlex](https://github.com/agentic-mcp-tools/owlex) Would love feedback!
2025-12-28T21:53:48
https://www.reddit.com/r/LocalLLaMA/comments/1py3o6p/owlex_an_mcp_server_that_lets_claude_code_consult/
spokv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py3o6p
false
null
t3_1py3o6p
/r/LocalLLaMA/comments/1py3o6p/owlex_an_mcp_server_that_lets_claude_code_consult/
false
false
self
26
null
LM Studio should support MCPO
0
OpenWebUi MCPO implementation works flawlessly LM studio should do this I don't know how to make my tool MCP http or whatever lol to me stdio just works. Anyway hopefully this inspires someone!
2025-12-28T21:39:33
https://v.redd.it/8wbpd5fxk0ag1
Serious_Molasses313
/r/LocalLLaMA/comments/1py3bhg/lm_studio_should_support_mcpo/
1970-01-01T00:00:00
0
{}
1py3bhg
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8wbpd5fxk0ag1/DASHPlaylist.mpd?a=1769679579%2CM2ExNDQyMGM4NzUxNGQ3OGJlZWVlMzY1MDIyYzExZjFmN2I0ZDQ5ZWNjNGM0NzY1YTQ3Y2U3YTliZTI3Y2E1OA%3D%3D&v=1&f=sd', 'duration': 659, 'fallback_url': 'https://v.redd.it/8wbpd5fxk0ag1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/8wbpd5fxk0ag1/HLSPlaylist.m3u8?a=1769679579%2CNTExMDUwYWM5YTQzZmQwYTgyZjZjY2ZiZjY0NWU3NDU5ZTMyNTJlM2U0NDc5MDBiMjU2MmE1YTY4YjQwODgwOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8wbpd5fxk0ag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 856}}
t3_1py3bhg
/r/LocalLLaMA/comments/1py3bhg/lm_studio_should_support_mcpo/
false
false
https://external-preview…9ee1c67de001181b
0
{'enabled': False, 'images': [{'id': 'dDNjdGF2ZnhrMGFnMaxjPgU0f58qu_Dl4frEQG0c4zDayU9COinZtCPz6UNo', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/dDNjdGF2ZnhrMGFnMaxjPgU0f58qu_Dl4frEQG0c4zDayU9COinZtCPz6UNo.png?width=108&crop=smart&format=pjpg&auto=webp&s=548fb31f41471c95a19c00df2a95a00d4faea57a', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/dDNjdGF2ZnhrMGFnMaxjPgU0f58qu_Dl4frEQG0c4zDayU9COinZtCPz6UNo.png?width=216&crop=smart&format=pjpg&auto=webp&s=dfd354bac3ffc6a2a2d497b93aae3ddf43356745', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/dDNjdGF2ZnhrMGFnMaxjPgU0f58qu_Dl4frEQG0c4zDayU9COinZtCPz6UNo.png?width=320&crop=smart&format=pjpg&auto=webp&s=42cf8a13ed4a8ab004812451c0ece54571968f9e', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/dDNjdGF2ZnhrMGFnMaxjPgU0f58qu_Dl4frEQG0c4zDayU9COinZtCPz6UNo.png?width=640&crop=smart&format=pjpg&auto=webp&s=1c976132490bc1ba9b704a440beac49d9db2c136', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/dDNjdGF2ZnhrMGFnMaxjPgU0f58qu_Dl4frEQG0c4zDayU9COinZtCPz6UNo.png?format=pjpg&auto=webp&s=1e61295da8296b892c135037f5e394de597919a3', 'width': 778}, 'variants': {}}]}
Empirical Evidence Of Interpretation Drift & Taxonomy Field Guide
0
Some problems are invisible until someone names them. Like in Westworld when Dolores sees a photo from the real world and says, "It doesn’t look like anything to me." **Interpretation Drift** in LLMs feels exactly like that – it's often dismissed as "just temp=0 stochasticity" or a "largely solved" issue. My earlier [Empirical Evidence Of Interpretation Drift](https://drive.google.com/file/d/1iA8P71729hQ8swskq8J_qFaySz0LGOhz/view?usp=drive_link) tried to explain this didn't land widely, but a bunch of you did reached out privately and instantly got it: * “I’ve seen this constantly in MLOps pipelines – it's annoying as hell.” * "The real failure mode isn’t bad outputs, it’s this drift hiding behind fluent responses." * “Love the framing: stability emerges from interaction, not just model behavior." * “This explains why AI-assisted decisions feel so unstable.” * "Drift isn’t a model problem – it’s a boundary problem." * “Thanks for naming it clearly. The shift from 'are outputs acceptable?' to 'is interpretation stable across runs/time?' is huge." That made it click: this isn't about persuading skeptics. It's a **pattern recognition** problem for people already running into it daily. So I started an [Interpretation Drift Taxonomy ](https://drive.google.com/file/d/1oWtwNxuzDQm5xM1gvR-1SNu35PV-aYt8/view?usp=drive_link)– not to benchmark models or debate accuracy, but to build shared language around a subtle failure mode through real examples. It's a living document with a growing case library. Have you hit stuff like: * Same prompt → wildly different answers across runs * Different models interpreting the same input incompatibly * Model shifting its framing/certainty mid-conversation * Context causing it to reinterpret roles, facts, or authority **Share your cases!** * Drop a quick description in the comments * email [elin@omnisensai.com](mailto:elin@omnisensai.com) : prompt + what changed + what surprised you Real-world examples are how this grows into something useful for all of us working with these systems. Thanks – looking forward to your drift cases.
2025-12-28T21:34:47
https://www.reddit.com/r/LocalLLaMA/comments/1py3796/empirical_evidence_of_interpretation_drift/
Beneficial-Pear-1485
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py3796
false
null
t3_1py3796
/r/LocalLLaMA/comments/1py3796/empirical_evidence_of_interpretation_drift/
false
false
self
0
null
GLM 4.5 Air and agentic CLI tools/TUIs?
13
I revisited GLM 4.5 Air and at least on llama.cpp I am able to get stable tool calls with unsloth's UD\_Q4\_K\_XL (unsloth updated the weights on HF a couple of days ago); that's probably thanks to: [https://github.com/ggml-org/llama.cpp/pull/16932](https://github.com/ggml-org/llama.cpp/pull/16932) and maybe unsloth (there is no changelog/reason why they recently updated the weights). Unfortunately with codex-cli sometimes the model becomes stuck at constantly doing the same tool call; maybe it was just bad luck in combination with the set of MCPs, quantization related instability, bad sampling parameters, or there could be some functionality within codex-cli missing to properly engage with GLM 4.5 Air. Is anyone seriously using GLM 4.5 Air locally for agentic coding (e.g., having it reliably do 10 to 50 tool calls in a single agent round) and has some hints regarding well-working coding TUIs? (ofc I am not expecting that GLM 4.5 Air can solve all tasks, but it imo shouldn't get stuck in tool-calling loops and/or I might be just spoiled by other models not doing that.) p.s., relevant llama.cpp parameters (derived from unsloth's GLM 4.6V flash docs (no GLM 4.5 Air docs) and temperature recommendation from zai labs): --ctx-size 128000 --temp 0.6 --top-p 0.6 --top-k 2 --min-p 0.0 --jinja
2025-12-28T20:56:33
https://www.reddit.com/r/LocalLLaMA/comments/1py294m/glm_45_air_and_agentic_cli_toolstuis/
bfroemel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py294m
false
null
t3_1py294m
/r/LocalLLaMA/comments/1py294m/glm_45_air_and_agentic_cli_toolstuis/
false
false
self
13
{'enabled': False, 'images': [{'id': 'KDDs4CzYnpi1yFEYsbrIWBUBbdrYX-VHhKHI16aTWKE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KDDs4CzYnpi1yFEYsbrIWBUBbdrYX-VHhKHI16aTWKE.png?width=108&crop=smart&auto=webp&s=d7a3523c963763f2dafb002755d0f826c5ebb26e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KDDs4CzYnpi1yFEYsbrIWBUBbdrYX-VHhKHI16aTWKE.png?width=216&crop=smart&auto=webp&s=e82d3a0852a4e2f891b20e5c419c4911a92e0b79', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KDDs4CzYnpi1yFEYsbrIWBUBbdrYX-VHhKHI16aTWKE.png?width=320&crop=smart&auto=webp&s=bd35aa7643b47ba489b8f57b76c1dbd15dd244d7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KDDs4CzYnpi1yFEYsbrIWBUBbdrYX-VHhKHI16aTWKE.png?width=640&crop=smart&auto=webp&s=34bea2ad243fed101fa96198173318fad0152828', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KDDs4CzYnpi1yFEYsbrIWBUBbdrYX-VHhKHI16aTWKE.png?width=960&crop=smart&auto=webp&s=301e417a88803725ae6a4c25a38d6e047ed12c05', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KDDs4CzYnpi1yFEYsbrIWBUBbdrYX-VHhKHI16aTWKE.png?width=1080&crop=smart&auto=webp&s=10acaff9a4577fa92152b706d8b9d25717c8b9e6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KDDs4CzYnpi1yFEYsbrIWBUBbdrYX-VHhKHI16aTWKE.png?auto=webp&s=c742252686bf109967b88da2cb856c810b38cc06', 'width': 1200}, 'variants': {}}]}
Triple GPU LLM benchmarks with --n-cpu-moe help
2
Here we have three Nvidia [GTX-1070 ](https://www.techpowerup.com/gpu-specs/geforce-gtx-1070.c2840)8GB cards running a few LLM that sit right on the edge of the available 24GB VRAM. Down below you can see how to get LLM to work if it exceeds VRAM limit. [AM4 running triple GTX 1070 with Riser assist.](https://preview.redd.it/krumdzhea0ag1.jpg?width=3055&format=pjpg&auto=webp&s=36d84a35c99829dfe4c199f99472490bcd3791c8) System: AMD Ryzen 5 3600 CPU, 32GB DDR4 RAM, Kubuntu 25.10 Kernel 6.17 OS, Triple GTX 1070 (8GB) 24GB VRAM GPUs. Power limits set to 333 watts for GPUs. Llama.cpp Ubuntu Vulkan build: 06705fdcb (7552) # Gemma-3-27b-it.Q5_K_M.gguf |Model|Size|Params|Test|(t/s)| |:-|:-|:-|:-|:-| |Gemma3 27B Q5\_K - Medium|17.94 GiB|27.01 B|pp512|55.63 ± 0.63| |Gemma3 27B Q5\_K - Medium|17.94 GiB|27.01 B|tg128|5.45 ± 0.15| # Qwen3-Coder-30B-A3B-Instruct-UD-Q5_K_XL.gguf |Model|Size|Params|Test|(t/s)| |:-|:-|:-|:-|:-| |Qwen3Moe 30B.A3B Q5\_K - Medium|20.24 GiB|30.53 B|pp512|84.43 ± 0.54| |Qwen3Moe 30B.A3B Q5\_K - Medium|20.24 GiB|30.53 B|tg128|48.16 ± 1.89| # Nemotron-3-Nano-30B-A3B-UD-Q4_K_XL.gguf |Model|Size|Params|Test|(t/s)| |:-|:-|:-|:-|:-| |Nemotron H MoE 31B.A3.5B Q4\_K - Medium|21.26 GiB|31.58 B|pp512|78.35 ± 1.18| |Nemotron H MoE 31B.A3.5B Q4\_K - Medium|21.26 GiB|31.58 B|tg128|39.56 ± 0.34| # Olmo-3-32B-Think-UD-Q5_K_XL.gguf |Model|Size|Params|Test|(t/s)| |:-|:-|:-|:-|:-| |Olmo2 32B Q5\_K - Medium|21.23 GiB|32.23 B|pp512|45.74 ± 0.45| |Olmo2 32B Q5\_K - Medium|21.23 GiB|32.23 B|tg128|5.04 ± 0.01| # DeepSeek-R1-Distill-Qwen-32B-Q5_K_M.gguf |Model|Size|Params|Test|(t/s)| |:-|:-|:-|:-|:-| |Qwen2 32B Q5\_K - Medium|21.66 GiB|32.76 B|pp512|44.83 ± 0.37| |Qwen2 32B Q5\_K - Medium|21.66 GiB|32.76 B|tg128|5.04 ± 0.00| LLM Granite 4.0 must be right outside the 24GB VRAM limit so lets see if we can get it working. >In `llama.cpp`, the command-line argument `--n-cpu-moe N` (or `-ncmoe N`) is a performance tuning option used to offload the Mixture of Experts (MoE) weights of the first N layers from the GPU to the CPU.  \***Granite-4.0-h-small-UD-Q5\_K\_XL\***: ErrorOutOfDeviceMemory First we find what is best `-ngl` value. Granite-4.0-h-small-UD-Q5\_K\_XL.gguf `-ngl 39` |model|size|params|backend|ngl|test|t/s| |:-|:-|:-|:-|:-|:-|:-| |granitehybrid 32B Q5\_K - Medium|21.53 GiB|32.21 B|Vulkan|39|pp512|38.91 ± 0.24| |granitehybrid 32B Q5\_K - Medium|21.53 GiB|32.21 B|Vulkan|39|tg128|9.11 ± 0.99| Then we try different `-ncmoe` values and settled with Granite-4.0-h-small-UD-Q5\_K\_XL.gguf `-ngl 39 --n-cpu-moe 1` |model|size|params|backend|ngl|n\_cpu\_moe|test|t/s| |:-|:-|:-|:-|:-|:-|:-|:-| |granitehybrid 32B Q5\_K - Medium|21.53 GiB|32.21 B|Vulkan|39|1|pp512|41.24 ± 0.52| |granitehybrid 32B Q5\_K - Medium|21.53 GiB|32.21 B|Vulkan|39|1|tg128|14.52 ± 0.27|
2025-12-28T20:43:14
https://www.reddit.com/r/LocalLLaMA/comments/1py1xaa/triple_gpu_llm_benchmarks_with_ncpumoe_help/
tabletuser_blogspot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py1xaa
false
null
t3_1py1xaa
/r/LocalLLaMA/comments/1py1xaa/triple_gpu_llm_benchmarks_with_ncpumoe_help/
false
false
https://b.thumbs.redditm…ksn3xZkyM3Dk.jpg
2
null
[Tool Release] Skill Seekers v2.5.0 - Convert any documentation into structured markdown skills for local/remote LLMs
7
Hey 👋 Released **Skill Seekers v2.5.0** with universal LLM support - convert any documentation into structured markdown skills. ## What It Does Automatically scrapes documentation websites and converts them into organized, categorized reference files with extracted code examples. Works with any LLM (local or remote). ## New in v2.5.0: Universal Format Support - ✅ **Generic Markdown export** - works with ANY LLM - ✅ **Claude AI** format (if you use Claude) - ✅ **Google Gemini** format (with grounding) - ✅ **OpenAI ChatGPT** format (with vector search) ## Why This Matters for Local LLMs Instead of context-dumping entire docs, you get: - **Organized structure**: Categorized by topic (getting-started, API, examples, etc.) - **Extracted patterns**: Code examples pulled from docs with syntax highlighting - **Portable format**: Pure markdown ZIP - use with Ollama, llama.cpp, or any local model - **Reusable**: Build once, use with any LLM ## Quick Example ```bash # Install pip install skill-seekers # Scrape any documentation skill-seekers scrape --config configs/react.json # Export as universal markdown skill-seekers package output/react/ --target markdown # Result: react-markdown.zip with organized .md files ``` The output is just structured markdown files - perfect for feeding to local models or adding to your RAG pipeline. Features - 📄 Documentation scraping with smart categorization - 🐙 GitHub repository analysis - 📕 PDF extraction (for PDF-based docs) - 🔀 Multi-source unified (docs + code + PDFs in one skill) - 🎯 24 preset configs (React, Vue, Django, Godot, etc.) Links - GitHub: https://github.com/yusufkaraaslan/Skill_Seekers - PyPI: https://pypi.org/project/skill-seekers/ - Release: https://github.com/yusufkaraaslan/Skill_Seekers/releases/tag/v2.5.0 MIT licensed, contributions welcome! Would love to hear what documentation you'd like to see supported.
2025-12-28T20:36:54
https://www.reddit.com/r/LocalLLaMA/comments/1py1rpg/tool_release_skill_seekers_v250_convert_any/
Critical-Pea-8782
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1py1rpg
false
null
t3_1py1rpg
/r/LocalLLaMA/comments/1py1rpg/tool_release_skill_seekers_v250_convert_any/
false
false
self
7
{'enabled': False, 'images': [{'id': 'HcBWP1eqvQOeBxZ_MyH-yX8MJV2p8eJgqNyOVwJ6Ppk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HcBWP1eqvQOeBxZ_MyH-yX8MJV2p8eJgqNyOVwJ6Ppk.png?width=108&crop=smart&auto=webp&s=c30d069cfe72bdd33596f7fd94dcb59b4d1ec686', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HcBWP1eqvQOeBxZ_MyH-yX8MJV2p8eJgqNyOVwJ6Ppk.png?width=216&crop=smart&auto=webp&s=c70a76430970977efbc8eca08c9babfa351725be', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HcBWP1eqvQOeBxZ_MyH-yX8MJV2p8eJgqNyOVwJ6Ppk.png?width=320&crop=smart&auto=webp&s=347cfbda867761ccd2273b343efcfacb7607117e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HcBWP1eqvQOeBxZ_MyH-yX8MJV2p8eJgqNyOVwJ6Ppk.png?width=640&crop=smart&auto=webp&s=0a37f73b6c136bc86760c6df0753ef38e79e6f3c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HcBWP1eqvQOeBxZ_MyH-yX8MJV2p8eJgqNyOVwJ6Ppk.png?width=960&crop=smart&auto=webp&s=0aa26e12e221344fcae135d99747d4066201692c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HcBWP1eqvQOeBxZ_MyH-yX8MJV2p8eJgqNyOVwJ6Ppk.png?width=1080&crop=smart&auto=webp&s=ebcec12248f94cc7a82e061733b763d2029e6f9e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HcBWP1eqvQOeBxZ_MyH-yX8MJV2p8eJgqNyOVwJ6Ppk.png?auto=webp&s=f0a1adc95a4dfa4d406f119944edf759748ac66d', 'width': 1200}, 'variants': {}}]}