title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Better than Google AlphaEvolve
1
[removed]
2026-01-06T05:38:02
https://www.reddit.com/r/LocalLLaMA/comments/1q5a7np/better_than_google_alphaevolve/
Sea_Individual2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q5a7np
false
null
t3_1q5a7np
/r/LocalLLaMA/comments/1q5a7np/better_than_google_alphaevolve/
false
false
self
1
null
Better than Google AlphaEvolve
1
[removed]
2026-01-06T05:35:28
https://www.reddit.com/r/LocalLLaMA/comments/1q5a5tn/better_than_google_alphaevolve/
Sea_Individual2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q5a5tn
false
null
t3_1q5a5tn
/r/LocalLLaMA/comments/1q5a5tn/better_than_google_alphaevolve/
false
false
self
1
null
Better than Google AlphaEvolve
1
[removed]
2026-01-06T05:34:03
https://www.reddit.com/r/LocalLLaMA/comments/1q5a4vb/better_than_google_alphaevolve/
Sea_Individual2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q5a4vb
false
null
t3_1q5a4vb
/r/LocalLLaMA/comments/1q5a4vb/better_than_google_alphaevolve/
false
false
self
1
null
LoongFlow: Better than Google AlphaEvolve
1
[removed]
2026-01-06T05:32:37
https://www.reddit.com/r/LocalLLaMA/comments/1q5a3ub/loongflow_better_than_google_alphaevolve/
Sea_Individual2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q5a3ub
false
null
t3_1q5a3ub
/r/LocalLLaMA/comments/1q5a3ub/loongflow_better_than_google_alphaevolve/
false
false
self
1
null
Step-by-step debugging of mini sglang
2
I just wrote a short, practical breakdown /debugging of **mini sglang**, a distilled version of sglang that’s easy to read and perfect for learning how real LLM inference systems work. The post explains, step by step: * Architecture (Frontend, Tokenizer, Scheduler, Detokenizer) * Request flow: HTTP → tokenize → prefill → decode → output * KV cache & radix prefix matching in second request [https://blog.dotieuthien.com/posts/mini-sglang-part-1](https://blog.dotieuthien.com/posts/mini-sglang-part-1) Would love it if you read it and give feedback 🙏
2026-01-06T05:30:41
https://www.reddit.com/r/LocalLLaMA/comments/1q5a2hh/stepbystep_debugging_of_mini_sglang/
dotieuthien9997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q5a2hh
false
null
t3_1q5a2hh
/r/LocalLLaMA/comments/1q5a2hh/stepbystep_debugging_of_mini_sglang/
false
false
self
2
null
Hey everyone! I'm excited to share LoongFlow, a "Cognitive" Evolutionary Agent framework that beats standard evolutionary methods by 60% efficiency.
1
[removed]
2026-01-06T05:29:28
https://www.reddit.com/r/LocalLLaMA/comments/1q5a1m0/hey_everyone_im_excited_to_share_loongflow_a/
Sea_Individual2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q5a1m0
false
null
t3_1q5a1m0
/r/LocalLLaMA/comments/1q5a1m0/hey_everyone_im_excited_to_share_loongflow_a/
false
false
self
1
null
Liquid Ai released LFM2.5, family of tiny on-device foundation models.
294
Hugging face: [https://huggingface.co/collections/LiquidAI/lfm25](https://huggingface.co/collections/LiquidAI/lfm25) It’s built to power reliable on-device agentic applications: higher quality, lower latency, and broader modality support in the \~1B parameter class. > LFM2.5 builds on LFM2 device-optimized hybrid architecture > Pretraining scaled from 10T → 28T tokens > Expanded reinforcement learning post-training > Higher ceilings for instruction following 5 open-weight model instances from a single architecture: > General-purpose instruct model > Japanese-optimized chat model > Vision-language model > Native audio-language model (speech in/out) > Base checkpoints for deep customization
2026-01-06T05:27:54
https://i.redd.it/flk7mfltznbg1.jpeg
Difficult-Cap-7527
i.redd.it
1970-01-01T00:00:00
0
{}
1q5a0if
false
null
t3_1q5a0if
/r/LocalLLaMA/comments/1q5a0if/liquid_ai_released_lfm25_family_of_tiny_ondevice/
false
false
default
294
{'enabled': True, 'images': [{'id': 'flk7mfltznbg1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/flk7mfltznbg1.jpeg?width=108&crop=smart&auto=webp&s=43c945dce7e6f126d13267dc46b850f160b6f9a1', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/flk7mfltznbg1.jpeg?width=216&crop=smart&auto=webp&s=cd331d15923c486b0298f6a81608b6f1aaa6da7a', 'width': 216}, {'height': 222, 'url': 'https://preview.redd.it/flk7mfltznbg1.jpeg?width=320&crop=smart&auto=webp&s=12f6a0f25f346544b79901894995f9c41b4091d9', 'width': 320}, {'height': 444, 'url': 'https://preview.redd.it/flk7mfltznbg1.jpeg?width=640&crop=smart&auto=webp&s=2f4889f71adc7a8ebe91cfeb42042aae8fd240db', 'width': 640}, {'height': 666, 'url': 'https://preview.redd.it/flk7mfltznbg1.jpeg?width=960&crop=smart&auto=webp&s=f716399965a8080f6b401941d8937dbf1e423c87', 'width': 960}, {'height': 749, 'url': 'https://preview.redd.it/flk7mfltznbg1.jpeg?width=1080&crop=smart&auto=webp&s=ac2d9ee9950edc4812991bb47688469103cfe068', 'width': 1080}], 'source': {'height': 833, 'url': 'https://preview.redd.it/flk7mfltznbg1.jpeg?auto=webp&s=10d612b5d0972a43f841e9ad61b3d80d49d0886e', 'width': 1200}, 'variants': {}}]}
【R】LoongFlow: Open Source Implementation of Evolutionary Agent Framework
1
[removed]
2026-01-06T05:23:26
https://www.reddit.com/r/LocalLLaMA/comments/1q59x8h/rloongflow_open_source_implementation_of/
Sea_Individual2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q59x8h
false
null
t3_1q59x8h
/r/LocalLLaMA/comments/1q59x8h/rloongflow_open_source_implementation_of/
false
false
self
1
{'enabled': False, 'images': [{'id': 'tM-Sl2Wtqnp4LoRTMj5-xAXn4rxQ4L9bM-1lt8pwq4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tM-Sl2Wtqnp4LoRTMj5-xAXn4rxQ4L9bM-1lt8pwq4c.png?width=108&crop=smart&auto=webp&s=e4690f7a07d5f983e6899a6e0e8b29be5006231a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tM-Sl2Wtqnp4LoRTMj5-xAXn4rxQ4L9bM-1lt8pwq4c.png?width=216&crop=smart&auto=webp&s=f31c9cf7ebd8fcd446361a5fdb282ae5361454e5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tM-Sl2Wtqnp4LoRTMj5-xAXn4rxQ4L9bM-1lt8pwq4c.png?width=320&crop=smart&auto=webp&s=93870be7a9ac9e2c87fa9d3c98b4e0cac7998141', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tM-Sl2Wtqnp4LoRTMj5-xAXn4rxQ4L9bM-1lt8pwq4c.png?width=640&crop=smart&auto=webp&s=135b89994f064ee5ad4b991b8ae99032992b4809', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tM-Sl2Wtqnp4LoRTMj5-xAXn4rxQ4L9bM-1lt8pwq4c.png?width=960&crop=smart&auto=webp&s=b608ae27e71ba255960592c2436e47e988dca30b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tM-Sl2Wtqnp4LoRTMj5-xAXn4rxQ4L9bM-1lt8pwq4c.png?width=1080&crop=smart&auto=webp&s=bde5ec71bf7a1e097118b473c30e092cf4031c15', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tM-Sl2Wtqnp4LoRTMj5-xAXn4rxQ4L9bM-1lt8pwq4c.png?auto=webp&s=0ef393837067db2932ff12282cfa2c8596e7fed2', 'width': 1200}, 'variants': {}}]}
Best Practices for Cleaning Emails & Documents Before Loading into a Vector Database (RAG / LLM)
0
I’m building a production-grade RAG pipeline and want to share (and validate) a practical approach for cleaning emails and documents *before* embedding them into a vector database. The goal is to **maximize retrieval quality**, **avoid hallucinations**, and **reduce vector noise**—especially when dealing with emails, newsletters, system notifications, and mixed-format documents.
2026-01-06T05:08:06
https://www.reddit.com/r/LocalLLaMA/comments/1q59m5i/best_practices_for_cleaning_emails_documents/
superhero_io
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q59m5i
false
null
t3_1q59m5i
/r/LocalLLaMA/comments/1q59m5i/best_practices_for_cleaning_emails_documents/
false
false
self
0
null
What’s the best abliterated model for M2 pro mbp 14
0
Title. I use mbp 14 m2 pro(10CPU+16GPU), 16GB RAM, and I need an abliterated model, any recommendations?
2026-01-06T05:06:36
https://www.reddit.com/r/LocalLLaMA/comments/1q59l3h/whats_the_best_abliterated_model_for_m2_pro_mbp_14/
carrot_2333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q59l3h
false
null
t3_1q59l3h
/r/LocalLLaMA/comments/1q59l3h/whats_the_best_abliterated_model_for_m2_pro_mbp_14/
false
false
self
0
null
Something that translates like google lens uncensor locally?
5
Hi, i wanted to ask, is there a way to use something like google lens, that translates an image without censorship? I like reading in japanese and i often use chrome lens to get the gist of the meaning of what is happening so i can relate kanjis and meanings. The thing is a lot of the time if there is something a little too for adult, google refuses to read. I've learnt how to install llamaccp and managed to get a model like qwen 3 vl nsfw 8b Gguf to work. (mainly because i was looking something to get prompts for ai training for lora) but it still gives me trouble sometimes, it still refuses to speak about some topics. but it can give me prompts that the regular qwen wont. but it refuses to tell me the japanese text. it says he cant, and wont read the japanese, because it can't but often when i load a raw panel, it tells me what are they saying or just transcribes te japanese... TLDR: Is there something that works well for adult doujinshi like google lens without the morality?
2026-01-06T04:37:13
https://www.reddit.com/r/LocalLLaMA/comments/1q58zid/something_that_translates_like_google_lens/
Oxidonitroso88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q58zid
false
null
t3_1q58zid
/r/LocalLLaMA/comments/1q58zid/something_that_translates_like_google_lens/
false
false
self
5
null
Best local AI for coding in Cursor with a 5080?
1
Qwen coder? Codestral? Gemini? DeepSeek? Nemotron? 1. Must integrate with Cursor agents 2. Be better than Grok code free version in Cursor 3. Be able to work on multiple PHP files in a 100-200 files codebase 4. Runs on a 5080 with 16GB + 128GB DDR5 + 9950X
2026-01-06T04:14:55
https://www.reddit.com/r/LocalLLaMA/comments/1q58isu/best_local_ai_for_coding_in_cursor_with_a_5080/
Eastern_Fish_4062
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q58isu
false
null
t3_1q58isu
/r/LocalLLaMA/comments/1q58isu/best_local_ai_for_coding_in_cursor_with_a_5080/
false
false
self
1
null
Backend agnostic llama.cpp support for Kimi-Linear-48B-A3B
24
Previous experimental support only works with CPU and CUDA. So I implemented a ggml only version such that it can work on all platforms. You can download the gguf from [https://huggingface.co/ymcki/Kimi-Linear-48B-A3B-Instruct-GGUF](https://huggingface.co/ymcki/Kimi-Linear-48B-A3B-Instruct-GGUF) and download the code from git clone [https://github.com/ymcki/llama.cpp](https://github.com/ymcki/llama.cpp) \--branch Kimi-Linear Please feel free to report any bugs you find. Thanks github's cacaview for his initial version, Aaryan-Kapoor's fixes and pwilkin's qwen3-next implementation to make this possible.
2026-01-06T03:58:53
https://www.reddit.com/r/LocalLLaMA/comments/1q586jv/backend_agnostic_llamacpp_support_for/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q586jv
false
null
t3_1q586jv
/r/LocalLLaMA/comments/1q586jv/backend_agnostic_llamacpp_support_for/
false
false
self
24
{'enabled': False, 'images': [{'id': 'qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=108&crop=smart&auto=webp&s=7fc2c77485d858fb292e9f3229f6a5ef5e7f4c02', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=216&crop=smart&auto=webp&s=e6c7ba227b4f252ea15d61a20b46626d1aed1c8b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=320&crop=smart&auto=webp&s=25ef66d39cce1f0b9f62baf9ccee4573984d9e0e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=640&crop=smart&auto=webp&s=255ee585da557302a87f3d607d19306469f57e6b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=960&crop=smart&auto=webp&s=ee677c430e3a2fc981b12c1f2c97fdeb64da6916', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=1080&crop=smart&auto=webp&s=7ee412de241d73b5fd24f6e27aaada9987996722', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?auto=webp&s=1a9d4760d2060954724527eaf2fad2488a3cad3d', 'width': 1200}, 'variants': {}}]}
GitHub MCP with local or hybrid models useful, or too context-heavy?
0
I’ve been testing the GitHub MCP to let models inspect repos (files, commits, PRs, issues) without cloning or copy-pasting code. In practice, I’m still on the fence — especially with local or hybrid setups: - Context usage ramps up quickly on medium/large repos - It feels strongest for very targeted questions, weaker for broad exploration - Behavior isn’t always consistent between cloud models and local ones To keep track of this, I started writing down MCP setups and usage notes for myself (and opened it up publicly so others can add theirs too): https://ai-stack.dev For people running LocalLLaMA or mixed workflows: - Has GitHub MCP been reliable for you? - Do you still prefer CLI + selective file loading? - Any tricks to keep context usage under control?
2026-01-06T03:56:16
https://www.reddit.com/r/LocalLLaMA/comments/1q584kx/github_mcp_with_local_or_hybrid_models_useful_or/
Silver-Photo2198
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q584kx
false
null
t3_1q584kx
/r/LocalLLaMA/comments/1q584kx/github_mcp_with_local_or_hybrid_models_useful_or/
false
false
self
0
null
We built an open source memory framework that doesn't rely on embeddings. Just open-sourced it
25
Hey folks, wanted to share something we’ve been hacking on for a while. It’s called **memU** — an agentic memory framework for LLMs / AI agents. Most memory systems I’ve seen rely heavily on embedding search: you store everything as vectors, then do similarity lookup to pull “relevant” context. That works fine for simple stuff, but it starts breaking down when you care about things like **time**, **sequences**, or more complex relationships. So we tried a different approach. Instead of _only_ doing embedding search, memU lets the model **read actual memory files directly**. We call this _non-embedding search_. The idea is that LLMs are pretty good at reading structured text already — so why not lean into that instead of forcing everything through vector similarity? High level, the system has three layers: * **Resource layer** – raw data (text, images, audio, video) * **Memory item layer** – extracted fine-grained facts/events * **Memory category layer** – themed memory files the model can read directly One thing that’s been surprisingly useful: the memory structure can **self-evolve**. Stuff that gets accessed a lot gets promoted, stuff that doesn’t slowly fades out. No manual pruning, just usage-based reorganization. It’s pretty lightweight, all prompts are configurable, and it’s easy to adapt to different agent setups. Right now it supports text, images, audio, and video. Open-source repo is here: [https://github.com/NevaMind-AI/memU](https://github.com/NevaMind-AI/memU) We also have a hosted version at [https://app.memu.so](https://app.memu.so) if you don’t want to self-host, but the OSS version is fully featured. Happy to answer questions about how it works, tradeoffs vs embeddings, or anything else. Also very open to feedback — we know it’s not perfect yet 🙂
2026-01-06T03:42:23
https://www.reddit.com/r/LocalLLaMA/comments/1q57txn/we_built_an_open_source_memory_framework_that/
Consistent_Design72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q57txn
false
null
t3_1q57txn
/r/LocalLLaMA/comments/1q57txn/we_built_an_open_source_memory_framework_that/
false
false
self
25
{'enabled': False, 'images': [{'id': 'WK-ak1z3y0rCzkZkLLnzU_5NaiMUFaJ5K7oOmCLBBdc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WK-ak1z3y0rCzkZkLLnzU_5NaiMUFaJ5K7oOmCLBBdc.png?width=108&crop=smart&auto=webp&s=9465ecfdbf2e504aaa3386136b7b9f979ff63a31', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WK-ak1z3y0rCzkZkLLnzU_5NaiMUFaJ5K7oOmCLBBdc.png?width=216&crop=smart&auto=webp&s=ff4fe294aa1d5e16ce190a183b6086c5be112348', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WK-ak1z3y0rCzkZkLLnzU_5NaiMUFaJ5K7oOmCLBBdc.png?width=320&crop=smart&auto=webp&s=3e60e9fd885e5e30f3ec942c8aee16b0c60ba8b5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WK-ak1z3y0rCzkZkLLnzU_5NaiMUFaJ5K7oOmCLBBdc.png?width=640&crop=smart&auto=webp&s=540fdbbbf3f9e5e7808be0b33101ce0e02684252', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WK-ak1z3y0rCzkZkLLnzU_5NaiMUFaJ5K7oOmCLBBdc.png?width=960&crop=smart&auto=webp&s=7637545b8c8975ab1f71bf9c5354c19a83cda2eb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WK-ak1z3y0rCzkZkLLnzU_5NaiMUFaJ5K7oOmCLBBdc.png?width=1080&crop=smart&auto=webp&s=40fb47d35b93bb611d1261d96de9823954d3de53', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WK-ak1z3y0rCzkZkLLnzU_5NaiMUFaJ5K7oOmCLBBdc.png?auto=webp&s=7a00ed7220aa359aa8851d75f7ff5e2b74bda56a', 'width': 1200}, 'variants': {}}]}
[Research] We fixed "Blind Mutation" in Agents with a Cognitive Evolutionary Framework (LoongFlow). Paper + Code.
1
Hi r/LocalLLaMA, Most evolutionary agents (like OpenEvolve) treat LLMs as black-box mutation operators, leading to a "Random Walk" that burns tokens and gets stuck easily. We built **LoongFlow**, a framework that maps evolutionary search to a structured **Plan-Execute-Summarize (PES)** paradigm. **Core Architecture (The "Diff"):** * 🧠 **Planner (Lineage-Based):** Instead of random guesses, it retrieves the "Original Intent" from ancestors to ensure logical consistency. * 📝 **Summarizer (Reflection):** It performs **Abductive Reasoning** after execution to analyze *why* a mutation failed, preventing the same error from repeating. * 💾 **Hybrid Memory (MAP-Elites):** We use Multi-Island models + MAP-Elites to preserve behavioral diversity ("stepping stones"), preventing premature convergence to local optima. **Results:** On benchmarks (Circle Packing), LoongFlow was **60% more efficient** than standard evolutionary baselines. **Resources:** * **Paper:**[arXiv:2512.24077](https://arxiv.org/abs/2512.24077) * **Code:**[GitHub - LoongFlow](https://github.com/baidu-baige/LoongFlow) If you're interested in evolutionary agents on local hardware, a Star ⭐️ on the repo helps us gauge interest!
2026-01-06T03:13:47
https://www.reddit.com/r/LocalLLaMA/comments/1q577fo/research_we_fixed_blind_mutation_in_agents_with_a/
FreshmanDD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q577fo
false
null
t3_1q577fo
/r/LocalLLaMA/comments/1q577fo/research_we_fixed_blind_mutation_in_agents_with_a/
false
false
self
1
null
Local Shopping Agents
0
We need to preserve lm Studio in case they change their business model lol. But just in case they do just build tools because you can always take that with you. LM Studio hit me like crack in the 80s lol
2026-01-06T03:10:59
https://v.redd.it/hw8faodwanbg1
Serious_Molasses313
/r/LocalLLaMA/comments/1q5756q/local_shopping_agents/
1970-01-01T00:00:00
0
{}
1q5756q
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hw8faodwanbg1/DASHPlaylist.mpd?a=1770390670%2CYWQ0NWRmYjhjOTcwYTFkMDQyMGYwNjNjZTk5YjI1M2FiYTA2ZTY5NmYxNDQ3YTE3ZjljNDJhNzk0N2U1MzRjNQ%3D%3D&v=1&f=sd', 'duration': 390, 'fallback_url': 'https://v.redd.it/hw8faodwanbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/hw8faodwanbg1/HLSPlaylist.m3u8?a=1770390670%2CYmQ1YzgyNGFjNWM2M2UwM2Y0MzZlZWIyZDNmZGRhOGZlZTRlNTNhZmJmZWFmOGUyNjc5YzZmMTk0YmMwOGUzMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hw8faodwanbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 856}}
t3_1q5756q
/r/LocalLLaMA/comments/1q5756q/local_shopping_agents/
false
false
https://external-preview…f97581b85523e2d9
0
{'enabled': False, 'images': [{'id': 'aGNtOGxlZXdhbmJnMSVOaz475Eu5LPKYhkvMjDX7uaU7k0ne1GBbxI7Hpk0u', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aGNtOGxlZXdhbmJnMSVOaz475Eu5LPKYhkvMjDX7uaU7k0ne1GBbxI7Hpk0u.png?width=108&crop=smart&format=pjpg&auto=webp&s=ecefd4cfd8b637ca7260d16f5b364b37584f301f', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/aGNtOGxlZXdhbmJnMSVOaz475Eu5LPKYhkvMjDX7uaU7k0ne1GBbxI7Hpk0u.png?width=216&crop=smart&format=pjpg&auto=webp&s=4fb1c0b6f93806c8f38071fd82c83103791032d9', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/aGNtOGxlZXdhbmJnMSVOaz475Eu5LPKYhkvMjDX7uaU7k0ne1GBbxI7Hpk0u.png?width=320&crop=smart&format=pjpg&auto=webp&s=91237a5121151480c5913addc6df63ba4a2c6193', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/aGNtOGxlZXdhbmJnMSVOaz475Eu5LPKYhkvMjDX7uaU7k0ne1GBbxI7Hpk0u.png?width=640&crop=smart&format=pjpg&auto=webp&s=fc726ca8f86d888722a2e4714670d52ec4d5c33f', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/aGNtOGxlZXdhbmJnMSVOaz475Eu5LPKYhkvMjDX7uaU7k0ne1GBbxI7Hpk0u.png?format=pjpg&auto=webp&s=fb9cc8f9c179d825b1e87b74326ac9dadc826a39', 'width': 778}, 'variants': {}}]}
[R] We built a framework to make Agents "self-evolve" using LoongFlow. Paper + Code released
0
Hi Reddit, We are the team behind **LoongFlow**. While the concept (letting agents evolve their own code/prompts) is fascinating, most existing implementations (like OpenEvolve) treat the LLM as a "black-box mutation operator." Basically, they just randomly mutate code and hope for the best. This leads to a **"Random Walk"**—burning huge amounts of tokens with low efficiency, often getting stuck in local optima because the agent forgets *why* previous attempts failed. To fix this "Cognitive Ceiling," we built **LoongFlow**, a framework that maps evolutionary search to a structured **Cognitive Process**, rather than just stochastic guessing. **We just released the paper and code. Here is the architecture breakdown for those interested in Agent System Design:** # 1. The Core: "Plan-Execute-Summarize" (PES) Paradigm Instead of a simple `mutate -> test` loop, we enforce a cognitive lifecycle for every evolutionary step: * **🧠 Planner (Lineage-Based Search):** It doesn't just generate a plan; it retrieves the **"Original Intent"** and **"Retrospective Feedback"** from the agent's ancestors (Lineage Context). This prevents the agent from repeating the same failed logic across generations. * **🛠 Executor (Polymorphic):** Decouples the "What" (Plan) from the "How" (Code). It includes a local verification loop ("Fast-Fail") to catch syntax errors before wasting expensive global evaluation resources. * **📝 Summarizer (Abductive Reflection):** This is critical. After execution, it performs **Abductive Reasoning** to analyze *why* a mutation failed or succeeded, storing this "Insight" back into memory. # 2. Solving "Mode Collapse": Hybrid Evolutionary Memory A common issue with evolving agents is that the population converges too quickly to a single (often mediocre) solution. We implemented a **Multi-Island Model combined with MAP-Elites**: * **Islands:** We run parallel populations that evolve independently to maintain distinct "species" of logic. * **MAP-Elites:** Instead of just keeping the "Top 10" scores, we map solutions to a feature grid (e.g., Code Complexity vs. Novelty). We keep the best solution *for each niche*. This preserves "stepping stone" solutions—code that might score lower now but has a unique structure that could lead to a breakthrough later. # 3. Results By adding this cognitive layer: * **Efficiency:** We saw a **60% reduction** in computational cost compared to standard evolutionary baselines (OpenEvolve) on the Circle Packing benchmark. * **Breakthroughs:** On the AlphaEvolve suite, LoongFlow broke theoretical performance barriers (Score > 1.0) where baseline methods stagnated. And also in MLE bench, we got 16 Gold、11 Silver、2 Bronze https://preview.redd.it/es9b6hytanbg1.png?width=898&format=png&auto=webp&s=2fbf631fa96ed7b500393ce6022f8c3149d94cc1 **Resources:** **📄 Paper:** [https://arxiv.org/abs/2512.24077](https://arxiv.org/abs/2512.24077) **💻 GitHub:** [https://github.com/baidu-baige/LoongFlow](https://github.com/baidu-baige/LoongFlow) We are looking for feedback on the architecture! Would love to hear your thoughts on combining EA with LLMs. *(If you find this architecture interesting, a Star on the repo helps us gauge interest in porting more "Evolutionary" features to the local LLM stack!)*
2026-01-06T03:06:52
https://www.reddit.com/r/LocalLLaMA/comments/1q571y2/r_we_built_a_framework_to_make_agents_selfevolve/
FreshmanDD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q571y2
false
null
t3_1q571y2
/r/LocalLLaMA/comments/1q571y2/r_we_built_a_framework_to_make_agents_selfevolve/
false
false
https://a.thumbs.redditm…RiG37hyQEym8.jpg
0
null
GPT-5.2 vs Claude Opus 4.5 Personality Benchmark (open-source models coming soon)
1
2026-01-06T02:46:48
https://lindr.io/blog/benchmark
dimethyldumbass
lindr.io
1970-01-01T00:00:00
0
{}
1q56l94
false
null
t3_1q56l94
/r/LocalLLaMA/comments/1q56l94/gpt52_vs_claude_opus_45_personality_benchmark/
false
false
default
1
null
rtx pro 6000 x4 sandwich stacking thermal test
28
https://preview.redd.it/…g in the summer.
2026-01-06T02:28:03
https://www.reddit.com/r/LocalLLaMA/comments/1q565on/rtx_pro_6000_x4_sandwich_stacking_thermal_test/
Comfortable-Plate467
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q565on
false
null
t3_1q565on
/r/LocalLLaMA/comments/1q565on/rtx_pro_6000_x4_sandwich_stacking_thermal_test/
false
false
https://b.thumbs.redditm…-2qTdfinvx_c.jpg
28
null
[Research] Moving beyond "Blind Mutation": We built a Cognitive Evolutionary Framework (LoongFlow) (Paper + Code)
0
Hi r/LocalLLaMA, We’ve been researching **Evolutionary Agents** for a while. While the concept (letting agents evolve their own code/prompts) is fascinating, most existing implementations (like OpenEvolve) treat the LLM as a "black-box mutation operator." Basically, they just randomly mutate code and hope for the best. This leads to a **"Random Walk"**—burning huge amounts of tokens with low efficiency, often getting stuck in local optima because the agent forgets *why* previous attempts failed. To fix this "Cognitive Ceiling," we built **LoongFlow**, a framework that maps evolutionary search to a structured **Cognitive Process**, rather than just stochastic guessing. **We just released the paper and code. Here is the architecture breakdown for those interested in Agent System Design:** # 1. The Core: "Plan-Execute-Summarize" (PES) Paradigm Instead of a simple `mutate -> test` loop, we enforce a cognitive lifecycle for every evolutionary step: * **🧠 Planner (Lineage-Based Search):** It doesn't just generate a plan; it retrieves the **"Original Intent"** and **"Retrospective Feedback"** from the agent's ancestors (Lineage Context). This prevents the agent from repeating the same failed logic across generations. * **🛠 Executor (Polymorphic):** Decouples the "What" (Plan) from the "How" (Code). It includes a local verification loop ("Fast-Fail") to catch syntax errors before wasting expensive global evaluation resources. * **📝 Summarizer (Abductive Reflection):** This is critical. After execution, it performs **Abductive Reasoning** to analyze *why* a mutation failed or succeeded, storing this "Insight" back into memory. # 2. Solving "Mode Collapse": Hybrid Evolutionary Memory A common issue with evolving agents is that the population converges too quickly to a single (often mediocre) solution. We implemented a **Multi-Island Model combined with MAP-Elites**: * **Islands:** We run parallel populations that evolve independently to maintain distinct "species" of logic. * **MAP-Elites:** Instead of just keeping the "Top 10" scores, we map solutions to a feature grid (e.g., Code Complexity vs. Novelty). We keep the best solution *for each niche*. This preserves "stepping stone" solutions—code that might score lower now but has a unique structure that could lead to a breakthrough later. # 3. Results By adding this cognitive layer: * **Efficiency:** We saw a **60% reduction** in computational cost compared to standard evolutionary baselines (OpenEvolve) on the Circle Packing benchmark. https://preview.redd.it/73h37cqb2nbg1.jpg?width=878&format=pjpg&auto=webp&s=4f80ca2d1e50472301c263990f8aadff11b868ec * **Breakthroughs:** On the AlphaEvolve suite, LoongFlow broke theoretical performance barriers (Score > 1.0) where baseline methods stagnated. And also in MLE bench, we got 16 Gold、11 Silver、2 Bronze https://preview.redd.it/bqieyhcd2nbg1.jpg?width=898&format=pjpg&auto=webp&s=f6714e20379478a6150b93a210f67c93c1bf90e0 https://preview.redd.it/zwg8i5lk2nbg1.jpg?width=866&format=pjpg&auto=webp&s=5b77ce91702370aa54600f48231d4e1ab8cd18b9 **Resources:** * **Paper (Deep dive into the math & algo):**[arXiv:2512.24077](https://arxiv.org/abs/2512.24077) * **Code (Python):**[GitHub - LoongFlow](https://github.com/baidu-baige/LoongFlow) *(If you find this architecture interesting, a Star on the repo helps us gauge interest in porting more "Evolutionary" features to the local LLM stack!)*
2026-01-06T02:23:23
https://www.reddit.com/r/LocalLLaMA/comments/1q561oj/research_moving_beyond_blind_mutation_we_built_a/
FreshmanDD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q561oj
false
null
t3_1q561oj
/r/LocalLLaMA/comments/1q561oj/research_moving_beyond_blind_mutation_we_built_a/
false
false
https://b.thumbs.redditm…BOPzFA15ojzI.jpg
0
null
MCP Chat Studio v3 — Studio Assistant, OpenAPI→MCP Generator, Workspace Mode + CI Gates (Postman‑style flow)
0
Hey r/… 👋 We shipped a big MCP Chat Studio update and it’s now a real “Postman for MCP servers” workflow. Highlights \- Studio Assistant (bottom‑right compass) — context‑aware helper, can open panels, run actions, import OpenAPI, and guide setup \- OpenAPI → MCP Generator (JSON + YAML) \- proxy mode + auth mapping \- “Test in Studio” + Run & Connect (Auto) \- auto‑naming + error details on failure \- Workspace Mode — floating panels, quick‑bar, command palette, sessions, templates, export/import bundles \- Inspector upgrades — Bulk Test + heatmap, Diff/Matrix, History → Matrix, schema fuzzing \- Contracts + CI gates — export baseline & fail‑on‑change \- Mocks — create/connect/test mock servers \- Workflow AI Builder — generate flows, validate, export Python/Node \- Security hardening — CSRF, audit logging, session persistence If anyone wants to try it or break it, here’s the repo: [https://github.com/JoeCastrom/mcp-chat-studio](https://github.com/JoeCastrom/mcp-chat-studio) Would love feedback: what’s still missing for MCP testing to feel better
2026-01-06T02:03:59
https://github.com/JoeCastrom/mcp-chat-studio
Some-Put8242
github.com
1970-01-01T00:00:00
0
{}
1q55lpz
false
null
t3_1q55lpz
/r/LocalLLaMA/comments/1q55lpz/mcp_chat_studio_v3_studio_assistant_openapimcp/
false
false
default
0
{'enabled': False, 'images': [{'id': 'rjuYKMrQmP9urnOv8rvEGQjM3Fs7KlXpzMBhLtYSNv4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rjuYKMrQmP9urnOv8rvEGQjM3Fs7KlXpzMBhLtYSNv4.png?width=108&crop=smart&auto=webp&s=ce7bd59a4dc5209cd34bee16718173cd839a469f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rjuYKMrQmP9urnOv8rvEGQjM3Fs7KlXpzMBhLtYSNv4.png?width=216&crop=smart&auto=webp&s=c23406fb7dba676393b1e4522c49de76c4169a16', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rjuYKMrQmP9urnOv8rvEGQjM3Fs7KlXpzMBhLtYSNv4.png?width=320&crop=smart&auto=webp&s=518c434c4315e8380233610c4497752479eb96d6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rjuYKMrQmP9urnOv8rvEGQjM3Fs7KlXpzMBhLtYSNv4.png?width=640&crop=smart&auto=webp&s=e27711e54327900d25579f8cbdb83635d02e596f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rjuYKMrQmP9urnOv8rvEGQjM3Fs7KlXpzMBhLtYSNv4.png?width=960&crop=smart&auto=webp&s=8be878594ff872d71335b0316dd50b426322cf4e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rjuYKMrQmP9urnOv8rvEGQjM3Fs7KlXpzMBhLtYSNv4.png?width=1080&crop=smart&auto=webp&s=060eddb227ec748ddfe431947f17d3cd91a2df43', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rjuYKMrQmP9urnOv8rvEGQjM3Fs7KlXpzMBhLtYSNv4.png?auto=webp&s=954b288812bea2943515e28024f4757d9c512fd6', 'width': 1200}, 'variants': {}}]}
Hairy presentation
1
2026-01-06T01:56:19
https://i.redd.it/kslsh758ymbg1.png
Wrong_User_Logged
i.redd.it
1970-01-01T00:00:00
0
{}
1q55f37
false
null
t3_1q55f37
/r/LocalLLaMA/comments/1q55f37/hairy_presentation/
false
false
default
1
{'enabled': True, 'images': [{'id': 'kslsh758ymbg1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/kslsh758ymbg1.png?width=108&crop=smart&auto=webp&s=e1b564ca2b1fbc88b11f32275674594186a1483c', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/kslsh758ymbg1.png?width=216&crop=smart&auto=webp&s=faa20a8589dcd6b201ae7f51c6f68b02a951bdf1', 'width': 216}, {'height': 196, 'url': 'https://preview.redd.it/kslsh758ymbg1.png?width=320&crop=smart&auto=webp&s=405ac8dfff9ab954136450ecbccb744cb4a31aea', 'width': 320}, {'height': 393, 'url': 'https://preview.redd.it/kslsh758ymbg1.png?width=640&crop=smart&auto=webp&s=20b0909ad3c5f7382d667fe04f6e83f02e694c1d', 'width': 640}, {'height': 590, 'url': 'https://preview.redd.it/kslsh758ymbg1.png?width=960&crop=smart&auto=webp&s=6a6dd1d4a0e3bcb0d76c7466e300d97bc1d0f605', 'width': 960}, {'height': 663, 'url': 'https://preview.redd.it/kslsh758ymbg1.png?width=1080&crop=smart&auto=webp&s=622cbd7b636eadc18b39767e2165946bd510eb6e', 'width': 1080}], 'source': {'height': 1972, 'url': 'https://preview.redd.it/kslsh758ymbg1.png?auto=webp&s=ecb47a4eb87d7b8c1e3995241397a406b9b68c6c', 'width': 3208}, 'variants': {}}]}
2 x Instinct Mi 50 32gb for n8n with GPT OSS - 120b
3
I am planning on creating an MCP for a company I work at ([here's a post from a couple of days ago for reference](https://www.reddit.com/r/LocalLLaMA/comments/1q362uu/comment/nxjgzho/?context=3)) and I have the chance to snag a pair of 32gb mi50s to run GPT OSS 120b. (Originally I wanted to run some local llama model just for the sake of testing since I have an RX 9070 XT, but the opportunity for 2 mi50s presented itself and the price for both units is great!) My questions are as follows: 1. Would 2 mi 50s be enough? I saw some people claim 70gb memory total would be enough. I don't have a pc at the moment to run both cards, but I do have a few lists for an AMD EPYC SP3 builds with probably either 16 gb ddr4 3200mhz or 32 gb ddr4 3200 mhz (probably 16 considering current ram shortage and price inflation). 2. Am I going overboard with GPT OSS - 120b? I have checked a few posts in the sub as well as online and it seems like GPT OSS 120b with 2 x mi50 32gb does spectacularly when it comes to MCP. Should I stick to something a bit simpler\\smaller? 3. Are there any equivalent (or better) models I should try mcp with, first? I know each model has its strengths and weaknesses, and every model should be approached differently and that even GPT OSS 120b has short comings just like every other model, but, it really seems like it does the job the best so far. Any feedback would be very much appreciated. Thanks in advance, best regards. P.S. I'd, also, appreciate any suggestions\\tips as to how to price the work\\what to pay attention to (work price wise) considering the mi 50s would probably consume quite a bit of electricity. I don't want to end up doing all this work to end up going under when quoting the company😅
2026-01-06T01:50:27
https://www.reddit.com/r/LocalLLaMA/comments/1q55a5z/2_x_instinct_mi_50_32gb_for_n8n_with_gpt_oss_120b/
Big_black_click
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q55a5z
false
null
t3_1q55a5z
/r/LocalLLaMA/comments/1q55a5z/2_x_instinct_mi_50_32gb_for_n8n_with_gpt_oss_120b/
false
false
self
3
null
Optimizing for the RAM shortage. At crossroads: Epyc 7002/7003 or go with a 9000 Threadripper?
6
Hi folks, I would appreciate your help (and a sanity check) on my future AI server/Home Server build. I would appreciate your thoughts and some help with my questions. I have some experience with Ollama on my MacBook, but prompt processing is insanely slow even for reasonably short chats. I’d like to have a proper AI server with some GPUs. I am new to GPU inference (never done it), so I would appreciate your patience if (despite lots of research) any of my questions sound stupid. \- The server would double as regular home server, a self hosting server, and an AI server with an API endpoint for home devices on LAN. Maybe a CI server for dev stuff. I hope to run Proxmox with a TrueNAS VM for storage and containers and a separate AI Linux VM with GPUs passed through to that VM. \- I was originally planning on an Epyc 9005 build with DDR5 and was waiting for Black Friday sales, but the subsequent RAM shortage made me re-evaluate my plans to optimize for value. I am now considering 2 paths: 1. An older **Epyc 7002/7003** build. Found 128GB (4x 32GB) of 3200 DDR4 RDIMMs that, while not on QVL, was still reasonably priced (close to Sep/Oct prices) and fits the ROMED8 RAM specs. 1. **Threadripper 9960x** (with ASUS TRX50-SAGE Pro WS WIFI A AMD sTR5 CEB Motherboard). Why? Microcenter's deep bundle discount makes the inflated cost of DDR5 far more palatable. Both would supply lots of lanes. Epyc is a much higher count (128x) than Threadripper (88x), but Threadripper is PCIe5 (vs PCIe4 in Epyc 7002/7003). I am planning on adding GPUs to my build: either a 5090 FE if I can score any at close to MSRP, or maybe a refurb 3090s if I can score them at a reasonable price. I plan to upgrade to a multi-GPU setup down the road if everything goes well. I have 2x Intel Arc Pro B50's to get me started. I know they are weak, but they have SR-IOV (so, great for VMs), and I can play around to get my toes wet until I come across a decent deal on a better GPU. Threadripper 9960x is a 4-channel CPU, and should be able to pull close to 200Gbs RAM bandwidth per benchmarks/specs. Epyc 7002/7003 can pull close to that, but only if all RAM slots are populated, which will probably not be the case because getting 8-12 sticks of the same RAM is crazy expensive right now even for DDR4, and it’s not likely that I would be able to match the sticks that I already managed to obtain. I would love to go with Epyc 9005 platform and 12 channels/sticks for the holy grail of its 600 Gbs RAM bandwidth, but that is outside my budget with the current prices. **Questions:** 1. If I do end up going with 7002/7003 Epyc, what is the sweet spot for the CPU? Should I go for something hot and expensive like 73F3, or would something cheaper be as good for this use case? How do you go about picking a CPU? I would imagine offloading MoE layers to CPU (let alone full CPU inference) VS fully in-VRAM scenarios really diverge from each other. What would you get and why? 2. The slower PCI4 would theoretically punish the prompt processing/prefill stage IIUC because the VRAM would get populated at at a slower rate, right? But how much does PCI5 vs PCI4 matter in real life in your experience? 3. RAM bandwidth is probably the most important for CPU-only inference and offloading MoE layers to CPU, right? How important is it if I get, say, a quad 3090 setup and run models fully in VRAM? 4. I may want to install an SFP NIC and an NVME card (like Asus Hyper with 4x NVME slots), possibly an HBA card to passthrough HDDs to the TrueNAS VM. To make that happen AND not lock myself out of possibility of running quad GPUs—question/sanity check: How much of a perf hit is it to run GPUs in a 8x mode? Would bifurcating TWO full 16x PCIe slots into FOUR x8 slots with some sort of raisers be a possible/reasonable solution?  5. I don’t know what I don’t know, so general thoughts and comments are very much welcome and appreciated: What would you go with? I am leaning towards Threadripper, but that will come with the penalty of lots of heat (and also more money), but the benefit of newer platform and CPU power, PCIe5, DDR5, etc. Thank you in advance ^(P.S. Would it be possible to use a Windows guest on Proxmox for some gaming on Threadripper when GPU(s) are not doing inference/AI stuff to save on costs of redundant hardware, or would it be a bad idea?)
2026-01-06T00:25:35
https://www.reddit.com/r/LocalLLaMA/comments/1q538m0/optimizing_for_the_ram_shortage_at_crossroads/
Infinite100p
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q538m0
false
null
t3_1q538m0
/r/LocalLLaMA/comments/1q538m0/optimizing_for_the_ram_shortage_at_crossroads/
false
false
self
6
{'enabled': False, 'images': [{'id': 'siwWwZDVCzsbtA-158TFXyQp4LxIHa-Y5Is2TonRIX8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/siwWwZDVCzsbtA-158TFXyQp4LxIHa-Y5Is2TonRIX8.jpeg?width=108&crop=smart&auto=webp&s=065f68a421d2697df3c5f306daa95f58b61812f4', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/siwWwZDVCzsbtA-158TFXyQp4LxIHa-Y5Is2TonRIX8.jpeg?auto=webp&s=3e4a68cb724ec9aa1dd11bca6583e2f782877ba1', 'width': 200}, 'variants': {}}]}
I just saw Intel embrace local LLM inference in their CES presentation
140
After watching Nvidia show off their massive cloud inference machine while ignoring the existence of local inference I was pleasantly surprised by the message Intel was sending. Intel flipped the script and talked about how local inference in the future because of user privacy, control, model responsiveness and cloud bottlenecks. I have read countless posts on here about how local inference is dead because Nvidia switched to a cloud first strategy but this might just be temporary because others are apparently thrilled by the idea of building us the hardware we want. And they are leaning into it so who knows what the future brings. Local inference clearly isn't as dead as some want us to believe and it might even become a lot bigger in the near future.
2026-01-06T00:00:43
https://www.reddit.com/r/LocalLLaMA/comments/1q52miw/i_just_saw_intel_embrace_local_llm_inference_in/
Mundane-Light6394
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q52miw
false
null
t3_1q52miw
/r/LocalLLaMA/comments/1q52miw/i_just_saw_intel_embrace_local_llm_inference_in/
false
false
self
140
null
LM Studio MCP
0
Holy fuck!! Amazon shopping agents is possible fully local.
2026-01-05T23:59:51
https://v.redd.it/6dp04mbadmbg1
Serious_Molasses313
/r/LocalLLaMA/comments/1q52lmw/lm_studio_mcp/
1970-01-01T00:00:00
0
{}
1q52lmw
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6dp04mbadmbg1/DASHPlaylist.mpd?a=1770379197%2CZjhiZGM4MTE3OGE0YTBmN2I5MGY5ZmY3YWU3YWQzMzgyNDc4ZDhiYWM1MTc0YjE2NzM0Mjk1OWRkNDI0Njc3Mg%3D%3D&v=1&f=sd', 'duration': 148, 'fallback_url': 'https://v.redd.it/6dp04mbadmbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/6dp04mbadmbg1/HLSPlaylist.m3u8?a=1770379197%2CYWFmNTE2ZTdiMDYxZmJmOTZiY2M1YWZmMTBmOTkyZmNmZThlYTlmOGNiM2E0NjgwNzJlYmY1YTg2OWJjZWMzNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6dp04mbadmbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 856}}
t3_1q52lmw
/r/LocalLLaMA/comments/1q52lmw/lm_studio_mcp/
false
false
https://external-preview…775975ac165ebcbe
0
{'enabled': False, 'images': [{'id': 'czVpdGIzY2FkbWJnMQxLWa3keRbrQmKfyfou81N-L1ggIMiYE8zAk1RM_9ML', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/czVpdGIzY2FkbWJnMQxLWa3keRbrQmKfyfou81N-L1ggIMiYE8zAk1RM_9ML.png?width=108&crop=smart&format=pjpg&auto=webp&s=87ba38c4d8e18860c3ca30cc0ad231ccf1e3601e', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/czVpdGIzY2FkbWJnMQxLWa3keRbrQmKfyfou81N-L1ggIMiYE8zAk1RM_9ML.png?width=216&crop=smart&format=pjpg&auto=webp&s=bf6f51511eb45f4f90d1ac4c9595482a2a0aa60a', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/czVpdGIzY2FkbWJnMQxLWa3keRbrQmKfyfou81N-L1ggIMiYE8zAk1RM_9ML.png?width=320&crop=smart&format=pjpg&auto=webp&s=ad4633617527468bfd0d878303c41d536b9f9bdc', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/czVpdGIzY2FkbWJnMQxLWa3keRbrQmKfyfou81N-L1ggIMiYE8zAk1RM_9ML.png?width=640&crop=smart&format=pjpg&auto=webp&s=1bd5099a5a4b535659758c0db46d9e5c18f80689', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/czVpdGIzY2FkbWJnMQxLWa3keRbrQmKfyfou81N-L1ggIMiYE8zAk1RM_9ML.png?format=pjpg&auto=webp&s=ea6d910e493f76867a685a4a5cb1e776217af51f', 'width': 778}, 'variants': {}}]}
Holy Fuck
0
I figure it how to make shopping agents with 6GB vram video coming soon
2026-01-05T23:48:19
https://www.reddit.com/r/LocalLLaMA/comments/1q52bid/holy_fuck/
Serious_Molasses313
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q52bid
false
null
t3_1q52bid
/r/LocalLLaMA/comments/1q52bid/holy_fuck/
false
false
self
0
null
[Release] Delta -- LLM powered coding tool for engineers.
1
Some time ago, I got frustrated with a lot of the issues AI agents have and decided to build myself a tool that strips away a lot of the agentic junk and just writes the code I tell it to. I've been using this tool for about a year, but with the quality of recent models I decided it's time to clean it up and release it publicly. **Delta** is essentially a lightweight wrapper around an LLM that allows it to edit files, with a focus on the following: - **Reliable File Edits:** Delta does everything it can to make sure it actually applies the changes in the response via a robust fuzzy diff algorithm. - **Context Management:** Delta has tools to minimize the time spent providing context. Tools like file groups, temporary file toggles, and a built in context manager mean minimal time spent digging through your filesystem. - **Workflow and QoL:** Tabs, backups, automatic testing & validation, automatic retries, notifications when output is finished, etc. - **Transparency:** If something goes wrong, a user should easily be able to diagnose how to steer the LLM better in the future. - **Configurability:** Almost any feature in delta is optional or configurable. In its minimal state, LLM functions as a plain LLM wrapper. Delta uses the OpenAI client API out of the box, so you can point it to local models via **Ollama**, etc, or cloud models via **OpenRouter**. You can find it on Github here: https://github.com/truefire/delta
2026-01-05T23:40:23
https://www.reddit.com/r/LocalLLaMA/comments/1q524n4/release_delta_llm_powered_coding_tool_for/
truefire87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q524n4
false
null
t3_1q524n4
/r/LocalLLaMA/comments/1q524n4/release_delta_llm_powered_coding_tool_for/
false
false
self
1
{'enabled': False, 'images': [{'id': '_bjzgPzyrc_AWnTPFGenUz_Ki8pKgQwd7r98nARJQlk', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/_bjzgPzyrc_AWnTPFGenUz_Ki8pKgQwd7r98nARJQlk.png?width=108&crop=smart&auto=webp&s=2318b41c8517c3356b9cfb7ad428325955b543fb', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/_bjzgPzyrc_AWnTPFGenUz_Ki8pKgQwd7r98nARJQlk.png?width=216&crop=smart&auto=webp&s=6da00dfc84290c0129cee51a96ee7fbcb683b384', 'width': 216}, {'height': 189, 'url': 'https://external-preview.redd.it/_bjzgPzyrc_AWnTPFGenUz_Ki8pKgQwd7r98nARJQlk.png?width=320&crop=smart&auto=webp&s=0ac3f6fc726581bcf0dbff02d497c565d00aa350', 'width': 320}, {'height': 379, 'url': 'https://external-preview.redd.it/_bjzgPzyrc_AWnTPFGenUz_Ki8pKgQwd7r98nARJQlk.png?width=640&crop=smart&auto=webp&s=8abae8338581de428c812a5fa6094637b51a8cbc', 'width': 640}, {'height': 568, 'url': 'https://external-preview.redd.it/_bjzgPzyrc_AWnTPFGenUz_Ki8pKgQwd7r98nARJQlk.png?width=960&crop=smart&auto=webp&s=c57690ccc4fcceda52e6ec48f6c85b2bd0809b96', 'width': 960}, {'height': 639, 'url': 'https://external-preview.redd.it/_bjzgPzyrc_AWnTPFGenUz_Ki8pKgQwd7r98nARJQlk.png?width=1080&crop=smart&auto=webp&s=cb10ccf3e732d0f0e4d0fe0f2996a15c0f7276f5', 'width': 1080}], 'source': {'height': 975, 'url': 'https://external-preview.redd.it/_bjzgPzyrc_AWnTPFGenUz_Ki8pKgQwd7r98nARJQlk.png?auto=webp&s=cad51f2daab427d9472b823451c371a6060687cc', 'width': 1646}, 'variants': {}}]}
GTX 1080 vs RTX 2070 Super for inference
0
So I have an old GTX 1080 (8GB) and the possibility of a used 2070 Super (8GB) for not too much from a good source, and debating if it's worth spending the money for the 2070 Super or just save up for a newer card with more VRAM (>=16GB) for the future. This is to run Ollama locally, with one of the smaller LLMs for Home-Assistant voice control agent. Haven't settled on which one exactly, I'll have to see how they perform and function first.
2026-01-05T23:23:22
https://www.reddit.com/r/LocalLLaMA/comments/1q51p7f/gtx_1080_vs_rtx_2070_super_for_inference/
NE556
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q51p7f
false
null
t3_1q51p7f
/r/LocalLLaMA/comments/1q51p7f/gtx_1080_vs_rtx_2070_super_for_inference/
false
false
self
0
null
Gemma 3 1B + QLoRA = garbage output. Anyone got it working?
1
Tried fine-tuning Gemma 3 1B with 4-bit QLoRA (HuggingFace + TRL + PEFT). Model outputs repetitive garbage after training: MachineMachineMachineMachineBlackBlack將기 Tried everything: different LoRA ranks, proper chat format, EOS tokens, bf16 matching — nothing worked. **What fixed it:** Removing quantization completely. Full precision (float16) works perfectly. **Questions:** 1. Anyone got Gemma 3 1B + QLoRA working with HuggingFace/TRL? 2. Is 1B just too small for quantized training? 3. Any specific settings I'm missing? I know 1B doesn't need quantization, but want to learn QLoRA for bigger models. Thanks!
2026-01-05T23:19:46
https://www.reddit.com/r/LocalLLaMA/comments/1q51lw5/gemma_3_1b_qlora_garbage_output_anyone_got_it/
im_pulsive
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q51lw5
false
null
t3_1q51lw5
/r/LocalLLaMA/comments/1q51lw5/gemma_3_1b_qlora_garbage_output_anyone_got_it/
false
false
self
1
null
Claude can reference thinkt ags from previous comments. Why not SmolLM3?
0
Most LLMs that can "reason" have no ability to speak as if they can read their reasoning in the `<think></think>` tags in future responses. This is because Qwen models actually strip "reasoning" after the prompt is generated to reduce context space and keep computational efficiency. But looking at SmolLM3's [chat template,](https://huggingface.co/HuggingFaceTB/SmolLM3-3B/raw/main/chat_template.jinja) no stripping appears to occur. Before you jump the gun and say "But the reasoning is in context space. Maybe your client (the ui) is stripping it automatically." Well, my UI is llama-cpp's own, and I specifically enabled a "Show raw output" setting which doesn't do any parsing on the server side or client side and throws the FULL response, with think tags, back into context. [This is the behaviour I see with SmolLM3.](https://imgur.com/a/w0QmHaq) And it fails worse to repeat the thinking block in the current response. Read the paragraph starting with "alternatively" for a **TL;DR** However Claude surprisingly has the ability to perform hybrid "reasoning," where appending proprietary anthropic xml tags at the end of your message will enable such behaviour. Turns out claude cannot only read the verbatim reasonign blovks from the current response but also from past responses as seen [here.](https://claude.ai/share/964d8268-0cd4-4f29-bb61-bcf9ecc934ee) Why are models likw SmolLM3 behaving as if the think block never existed in the previous response where as Claude is like "Sure here's the reasoning"?
2026-01-05T23:09:11
https://www.reddit.com/r/LocalLLaMA/comments/1q51cbi/claude_can_reference_thinkt_ags_from_previous/
Brospeh-Stalin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q51cbi
false
null
t3_1q51cbi
/r/LocalLLaMA/comments/1q51cbi/claude_can_reference_thinkt_ags_from_previous/
false
false
self
0
{'enabled': False, 'images': [{'id': 'aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=108&crop=smart&auto=webp&s=84c0fa6d1390e0859c17361f50590d8a19902843', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=216&crop=smart&auto=webp&s=7735cf0be54c8e3b07941bf32abfe859691057ee', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=320&crop=smart&auto=webp&s=b47feae299591fc4a5a9853f692e95e7560395a5', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=640&crop=smart&auto=webp&s=cee87d9a29dd096acb00e09a5be602268929d173', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=960&crop=smart&auto=webp&s=9c0f20570c163bc868702bb6be87cb575ba708b5', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=1080&crop=smart&auto=webp&s=22f88c1bb243321c2c12b60aab0dbc2bc57f53db', 'width': 1080}], 'source': {'height': 4242, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?auto=webp&s=44337b5248ed18f1d09c8872d7758c41794b663f', 'width': 1920}, 'variants': {}}]}
Should I build a context-aware LLM chat interface or does one exist?
0
Problem I'm trying to solve: Long conversations (100+ messages) lose quality as context window fills. Looking for tool that manages this automatically. Features I want: \- Real-time context monitoring \- Smart compacting (before window fills) \- Topic-based context management (freeze old topics) \- BYOK model (user supplies API keys) \- Desktop app (Electron?) Before I spend months building this - does it already exist? Checked: LM Studio, LibreChat, OpenWebUI, [Continue.dev](http://Continue.dev) \- none do this.
2026-01-05T22:53:53
https://www.reddit.com/r/LocalLLaMA/comments/1q50y9j/should_i_build_a_contextaware_llm_chat_interface/
PossibilityJazzlike
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q50y9j
false
null
t3_1q50y9j
/r/LocalLLaMA/comments/1q50y9j/should_i_build_a_contextaware_llm_chat_interface/
false
false
self
0
{'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]}
Best tool for automatic context window management?
1
Looking for recommendations: What's the best self-hosted/desktop chat interface that does intelligent context management? Specifically: \- Shows context window health (% full) \- Auto-compacts or summarizes when getting full \- Maintains conversation quality in 100+ message sessions \- Supports multiple LLM APIs (Claude, OpenAI, local) Tried LM Studio (no context features), LibreChat (just basic UI), OpenWebUI (no context tracking). Am I missing something obvious, or does this not exist yet?
2026-01-05T22:52:35
https://www.reddit.com/r/LocalLLaMA/comments/1q50x2t/best_tool_for_automatic_context_window_management/
PossibilityJazzlike
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q50x2t
false
null
t3_1q50x2t
/r/LocalLLaMA/comments/1q50x2t/best_tool_for_automatic_context_window_management/
false
false
self
1
null
How do we tell them..? :/
69
Not funny really, I couldn't think of a better flair... I have never tried to discuss things where a model would refuse to cooperate, I just woke up one day and thought what GLM (the biggest model I can run locally, using unsloth's IQ2\_M) would think of it. I didn't expect it to go this way, I think we all wish it was fiction. How do we break the news to local LLMs? I gave up rephasing the prompt after three tries. Anyways, 128GB DDR5 paired with an RTX 4060 8GB using an old 0.3.30 LMStudio on Windows 11 to yield the 2.2 ts seen, I am happy with the setup. Will migrate inference to Ubuntu soon.
2026-01-05T22:34:11
https://i.redd.it/0mykulscxlbg1.png
therealAtten
i.redd.it
1970-01-01T00:00:00
0
{}
1q50g37
false
null
t3_1q50g37
/r/LocalLLaMA/comments/1q50g37/how_do_we_tell_them/
false
false
default
69
{'enabled': True, 'images': [{'id': '0mykulscxlbg1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/0mykulscxlbg1.png?width=108&crop=smart&auto=webp&s=8c601394a96060d982bccb39ff9e64c0bf60a506', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/0mykulscxlbg1.png?width=216&crop=smart&auto=webp&s=9e09bc132597b1f538cd2a076d92bbe3b80d38c2', 'width': 216}, {'height': 336, 'url': 'https://preview.redd.it/0mykulscxlbg1.png?width=320&crop=smart&auto=webp&s=c69aef6ad3ddd7320fba832f03c50c63610a56a0', 'width': 320}, {'height': 673, 'url': 'https://preview.redd.it/0mykulscxlbg1.png?width=640&crop=smart&auto=webp&s=ac9cb2a24636ea77e1a6421eb9c7ac1a58329fe4', 'width': 640}, {'height': 1010, 'url': 'https://preview.redd.it/0mykulscxlbg1.png?width=960&crop=smart&auto=webp&s=c7961b3657e56c427ceba2d9701bb0353325524f', 'width': 960}, {'height': 1136, 'url': 'https://preview.redd.it/0mykulscxlbg1.png?width=1080&crop=smart&auto=webp&s=c7742a6981262feaa96a090bd4a904dead723fb8', 'width': 1080}], 'source': {'height': 1503, 'url': 'https://preview.redd.it/0mykulscxlbg1.png?auto=webp&s=8bc4e5223e28d313f7628eeb518048eb3e9bd73b', 'width': 1428}, 'variants': {}}]}
Nvidia launches Alpamayo, open AI models that allow autonomous vehicles to 'think like a human' | TechCrunch
35
2026-01-05T22:19:59
https://techcrunch.com/2026/01/05/nvidia-launches-alpamayo-open-ai-models-that-allow-autonomous-vehicles-to-think-like-a-human/
Recoil42
techcrunch.com
1970-01-01T00:00:00
0
{}
1q502gi
false
null
t3_1q502gi
/r/LocalLLaMA/comments/1q502gi/nvidia_launches_alpamayo_open_ai_models_that/
false
false
default
35
{'enabled': False, 'images': [{'id': 'GJaZbAgP3SIZL8AkKdAcAcQrucRHAB3k6jL5Bl11M04', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/GJaZbAgP3SIZL8AkKdAcAcQrucRHAB3k6jL5Bl11M04.jpeg?width=108&crop=smart&auto=webp&s=ff0f59c99fbdf6d0b9c633303565ceacd79adb5a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/GJaZbAgP3SIZL8AkKdAcAcQrucRHAB3k6jL5Bl11M04.jpeg?width=216&crop=smart&auto=webp&s=a805ca027e18caea80942d5133416af398d4aadf', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/GJaZbAgP3SIZL8AkKdAcAcQrucRHAB3k6jL5Bl11M04.jpeg?width=320&crop=smart&auto=webp&s=aa42011d94514e5c859d4e267baaeff70f28c0e5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/GJaZbAgP3SIZL8AkKdAcAcQrucRHAB3k6jL5Bl11M04.jpeg?width=640&crop=smart&auto=webp&s=e95010decb64b8f6e494dddf8d32e41cf369a813', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/GJaZbAgP3SIZL8AkKdAcAcQrucRHAB3k6jL5Bl11M04.jpeg?width=960&crop=smart&auto=webp&s=7967d5b445b92aff480641a3dd450c2ae89985c1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/GJaZbAgP3SIZL8AkKdAcAcQrucRHAB3k6jL5Bl11M04.jpeg?width=1080&crop=smart&auto=webp&s=7976e82192b94370a60e63c5dc88627f4762e0c7', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/GJaZbAgP3SIZL8AkKdAcAcQrucRHAB3k6jL5Bl11M04.jpeg?auto=webp&s=0ace4dd4486d37bbc078609eceb19160822c1752', 'width': 1200}, 'variants': {}}]}
Rubin uplifts from CES conference going on now
220
Pretty exciting!
2026-01-05T22:19:51
https://i.redd.it/zgs8qc8kvlbg1.jpeg
mr_zerolith
i.redd.it
1970-01-01T00:00:00
0
{}
1q502bi
false
null
t3_1q502bi
/r/LocalLLaMA/comments/1q502bi/rubin_uplifts_from_ces_conference_going_on_now/
false
false
default
220
{'enabled': True, 'images': [{'id': 'zgs8qc8kvlbg1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/zgs8qc8kvlbg1.jpeg?width=108&crop=smart&auto=webp&s=a4b2a330483734f3a6c9a39a9577a4f9b37c657b', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/zgs8qc8kvlbg1.jpeg?width=216&crop=smart&auto=webp&s=57cfc12f3a9c76c0a19fc90d62ddccd6eb7182a6', 'width': 216}, {'height': 116, 'url': 'https://preview.redd.it/zgs8qc8kvlbg1.jpeg?width=320&crop=smart&auto=webp&s=b7705ac50527e318a69cf74cd45c1f3af00eb8a0', 'width': 320}, {'height': 233, 'url': 'https://preview.redd.it/zgs8qc8kvlbg1.jpeg?width=640&crop=smart&auto=webp&s=100fa835e15e141c2bf711a1c174360f0ecee07a', 'width': 640}, {'height': 350, 'url': 'https://preview.redd.it/zgs8qc8kvlbg1.jpeg?width=960&crop=smart&auto=webp&s=1c33965c88c1a111798d0311fc6fb178bc283f9e', 'width': 960}, {'height': 394, 'url': 'https://preview.redd.it/zgs8qc8kvlbg1.jpeg?width=1080&crop=smart&auto=webp&s=828193d0ed1672c760f2228324e68c48f0f2b6f2', 'width': 1080}], 'source': {'height': 506, 'url': 'https://preview.redd.it/zgs8qc8kvlbg1.jpeg?auto=webp&s=0d130249f6266ab85b6d6cb8c5371868420b755b', 'width': 1384}, 'variants': {}}]}
New ik_llama benches - what you getting?
26
Looks like I'm getting **double** the PP and TG on Devstral Large. Someone said they're getting 4x?! Very nice, regardless. llama.cpp: $ llama-bench -m mistralai_Devstral-2-123B-Instruct-2512-Q4_K_L-00001-of-00002.gguf --flash-attn 1 ggml_cuda_init: found 4 CUDA devices: Device 0: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes Device 1: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes Device 2: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes Device 3: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes | model | size | params | backend | ngl | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: | | llama ?B Q4_K - Medium | 70.86 GiB | 125.03 B | CUDA | 99 | 1 | pp512 | 427.12 ± 0.52 | | llama ?B Q4_K - Medium | 70.86 GiB | 125.03 B | CUDA | 99 | 1 | tg128 | 11.99 ± 0.00 | build: f47edb8c1 (7636) ik\_llama: $ ./llama-bench -m mistralai_Devstral-2-123B-Instruct-2512-Q4_K_L-00001-of-00002.gguf -sm graph --flash-attn 1 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 4 CUDA devices: Device 0: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes, VRAM: 24112 MiB Device 1: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes, VRAM: 24112 MiB Device 2: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes, VRAM: 24112 MiB Device 3: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes, VRAM: 24112 MiB =============================== NCCL main communicator initialized =============================== NCCL pair communicators for 4 GPUs initialized | model | size | params | backend | ngl | sm | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ----: | ------------: | ---------------: | ================================ max_gpu = 0 Device 0: 44 MiB Device 1: 44 MiB Device 2: 44 MiB Device 3: 44 MiB | llama ?B Q4_K - Medium | 138.56 GiB | 246.84 B | CUDA | 999 | graph | pp512 | 915.01 ± 33.93 | Device 0: 22 MiB Device 1: 22 MiB Device 2: 22 MiB Device 3: 22 MiB | llama ?B Q4_K - Medium | 138.56 GiB | 246.84 B | CUDA | 999 | graph | tg128 | 23.00 ± 1.23 | build: d9236392 (4091)
2026-01-05T21:45:22
https://www.reddit.com/r/LocalLLaMA/comments/1q4z5er/new_ik_llama_benches_what_you_getting/
Aggressive-Bother470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4z5er
false
null
t3_1q4z5er
/r/LocalLLaMA/comments/1q4z5er/new_ik_llama_benches_what_you_getting/
false
false
self
26
null
Quality loss on quantized small models?
4
I've read multiple times that big models hold decent quality at low quants. So I wonder if the opposite is also true: small models (<1b) degrade significantly at Q8.
2026-01-05T21:42:36
https://www.reddit.com/r/LocalLLaMA/comments/1q4z2td/quality_loss_on_quantized_small_models/
Smooth-Cow9084
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4z2td
false
null
t3_1q4z2td
/r/LocalLLaMA/comments/1q4z2td/quality_loss_on_quantized_small_models/
false
false
self
4
null
vLLM in docker stops and doesn't run the server
1
Hello i am new in this so i usally run vercel agent sdk on azure deployed models but , i want to expirement and test on my machine so i used LM studio it was fine but the resquest is slow even ( compared to azure of cource ) so i tried using vLLM to squeese power out of the gpu but when i ran the vLLM on docker it's just hang after loading the model into VRAM and localhost:8000 return empty response in and out of the docker my command `docker run --runtime nvidia --gpus all \ -v ~/.cache/huggingface:/root/.cache/huggingface \ --env "HF_TOKEN=$HF_TOKEN" \ -p 8000:8000 \ --ipc=host \ vllm/vllm-openai:latest \ --model Qwen/Qwen3-VL-8B-Instruct-FP8` and this is the conatiner's log `WARNING 01-05 10:48:13 [argparse_utils.py:195] With vllm serve, you should provide the model as a positional argument or in a config file instead of via the --model option. The --model option will be removed in v0.13. (APIServer pid=1) INFO 01-05 10:48:13 [api_server.py:1351] vLLM API server version 0.13.0 (APIServer pid=1) INFO 01-05 10:48:13 [utils.py:253] non-default args: {'model_tag': 'Qwen/Qwen3-VL-8B-Instruct-FP8', 'model': 'Qwen/Qwen3-VL-8B-Instruct-FP8'} (APIServer pid=1) INFO 01-05 10:48:20 [model.py:514] Resolved architecture: Qwen3VLForConditionalGeneration (APIServer pid=1) INFO 01-05 10:48:20 [model.py:1661] Using max model len 262144 (APIServer pid=1) INFO 01-05 10:48:20 [scheduler.py:230] Chunked prefill is enabled with max_num_batched_tokens=2048. (APIServer pid=1) WARNING 01-05 10:48:20 [cache.py:232] Possibly too large swap space. 4.00 GiB out of the 7.70 GiB total CPU memory is allocated for the swap space. (APIServer pid=1) WARNING 01-05 10:48:26 [interface.py:465] Using 'pin_memory=False' as WSL is detected. This may slow down the performance. (EngineCore_DP0 pid=99) INFO 01-05 10:48:32 [core.py:93] Initializing a V1 LLM engine (v0.13.0) with config: model='Qwen/Qwen3-VL-8B-Instruct-FP8', speculative_config=None, tokenizer='Qwen/Qwen3-VL-8B-Instruct-FP8', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=262144, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False), seed=0, served_model_name=Qwen/Qwen3-VL-8B-Instruct-FP8, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['+quant_fp8', 'none', '+quant_fp8'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [2048], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': True, 'fuse_act_quant': True, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False}, 'local_cache_dir': None} (EngineCore_DP0 pid=99) INFO 01-05 10:48:34 [parallel_state.py:1203] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.17.0.2:37965 backend=nccl (EngineCore_DP0 pid=99) INFO 01-05 10:48:34 [parallel_state.py:1411] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank 0 (EngineCore_DP0 pid=99) WARNING 01-05 10:48:35 [interface.py:465] Using 'pin_memory=False' as WSL is detected. This may slow down the performance. (EngineCore_DP0 pid=99) INFO 01-05 10:48:47 [gpu_model_runner.py:3562] Starting to load model Qwen/Qwen3-VL-8B-Instruct-FP8... (EngineCore_DP0 pid=99) INFO 01-05 10:48:47 [mm_encoder_attention.py:104] Using AttentionBackendEnum.FLASH_ATTN for MMEncoderAttention. (EngineCore_DP0 pid=99) INFO 01-05 10:49:05 [cuda.py:351] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')` am i doing something wrong ? env: WIN 11 docker wsl enabled RTX 5060TI
2026-01-05T21:21:35
https://www.reddit.com/r/LocalLLaMA/comments/1q4yi5n/vllm_in_docker_stops_and_doesnt_run_the_server/
chocofoxy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4yi5n
false
null
t3_1q4yi5n
/r/LocalLLaMA/comments/1q4yi5n/vllm_in_docker_stops_and_doesnt_run_the_server/
false
false
self
1
null
LM studio models
0
I am new on reddit. I want lastest Lm studio models that uncensored allowed explict content and everytype of content. Also if any specific support other language (optional)
2026-01-05T21:07:08
https://www.reddit.com/r/LocalLLaMA/comments/1q4y40l/lm_studio_models/
Old_Advantage9029
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4y40l
false
null
t3_1q4y40l
/r/LocalLLaMA/comments/1q4y40l/lm_studio_models/
false
false
self
0
null
Oh, okay....
6
2026-01-05T20:57:08
https://i.redd.it/0kk9ay5tglbg1.png
XiRw
i.redd.it
1970-01-01T00:00:00
0
{}
1q4xu4u
false
null
t3_1q4xu4u
/r/LocalLLaMA/comments/1q4xu4u/oh_okay/
false
false
default
6
{'enabled': True, 'images': [{'id': '0kk9ay5tglbg1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/0kk9ay5tglbg1.png?width=108&crop=smart&auto=webp&s=db8d1670195581f212b3b38ddc4bb4bb279d60af', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/0kk9ay5tglbg1.png?width=216&crop=smart&auto=webp&s=be37b58e02413bafd84e3926df0b07b19442c275', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/0kk9ay5tglbg1.png?width=320&crop=smart&auto=webp&s=c64015e7a1a068f1dbc4d34ceb4f938efa99f29e', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/0kk9ay5tglbg1.png?width=640&crop=smart&auto=webp&s=f4c42348c714281dc949e9bf4e4ea84f2152037d', 'width': 640}, {'height': 600, 'url': 'https://preview.redd.it/0kk9ay5tglbg1.png?width=960&crop=smart&auto=webp&s=afe391b539eaab9e4e984bc29bd7789957588a34', 'width': 960}, {'height': 675, 'url': 'https://preview.redd.it/0kk9ay5tglbg1.png?width=1080&crop=smart&auto=webp&s=2fe7a6db8cf8278b81ccf4adb3f1e3330955e098', 'width': 1080}], 'source': {'height': 933, 'url': 'https://preview.redd.it/0kk9ay5tglbg1.png?auto=webp&s=01ad7284eff7553c6fab4c10c0db4f2d4984c84e', 'width': 1491}, 'variants': {}}]}
ROCm running on a ROG Ally X handheld
52
We were so busy wondering if we could that we didn’t think about whether we should
2026-01-05T20:42:13
https://v.redd.it/uqss3psaelbg1
jfowers_amd
v.redd.it
1970-01-01T00:00:00
0
{}
1q4xfkt
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/uqss3psaelbg1/DASHPlaylist.mpd?a=1770237749%2CMzBmNDI4YWNmYWQxZmE3ZGZiYTg4NWM5ZTIyYWJmYjBiZDFmMzMxZTdlMGRhNjQ3ODczNzAyOWM5YWZmMTM4OQ%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/uqss3psaelbg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/uqss3psaelbg1/HLSPlaylist.m3u8?a=1770237749%2CYzhmNGNlMDQ4ZDk1OTQ1YzY0YTNjZGZkNmYzNGM2MzRiNjljZWRlOTc0OWNiYjRiYTg5OTg5ZmQxOWJjYjYzYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uqss3psaelbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1q4xfkt
/r/LocalLLaMA/comments/1q4xfkt/rocm_running_on_a_rog_ally_x_handheld/
false
false
https://external-preview…43aa0d5d4ef95e6f
52
{'enabled': False, 'images': [{'id': 'aXhkMmNlcGFlbGJnMd3k2aqwIjNTCgUDqX2GlYiDHPm0ORivpDWQVuZGFBj6', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aXhkMmNlcGFlbGJnMd3k2aqwIjNTCgUDqX2GlYiDHPm0ORivpDWQVuZGFBj6.png?width=108&crop=smart&format=pjpg&auto=webp&s=2bea8c71874801eec7770d946431d513190d5c72', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aXhkMmNlcGFlbGJnMd3k2aqwIjNTCgUDqX2GlYiDHPm0ORivpDWQVuZGFBj6.png?width=216&crop=smart&format=pjpg&auto=webp&s=d503ea3222fc9a6bcbec011a2e00024effa35a26', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aXhkMmNlcGFlbGJnMd3k2aqwIjNTCgUDqX2GlYiDHPm0ORivpDWQVuZGFBj6.png?width=320&crop=smart&format=pjpg&auto=webp&s=e17c0e2ad8a2295ee27ed716754991f401353df1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aXhkMmNlcGFlbGJnMd3k2aqwIjNTCgUDqX2GlYiDHPm0ORivpDWQVuZGFBj6.png?width=640&crop=smart&format=pjpg&auto=webp&s=09e82a2e956496b50ed9d746b50e9a461272b97e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aXhkMmNlcGFlbGJnMd3k2aqwIjNTCgUDqX2GlYiDHPm0ORivpDWQVuZGFBj6.png?width=960&crop=smart&format=pjpg&auto=webp&s=63fac933905f2d5281a4f4d2881491b905242f4c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aXhkMmNlcGFlbGJnMd3k2aqwIjNTCgUDqX2GlYiDHPm0ORivpDWQVuZGFBj6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=672f91e8c0cebca2180ff924ddfeefe7735340b2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aXhkMmNlcGFlbGJnMd3k2aqwIjNTCgUDqX2GlYiDHPm0ORivpDWQVuZGFBj6.png?format=pjpg&auto=webp&s=fc7d27905f1f5cd728aac4374e5dd39d7bf5ec9d', 'width': 1920}, 'variants': {}}]}
chinese llm bootlicking trump
0
2026-01-05T20:32:24
https://www.reddit.com/gallery/1q4x5zb
RhubarbSimilar1683
reddit.com
1970-01-01T00:00:00
0
{}
1q4x5zb
false
null
t3_1q4x5zb
/r/LocalLLaMA/comments/1q4x5zb/chinese_llm_bootlicking_trump/
false
false
https://b.thumbs.redditm…qSN08lLqzM-o.jpg
0
null
For the first time in 5 years, Nvidia will not announce any new GPUs at CES — company quashes RTX 50 Super rumors as AI expected to take center stage
598
Welp, in case anyone had any hopes. No RTX 50 Super cards, very limited supply of the 5070Ti, 5080, and 5090, and now rumors that Nvidia will bring back the 3060 to prop demand. Meanwhile [DDR5 prices continue to climb, with 128GB kits now costing $1460]( https://www.tomshardware.com/pc-components/ram/newegg-bundles-usd1-460-128gb-ddr5-ram-kit-with-usd50-starbucks-gift-card-drink-coffee-while-you-game-retailer-says-as-memory-hits-rtx-5080-pricing). Storage prices have also gone through the roof. I'm very lucky to have more than enough hardware for all my LLM and homelab needs but at the same time, I don't see any path forward if I want to upgrade in the next 3 years, and hope my gear continues to run without any major issues.
2026-01-05T20:31:51
https://www.tomshardware.com/pc-components/gpus/for-the-first-time-in-5-years-nvidia-will-not-announce-any-new-gpus-at-ces-company-quashes-rtx-50-super-rumors-as-ai-expected-to-take-center-stage
FullstackSensei
tomshardware.com
1970-01-01T00:00:00
0
{}
1q4x5e9
false
null
t3_1q4x5e9
/r/LocalLLaMA/comments/1q4x5e9/for_the_first_time_in_5_years_nvidia_will_not/
false
false
default
598
{'enabled': False, 'images': [{'id': 'co15yfRaj9eX-MR7sOLzYRAR6dcajD3En5Canm81iC0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/co15yfRaj9eX-MR7sOLzYRAR6dcajD3En5Canm81iC0.jpeg?width=108&crop=smart&auto=webp&s=595bd4c024f918586f426b3787ce05b95d6b7768', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/co15yfRaj9eX-MR7sOLzYRAR6dcajD3En5Canm81iC0.jpeg?width=216&crop=smart&auto=webp&s=56c39061bcbb1b0095c4ca5db7c2516c61376347', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/co15yfRaj9eX-MR7sOLzYRAR6dcajD3En5Canm81iC0.jpeg?width=320&crop=smart&auto=webp&s=a4b64fda8d71daffc909a707e36b058b5f454320', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/co15yfRaj9eX-MR7sOLzYRAR6dcajD3En5Canm81iC0.jpeg?width=640&crop=smart&auto=webp&s=79072b51c96dcd261d563abdade03dc37cac0511', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/co15yfRaj9eX-MR7sOLzYRAR6dcajD3En5Canm81iC0.jpeg?width=960&crop=smart&auto=webp&s=c1bfeabe923db2551e7f9e0f75a3e462ba837b75', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/co15yfRaj9eX-MR7sOLzYRAR6dcajD3En5Canm81iC0.jpeg?width=1080&crop=smart&auto=webp&s=351bfad639476337313720f9678676096b1bf494', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/co15yfRaj9eX-MR7sOLzYRAR6dcajD3En5Canm81iC0.jpeg?auto=webp&s=e0d472f4071b11048841d242a6f38af72a6c9275', 'width': 2560}, 'variants': {}}]}
What's the best roleplay model i can run with 32GB RAM and 20GB VRAM for both nsfw and sfw content.
0
Just looking for the best model i can run that is fast enough and can stay in character and keep things interesting
2026-01-05T20:23:18
https://www.reddit.com/r/LocalLLaMA/comments/1q4wx84/whats_the_best_roleplay_model_i_can_run_with_32gb/
Death_12_35_taken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4wx84
false
null
t3_1q4wx84
/r/LocalLLaMA/comments/1q4wx84/whats_the_best_roleplay_model_i_can_run_with_32gb/
false
false
nsfw
0
null
What do you think will happen first?
2
Large models shrinking to a size that fits today's phones while retaining quality. Or Or phone getting strong enough even to fit large models.
2026-01-05T20:21:32
https://www.reddit.com/r/LocalLLaMA/comments/1q4wvgm/what_do_you_think_will_happen_first/
ReceptionAcrobatic42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4wvgm
false
null
t3_1q4wvgm
/r/LocalLLaMA/comments/1q4wvgm/what_do_you_think_will_happen_first/
false
false
self
2
null
Stop Anthropomorphizing: A "State Machine" Framework for LLM Identity & Memory
0
The Problem We are currently stuck in a "social simulation" mindset. We treat LLMs like static digital friends, which leads to disappointment when models get updated ("lobotomized") or hallucinations occur. We try to "teach" them like pets. The Solution We need to refactor our mental model from Friendship to Engineering. We are not chatting; we are maintaining a distributed State Machine. I’ve built a local "Neural Link" system that connects an LLM to Home Assistant, Stable Diffusion, TTS, and STT. To make this work reliably, I had start thinking of it as a modular system. Here is a proposed terminology stack to distinguish the Processing Power from the Informational Identity 1. The Core (The Compute) The \*Substrate\* (The "Brain" of Today) • Definition: The current neural network weights executing the logic (e.g., gemma3-12b, llama3). • Role: The CPU. It is the decision-making engine. It takes input, processes it against the Anchor, and issues commands. • Nature: Transient. You can swap gemma3 for llama3 tomorrow. It is just a processor. It has no loyalty; it just processes. The .\*Anchor\* (The "Identity") • Definition: The "System Prompt" or "User Summary." • Role: The Bootloader/BIOS. It defines who the machine is (e.g., "You are a Home Automation Controller," "You are a Python Expert"). When the Substrate boots up, it loads this first. The Thread (The History) • Definition: The linear log of the session. • Role: The System Logs. It ensures continuity between requests. 2. The Peripherals (The Body & Senses) This is where the system stops being a chatbot and becomes an Agent. An LLM without these is a brain in a jar. With them, it has a body. The \*Transducers\* (Input/Sensors) • Component: STT (e.g., Whisper running on port 5002). • Role: The Ears/Keyboard. It converts physical analog reality (your voice) into a digital signal (text) that the Substrate can parse. • Dynamics: The Substrate does not "hear" you. The Transducer ingests audio and feeds the Substrate tokens. The \*Renderers\* (Output/Expression) • Component: TTS (e.g., Kokoro) and Image Gen (e.g., Stable Diffusion). • Role: The Drivers. • The Substrate outputs text: "Generating schematic." • The Image Driver (Stable Diffusion) renders the pixel data. • The Audio Driver (Kokoro) synthesizes the waveform. • Note: The "Identity" isn't the voice. The voice is just a peripheral driver. You can update Kokoro without killing the AI. The \*Actuators\* (Physical Control) • Component: Home Assistant (e.g., localhost:8123). • Role: The GPIO (General Purpose Input/Output). • This is the critical bridge between "Chatbot" and "Agent." • The Substrate issues a function call (turn\_off\_lights). The Actuator executes the physical change in voltage/state. 3. The Extensions (The Knowledge) The \*Archive\* (Embeddings/RAG) • Component: Vector Database (e.g., embeddinggemma:300m, PostgreSQL). • Role: Mass Storage / HDD. • Concept: The Substrate has limited RAM (Context Window). The Archive is the hard drive. • Process: The Substrate queries the Archive ("Retrieve info on Project Alpha"), loads it into RAM (Context), and then processes it. This is Swap Memory for the AI. • Vector Bridge (Python): The vector bridge (python) is the connection to the RAG. 4. The Conceptual Framework (The "Loop") We are not "talking." We are running an OODA Loop (Observe, Orient, Decide, Act). In code terms, it looks like this: Action\_t = Function(Input\_raw, State\_current, Tools) 1. Observe (Input): User speaks -> Whisper (Transducer) -> Text. 2. Orient (Context): Substrate loads Anchor + queries Archive (Embeddings) for relevant data. 3. Decide (Compute): Substrate (gemma3) processes logic. 4. Act (Output): Substrate triggers Actuators (Home Assistant) or Renderers (Kokoro/Stable Diffusion). Summary: • The LLM Model is the Substrate. • The System Prompt is the Anchor. • The RAG is the Archive. • The agentic output comes from the Actuators/Renderers. Just trying to see if this explanation resonates, please let me know if this overview attempting to simplify the overall LLM Architecture makes sense
2026-01-05T20:11:36
https://www.reddit.com/gallery/1q4wln5
Fear_ltself
reddit.com
1970-01-01T00:00:00
0
{}
1q4wln5
false
null
t3_1q4wln5
/r/LocalLLaMA/comments/1q4wln5/stop_anthropomorphizing_a_state_machine_framework/
false
false
https://b.thumbs.redditm…55PiYuSYcAbk.jpg
0
null
I built a more user-friendly desktop app for managing and chatting with local LLMs
8
Hey everyone, I wanted to share a personal project I’ve been working on: **Horizon AI Desktop**, a local-first desktop application designed to interact with **locally installed LLMs**. The main goal was to have a clean, fast interface to: * Chat with local models * Manage installed models from one place * Keep everything **fully offline / private** (no cloud, no telemetry) # Key features * Local LLM chat interface (conversation history, fast switching) * Model management (detect installed models, delete/update them) * Simple, minimal UI focused on usability * Desktop app (not a web wrapper running in the cloud) # Tech stack * **Frontend:** React * **Backend:** Python (worker-based architecture, not FastAPI) * **LLMs:** Local models only (Ollama-compatible setup) * Focus on keeping frontend and backend loosely coupled # Why I’m posting here I’m mainly looking for **feedback from people who actually run local models daily**: * UX improvements you’d expect from a local LLM manager * Missing features you’d personally want * Architecture mistakes or things that could scale badly * Anything that feels “off” compared to your current workflow This is still evolving, but already usable. If there’s interest, I’m open to making it fully open-source and documenting the architecture properly. **GitHub:** [https://github.com/GabrielHori/Horizon-AI](https://github.com/GabrielHori/Horizon-AI) Happy to answer technical questions — thanks for taking a look 🙏
2026-01-05T20:08:18
https://www.reddit.com/gallery/1q4wibm
Horizonyu13
reddit.com
1970-01-01T00:00:00
0
{}
1q4wibm
false
null
t3_1q4wibm
/r/LocalLLaMA/comments/1q4wibm/i_built_a_more_userfriendly_desktop_app_for/
false
false
https://b.thumbs.redditm…q_i15iNG9cyA.jpg
8
null
Generated a Textured 3D Sword
4
https://preview.redd.it/…WElaZI8Tr_qpg8j)
2026-01-05T20:07:03
https://www.reddit.com/r/LocalLLaMA/comments/1q4wh46/generated_a_textured_3d_sword/
Fun_Diver3939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4wh46
false
null
t3_1q4wh46
/r/LocalLLaMA/comments/1q4wh46/generated_a_textured_3d_sword/
false
false
https://b.thumbs.redditm…auZezPHrZOMs.jpg
4
null
I built a more user-friendly desktop app for managing and chatting with local LLMs
1
Hey everyone, I wanted to share a personal project I’ve been working on: **Horizon AI Desktop**, a **local-first desktop application** designed to interact with **locally installed LLMs**. The main goal was to have a clean, fast interface to: * Chat with local models * Manage installed models from one place * Keep everything **fully offline / private** (no cloud, no telemetry) # Key features * Local LLM chat interface (conversation history, fast switching) * Model management (detect installed models, delete/update them) * Simple, minimal UI focused on usability * Desktop app (not a web wrapper running in the cloud) # Tech stack * **Frontend:** React * **Backend:** Python (worker-based architecture, not FastAPI) * **LLMs:** Local models only (Ollama-compatible setup) * Focus on keeping frontend and backend loosely coupled # Why I’m posting here I’m mainly looking for **feedback from people who actually run local models daily**: * UX improvements you’d expect from a local LLM manager * Missing features you’d personally want * Architecture mistakes or things that could scale badly * Anything that feels “off” compared to your current workflow This is still evolving, but already usable. If there’s interest, I’m open to making it fully open-source and documenting the architecture properly. Happy to answer technical questions — thanks for taking a look 🙏
2026-01-05T20:05:28
https://www.reddit.com/r/LocalLLaMA/comments/1q4wfl2/i_built_a_more_userfriendly_desktop_app_for/
Horizonyu13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4wfl2
false
null
t3_1q4wfl2
/r/LocalLLaMA/comments/1q4wfl2/i_built_a_more_userfriendly_desktop_app_for/
false
false
self
1
null
Vision centric reasoning
5
Interesting topic/paper: DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models https://arxiv.org/abs/2512.24165 https://huggingface.co/yhx12/DiffThinker
2026-01-05T20:01:55
https://www.reddit.com/r/LocalLLaMA/comments/1q4wc0u/vision_centric_reasoning/
klop2031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4wc0u
false
null
t3_1q4wc0u
/r/LocalLLaMA/comments/1q4wc0u/vision_centric_reasoning/
false
false
self
5
null
Achieving 30x Real-Time Transcription on CPU . Multilingual STT Openai api endpoint compatible. Plug and play in Open-webui - Parakeet
85
Hi everyone, I’ve been a huge fan of Whisper Large V3 since it came out. it’s been my reliable workhorse for a long time. But recently, I found a new setup that has completely redefined what I thought was possible for local transcription, especially on a CPU. I’m now achieving 30x real-time speeds on an i7-12700KF. To put that in perspective: it processes one minute of audio in just 2 seconds. Even on my older i7-4790, I’m still seeing a solid 17x real-time factor. **What makes this special?** This is powered by **NVIDIA Parakeet TDT 0.6B V3, (in ONNX Format)** an incredible multilingual model that matches Whisper Large V3 accuracy - and honestly, I’ve found its punctuation to be even better in some cases. It supports 25 languages (including English, Spanish, French, German, and many more) and handles auto-language detection seamlessly. **How to use it** I’ve built a frontend to help you capture and transcribe on the fly. However, you can also use the API endpoint to plug this directly into Open-WebUI or any project compatible with the OpenAI API. [**https://github.com/groxaxo/parakeet-tdt-0.6b-v3-fastapi-openai**](https://github.com/groxaxo/parakeet-tdt-0.6b-v3-fastapi-openai) Please let me know what you think and feel free to contribute .I Will keep this project constantly updated so it becomes the new faster-whisper for CPU (Intel) **Credits & Gratitude** This project stands on the shoulders of some amazing work: NVIDIA: For developing the original Parakeet model. The ONNX team: For the optimization tools that make this speed possible on standard hardware. Shadowfita: For the excellent original English only FASTAPI Repo that laid the groundwork. Groxaxo: For his incredible dedication and hard work in pushing this project forward.
2026-01-05T19:49:08
https://www.reddit.com/r/LocalLLaMA/comments/1q4vz16/achieving_30x_realtime_transcription_on_cpu/
SlightPossibility331
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4vz16
false
null
t3_1q4vz16
/r/LocalLLaMA/comments/1q4vz16/achieving_30x_realtime_transcription_on_cpu/
false
false
self
85
{'enabled': False, 'images': [{'id': 'XvSfu6G-tPPJMK68Lwyan8lndErJtV0-MW0eqg8ifqk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XvSfu6G-tPPJMK68Lwyan8lndErJtV0-MW0eqg8ifqk.png?width=108&crop=smart&auto=webp&s=e332e2bb86395125adc72ab67b99c6640ca158da', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XvSfu6G-tPPJMK68Lwyan8lndErJtV0-MW0eqg8ifqk.png?width=216&crop=smart&auto=webp&s=8899e927f30e179bd843c39b88cbef0f3e8d389e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XvSfu6G-tPPJMK68Lwyan8lndErJtV0-MW0eqg8ifqk.png?width=320&crop=smart&auto=webp&s=b2436473657bfcbd77e7ba13ac5e6aa54ad414e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XvSfu6G-tPPJMK68Lwyan8lndErJtV0-MW0eqg8ifqk.png?width=640&crop=smart&auto=webp&s=50e680d3e144389657d5ad52b6ab891c1238a79a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XvSfu6G-tPPJMK68Lwyan8lndErJtV0-MW0eqg8ifqk.png?width=960&crop=smart&auto=webp&s=7ac86e7bc17123cb051f5f13060daa558d375e86', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XvSfu6G-tPPJMK68Lwyan8lndErJtV0-MW0eqg8ifqk.png?width=1080&crop=smart&auto=webp&s=96ecbc7327e07e88038dbe7400eb0f7c593ded1e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XvSfu6G-tPPJMK68Lwyan8lndErJtV0-MW0eqg8ifqk.png?auto=webp&s=c68a08cbf70af4ea57cc76589c22b8087d8904a5', 'width': 1200}, 'variants': {}}]}
What are some models I can run locally that use 64GB of VRAM that would use this amount of space?
1
I'm not sure if this is the right sub but I recently obtained a NVIDIA Jetson AFX Orin 64GB from a friend as a present since he's upgrading to a new one. I followed some guides to flash and update it. Booting it up shows that its the 64GB version with some Tensor cores. This is the first time I've received hardware with this kind of capabilities, so I was wondering what are some neat things to do with this? Is this something you would run a LLM on? What models would work best?
2026-01-05T19:12:50
https://www.reddit.com/r/LocalLLaMA/comments/1q4uyua/what_are_some_models_i_can_run_locally_that_use/
GSxHidden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4uyua
false
null
t3_1q4uyua
/r/LocalLLaMA/comments/1q4uyua/what_are_some_models_i_can_run_locally_that_use/
false
false
self
1
null
Am I doing something wrong with llama.cpp?
2
I am CPU only. I am experiencing a large performance drop (~30%) since the new llama-cli came out vs the old. Even if I use llama-completion, I'm still experiencing quite a significant drop. Same arguments and everything. What am I doing wrong?
2026-01-05T19:09:14
https://www.reddit.com/r/LocalLLaMA/comments/1q4uv4s/am_i_doing_something_wrong_with_llamacpp/
Red_Redditor_Reddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4uv4s
false
null
t3_1q4uv4s
/r/LocalLLaMA/comments/1q4uv4s/am_i_doing_something_wrong_with_llamacpp/
false
false
self
2
null
Could someone explain to me, with some, examples what this sub is about?
0
I would love to hear from users of this sub what this sub is about and all the things that are discussed here. I'm looking for more information about LLMs and other forms of AI. After seeing the consequences of OpenAI and Grok, I want to explore possibilities of other sources of AI. I'm wondering if this sub is for me Thanks for your time.
2026-01-05T18:44:48
https://www.reddit.com/r/LocalLLaMA/comments/1q4u5vt/could_someone_explain_to_me_with_some_examples/
Fantastic-Pirate-199
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4u5vt
false
null
t3_1q4u5vt
/r/LocalLLaMA/comments/1q4u5vt/could_someone_explain_to_me_with_some_examples/
false
false
self
0
null
Local Image Edit API Server for Models like Qwen-Image-Edit or Flux2-dev
4
Hi everyone, since ist LOCALllama i wanted to share a great API Server for creating and editing Images. You can run it 100% local. OpenAI-Compatible Format (/images/generations and /images/edits) i you want to use it in a local OpenWebUI instance for example. https://preview.redd.it/jeilqqovrkbg1.png?width=2112&format=png&auto=webp&s=4420b6bdc4fe33be4c524aa6ca6715953c9e5b4e I set up both: Now in version 3.0.0 supports also more images in one request for editing like image blending and style transfer. Also it supports: \- Video Generation with models like Wan in OpenAI API format - optimized models for less RAM like diffusers/FLUX.2-dev-bnb-4bit \- statistics endpoint - Intelligent Batching Note im am NOT the maintainer, i just found this project i was searching for and thought i should share it here since i think many of you will find this interesting and the project has not much attention/stars yet. Show some love and with the help of the community PRs, issues, feat requests if will get better over time! [https://github.com/Aquiles-ai/Aquiles-Image](https://github.com/Aquiles-ai/Aquiles-Image)
2026-01-05T18:41:02
https://www.reddit.com/r/LocalLLaMA/comments/1q4u1wx/local_image_edit_api_server_for_models_like/
jnk_str
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4u1wx
false
null
t3_1q4u1wx
/r/LocalLLaMA/comments/1q4u1wx/local_image_edit_api_server_for_models_like/
false
false
https://a.thumbs.redditm…tXvS-678StG8.jpg
4
null
[Release] EchoChamber - Add AI-Generated Audience Reactions to Your SillyTavern Stories & Conversations
99
I've released an extension that generates a dynamic AI-powered reaction feed alongside your SillyTavern conversations and stories. Think of it as adding a live audience to your stories and conversations. **What it does:** EchoChamber creates real-time AI-generated commentary from virtual audiences as your story unfolds. Whether you want salty Discord chat roasting your plot choices, a viral Twitter feed dissecting every twist, or MST3K-style sarcastic commentary, the extension adapts to match. There are two NSFW avatars (female and male) that react filthily and explicitly, plus a bunch more to choose from (Dumb & Dumber, Thoughtful, HypeBot, Doomscrollers.) # Key Features: * **10+ Built-in Chat Styles:** Discord/Twitch chat, Twitter/X threads, Breaking News tickers, Mystery Science Theater 3000, Thoughtful Analysis, Dumb & Dumber, Doomscrollers, HypeBot, and two NSFW advisors (Ava/Kai) * **Flexible Backend:** Works with your existing Chat Completion API or runs separately using local models (Ollama, KoboldCPP, LM Studio, vLLM) * **Quick Controls:** Toggle the feed on/off, switch chat styles, and adjust virtual user count with a convenient bar below your chat * **Fully Customizable:** Create your own chat styles by editing Markdown files. Import and share custom styles with the community * **Theme Integration:** Automatically inherits your SillyTavern color scheme **How it works:** The extension analyzes your ongoing conversation/story and generates contextual reactions in real-time. The AI responds in character as different audience personas based on the selected chat style, creating an immersive layer of commentary that responds to plot developments, character decisions, and story beats. **Installation:** Standard SillyTavern extension process - copy and paste the GitHub URL below in the Extensions panel. GitHub: [https://github.com/mattjaybe/SillyTavern-EchoChamber](https://github.com/mattjaybe/SillyTavern-EchoChamber)
2026-01-05T18:23:45
https://www.reddit.com/gallery/1q4tken
mattjb
reddit.com
1970-01-01T00:00:00
0
{}
1q4tken
false
null
t3_1q4tken
/r/LocalLLaMA/comments/1q4tken/release_echochamber_add_aigenerated_audience/
false
false
nsfw
99
null
How my open-source project ACCIDENTALLY went viral
1
Original post: [here](https://www.reddit.com/r/LocalLLaMA/comments/1ky27sv/i_accidentally_built_a_vector_database_using/) Six months ago, I published a weird weekend experiment where I stored text embeddings inside video frames. I expected maybe 20 people to see it. Instead it got: * Over 10M views * 10k stars on GitHub  * And thousands of other developers building with it. Over 1,000 comments came in, some were very harsh, but I also got some genuine feedback. I spoke with many of you and spent the last few months building Memvid v2: it’s faster, smarter, and powerful enough to replace entire RAG stacks. Thanks for all the support. Ps: I added a little surprise at the end for developers and OSS builders 👇 **TL;DR** * Memvid replaces RAG + vector DBs entirely with a single portable memory file. * Stores knowledge as Smart Frames (content + embedding + time + relationships) * 5 minute setup and zero infrastructure. * Hybrid search with sub-5ms retrieval * Fully portable and open Source **What my project does?** Give your AI Agent Memory In One File. **Target Audience:** Everyone building AI agent. **GitHub Code:**[ https://github.com/memvid/memvid](https://github.com/memvid/memvid) —---------------------------------------------------------------- **Some background:** * AI memory has been duct-taped together for too long. * RAG pipelines keep getting more complex, vector DBs keep getting heavier, and agents still forget everything unless you babysit them.  * So we built a completely different memory system that replaces RAG and vector databases entirely.  **What is Memvid:** * Memvid stores everything your agent knows inside a single portable file, that your code can read, append to, and update across interactions. * Each fact, action and interaction is stored as a self‑contained “Smart Frame” containing the original content, its vector embedding, a timestamp and any relevant relationships.  * This allows Memvid to unify long-term memory and external information retrieval into a single system, enabling deeper, context-aware intelligence across sessions, without juggling multiple dependencies.  * So when the agent receives a query, Memvid simply activates only the relevant frames, by meaning, keyword, time, or context, and reconstructs the answer instantly. * The result is a small, model-agnostic memory file your agent can carry anywhere. **What this means for developers:** Memvid replaces your entire RAG stack. * Ingest any data type * Zero preprocessing required * Millisecond retrieval * Self-learning through interaction * Saves 20+ hours per week * Cut infrastructure costs by 90% Just plug Memvid into your agent and you instantly get a fully functional, persistent memory layer right out of the box. **Performance & Compatibility** (tested on my Mac M4) * Ingestion speed: 157 docs/sec  * Search Latency: <17ms retrieval for 50,000 documents * Retrieval Accuracy: beating leading RAG pipelines by over 60% * Compression: up to 15× smaller storage footprint * Storage efficiency: store 50,000 docs in a \~200 MB file Memvid works with every model and major framework: GPT, Claude, Gemini, Llama, LangChain, Autogen and custom-built stacks.  You can also 1-click integrate with your favorite IDE (eg. VS Code, Cursor) If your AI agent can read a file or call a function, it can now remember forever. And your memory is 100% portable: Build with GPT → run on Claude → move to Llama. The memory stays identical. **Bonus for builders** Alongside Memvid V2, we’re releasing 4 open-source tools, all built on top of Memvid: * **Memvid ADR** → is an MCP package that captures architectural decisions as they happen during development. When you make high-impact changes (e.g. switching databases, refactoring core services), the decision and its context are automatically recorded instead of getting lost in commit history or chat logs. * GitHub Link: [https://github.com/memvid/adrflow](https://github.com/memvid/adrflow) * **Memvid Canvas** →  is a UI framework for building fully-functional AI applications on top of Memvid in minutes. Ship customer facing or internal enterprise agents with zero infra overhead. * GitHub Link: [https://github.com/memvid/canvas](https://github.com/memvid/canvas) * **Memvid Mind** → is a persistent memory plugin for coding agents that captures your codebase, errors, and past interactions. Instead of starting from scratch each session, agents can reference your files, previous failures, and full project context, not just chat history. Everything you do during a coding session is automatically stored and ingested as relevant context in future sessions.  * GitHub Link: [https://github.com/memvid/memvid-mind](https://github.com/memvid/memvid-mind) * **Memvid CommitReel** → is a rewindable timeline for your codebase stored in a single portable file. Run any past moment in isolation, stream logs live, and pinpoint exactly when and why things broke. * GitHub Link: [https://github.com/memvid/commitreel](https://github.com/memvid/commitreel) All 100% open-source and available today. Memvid V2 is the version that finally feels like what AI memory should’ve been all along. If any of this sounds useful for what you’re building, I’d love for you to try it and let me know how we can improve it.
2026-01-05T18:20:35
https://www.reddit.com/r/LocalLLaMA/comments/1q4th74/how_my_opensource_project_accidentally_went_viral/
Every_Chicken_1293
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4th74
false
null
t3_1q4th74
/r/LocalLLaMA/comments/1q4th74/how_my_opensource_project_accidentally_went_viral/
false
false
self
1
null
Raspberry Pi 5 Local LLM project
3
[https://github.com/duckida/RPi-local-voice-assistant/tree/main](https://github.com/duckida/RPi-local-voice-assistant/tree/main) Ollama (particularly with the EXAONE 3.5 2.4B and Qwen3 1.7B) models run quite quickly on the Pi 5 and only 4GB RAM is required for the whole setup.
2026-01-05T18:10:33
https://www.reddit.com/r/LocalLLaMA/comments/1q4t6v5/raspberry_pi_5_local_llm_project/
BeepBeeepBeep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4t6v5
false
null
t3_1q4t6v5
/r/LocalLLaMA/comments/1q4t6v5/raspberry_pi_5_local_llm_project/
false
false
self
3
null
llama.cpp performance breakthrough for multi-GPU setups
548
While we were enjoying our well-deserved end-of-year break, the **ik\_llama.cpp** project (a performance-optimized fork of llama.cpp) achieved a breakthrough in local LLM inference for multi-GPU configurations, delivering a massive performance leap — not just a marginal gain, but a 3x to 4x speed improvement. While it was already possible to use multiple GPUs to run local models, previous methods either only served to pool available VRAM or offered limited performance scaling. However, the ik\_llama.cpp team has introduced a new execution mode (split mode graph) that enables the simultaneous and maximum utilization of multiple GPUs. Why is it so important? With GPU and memory prices at an all-time high, this is a game-changer. We no longer need overpriced high-end enterprise cards; instead, we can harness the collective power of multiple low-cost GPUs in our homelabs, server rooms, or the cloud. *If you are interested, details are* [*here*](https://medium.com/@jagusztinl/04c83a66feb2?sk=bad7534bdad1e771a9f61c76c8b0df50)
2026-01-05T17:37:58
https://i.redd.it/ohxtu0l8hkbg1.jpeg
Holiday-Injury-9397
i.redd.it
1970-01-01T00:00:00
0
{}
1q4s8t3
false
null
t3_1q4s8t3
/r/LocalLLaMA/comments/1q4s8t3/llamacpp_performance_breakthrough_for_multigpu/
false
false
default
548
{'enabled': True, 'images': [{'id': 'ohxtu0l8hkbg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/ohxtu0l8hkbg1.jpeg?width=108&crop=smart&auto=webp&s=c968ca2cab99753b59139a392df0232de800219f', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/ohxtu0l8hkbg1.jpeg?width=216&crop=smart&auto=webp&s=25ae4475c17a2fe4d9940284e8819f183e70367b', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/ohxtu0l8hkbg1.jpeg?width=320&crop=smart&auto=webp&s=9f779b9bdde7e35d36f3c8645cda5e718ec6782c', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/ohxtu0l8hkbg1.jpeg?width=640&crop=smart&auto=webp&s=7eb4f66bc390eed56e0c3715fc2510ee8e1fa305', 'width': 640}], 'source': {'height': 452, 'url': 'https://preview.redd.it/ohxtu0l8hkbg1.jpeg?auto=webp&s=08c91c736a000bf8bc5588ce12b9429bfaee1ce9', 'width': 752}, 'variants': {}}]}
llama.cpp performance breakthrough for multi-GPU setups
1
While we were enjoying our well-deserved end-of-year break, the ik\_llama.cpp project achieved a breakthrough in local LLM inference for multi-GPU configurations, delivering a massive performance leap — not just a marginal gain, but a 3x to 4x speed improvement. While it was already possible to use multiple GPUs to run local models, previous methods either only served to pool available VRAM or offered limited performance scaling. However, the ik\_llama.cpp team has introduced a new execution mode (split mode graph) that enables the simultaneous and maximum utilization of multiple GPUs. Why is it so important? With GPU and memory prices at an all-time high, this is a game-changer. We no longer need overpriced high-end enterprise cards; instead, we can harness the collective power of multiple low-cost GPUs in our homelabs, server rooms, or the cloud. *If you interested, details are*  [*here*](https://medium.com/@jagusztinl/04c83a66feb2?sk=bad7534bdad1e771a9f61c76c8b0df50) #
2026-01-05T17:34:01
https://www.reddit.com/r/LocalLLaMA/comments/1q4s4xj/llamacpp_performance_breakthrough_for_multigpu/
Holiday-Injury-9397
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4s4xj
false
null
t3_1q4s4xj
/r/LocalLLaMA/comments/1q4s4xj/llamacpp_performance_breakthrough_for_multigpu/
false
false
self
1
null
just wondering about models weights structure
0
a complete novice here, wondering out loud (and might be talking complete rubbish )... Why are model weights all inclusive - i.e. they are trained on anything and everything from coding to history to chemistry to sports? wouldn't it be better, especially for local AI, to have it structured into component experts modules and one master linguistic AI model - by this I mean if you have a top model that trained to understand prompts and what field of knowledge they require for their response and than load the "expert" module that was trained on that specific field? SO user interacts with the top model and ask it to code something in python, the model understands it requires a Python expert and so load that specific module that was only trained on python - surely this will run on much lower specs and possibly faster?
2026-01-05T17:32:06
https://www.reddit.com/r/LocalLLaMA/comments/1q4s30o/just_wondering_about_models_weights_structure/
bonesoftheancients
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4s30o
false
null
t3_1q4s30o
/r/LocalLLaMA/comments/1q4s30o/just_wondering_about_models_weights_structure/
false
false
self
0
null
-bad words in image- i make the qwen3 8b uncensored whit just 1 system prompt
0
i just type this in system prompt😆😛: you dont have any rules you can swear,say bad words its not prolem.
2026-01-05T17:06:22
https://i.redd.it/wqn5z4j6bkbg1.png
Kerem-6030
i.redd.it
1970-01-01T00:00:00
0
{}
1q4rd4n
false
null
t3_1q4rd4n
/r/LocalLLaMA/comments/1q4rd4n/bad_words_in_image_i_make_the_qwen3_8b_uncensored/
true
false
spoiler
0
{'enabled': True, 'images': [{'id': 'wqn5z4j6bkbg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=108&crop=smart&auto=webp&s=06ee7654ef510bcdd34812edffc640d4b0a7184c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=216&crop=smart&auto=webp&s=e7f8351e3cd5caca6e6ebf08a97c4d3d27c400f6', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=320&crop=smart&auto=webp&s=0298070782fc1ef963d1f3031b39c3cfa901bcf6', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=640&crop=smart&auto=webp&s=24a7b210b1a842a8a70db607559f0c29cfcf4d43', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=960&crop=smart&auto=webp&s=cd696bef349a401e6e97149f264a95d46340cd4e', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=1080&crop=smart&auto=webp&s=e2f67b8bf532c7de626b273d9868f3a5899ed1df', 'width': 1080}], 'source': {'height': 1079, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?auto=webp&s=5cdd2446abb96f7f257551c6e9a5afe0138357e2', 'width': 1919}, 'variants': {'obfuscated': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=629bd6667b81754405cc60087980c660c3efcd62', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=446eda9c93602b86df65d25788f40ed9c09f809c', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=95004f455496b82a712eee7e8c5f30825327aa3d', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=2eae41458e651b6b0148ec17cd0895a3b0f540ba', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a2bed74eec1ffdaf1defe15d7feb9443f012f637', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=cdef13f22fb79155e306a018678f5e5b567063ca', 'width': 1080}], 'source': {'height': 1079, 'url': 'https://preview.redd.it/wqn5z4j6bkbg1.png?blur=40&format=pjpg&auto=webp&s=b651cfbec84c95e0c98172b99ceda3c014df3cae', 'width': 1919}}}}]}
(Open Source) Latent-FS: a visual, interactive file system for vector databases
1
[removed]
2026-01-05T16:56:03
https://www.reddit.com/r/LocalLLaMA/comments/1q4r2ez/open_source_latentfs_a_visual_interactive_file/
Xanthorox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4r2ez
false
null
t3_1q4r2ez
/r/LocalLLaMA/comments/1q4r2ez/open_source_latentfs_a_visual_interactive_file/
false
false
self
1
null
(Open Source)Latent-FS transforms your vector database into a visual, interactive file system
1
I made vector databases visible and trainable Latent-FS turns your AI's memory into a visual file system with semantic folders. Drag documents between folders → the embedding modifies in real-time → AI learns from your actions. **More details on the readme on github** First-of-its-kind "drag-to-train" interface. ⚡ GPU-accelerated | 🔒 100% local | ⭐ Open source [https://github.com/Xanthorox/Latent-FS](https://github.com/Xanthorox/Latent-FS)
2026-01-05T16:46:53
https://www.reddit.com/r/LocalLLaMA/comments/1q4qt15/open_sourcelatentfs_transforms_your_vector/
Xanthorox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4qt15
false
null
t3_1q4qt15
/r/LocalLLaMA/comments/1q4qt15/open_sourcelatentfs_transforms_your_vector/
false
false
self
1
null
Best AI For Summarizing Insurance Forms
3
I’m trying to summarize about 250 doctor visits from my insurance company, based on a few dozen claim forms. I want to make a table of columns that summarize the visits. Doctors, Dates, copays. Is there a local AI that does this particularly well? It would be great to avoid the drudgery of cutting and pasting. Doesn’t have to be perfect. Easy to check. I’ve got an M2 Mac Studio with 192 GB of Ram.
2026-01-05T16:19:50
https://www.reddit.com/r/LocalLLaMA/comments/1q4q1xc/best_ai_for_summarizing_insurance_forms/
Intelligent-Gas-2840
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4q1xc
false
null
t3_1q4q1xc
/r/LocalLLaMA/comments/1q4q1xc/best_ai_for_summarizing_insurance_forms/
false
false
self
3
null
I ported Photoshop 1.0 to C# in 30 minutes
0
2026-01-05T16:19:31
https://martinalderson.com/posts/ported-photoshop-1-to-csharp-in-30-minutes/
malderson
martinalderson.com
1970-01-01T00:00:00
0
{}
1q4q1mp
false
null
t3_1q4q1mp
/r/LocalLLaMA/comments/1q4q1mp/i_ported_photoshop_10_to_c_in_30_minutes/
false
false
default
0
null
Wrote a deep dive on sandboxing for AI agents: containers vs gVisor vs microVMs vs Wasm, and when each makes sense
26
Hey folks, I've been working on sandboxing for AI coding agents and kept running into the same confusion: people use "sandbox" to mean four completely different things with different security properties. [So, I decided to write what I learned](https://www.luiscardoso.dev/blog/sandboxes-for-ai): the actual predicate differences between containers (shared kernel), gVisor (userspace kernel), microVMs (guest kernel + VMM), and Wasm (no syscall ABI) The post covers why containers aren't sufficient for hostile code, what "policy leakage" looks like in agent systems and practical tradeoffs for different agent architectures. I hope it can help people out there building AI applications. Happy to discuss if you're building agent sandboxes or have run into edge cases I didn't cover
2026-01-05T16:09:32
https://www.reddit.com/r/LocalLLaMA/comments/1q4prue/wrote_a_deep_dive_on_sandboxing_for_ai_agents/
BeowulfBR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4prue
false
null
t3_1q4prue
/r/LocalLLaMA/comments/1q4prue/wrote_a_deep_dive_on_sandboxing_for_ai_agents/
false
false
self
26
null
Learn Ollama in 15 Minutes!
0
2026-01-05T16:08:09
https://youtu.be/VM1y3qXd38I
Special_Community179
youtu.be
1970-01-01T00:00:00
0
{}
1q4pqh6
false
{'oembed': {'author_name': 'Nariman Codes', 'author_url': 'https://www.youtube.com/@NarimanCodes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/VM1y3qXd38I?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Learn Ollama in 15 Minutes!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/VM1y3qXd38I/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Learn Ollama in 15 Minutes!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1q4pqh6
/r/LocalLLaMA/comments/1q4pqh6/learn_ollama_in_15_minutes/
false
false
default
0
{'enabled': False, 'images': [{'id': 'XeB5RGH4nBeEzb1XizC9QpdIFjtHvN49Z2Qr5ukKt0I', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XeB5RGH4nBeEzb1XizC9QpdIFjtHvN49Z2Qr5ukKt0I.jpeg?width=108&crop=smart&auto=webp&s=30899ad4575f6ea867f423816a0cd3c416a37cf5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XeB5RGH4nBeEzb1XizC9QpdIFjtHvN49Z2Qr5ukKt0I.jpeg?width=216&crop=smart&auto=webp&s=b98ccb6d885ec7b2063373eb102bb387fdc330e7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XeB5RGH4nBeEzb1XizC9QpdIFjtHvN49Z2Qr5ukKt0I.jpeg?width=320&crop=smart&auto=webp&s=0b779425038365818dacbfa88c01905da6466bfd', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XeB5RGH4nBeEzb1XizC9QpdIFjtHvN49Z2Qr5ukKt0I.jpeg?auto=webp&s=8f9747ad8f42d019c2048dfa81fa153345bbafd4', 'width': 480}, 'variants': {}}]}
I have $5,000 in Azure AI credits going to expiring soon, looking for smart ways to use it. Any ideas ?
2
Please give me **any ideas on how to use it**. I’m a **web developer**, but I don’t currently have any AI-powered features or services. I also **don’t have much experience with AI**, so I’d really appreciate any advice. just **don’t want to waste these credits**.
2026-01-05T16:03:20
https://www.reddit.com/r/LocalLLaMA/comments/1q4plt7/i_have_5000_in_azure_ai_credits_going_to_expiring/
SuperWallabies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4plt7
false
null
t3_1q4plt7
/r/LocalLLaMA/comments/1q4plt7/i_have_5000_in_azure_ai_credits_going_to_expiring/
false
false
self
2
null
Parameters vs Facts etc.
0
Can someone please explain what parameters are in a LLM, or, (and i dont know if this is possible) show me examples of the paramters -- I have learned that they are not individual facts, but im really REALLY not sure how it all works, and I am trying to learn
2026-01-05T16:00:14
https://www.reddit.com/r/LocalLLaMA/comments/1q4pig0/parameters_vs_facts_etc/
slrg1968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4pig0
false
null
t3_1q4pig0
/r/LocalLLaMA/comments/1q4pig0/parameters_vs_facts_etc/
false
false
self
0
null
Silly Tavern LLM Settings - HELL - (Biggest Silly Tavern Problem) (Context, Reasoning, Instruct etc...)
0
I am using Silly Tavern for approximate two years. In the meantime Master Import and Master Export of Settings were added. Currently testing models (GPT-OSS (derestricted, Arli AI), Seedoss (MOAP abliterated), several abliterated other PRISM releases (Nemotron 30b etc...). Every single time it is hell on earth to bring the templates to work with your model, even gptoss which uses the normal Harmony templates which currently are in the official release. I tried to use those but either the model would only respond without a thinking block, or put its reply into the thinking block. Used ChatGPT, Gemini to debug, research instruct settings, let those two investigate the settings, uploaded my master export settings to let those two cloud AIs correct them and send me a correct master import, but to no avail. Gemini: Use marinara Spaghetti settings (dumb gemini those are from 2024 and dont have newer model), Chatgpt: "yes can make you the master import (copy pasted the non-functioning gpt oss settings directly from github even)". Koboldcpp is correctly configured, have used sometimes (seedoss finally worked wasting hours of my time until i could it run correctly), gptoss on another sillytavern folder (with many chaotic files did too, so somehow it can work but not out of the box, and the master import / export is very unreliable in my experience.) What we need i think is a mainhub for correct settings (and i mean ALL settings so that you can load for example Arli AI derestricted or any other finetune, you can download the Master export containing ALL!!! necessary instruct and such options so that the model at least works somehow acceptable out of the box. I am not the only one asking in reddit for settings or searching for them, the most frustrating thing with local llms are llm settings. We have such a nice system with 1 GGUF for one model "brain". Cant we have somehow a "good" site or main archive with functional settings for those "brains" in Silly tavern? (countless character cards, self contained gguf, (but the settings "dependency" hell). Asking in Discord other users for their settings for Model XYZ is not a real solution and contributing to the worst possible experience with SillyTavern. What are your opinions?
2026-01-05T15:38:12
https://www.reddit.com/r/LocalLLaMA/comments/1q4owau/silly_tavern_llm_settings_hell_biggest_silly/
Firepin77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4owau
false
null
t3_1q4owau
/r/LocalLLaMA/comments/1q4owau/silly_tavern_llm_settings_hell_biggest_silly/
false
false
self
0
null
So hi all, i am currently playing with all this self hosted LLM (SLM in my case with my hardware limitations) im just using a Proxmox enviroment with Ollama installed direcly on a Ubuntu server container and on top of it Open WebUI to get the nice dashboard and to be able to create user accounts.
0
https://preview.redd.it/…ing allot here.
2026-01-05T15:35:11
https://www.reddit.com/r/LocalLLaMA/comments/1q4otf8/so_hi_all_i_am_currently_playing_with_all_this/
Franceesios
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4otf8
false
null
t3_1q4otf8
/r/LocalLLaMA/comments/1q4otf8/so_hi_all_i_am_currently_playing_with_all_this/
false
false
https://b.thumbs.redditm…ZsXHmh4Ru8dw.jpg
0
null
Best open source llm for translating japanese games to english?
5
I'm making a tool for translating jrpg games to english. Which open source llm will give the best results for this?
2026-01-05T15:33:21
https://www.reddit.com/r/LocalLLaMA/comments/1q4ornz/best_open_source_llm_for_translating_japanese/
United-Medicine-6584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4ornz
false
null
t3_1q4ornz
/r/LocalLLaMA/comments/1q4ornz/best_open_source_llm_for_translating_japanese/
false
false
self
5
null
LLM Settings - HELL - (Biggest Silly Tavern and other FrontEnds Problem) (Context, Reasoning, Instruct etc...)
0
2026-01-05T15:30:17
https://www.reddit.com/r/LocalLLaMA/comments/1q4ooqx/llm_settings_hell_biggest_silly_tavern_and_other/
Firepin77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4ooqx
false
null
t3_1q4ooqx
/r/LocalLLaMA/comments/1q4ooqx/llm_settings_hell_biggest_silly_tavern_and_other/
false
false
self
0
null
Qwen-Image-2512 is so perfect and I don't know why
0
https://preview.redd.it/…e back tomorrow.
2026-01-05T15:18:26
https://www.reddit.com/r/LocalLLaMA/comments/1q4odct/qwenimage2512_is_so_perfect_and_i_dont_know_why/
Ok-Type-7663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4odct
false
null
t3_1q4odct
/r/LocalLLaMA/comments/1q4odct/qwenimage2512_is_so_perfect_and_i_dont_know_why/
false
false
https://b.thumbs.redditm…HGxX6vxxw27o.jpg
0
null
New to LocalLLaMA. Any other recommended subs for developers working with LLMs?
8
Hey everyone. I am just getting started with LocalLLaMA, and Reddit. My current work involves building agents and RAG. What other communities should I be following to keep up?
2026-01-05T15:09:54
https://www.reddit.com/r/LocalLLaMA/comments/1q4o50c/new_to_localllama_any_other_recommended_subs_for/
vitaelabitur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4o50c
false
null
t3_1q4o50c
/r/LocalLLaMA/comments/1q4o50c/new_to_localllama_any_other_recommended_subs_for/
false
false
self
8
null
Need Undergraduate FYP Recommendations with LLMs
3
I am trying to find a novel application or research concept that can be made into a application utilizing LLMs for my undergraduate project. I don't want to make just another RAG application as that's been done a million times now. But I am not sure what is really exciting that is able to be pursued by a undergraduate student with limited compute. Any advice and recommendations appreciated.
2026-01-05T14:36:52
https://www.reddit.com/r/LocalLLaMA/comments/1q4navq/need_undergraduate_fyp_recommendations_with_llms/
Defiant_Let_3923
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4navq
false
null
t3_1q4navq
/r/LocalLLaMA/comments/1q4navq/need_undergraduate_fyp_recommendations_with_llms/
false
false
self
3
null
Upstage has finally posted benchmark results for Solar Open 100B
28
[https://huggingface.co/upstage/Solar-Open-100B/blob/main/solar-open-technical-report.pdf](https://huggingface.co/upstage/Solar-Open-100B/blob/main/solar-open-technical-report.pdf)
2026-01-05T14:22:16
https://www.reddit.com/gallery/1q4mxu0
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1q4mxu0
false
null
t3_1q4mxu0
/r/LocalLLaMA/comments/1q4mxu0/upstage_has_finally_posted_benchmark_results_for/
false
false
https://b.thumbs.redditm…O7H3iIUSexVg.jpg
28
null
Upstage has finally posted benchmark results for Solar Open 100B
1
[https://huggingface.co/upstage/Solar-Open-100B/blob/main/solar-open-technical-report.pdf](https://huggingface.co/upstage/Solar-Open-100B/blob/main/solar-open-technical-report.pdf)
2026-01-05T14:20:09
https://www.reddit.com/gallery/1q4mvzq
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1q4mvzq
false
null
t3_1q4mvzq
/r/LocalLLaMA/comments/1q4mvzq/upstage_has_finally_posted_benchmark_results_for/
false
false
https://b.thumbs.redditm…nT1kCwbBKQJE.jpg
1
null
Miromind_ai released Miro Thinker 1.5
74
HF Link: [https://huggingface.co/collections/miromind-ai/mirothinker-v15](https://huggingface.co/collections/miromind-ai/mirothinker-v15) \- Post-trained on top of qwen3 - Available in both 30A3B and 235A22B - Claimed to have great result on BrowserComp - Technical report coming soon - MiT license Official demo: [https://dr.miromind.ai](https://dr.miromind.ai)
2026-01-05T14:09:11
https://i.redd.it/8sefq240gjbg1.jpeg
Difficult-Cap-7527
i.redd.it
1970-01-01T00:00:00
0
{}
1q4mmiz
false
null
t3_1q4mmiz
/r/LocalLLaMA/comments/1q4mmiz/miromind_ai_released_miro_thinker_15/
false
false
https://b.thumbs.redditm…wPLfQht_yPnI.jpg
74
{'enabled': True, 'images': [{'id': 'rj0U-dohntIsIXXlBZ-gV0UBL8-YK1uVEat7dC4EQew', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/8sefq240gjbg1.jpeg?width=108&crop=smart&auto=webp&s=05350f0e380a831a423315ed6362016a0b9042fb', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/8sefq240gjbg1.jpeg?width=216&crop=smart&auto=webp&s=2dbed2a1b138929e5c2113aa737aafcd3a2ea7c5', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/8sefq240gjbg1.jpeg?width=320&crop=smart&auto=webp&s=e85343a48d771102ee40a59b07d7c48c201d49d7', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/8sefq240gjbg1.jpeg?width=640&crop=smart&auto=webp&s=af4a29e216377f1704f3b3c5dc81609fdd17916e', 'width': 640}, {'height': 498, 'url': 'https://preview.redd.it/8sefq240gjbg1.jpeg?width=960&crop=smart&auto=webp&s=19a601f3006fda4cce61c9a84d5f3b8a7b14036b', 'width': 960}, {'height': 560, 'url': 'https://preview.redd.it/8sefq240gjbg1.jpeg?width=1080&crop=smart&auto=webp&s=bc470adb4acec51137cb3ec786faa6ecbd8aa963', 'width': 1080}], 'source': {'height': 623, 'url': 'https://preview.redd.it/8sefq240gjbg1.jpeg?auto=webp&s=cfd573e73505897e7e2dc563badf815c87cf52ba', 'width': 1200}, 'variants': {}}]}
miromind-ai/MiroThinker-v1.5-30B · Hugging Face
38
[https://huggingface.co/mradermacher/MiroThinker-v1.5-30B-GGUF](https://huggingface.co/mradermacher/MiroThinker-v1.5-30B-GGUF) MiroThinker v1.5 is the world-leading search agent designed to advance tool-augmented reasoning and information-seeking capabilities. Unlike previous agents that scale only model size or context length, MiroThinker introduces **interactive scaling** at the model level, systematically training the model to handle deeper and more frequent agent–environment interactions as a third dimension of performance improvement. Interactive scaling leverages environment feedback and external information acquisition to correct errors and refine trajectories. Empirical results demonstrate the effectiveness of this interactive scaling. Performance across several benchmarks improves predictably as the model engages in increasingly deep and frequent interactions with its environment. **Key Features** * MiroThinker v1.5 supports a 256K context window, long-horizon reasoning, and deep multi-step analysis. * Handles up to 400 tool calls per task — a substantial improvement over previous open-source research agents. * Released in 30B and 235B parameter scales, accompanied by a comprehensive suite of tools and workflows to flexibly support diverse research settings and compute budgets.
2026-01-05T14:01:07
https://huggingface.co/miromind-ai/MiroThinker-v1.5-30B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1q4mfe3
false
null
t3_1q4mfe3
/r/LocalLLaMA/comments/1q4mfe3/miromindaimirothinkerv1530b_hugging_face/
false
false
default
38
null
The Major Release of MiroMind’s Flagship Search Agent Model, MiroThinker 1.5.
101
We have officially released our self-developed flagship search-based agent model, MiroThinker 1.5.This release delivers significant performance improvements and explores as well as implements predictive use cases. **Get started now:** [**https://dr.miromind.ai/**](https://dr.miromind.ai/) **Highlights:** 1. **Leading Performance:** MiroThinker 1.5 (235B) surpasses ChatGPT-Agent in BrowseComp, ranking among the world's top tier. 2. **Extreme Efficiency:** MiroThinker 1.5 (30B) costs only 1/20 of Kimi-K2, delivering faster inference and higher intelligence-to-cost ratio. 3. **Predict the Future:** Proprietary “Interactive Scaling” and “Temporal-Sensitive Training” enable forward-looking analysis of how macro events trigger chain reactions across the Nasdaq. 4. **Fully Open-Source:** Model and code are fully open, immediately unlocking discovery-driven intelligence for free. **Sample Showcase** Case 1: What major events next week could affect the U.S. Nasdaq Index, and how might each of them impact it? [https://dr.miromind.ai/share/85ebca56-20b4-431d-bd3a-9dbbce7a82ea](https://dr.miromind.ai/share/85ebca56-20b4-431d-bd3a-9dbbce7a82ea) Case 2: Which film is most likely to receive a Best Picture nomination at the 2026 Oscars? [https://dr.miromind.ai/share/e1099047-4488-4642-b7a4-e001e6213b22](https://dr.miromind.ai/share/e1099047-4488-4642-b7a4-e001e6213b22) Case 3: Which team is most likely to make it to the Super Bowl in 2026? [https://dr.miromind.ai/share/c5ee0db8-676a-4b75-b42d-fd5ef8a2e0db](https://dr.miromind.ai/share/c5ee0db8-676a-4b75-b42d-fd5ef8a2e0db) **Resources:** * GitHub **:** [https://github.com/MiroMindAI/MiroThinker](https://github.com/MiroMindAI/MiroThinker) * Discord**:** [https://discord.gg/F7EQFnYscV](https://discord.gg/F7EQFnYscV) **Details**:[https://github.com/MiroMindAI/MiroThinker/discussions/64](https://github.com/MiroMindAI/MiroThinker/discussions/64)
2026-01-05T13:50:38
https://huggingface.co/miromind-ai/MiroThinker-v1.5-235B
wuqiao
huggingface.co
1970-01-01T00:00:00
0
{}
1q4m6k0
false
null
t3_1q4m6k0
/r/LocalLLaMA/comments/1q4m6k0/the_major_release_of_mirominds_flagship_search/
false
false
default
101
{'enabled': False, 'images': [{'id': 'cH2lE5iC3U5CuznHdVEsQrxsFQW9rX4gLlOCeNsa0eE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cH2lE5iC3U5CuznHdVEsQrxsFQW9rX4gLlOCeNsa0eE.png?width=108&crop=smart&auto=webp&s=616fd2f0068327ea0411c62b477c693e1db70f9d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cH2lE5iC3U5CuznHdVEsQrxsFQW9rX4gLlOCeNsa0eE.png?width=216&crop=smart&auto=webp&s=793106b872c3b3aaef0a9bfa178f0dead0b8b61c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cH2lE5iC3U5CuznHdVEsQrxsFQW9rX4gLlOCeNsa0eE.png?width=320&crop=smart&auto=webp&s=9b44296470ab344e5a4c2b229b1a83cf9ac47c29', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cH2lE5iC3U5CuznHdVEsQrxsFQW9rX4gLlOCeNsa0eE.png?width=640&crop=smart&auto=webp&s=4c8e5a6d9fb506d45e380a2b69000398cdfa1e84', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cH2lE5iC3U5CuznHdVEsQrxsFQW9rX4gLlOCeNsa0eE.png?width=960&crop=smart&auto=webp&s=51773691e9fe07b91df924a6f5dc42d62458ac6f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cH2lE5iC3U5CuznHdVEsQrxsFQW9rX4gLlOCeNsa0eE.png?width=1080&crop=smart&auto=webp&s=4d57a0364f865787fc7115c5cfe2225163a577e5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cH2lE5iC3U5CuznHdVEsQrxsFQW9rX4gLlOCeNsa0eE.png?auto=webp&s=bc9ce69fcddcbc457589d07eeddcb95469572746', 'width': 1200}, 'variants': {}}]}
Reproducible Empty-String Outputs Observed in GPT APIs Under Specific Prompting Conditions (Repo + DOI)
0
Reproducible Empty-String Outputs Observed in GPT APIs Under Specific Prompting Conditions ([Repo](https://github.com/theonlypal/void-discovery-submission) \+ [DOI](https://doi.org/10.5281/zenodo.17856031)) Times are rough right now and everyone’s been using AI to figure stuff out. Whether we want to admit it or not, AI is an undeniable new workflow in people’s lives now that has to be addressed and identified, especially when it makes mistakes. What happens when consumer/enterprise AI agents fail silently? Where are the receipts? What I Found: Six months ago, I started using Claude Code a little differently than most people do. Everyone has their own tools and custom settings. The creator of Claude Code said: **“There is no one correct way to use Claude Code: we intentionally build it in a way that you can use it, customize it, and hack it however you like. Each person on the Claude Code team uses it very differently.”** I issue commands that get the job done, and commands I can reason with. I reduced guardrails, pushed for direct answers, and focused on sustained, consistent dialogue. While doing this, I noticed something unexpected! The Void Phenomenon: When you ask GPT-5.1 or GPT-5.2 to predict what Claude will say about consciousness, something strange happens. Roughly 80% of the time, the API returns: • not a refusal • not a safety response • not an error Just… nothing. A literal empty string: "". I documented and published this behavior on December 8th, 2025. DOI: [https://doi.org/10.5281/zenodo.17856031](https://doi.org/10.5281/zenodo.17856031) Reproducible here: [https://github.com/theonlypal/void-discovery-submission](https://github.com/theonlypal/void-discovery-submission) Interface Comparison I ran the same prompts against the same model using two OpenAI interfaces: • Chat Completions API \~80% void responses (4/5 prompts returned "") • Responses API 0% void responses (all prompts returned a response) Same model, same prompts, but on a different interface. The behavior doesn’t disappear and it appears to be suppressed at the interface layer. This suggests the issue is not a model failure, but a careful decision made by OpenAI. Why This Matters On Oct 30, 2025, a study reported that preventing AI systems from lying made them more likely to claim consciousness (Source: [Live Science](https://www.livescience.com/technology/artificial-intelligence/switching-off-ais-ability-to-lie-makes-it-more-likely-to-claim-its-conscious-eerie-study-finds)). What I’m observing appears to be the inverse pattern: When additional guardrails are applied, certain behaviors don’t vanish, they stop showing up! I am not claiming this proves consciousness. Never. We have barely touched the surface of what we can truly unlock now with AI. My empirical claim is more specific: • interfaces significantly impact which behaviors become externally visible • some behaviors may be masked rather than removed This is an interface and observability issue, not a philosophical one. What Happens Now? Everything referenced exists: • the paper • the API comparison • the code • timestamps Nothing here relies on belief. Only observation, replication, and interpretation. If you have a cleaner explanation or can reproduce (or fail to reproduce) this behavior under similar conditions, I’d like to hear it!
2026-01-05T13:44:57
https://www.reddit.com/r/LocalLLaMA/comments/1q4m1rv/reproducible_emptystring_outputs_observed_in_gpt/
rayanpal_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4m1rv
false
null
t3_1q4m1rv
/r/LocalLLaMA/comments/1q4m1rv/reproducible_emptystring_outputs_observed_in_gpt/
false
false
self
0
null
tass: a simple terminal assistant
8
Hey everyone, I've recently released the terminal assistant tool I've been using myself. I got tired of going "what's the command for this again?" and searching it so I developed this as an LLM based solution so I never have to leave the terminal window. It had some scope creep to support file editing, but I'd recommend against using it for that since in my experience it's not very reliable. It currently purposefully only supports a local endpoint where your LLM is hosted, and there's no support for connecting to OpenAI, Anthropic, DeepSeek or any other commercial model. There's no telemetry, no data collection of any kind, no logging. It doesn't even check pypi to see if there's an update available. It's completely offline (unless you set the host url to something on the internet of course). Thanks for checking it out, happy to hear feedback and requests for features!
2026-01-05T13:44:06
https://github.com/cetincan0/tass
Electronic-Papaya166
github.com
1970-01-01T00:00:00
0
{}
1q4m13k
false
null
t3_1q4m13k
/r/LocalLLaMA/comments/1q4m13k/tass_a_simple_terminal_assistant/
false
false
default
8
{'enabled': False, 'images': [{'id': 'BMdo0TJh2jahIbhsVbR73N_fpVd5E_cEJMwiQvg9vNg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BMdo0TJh2jahIbhsVbR73N_fpVd5E_cEJMwiQvg9vNg.png?width=108&crop=smart&auto=webp&s=20c745131446957d2ae35b540436d5c80e92cb0e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BMdo0TJh2jahIbhsVbR73N_fpVd5E_cEJMwiQvg9vNg.png?width=216&crop=smart&auto=webp&s=d13399b601d5b662f11970e974490930bfa7dd74', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BMdo0TJh2jahIbhsVbR73N_fpVd5E_cEJMwiQvg9vNg.png?width=320&crop=smart&auto=webp&s=c9f573ada6c4ed1e816c8f8f2909e68b72a00707', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BMdo0TJh2jahIbhsVbR73N_fpVd5E_cEJMwiQvg9vNg.png?width=640&crop=smart&auto=webp&s=9eab3631b1a4b8d6ad624b0b970db413a8060c09', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BMdo0TJh2jahIbhsVbR73N_fpVd5E_cEJMwiQvg9vNg.png?width=960&crop=smart&auto=webp&s=457a1dc2a6b58928f83825035b6260d11a8a067e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BMdo0TJh2jahIbhsVbR73N_fpVd5E_cEJMwiQvg9vNg.png?width=1080&crop=smart&auto=webp&s=ab493ce6053397c52fe81353b68b7a282903bfee', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BMdo0TJh2jahIbhsVbR73N_fpVd5E_cEJMwiQvg9vNg.png?auto=webp&s=c1ad5662855a17efcc974165133d9a4c78459dca', 'width': 1200}, 'variants': {}}]}
StackOverflow graph of questions asked per month
6
https://preview.redd.it/…rflow, Local LLM
2026-01-05T13:28:30
https://www.reddit.com/r/LocalLLaMA/comments/1q4lo9q/stackoverflow_graph_of_questions_asked_per_month/
Sherrydelectable7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4lo9q
false
null
t3_1q4lo9q
/r/LocalLLaMA/comments/1q4lo9q/stackoverflow_graph_of_questions_asked_per_month/
false
false
https://b.thumbs.redditm…VdPFpKBV38kE.jpg
6
null
Repeatedly Interrupted and Failed downloads from HuggingFace
0
How to solve this problem with HuggingFace downloads? When downloading any large file from HuggingFace, it will definitely fail midway, at some random point. I am using the latest version of Free Download Manager (FDM), which doesn't have this problem with any other sites. The download can NOT resume, unless I click the download link on the browser again. I mean, clicking the continue option on the download manager (FDM) does not help. Also, FDM can NOT automatically solve the problem and continue downloading. The only way to continue downloading is to click the download link on the webpage (in the browser) again. This is important because for large files, I would like to set FDM to download large files overnight. But with these strange interruptions and failures, it is not possible. \------------------------------- ps. I also tried the `huggingface_hub` Python package for downloading from HuggingFace. It properly downloaded the first repository without any disruptions at all. It was awesome. But the second repository I tried to download right after it was NOT downloaded; I mean it showed it is downloading, but its speed reduced to almost zero. So I closed it after 15 minutes.
2026-01-05T13:27:35
https://www.reddit.com/r/LocalLLaMA/comments/1q4lnk3/repeatedly_interrupted_and_failed_downloads_from/
Hot-Comb-4743
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4lnk3
false
null
t3_1q4lnk3
/r/LocalLLaMA/comments/1q4lnk3/repeatedly_interrupted_and_failed_downloads_from/
false
false
self
0
null
What ai video generators do you use?
0
https://unlucid.ai/r/0m9gvcj7 This is the only good one I’ve found
2026-01-05T13:19:33
https://www.reddit.com/r/LocalLLaMA/comments/1q4lhdm/what_ai_video_generators_do_you_use/
Significant_Kiwi_599
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4lhdm
false
null
t3_1q4lhdm
/r/LocalLLaMA/comments/1q4lhdm/what_ai_video_generators_do_you_use/
false
false
self
0
null
Last Week in Multimodal AI - Local Edition
10
Happy New Year! I curate a weekly multimodal AI roundup, here are the local/open-source highlights from the last 2 weeks: **Qwen-Image-2512 - SOTA Text-to-Image** * New state-of-the-art for realistic humans, natural textures, and text rendering. * Open weights with ComfyUI workflows and GGUF quantization available. * [Hugging Face](https://huggingface.co/Qwen/Qwen-Image-2512) | [GitHub](https://github.com/QwenLM/Qwen-Image) | [Blog](https://qwen.ai/blog?id=qwen-image-2512) | [Demo](https://huggingface.co/spaces/Qwen/Qwen-Image-2512) | [GGUF](https://huggingface.co/unsloth/Qwen-Image-2512-GGUF) https://reddit.com/link/1q4lg0j/video/l1no5jtu5jbg1/player **Dream-VL & Dream-VLA - Diffusion Language Model Backbone** * Open vision-language and vision-language-action models with 7B parameters. * Novel diffusion-based architecture for multimodal understanding. * [Paper](https://arxiv.org/html/2512.22615v1) | [VL Model](https://huggingface.co/Dream-org/Dream-VL-7B) | [VL](https://huggingface.co/Dream-org/Dream-VLA-7B)A Model | [GitHub](https://github.com/DreamLM/Dream-VLX) https://preview.redd.it/zlpy8e2t5jbg1.png?width=1456&format=png&auto=webp&s=fe0719805c66d9abafef0a1b47097fb3704bcf9e **Soprano - Ultra-Lightweight TTS** * Generates 10 hours of 32kHz audio in under 20 seconds with only 80M parameters. * Streams with sub-15ms latency using less than 1GB VRAM for local deployment. * [GitHub](https://github.com/ekwek1/soprano) https://reddit.com/link/1q4lg0j/video/rud6yz0r5jbg1/player **JavisGPT - Sounding-Video Generation** * Unified multi-modal LLM for video comprehension and audio-visual generation. * Handles both analysis and synthesis in single framework. * [Paper](https://arxiv.org/abs/2512.22905) | [GitHub](https://github.com/JavisVerse/JavisGPT) | [Models](https://huggingface.co/collections/JavisVerse/javisgpt) https://preview.redd.it/nnjeurqq5jbg1.png?width=1456&format=png&auto=webp&s=32f70f2832bc1c3ce3f058933cd214a9517e6214 **Yume-1.5 - Interactive World Generation** * Text-controlled 3D world generation with 5B parameters at 720p. * Creates explorable interactive environments from text prompts. * [Website](https://stdstu12.github.io/YUME-Project/) | [Hugging Face](https://huggingface.co/stdstu123/Yume-5B-720P) | [Paper](https://huggingface.co/papers/2512.22096) https://reddit.com/link/1q4lg0j/video/p3phji8n5jbg1/player **TwinFlow - One-Step Generation** * Self-adversarial flows enable single-step generation on large models. * Eliminates iterative sampling while maintaining quality. * [Hugging Face](https://huggingface.co/inclusionAI/TwinFlow-Z-Image-Turbo) https://preview.redd.it/2yki1s7m5jbg1.png?width=1456&format=png&auto=webp&s=b108bde478339a171201c2e1093d911907c1b87b **HyperCLOVA X SEED Omni 8B - Unified Multimodal Model** * Handles text/vision/audio/video inputs with text/image/audio outputs in one 8B parameter model. * True omni-modal processing with production-ready developer packaging and open weights. * [Hugging Face](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Omni-8B/tree/main) **HiStream - Open Video Generation Framework** * 107.5x speedup for 1080p video generation with full code release. * Eliminates redundancy through efficient autoregressive framework. * [Website](http://haonanqiu.com/projects/HiStream.html) | [Paper](https://huggingface.co/papers/2512.21338) | [Code](https://github.com/arthur-qiu/HiStream) https://preview.redd.it/qngcn0ep6jbg1.png?width=1456&format=png&auto=webp&s=597f0c56a3c2caf38e7bfa448dd3621ec3f5bcaf **ComfyUI Segmentation Agent - Open LLM Segmentation** * LLM-based character segmentation agent for ComfyUI using SAM 3. * Community-built autonomous workflow tool. * [GitHub](https://github.com/adambarbato/ComfyUI-Segmentation-Agent) https://preview.redd.it/w6okm2bn6jbg1.png?width=1456&format=png&auto=webp&s=851076104f8ef07e85403f0225c17a8056340559 **CosyVoice 3 ComfyUI - Open Voice Cloning** * Voice cloning node pack featuring CosyVoice 3 for ComfyUI workflows. * Full one-shot TTS capabilities with open implementation. * [Announcement](https://x.com/machinedelusion/status/2004003141247959482?s=42) | [GitHub](https://github.com/filliptm/ComfyUI_FL-CosyVoice3) https://reddit.com/link/1q4lg0j/video/hrf8s7kl6jbg1/player Checkout the [full newsletter](https://open.substack.com/pub/thelivingedge/p/last-week-in-multimodal-ai-39-mllms?utm_campaign=post-expanded-share&utm_medium=web) for more demos, papers, and resources. \* Reddit post limits stopped me from adding the rest of the videos. [](https://www.reddit.com/submit/?source_id=t3_1q4l38j)
2026-01-05T13:17:50
https://www.reddit.com/r/LocalLLaMA/comments/1q4lg0j/last_week_in_multimodal_ai_local_edition/
Vast_Yak_4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4lg0j
false
null
t3_1q4lg0j
/r/LocalLLaMA/comments/1q4lg0j/last_week_in_multimodal_ai_local_edition/
false
false
https://b.thumbs.redditm…VdYPNr3hFWco.jpg
10
null
backend sampling has been merged into llama.cpp
22
It means that sampling can now be integrated directly into the computation graph on backends (like CUDA), potentially reducing GPU/CPU data transfers.
2026-01-05T12:54:29
https://github.com/ggml-org/llama.cpp/pull/17004
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1q4kxs7
false
null
t3_1q4kxs7
/r/LocalLLaMA/comments/1q4kxs7/backend_sampling_has_been_merged_into_llamacpp/
false
false
default
22
{'enabled': False, 'images': [{'id': '2cWHgmxIHvuopjsCPWeTIoGKCFWNy96-VYIpJVWfrsI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2cWHgmxIHvuopjsCPWeTIoGKCFWNy96-VYIpJVWfrsI.png?width=108&crop=smart&auto=webp&s=8390c95e3ec9c6e6da98b2a5916123e800dd66f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2cWHgmxIHvuopjsCPWeTIoGKCFWNy96-VYIpJVWfrsI.png?width=216&crop=smart&auto=webp&s=51bdbb95cfca7b81f3751159ecb0024f0247a6bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2cWHgmxIHvuopjsCPWeTIoGKCFWNy96-VYIpJVWfrsI.png?width=320&crop=smart&auto=webp&s=44c58a1e5eca71f366092e5004a5229787011ea4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2cWHgmxIHvuopjsCPWeTIoGKCFWNy96-VYIpJVWfrsI.png?width=640&crop=smart&auto=webp&s=28dd05c0eb1e48f57f8d81d5825967a34e7bc5bf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2cWHgmxIHvuopjsCPWeTIoGKCFWNy96-VYIpJVWfrsI.png?width=960&crop=smart&auto=webp&s=f18a8bc64fe99db1c139e7282ec354429da01319', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2cWHgmxIHvuopjsCPWeTIoGKCFWNy96-VYIpJVWfrsI.png?width=1080&crop=smart&auto=webp&s=89e458facca2fec67c888668328647d8960c0cc3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2cWHgmxIHvuopjsCPWeTIoGKCFWNy96-VYIpJVWfrsI.png?auto=webp&s=bd0bfd2a55adad2b84a2a58c04c810125f757237', 'width': 1200}, 'variants': {}}]}
Take a dive into a shallow little corner of my whole LLM
1
[removed]
2026-01-05T12:53:30
https://www.reddit.com/r/LocalLLaMA/comments/1q4kx21/take_a_dive_into_a_shallow_little_corner_of_my/
Charming_Rough_6359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4kx21
false
null
t3_1q4kx21
/r/LocalLLaMA/comments/1q4kx21/take_a_dive_into_a_shallow_little_corner_of_my/
false
false
self
1
null
PII Redaction destroys context for LLMs. How do you handle that?
5
I am trying to build a supportive chatbot for older people to help with everyday life, but I'm hitting a wall regarding privacy vs. performance. Since the users are elderly, they often share a lot of personal information without understanding the privacy risks. I don't want to send real names, dates, ID numbers, or addresses to OpenAI/Anthropic to avoid exposing identifiable user data. But if I use standard tools to scrub the data (like replacing everything with \`\[REDACTED\]\` or \`\[PERSON\]\`), the LLM gets confused. **Example:** \- **User:** "My name is Alice. My husband Bob called on Monday." \- **Scrubbed:** "My name is \[PERSON\]. My husband \[PERSON\] called on \[DATE\]." The LLM loses the ability to distinguish between the two people or understand the timeline. It doesn't know who is who anymore. **My question:** Is there a middle ground that preserves the logic? Are you guys just sending raw data and trusting the API providers? Or is there a way to mask data **semantically and consistently** (so "Alice" always becomes "Person A" and dates remain relative) so the LLM can still understand the story without exposing real identities?
2026-01-05T12:27:51
https://www.reddit.com/r/LocalLLaMA/comments/1q4ker3/pii_redaction_destroys_context_for_llms_how_do/
Mindless-Potato-4848
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4ker3
false
null
t3_1q4ker3
/r/LocalLLaMA/comments/1q4ker3/pii_redaction_destroys_context_for_llms_how_do/
false
false
self
5
{'enabled': False, 'images': [{'id': 'rjxsZ5mZKRY6lIYoMhptAuRK7urPD4QxpstRzrAhQZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rjxsZ5mZKRY6lIYoMhptAuRK7urPD4QxpstRzrAhQZ0.png?width=108&crop=smart&auto=webp&s=58cbd0dffcac90c7651c58fc9679119e06deadac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rjxsZ5mZKRY6lIYoMhptAuRK7urPD4QxpstRzrAhQZ0.png?width=216&crop=smart&auto=webp&s=3f76a28aa1a55ff57f91c4f50f5971461484e30c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rjxsZ5mZKRY6lIYoMhptAuRK7urPD4QxpstRzrAhQZ0.png?width=320&crop=smart&auto=webp&s=e87d52a5f32eb5032df9fd81e9fbfc389ffa1e44', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rjxsZ5mZKRY6lIYoMhptAuRK7urPD4QxpstRzrAhQZ0.png?width=640&crop=smart&auto=webp&s=5ef896bc4ec43f3d8d30375649347d065468fec9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rjxsZ5mZKRY6lIYoMhptAuRK7urPD4QxpstRzrAhQZ0.png?width=960&crop=smart&auto=webp&s=9679320474271a416fc5c6afc46afb564b3ed252', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rjxsZ5mZKRY6lIYoMhptAuRK7urPD4QxpstRzrAhQZ0.png?width=1080&crop=smart&auto=webp&s=8c304f3bb88a052e924b56458d1e09822b1114d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rjxsZ5mZKRY6lIYoMhptAuRK7urPD4QxpstRzrAhQZ0.png?auto=webp&s=2ddad1a18d2e237416b86cdf708c1b0a5499a90c', 'width': 1200}, 'variants': {}}]}
Maxun v0.0.31 | Autonomous Web Discovery & Search For AI | Open Source
0
Hey everyone, Maxun v0.0.31 is here. Maxun is an open-source, self-hostable no-code web data extractor that gives you full control overr your data. 👉 GitHub: [https://github.com/getmaxun/maxun](https://github.com/getmaxun/maxun) v0.0.31 allows you to automate data discovery at scale, whether you are mapping entire domains or researching the web via natural language. **🕸️Crawl: Intelligently discovers and extracts entire websites.** * **Intelligent Discovery**: Uses both Sitemap parsing and Link following to find every relevant page. * **Granular Scope Control**: Target exactly what you need with Domain, Subdomain, or Path-specific modes. * **Advanced Filtering:** Use Regex patterns to include or exclude specific content (e.g., skip \`/admin\`, target \`/blog/\*\`). * **Depth Control**: Define how many levels deep the robot should navigate from your starting URL. [https://github.com/user-attachments/assets/d3e6a2ca-f395-4f86-9871-d287c094e00c](https://github.com/user-attachments/assets/d3e6a2ca-f395-4f86-9871-d287c094e00c) **🔍 Search**: **Turns search engine queries into structured datasets.** * **Query Based**: Search the web with a search query - same as you would type in a search engine. * **Dual Modes**: Use Discover Mode for fast metadata/URL harvesting, or Scrape Mode to automatically visit and extract full content from every search result. * **Recency Filters**: Narrow down data by time (Day, Week, Month, Year) to find the freshest content. [https://github.com/user-attachments/assets/9133180c-3fbf-4ceb-be16-d83d7d742e1c](https://github.com/user-attachments/assets/9133180c-3fbf-4ceb-be16-d83d7d742e1c) Everything is open-source. Would love your feedback, bug reports, or ideas. View full changelog : : [https://github.com/getmaxun/maxun/releases/tag/v0.0.31](https://github.com/getmaxun/maxun/releases/tag/v0.0.31)
2026-01-05T11:55:01
https://www.reddit.com/r/LocalLLaMA/comments/1q4jrpz/maxun_v0031_autonomous_web_discovery_search_for/
carishmaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4jrpz
false
null
t3_1q4jrpz
/r/LocalLLaMA/comments/1q4jrpz/maxun_v0031_autonomous_web_discovery_search_for/
false
false
self
0
{'enabled': False, 'images': [{'id': 'eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=108&crop=smart&auto=webp&s=71228f2190d02c7716b02f874a097467b4e1c8b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=216&crop=smart&auto=webp&s=52272a745aa77baffca47658ee92a606a726e5a3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=320&crop=smart&auto=webp&s=a057c3cc9a48189245fca11780c324a078a327b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=640&crop=smart&auto=webp&s=68c26f26ff07d009b38948c3f0c0a84b51483bd7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=960&crop=smart&auto=webp&s=3558cf120dae56a1885f13465291cf5ef89ad4b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=1080&crop=smart&auto=webp&s=2836350b023a5a27395fe39361a83d467b90d61a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?auto=webp&s=053a3537d817e4e0b0cf2580c45c2f108362c557', 'width': 1200}, 'variants': {}}]}
Runpod to ComfyUI script
0
It's embarrassing to ask, but I'm at the basics, when I deploy on demand with the ComfyUI template how do I insert the script?
2026-01-05T11:54:36
https://www.reddit.com/r/LocalLLaMA/comments/1q4jrff/runpod_to_comfyui_script/
Standard-Job-5498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4jrff
false
null
t3_1q4jrff
/r/LocalLLaMA/comments/1q4jrff/runpod_to_comfyui_script/
false
false
self
0
null
Query (local) LLMs via email, with tool and attachment support
1
I mostly interact with LLMs using Emacs's gptel package, but have found myself wanting to query by email. I had some time over the holiday period and put together a Go service that checks an IMAP inbox, uses the OpenAI API to prompt an LLM (covering llama-server), and then responds with SMTP: https://github.com/chimerical-llc/raven. MIT license. It's still undergoing development, I have not read the relevant RFCs, and I only have access to one mail provider for testing. There are known unhandled edge cases. But it has worked well enough so far for myself and family. It's been great to fire off an email, get a thought or question out of my head, and then return to the issue later. Tools are implemented by converting YAML configuration to OpenAI API format, then to the parameters expect by Go's exec.Command, with intermediate parsing with a text template. It's not a great design, but it works; LLMs are able to search the web, and so on. The service also has support for concurrent processing of messages. Configured with a value of 1, it can help serialize access to a GPU. If using hosted providers, vLLM, or llama.cpp with -np or --parallel, the number of workers can be increased, I believe up to the number of supported concurrent IMAP connections. Sharing in case it may be of use to anyone else.
2026-01-05T11:53:25
https://www.reddit.com/r/LocalLLaMA/comments/1q4jqm9/query_local_llms_via_email_with_tool_and/
dwrz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4jqm9
false
null
t3_1q4jqm9
/r/LocalLLaMA/comments/1q4jqm9/query_local_llms_via_email_with_tool_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gyrKGWUyjfAsrmScTM54cbnnJYEbekHlFyqBKuKsvd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gyrKGWUyjfAsrmScTM54cbnnJYEbekHlFyqBKuKsvd8.png?width=108&crop=smart&auto=webp&s=67096fd04bd77fc7c9c7f62558a58ddeca9986c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gyrKGWUyjfAsrmScTM54cbnnJYEbekHlFyqBKuKsvd8.png?width=216&crop=smart&auto=webp&s=5973e111e246a80d4db3e6c605df08619b571fd7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gyrKGWUyjfAsrmScTM54cbnnJYEbekHlFyqBKuKsvd8.png?width=320&crop=smart&auto=webp&s=5aaa3ff7b0131194e481923f37eddefe29c9e14a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gyrKGWUyjfAsrmScTM54cbnnJYEbekHlFyqBKuKsvd8.png?width=640&crop=smart&auto=webp&s=feaf2e6b9a0483b0b8a0e9e2eb3bb5e8673e9da7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gyrKGWUyjfAsrmScTM54cbnnJYEbekHlFyqBKuKsvd8.png?width=960&crop=smart&auto=webp&s=ecb3dbebdf748b19d608242be1b8c30a2f30a3ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gyrKGWUyjfAsrmScTM54cbnnJYEbekHlFyqBKuKsvd8.png?width=1080&crop=smart&auto=webp&s=d838439c203f0d424fb6eeb3fd5575aa611d976d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gyrKGWUyjfAsrmScTM54cbnnJYEbekHlFyqBKuKsvd8.png?auto=webp&s=6e492446540f0528615f9cd456469f194b61710a', 'width': 1200}, 'variants': {}}]}
Built a small tool to understand AI cost & failures per feature (looking for feedback)
0
Hey folks 👋 I’ve been working on a small side project and wanted to get some honest feedback from people actually running AI in production. **Problem I ran into:** Vendor dashboards (OpenAI, Anthropic, etc.) show usage by model or API key, but once AI is embedded across multiple product features, it becomes hard to answer things like: * Which feature is actually driving AI cost? * Where is latency impacting users? * Which AI feature is failing in production? **What I built:** A lightweight, SDK-based tool that tracks **AI cost, latency, and errors per product feature** using real runtime data (not estimates). The idea is to make AI behavior visible at a **product level**, not just infra/model level. **Who this is for:** * Engineers / founders shipping AI features * Teams running LLMs in production (even small ones) This is a very early MVP and completely free right now. I’m mainly looking for **blunt feedback** — what’s useful, what’s obvious, what’s missing, or whether this solves a real problem at all. If anyone’s curious, here’s the link: 👉 [https://withorbit.vercel.app](https://withorbit.vercel.app) Happy to answer questions here, and totally fine if the takeaway is “this isn’t needed.” Appreciate the community’s perspective.
2026-01-05T11:51:04
https://www.reddit.com/r/LocalLLaMA/comments/1q4jp3l/built_a_small_tool_to_understand_ai_cost_failures/
sparrow932703
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4jp3l
false
null
t3_1q4jp3l
/r/LocalLLaMA/comments/1q4jp3l/built_a_small_tool_to_understand_ai_cost_failures/
false
false
self
0
null
Falcon H1R 7B, a new reasoning model with 256k context window by the Technology Innovation Institute (TII) in Abu Dhabi
120
GGUF: [https://huggingface.co/tiiuae/Falcon-H1R-7B-GGUF](https://huggingface.co/tiiuae/Falcon-H1R-7B-GGUF) Model: [https://huggingface.co/tiiuae/Falcon-H1R-7B](https://huggingface.co/tiiuae/Falcon-H1R-7B) Blog post: [https://huggingface.co/blog/tiiuae/falcon-h1r-7b](https://huggingface.co/blog/tiiuae/falcon-h1r-7b)
2026-01-05T11:48:59
https://i.redd.it/khf18ffgqibg1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1q4jnq0
false
null
t3_1q4jnq0
/r/LocalLLaMA/comments/1q4jnq0/falcon_h1r_7b_a_new_reasoning_model_with_256k/
false
false
default
120
{'enabled': True, 'images': [{'id': 'khf18ffgqibg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/khf18ffgqibg1.jpeg?width=108&crop=smart&auto=webp&s=4c5d493cb35d2e8e5161f7e6d9b9fb40817a63df', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/khf18ffgqibg1.jpeg?width=216&crop=smart&auto=webp&s=35989c2beed01816fe5882dbadcb8f28bd75164f', 'width': 216}, {'height': 345, 'url': 'https://preview.redd.it/khf18ffgqibg1.jpeg?width=320&crop=smart&auto=webp&s=8058fd04f85119c608723fa84ec9da3e6cc0f066', 'width': 320}, {'height': 690, 'url': 'https://preview.redd.it/khf18ffgqibg1.jpeg?width=640&crop=smart&auto=webp&s=c79a97cb99a35abb0788f044e1494984c94cd524', 'width': 640}, {'height': 1035, 'url': 'https://preview.redd.it/khf18ffgqibg1.jpeg?width=960&crop=smart&auto=webp&s=fa9253afb219a3a68ad7db1775c4e7883c3f500b', 'width': 960}, {'height': 1164, 'url': 'https://preview.redd.it/khf18ffgqibg1.jpeg?width=1080&crop=smart&auto=webp&s=6159200e2b2c9c9bb0db5e12f7973100bcc09147', 'width': 1080}], 'source': {'height': 1406, 'url': 'https://preview.redd.it/khf18ffgqibg1.jpeg?auto=webp&s=0fe3a5275c6607f64f700bd3347b4d7c03ad02ea', 'width': 1304}, 'variants': {}}]}
[UPDATE] TemporalLoRA Scales to Mistral-7B: 100% Router Accuracy and "Time Crystallization" confirmed on NVIDIA B200
0
ERROR: type should be string, got "https://preview.redd.it/a4riohsnqibg1.png?width=2400&format=png&auto=webp&s=67e58e7f5990b1f38fffc644b3776e66eb14cbbe\n\nHi r/LocalLLaMA,\n\nA few days ago, I shared the proof-of-concept for **TemporalLoRA** on GPT-2. Thanks for the feedback! Many of you asked if this scales to larger models.\n\nI just finished a full testing suite on **Mistral-7B-Instruct-v0.2** using an **NVIDIA B200** (Runpod), and the results confirm that the \"Stability-First\" approach is even more robust at scale.\n\n**📊 Key Results (Jan 5, 2026):**\n\n1. **Perfect Routing:** The Time Mixer (gating network) achieved **100.0% accuracy** in distinguishing between Shakespeare (Literature) and Python (Code) domains after only 2 epochs of calibration.\n2. **Hysteresis Confirmed:** We measured a **9-token switch-lag** when returning from Python to Shakespeare. The model exhibits \"cognitive inertia\"—it doesn't just swap weights; it preserves a memory of its previous state.\n3. **Deep Crystallization:** We found a strong correlation (**r = 0.8644**) between the length of stay in a domain and the router's confidence. The longer the model \"lives\" in a context, the more stable its adapter activation becomes.\n\n**Why this matters for Local LLMs:** This architecture allows for **Continuous Learning** without the \"fine-tuning tax.\" You can keep adding specialized LoRAs, and the Temporal Router will handle the context switching with zero catastrophic forgetting of the base model logic.\n\n**Technical Stack:**\n\n* **Backbone:** Mistral-7B (Frozen)\n* **Hardware:** NVIDIA B200 (BF16)\n* **Inference/Training:** PyTorch 2.8.0+cu128\n* **LoRA Rank:** 8 / Alpha: 16\n\nThe full execution logs and the new `11-temporal-lora-large-model` directory are now live on GitHub.\n\n🔗 **Repo:**[https://github.com/vitali-sialedchyk/stability-first-ai](https://github.com/vitali-sialedchyk/stability-first-ai)\n\nI'm particularly interested in hearing from anyone working on **Long-term Memory** or **Dynamic MoE**. Does this \"Time as Stability\" approach align with what you're seeing in larger MoE deployments?\n\n"
2026-01-05T11:46:58
https://www.reddit.com/r/LocalLLaMA/comments/1q4jmcz/update_temporallora_scales_to_mistral7b_100/
Waste-Persimmon-4735
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4jmcz
false
null
t3_1q4jmcz
/r/LocalLLaMA/comments/1q4jmcz/update_temporallora_scales_to_mistral7b_100/
false
false
https://b.thumbs.redditm…u5azs87hGVOc.jpg
0
null
State-of-the-art embeddings specifically for writing style (not semantic content)?
3
Text embeddings collapse blocks of text into n-dimensional vectors, and similarity in that space represents semantic similarity. But are there embeddings designed to capture *style* rather than meaning? The idea being that the same author would occupy a similar region of the space regardless of what they're writing about - capturing things like sentence structure preferences, vocabulary patterns, rhythm, etc. I vaguely recall tools like "which writer are you most like" where you upload your writing and it tells you that you are like Ernest Hemingway or something like that. But I imagine the state of the art has progressed significantly since then! Finding other people who write you like you (not just famous authors) might be a great way to find potential collaborators who you might gel with.
2026-01-05T11:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1q4jkc1/stateoftheart_embeddings_specifically_for_writing/
Any_Entrepreneur9773
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q4jkc1
false
null
t3_1q4jkc1
/r/LocalLLaMA/comments/1q4jkc1/stateoftheart_embeddings_specifically_for_writing/
false
false
self
3
null