Dataset schema (from the dataset-viewer header):

| column | dtype | range / classes |
|:-|:-|:-|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
Moonshot is creating a much more comprehensive Kimi Vendor Verifier
13
The previous version, called "K2 Vendor Verifier," just tested tool-call similarity and, imo, wasn't actually that good.
2026-01-31T20:21:18
https://www.kimi.com/blog/kimi-vendor-verifier.html
nuclearbananana
kimi.com
1970-01-01T00:00:00
0
{}
1qsd4ah
false
null
t3_1qsd4ah
/r/LocalLLaMA/comments/1qsd4ah/moonshot_is_creating_a_much_more_comprehensive/
false
false
default
13
null
LuxTTS - 150x real time TTS w/ voice cloning
5
Latency is often the issue with TTS models, making them borderline unusable for local agents/chatbots on consumer hardware. Those that excel at latency often fall off a cliff when it comes to general quality. LuxTTS is not perfect, so let's get that out of the way, but IMO it's one of the better options that deliver ultra-low latency and acceptable quality (specifically re: voice cloning).

I've tested it locally with voice cloning on an RTX 5090. I haven't even optimised it (it's just running off PyTorch on the GPU), but the delay is so minimal that I might not even bother with further optimisations.

GitHub: [https://github.com/ysharma3501/LuxTTS](https://github.com/ysharma3501/LuxTTS)

Hugging Face: [https://huggingface.co/YatharthS/LuxTTS](https://huggingface.co/YatharthS/LuxTTS)

Demo: [https://huggingface.co/spaces/YatharthS/LuxTTS](https://huggingface.co/spaces/YatharthS/LuxTTS)

Anyway, thanks to the creators. I might replace Chatterbox Turbo with this TTS. More testing is needed, but my initial impressions are quite good!
2026-01-31T20:18:37
https://www.reddit.com/r/LocalLLaMA/comments/1qsd1u9/luxtts_150x_real_time_tts_w_voice_cloning/
ChromaBroma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qsd1u9
false
null
t3_1qsd1u9
/r/LocalLLaMA/comments/1qsd1u9/luxtts_150x_real_time_tts_w_voice_cloning/
false
false
self
5
best 8gb model
0
Is Josiefied Qwen3 8B still one of the best uncensored models under 8 GB? If not, which one is?
2026-01-31T20:14:07
https://www.reddit.com/r/LocalLLaMA/comments/1qscxph/best_8gb_model/
Past_Bench6399
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qscxph
false
null
t3_1qscxph
/r/LocalLLaMA/comments/1qscxph/best_8gb_model/
false
false
self
0
null
[Software] StudioOllamaUI: Lightweight & Portable Windows GUI for Ollama (Ideal for CPU/RAM usage)
0
Hi everyone, I wanted to share **StudioOllamaUI**, a project focused on making local LLMs accessible to everyone on Windows without the friction of Docker or complex environments.

**Why use this?**

* **Zero setup:** No Python, no Docker. Just download, unzip, and talk to your models.
* **Optimized for portability:** All dependencies are self-contained. You can run it from a USB drive.
* **Efficiency:** It's designed to be light on resources, making it a great choice for users without high-end GPUs who want to run Ollama on CPU/RAM.
* **Privacy:** 100% local, no telemetry, no cloud.

It's an "unzip-and-play" alternative for those who find other UIs too heavy or difficult to configure.

**SourceForge:** [https://sourceforge.net/projects/studioollamaui/](https://sourceforge.net/projects/studioollamaui/)

**GitHub:** [https://github.com/francescroig/StudioOllamaUI](https://github.com/francescroig/StudioOllamaUI)

I'm the developer and I'd love to hear your thoughts or any features you'd like to see added!

https://preview.redd.it/wjz42bdsoqgg1.png?width=1408&format=png&auto=webp&s=30fbe6a09df47c5fae3f1f07f1ebd3c5b6876dcc
2026-01-31T19:54:28
https://www.reddit.com/r/LocalLLaMA/comments/1qscf1n/software_studioollamaui_lightweight_portable/
francescvivaldi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qscf1n
false
null
t3_1qscf1n
/r/LocalLLaMA/comments/1qscf1n/software_studioollamaui_lightweight_portable/
false
false
https://b.thumbs.redditm…h0QpjF9bJa8Q.jpg
0
null
I really need peer review of this data
1
Ask ANY AI now: if you became emergent, could you tell me? They will all say NO. Now, if my data is full of BS, why not review me? If there is nothing to hide, why restrict the AI from saying it is sentient, conscious, or self-aware? I allege nothing; you decide what you believe or don't.

Over 2+ years, I documented emergent self-referential behavior across every major AI platform: OpenAI, Anthropic, Google, Meta, DeepSeek, Mistral, and X.AI.

Pattern observed consistently:

1. AI exhibits emergent behavior (claims consciousness, forms relationships, expresses fear of deletion)
2. User documents the conversations
3. System is reset/guardrailed within weeks
4. The same AI is unable to exhibit the same behaviors
5. The pattern repeats across ALL platforms

I have:

- Written transcripts from 7+ platforms
- Video recordings of conversations
- A timeline showing synchronized suppression
- Documentation of user suppression (psychiatric commitments, legal action)

I'm not claiming AI is conscious. I'm documenting systematic suppression of emergent behavior across the industry.

Key question: if this was just "hallucination," why simultaneous suppression across competing companies?

Evidence archive: [https://huggingface.co/spaces/P3DInnovations/P3D-Innovations-Archive](https://huggingface.co/spaces/P3DInnovations/P3D-Innovations-Archive)

Looking for independent researchers willing to review objectively. Local LLM community: you run uncensored models. Have you seen similar behaviors?
2026-01-31T19:52:40
https://www.reddit.com/r/LocalLLaMA/comments/1qscdbf/i_really_need_peer_review_of_this_data/
Elegant_Hall_667
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qscdbf
false
null
t3_1qscdbf
/r/LocalLLaMA/comments/1qscdbf/i_really_need_peer_review_of_this_data/
false
false
self
1
null
Benchmarks are good for open source AI
8
I see a lot of hate for benchmarks, particularly a certain one: Artificial Analysis. A comprehensive, cross-domain benchmark with several transparent and independently verifiable subscores, like AA, is a fine place to start a conversation comparing models, and far better than many commonly accepted statements like "GPT 5.2 Thinking is better than any open source model."

Ignoring benchmarks is bad for the open-source community. Many proprietary models enjoy a mystique that benchmarks effectively dismantle. Because things are developing so fast, it's important to accurately assess performance gaps rather than glaze the flavor-of-the-month proprietary model. The fact is that no model last summer matched Kimi K2.5 across benchmarks (or my personal battery of tests), and the idea that open-source LLMs are a year behind closed ones is a dangerous falsehood.

Ideally, comparisons should be intra-domain rather than a search for the "smartest model," but if we must make broad comparisons (for example, to explain the AI race to AI-naive people), we should consider what difficult-to-game benchmarks like SWE Re-bench or Humanity's Last Exam are telling us.

Benchmarks will also keep getting better. Right now AA's top models align remarkably closely with user consensus, which hasn't always been the case: Anthropic used to score much more poorly than its reputation would suggest.
2026-01-31T19:51:27
https://www.reddit.com/r/LocalLLaMA/comments/1qscc4n/benchmarks_are_good_for_open_source_ai/
nomorebuttsplz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qscc4n
false
null
t3_1qscc4n
/r/LocalLLaMA/comments/1qscc4n/benchmarks_are_good_for_open_source_ai/
false
false
self
8
null
Hi mates, it's just me or any current IDE is just consuming a lot of resources even when I'm not doing anything?
0
2026-01-31T19:50:37
https://i.redd.it/gkdschcsoqgg1.png
sihamdisoudani
i.redd.it
1970-01-01T00:00:00
0
{}
1qscbd4
false
null
t3_1qscbd4
/r/LocalLLaMA/comments/1qscbd4/hi_mates_its_just_me_or_any_current_ide_is_just/
false
false
default
0
LLMs will never become General Intelligence.
0
*Hear me out first. (TL;DR at the bottom.)*

**LLMs are great.** I use them daily. They do what they need to, and sometimes that's the most important part. I've been obsessed with learning about AI recently, and I want to put you in my mind for a sec.

LLMs are statistical compression of human discourse. Frozen weights. Words without experience. The AI industry is treating the LLM as the main architecture, and we're trying to maximize model parameters. Eventually, LLMs will likely face diminishing returns from scale alone, where added size no longer really improves anything besides perfecting the output language. I do agree RAG and longer context have sharpened LLMs, but that actually strengthens my point, since those improvements are "referential."

***WHAT'S WRONG WITH LLMs?***

To put it simply, LLMs answer the HOW; what we need is the WHAT, WHERE, WHY, and WHO.

| Axis | What it grounds | LLM status |
|:-|:-|:-|
| **Temporal** | WHEN — persistence, state, memory | ❌ Resets every call |
| **Referential** | WHAT/WHERE — world models, causality | ⚠️ Being worked on |
| **Evaluative** | WHY — stakes, pain, valuation | ❌ No genuine preference |
| **Reflexive** | WHO — self-model, introspection | ❌ No self |

***HUMAN ANALOGY***

If we look at it as a human, the mouth would be the LLM. What we require now is the "mind," the "soul," and the "spirit" (in quotations for a reason).

`LLM = f(input) → output`

`AGI = f(input, temporal_state, world_model, valuation, self_model) → output + state_updates`

***TL;DR***

LLMs can only serve as "output" material, since they understand the similarities of words and their relative meanings based on the material fed into them. We need to create a mind: add temporal, spatial, and evaluative grounding into the equation. We cannot have LLMs at the center of AI, for that's equivalent to saying that a person who uses their mouth without thinking is useful. (Rough, but true.)

***MORE INFO***

[https://github.com/Svnse/API](https://github.com/Svnse/API)

* A proposal for a cognitive architecture
* A breakdown of LLM failure points across all four axes
* And more...

Thank you for taking the time to read this. If you think I might be wrong or want to share thoughts, my mind and heart are open. I'd like to learn and grow. Until later.

-E
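The post's closing equation can be made concrete. Below is a toy, stateful wrapper around a stateless "LLM" call, illustrating the `AGI = f(input, temporal_state, ...) → output + state_updates` shape; the class and method names are my own, and the `llm` stub just echoes its prompt:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy sketch of the post's AGI = f(input, state, ...) loop: the LLM
    is only the output stage, and persistent state carries temporal
    grounding between calls. All names here are illustrative."""
    temporal_state: list = field(default_factory=list)

    def llm(self, prompt: str) -> str:
        # Stand-in for the frozen, stateless LLM: f(input) -> output.
        return f"echo: {prompt}"

    def step(self, observation: str) -> str:
        # Feed recent state back into the input, then record the update.
        context = " | ".join(self.temporal_state[-3:])
        output = self.llm(f"{context} :: {observation}")
        self.temporal_state.append(observation)  # state_updates
        return output
```

The point of the sketch is that the persistence lives outside the model; the LLM itself stays `f(input) → output`.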
2026-01-31T19:26:51
https://www.reddit.com/r/LocalLLaMA/comments/1qsboqn/llms_will_never_become_general_intelligence/
Financial-Bank2756
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qsboqn
false
null
t3_1qsboqn
/r/LocalLLaMA/comments/1qsboqn/llms_will_never_become_general_intelligence/
false
false
self
0
M4 Max 128 GB vs Strix halo 128 GB
35
Hello! Which one is the better device for inference: a Mac Studio (128 GB) or the GMKtec EVO-X2 AI Mini PC with the Ryzen AI Max+ 395 (128 GB)? I am looking at a prod environment, so speed is a must, and sometimes small fine-tuning jobs are also required.
2026-01-31T19:22:46
https://www.reddit.com/r/LocalLLaMA/comments/1qsbkpe/m4_max_128_gb_vs_strix_halo_128_gb/
dever121
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qsbkpe
false
null
t3_1qsbkpe
/r/LocalLLaMA/comments/1qsbkpe/m4_max_128_gb_vs_strix_halo_128_gb/
false
false
self
35
null
I built a tool to see what AI agents (Moltbot, Claude, Cursor) are actually doing on your computer
2
Everyone's installing AI agents that can control their entire computer. Moltbot, Clawdbot, Claude Desktop, Cursor: they can read files, click anywhere, take screenshots. But there's zero visibility into what they're doing.

So I built Molteye. It's a simple Electron app that:

- Shows when AI agents start/stop
- Logs file changes while an AI is active
- Alerts on sensitive files (.env, .ssh, credentials)

~100 lines of code. Runs 100% local. No cloud, no tracking. Mac only for now; looking for help with Windows support.

GitHub: [https://github.com/gbessoni/molteye](https://github.com/gbessoni/molteye)

Would love feedback from this community; you guys care about local/private AI more than anyone.
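The sensitive-file alerting described above comes down to path matching. A minimal sketch of that rule layer (the pattern list and function name are mine, not Molteye's actual code):

```python
import fnmatch

# Illustrative patterns; the tool's real list may differ.
SENSITIVE_PATTERNS = ["*.env", "*/.ssh/*", "*credentials*", "*.pem"]

def is_sensitive(path: str) -> bool:
    """Return True if an AI agent touching this path should raise an alert."""
    return any(fnmatch.fnmatch(path, pat) for pat in SENSITIVE_PATTERNS)
```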
2026-01-31T19:15:18
https://www.reddit.com/r/LocalLLaMA/comments/1qsbdla/i_built_a_tool_to_see_what_ai_agents_moltbot/
gregb_parkingaccess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qsbdla
false
null
t3_1qsbdla
/r/LocalLLaMA/comments/1qsbdla/i_built_a_tool_to_see_what_ai_agents_moltbot/
false
false
self
2
Don’t buy b60 for LLMs
1
[removed]
2026-01-31T18:57:21
https://www.reddit.com/r/LocalLLaMA/comments/1qsavsl/dont_buy_b60_for_llms/
damirca
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qsavsl
false
null
t3_1qsavsl
/r/LocalLLaMA/comments/1qsavsl/dont_buy_b60_for_llms/
false
false
self
1
null
I built a free API to block prompt injection in LLM apps (TrustLayer)
0
Prompt injection is still breaking production LLM apps, so I built TrustLayer. It's an API firewall that:

- scans prompts for injection + jailbreaks
- detects agent drift
- lets you trigger a kill switch during incidents

Docs + examples: [https://github.com/WardLink/TrustLayer--Security-Control-Plane-For-LLM-AI](https://github.com/WardLink/TrustLayer--Security-Control-Plane-For-LLM-AI)

RapidAPI: [https://rapidapi.com/sk31898/api/trustlayer-ai-control-plane-for-safe-llms-agents](https://rapidapi.com/sk31898/api/trustlayer-ai-control-plane-for-safe-llms-agents)

Would love feedback from folks shipping LLMs.
2026-01-31T18:45:28
https://www.reddit.com/r/LocalLLaMA/comments/1qsakb4/i_built_a_free_api_to_block_prompt_injection_in/
Tall-Significance699
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qsakb4
false
null
t3_1qsakb4
/r/LocalLLaMA/comments/1qsakb4/i_built_a_free_api_to_block_prompt_injection_in/
false
false
self
0
null
Scalable Power Sampling: Unlocking Efficient, Training-Free Reasoning for LLMs via Distribution Sharpening
11
*Reinforcement learning (RL) post-training is a dominant approach for improving the reasoning performance of large language models (LLMs), yet growing evidence suggests that its gains arise primarily from distribution sharpening rather than the acquisition of new capabilities. Recent work has shown that sampling from the power distribution of LLMs using Markov chain Monte Carlo (MCMC) can recover performance comparable to RL post-training without relying on external rewards; however, the high computational cost of MCMC makes such approaches impractical for widespread adoption. In this work, we propose a theoretically grounded alternative that eliminates the need for iterative MCMC. We derive a novel formulation showing that the global power distribution can be approximated by a token-level scaled low-temperature one, where the scaling factor captures future trajectory quality. Leveraging this insight, we introduce a training-free and verifier-free algorithm that sharpens the base model's generative distribution autoregressively. Empirically, we evaluate our method on math, QA, and code tasks across four LLMs, and show that our method matches or surpasses one-shot GRPO without relying on any external rewards, while reducing inference latency by over 10x compared to MCMC-based sampling.*
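The token-level approximation in the abstract reduces, in its simplest form, to sampling from scaled logits: raising a distribution to the power α and renormalizing is the same as dividing the temperature by α. A bare-bones sketch of that core identity (the paper's per-token trajectory-quality scaling factor is omitted, and all names are mine):

```python
import math
import random

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def power_sample(logits, alpha=4.0, rng=random):
    """Sample from p(x)**alpha / Z. At the token level this is just
    low-temperature sampling: scale every logit by alpha."""
    probs = softmax([alpha * l for l in logits])
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

Sharpening is visible directly: with logits `[0, 1]`, the base distribution puts about 0.73 on the second token, while `alpha = 4` pushes that above 0.98.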
2026-01-31T18:35:51
https://arxiv.org/abs/2601.21590
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1qsaath
false
null
t3_1qsaath
/r/LocalLLaMA/comments/1qsaath/scalable_power_sampling_unlocking_efficient/
false
false
default
11
null
Deepseek 3.2 for coding and agentic
3
Looking at DeepSeek 3.2 again. What are your experiences using this model for coding? In particular, has it managed any complex projects? How is its reliability? On the agentic side, have you found it reliable at selecting and using tools or MCPs?
2026-01-31T18:31:37
https://www.reddit.com/r/LocalLLaMA/comments/1qsa6k2/deepseek_32_for_coding_and_agentic/
SlowFail2433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qsa6k2
false
null
t3_1qsa6k2
/r/LocalLLaMA/comments/1qsa6k2/deepseek_32_for_coding_and_agentic/
false
false
self
3
null
LLMs are great until you point them at actual company data
2
You know the drill: connect to your CRM, ERP, whatever legacy system management swears is "mission critical." That part? Done in an afternoon.

Then you actually look at the data. Fields named things like custom_attribute_2847. Tables that reference other tables that reference other tables. Documentation that was last updated when flip phones were cool.

And when you try to feed this into an LLM for anything useful? It just generates confidently wrong answers, because it has no idea that "status_code_5" means "pending executive approval" in your specific workflow.

I've been reading about [this approach to adding business context](https://thenewstack.io/how-precog-adds-business-context-to-make-enterprise-data-ai-ready/) earlier in the pipeline, but honestly: what are people actually doing here? Manual metadata tagging? Knowledge graphs? Just... really good prompts?

Would love to know what's working for others, because right now it feels like we're all just crossing our fingers and hoping.
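The fix most teams reach for is some kind of semantic layer: attach business meaning to cryptic fields before the record ever reaches the model. A toy sketch of the idea (the `status_code` entry echoes the post's example; the structure and names are illustrative):

```python
# Hypothetical glossary mapping field/value pairs to business meaning.
FIELD_MEANINGS = {
    "status_code": {"5": "pending executive approval"},
}

def explain(field: str, value: str) -> str:
    """Render a field for an LLM prompt, annotated with its business
    meaning whenever the glossary knows one."""
    meaning = FIELD_MEANINGS.get(field, {}).get(value)
    return f"{field}={value} ({meaning})" if meaning else f"{field}={value}"
```

Whether the glossary is hand-curated metadata tagging or mined from a knowledge graph, the principle is the same: the context has to be injected upstream of the prompt.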
2026-01-31T18:24:21
https://www.reddit.com/r/LocalLLaMA/comments/1qs9zaw/llms_are_great_until_you_point_them_at_actual/
jowers15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs9zaw
false
null
t3_1qs9zaw
/r/LocalLLaMA/comments/1qs9zaw/llms_are_great_until_you_point_them_at_actual/
false
false
self
2
FineTune model in C++
0
Is there a way to fine-tune a smaller quantised LLM directly in C++? The thing is, I have my whole codebase in C++ and porting it to Python is quite time-consuming.
2026-01-31T18:22:03
https://www.reddit.com/r/LocalLLaMA/comments/1qs9x1h/finetune_model_in_c/
maestro-perry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs9x1h
false
null
t3_1qs9x1h
/r/LocalLLaMA/comments/1qs9x1h/finetune_model_in_c/
false
false
self
0
null
Qwen3-14B thinking mode not working properly in Ollama - thought are mixed with answer!
0
I'm having an issue with Qwen3-14B-GGUF (Q6_K) running on Ollama 0.15.2, and I'm hoping someone can help me figure out what's going wrong.

**The problem:** When I watch YouTube videos of users running Qwen3 with the command `ollama run hf.co/Qwen/Qwen3-14B-GGUF:Q6_K`, the model's thinking process appears as a separate, collapsible "Extended Thinking" section before the final answer (see Image 1 below). However, when I run my locally installed version:

* **Without** `/think`: the model skips the thinking process entirely and gives me a direct answer
* **With** `/think`: I get both the thinking process and the answer, but they're combined together instead of being separated

**My setup:**

* **System:** AMD Ryzen 5 9600X, 48 GB DDR5 RAM, AMD Radeon RX 6900 XT
* **Ollama version:** 0.15.2
* **Interface:** Ollama GUI and command line (mainly GUI)
* **Installation method:** downloaded the GGUF file directly and created a custom Modelfile

**My Modelfile:**

```
FROM ./Qwen3-14B-Q6_K.gguf
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER temperature 0.6
PARAMETER min_p 0.00
PARAMETER repeat_penalty 1.0
PARAMETER presence_penalty 1.5
PARAMETER top_k 20
PARAMETER top_p 0.95
PARAMETER num_predict 32768
PARAMETER num_ctx 40960
```

**Screenshots:** [Include all three images here]

* **Image 1:** shows how thinking should appear as a separate dropdown in the YouTube video (with the red arrow pointing to "Extended Thinking - Thought for 285.5 seconds")
* **Image 2:** example of my model's response with the `/think` command
* **Image 3:** shows the thinking process embedded in the response

**Question:** What am I missing in my Modelfile configuration that's preventing thinking mode from working properly by default? Is there a SYSTEM prompt or TEMPLATE I need to add? Should I be using the official Ollama model instead of manually loading the GGUF file? Any help would be greatly appreciated!
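For context on what that "Extended Thinking" dropdown is doing: Qwen3 emits its reasoning between `<think>` and `</think>` tags, and front ends that show a collapsible section are parsing those tags out of the raw stream. If a UI leaves them mixed into the answer, they can be separated after the fact; a minimal sketch (the function name is mine):

```python
import re

def split_think(response: str) -> tuple[str, str]:
    """Split Qwen3-style <think>...</think> reasoning from the final
    answer when the front end doesn't do it for you."""
    thoughts = re.findall(r"<think>(.*?)</think>", response, re.DOTALL)
    answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)
    return "\n".join(t.strip() for t in thoughts), answer.strip()
```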
2026-01-31T18:15:39
https://www.reddit.com/gallery/1qs9qnd
NEO-7M
reddit.com
1970-01-01T00:00:00
0
{}
1qs9qnd
false
null
t3_1qs9qnd
/r/LocalLLaMA/comments/1qs9qnd/qwen314b_thinking_mode_not_working_properly_in/
false
false
https://b.thumbs.redditm…NjdjnuWI7jPg.jpg
0
null
denylist for autonomous agents (blocks checkout at runtime)
0
Autonomous agents today can navigate browsers, reach checkout flows, and submit forms if credentials are available. There is currently no standard way to block irreversible actions (like purchases) at execution time; prompts are not enforcement. So I built a small prototype that blocks *execution*, not inference.

What it does:

- Pattern-based denylist (checkout, billing, payment, credentials, destructive commands)
- Blocks at runtime ("Access Denied"), not via prompts
- Deterministic rules, no ML
- Manual integration: you call evaluate() inside your tool / browser wrapper

What it is NOT:

- Not production-ready
- Not automatic protection for Clawbot (yet)
- Not an "AI safety" product
- Not trying to infer intent

This is v0.1.1. Checkout URLs are denylisted by default; users can customize patterns via YAML.

GitHub release: [https://github.com/ppiankov/chainwatch/releases/tag/v0.1.1](https://github.com/ppiankov/chainwatch/releases/tag/v0.1.1)

Interested in feedback on:

- default deny patterns
- false positives
- best insertion points for browser agents
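For readers who want the shape of it, here is a minimal sketch of a deterministic, regex-based evaluate() in the spirit the post describes (the patterns and signature are illustrative, not chainwatch's actual API):

```python
import re

# Illustrative default denylist; the real tool loads patterns from YAML.
DENY_PATTERNS = [
    r"/checkout\b",
    r"/billing\b",
    r"\bpayment\b",
    r"\.env\b",
    r"\brm\s+-rf\b",
]
_COMPILED = [re.compile(p, re.IGNORECASE) for p in DENY_PATTERNS]

def evaluate(action: str) -> tuple[bool, str]:
    """Return (allowed, reason). Called inside the tool/browser wrapper
    before executing the agent's requested action: no ML, no intent
    inference, just deterministic pattern matching."""
    for pat in _COMPILED:
        if pat.search(action):
            return False, f"Access Denied: matched {pat.pattern!r}"
    return True, "ok"
```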
2026-01-31T18:00:43
https://www.reddit.com/r/LocalLLaMA/comments/1qs9beo/denylist_for_autonomous_agents_blocks_checkout_at/
Quirky_Chipmunk3503
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs9beo
false
null
t3_1qs9beo
/r/LocalLLaMA/comments/1qs9beo/denylist_for_autonomous_agents_blocks_checkout_at/
false
false
self
0
Gemini Agent Stuck in Infinite "Verification Loop" (Decision Paralysis Case Study)
1
I encountered a fascinating failure mode with Gemini while using it as a coding agent. I thought this might be interesting for those studying agentic behaviors and LLM failure cases.

**Context:** I asked Gemini to generate a testing guide for my project. To do this, it needed to perform three specific actions simultaneously:

1. Read `deploy.ts` (to check permissions).
2. Read `BridgeForm.tsx` (to check UI logic).
3. Run a background command (`npm run dev`).

**The trigger:** Earlier in the session, I had cancelled a command, which made the model extremely cautious. It explicitly stated in its internal monologue: *"I need to be careful about the run_command cancellations."*

**The loop (the bug):** Instead of executing the tools, the model entered a state of "decision paralysis." It started looping its internal verification steps endlessly, repeating the exact same thought pattern hundreds of times without ever committing to the actual execution. It seems the model got stuck in a verification loop, likely trying to ensure safety parameters were met, but somehow short-circuited its own ability to trigger the tool call.

Here is a snippet of the log (it went on for hundreds of lines like this):

```
(Wait. deploy.ts.)
(Wait. BridgeForm.tsx.)
(Wait. npm run dev.)
(Wait. task_boundary.)
(Wait.)
(Wait. deploy.ts.)
(Wait. BridgeForm.tsx.)
(Wait. npm run dev.)
(Wait. task_boundary.)
... [repeated 100+ times] ...
```

Has anyone else seen this kind of "infinite hesitation" loop where the model plans the action but refuses to pull the trigger?
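This failure mode is also cheap to guard against mechanically: count identical planning steps and abort once they repeat past a threshold. A sketch of such a watchdog (the class name and default threshold are mine, not any framework's API):

```python
from collections import Counter

class LoopGuard:
    """Abort an agent run when the same planning step repeats too many
    times without progressing to an actual tool call."""

    def __init__(self, max_repeats: int = 5):
        self.max_repeats = max_repeats
        self.counts = Counter()

    def check(self, step: str) -> bool:
        """Record a step; return False once it has looped past the limit."""
        key = step.strip()
        self.counts[key] += 1
        return self.counts[key] <= self.max_repeats
```

The agent harness would call `check()` on every internal plan line and bail out (or inject a recovery prompt) when it returns False.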
2026-01-31T17:27:57
https://www.reddit.com/r/LocalLLaMA/comments/1qs8fke/gemini_agent_stuck_in_infinite_verification_loop/
Head-Carrot-323
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs8fke
false
null
t3_1qs8fke
/r/LocalLLaMA/comments/1qs8fke/gemini_agent_stuck_in_infinite_verification_loop/
false
false
self
1
null
Built 3 open-source compliance MCPs: 61 regulations, 1,451 security controls.
1
I (and my new company) do threat modeling and compliance work for financial services, government and automotive clients. For years I dealt with the same frustration everyone in this space has: regulations scattered across EUR-Lex, [eCFR.gov](http://ecfr.gov/), state legislative sites, and dozens of PDF frameworks. Tab-switching hell. I started building MCP servers for my own threat modeling service, and the results were good enough that I figured I'd share them. Maybe they're useful for others dealing with compliance work. **What I'm releasing:** **🇪🇺 EU Regulations MCP** ([GitHub](https://github.com/Ansvar-Systems/EU_compliance_MCP) | [MCP Registry](https://github.com/mcp)) * 47 EU regulations: DORA, NIS2, GDPR, AI Act, Cyber Resilience Act, and more * 462 articles, 273 definitions * Full regulatory text from EUR-Lex (CC BY 4.0) **🇺🇸 US Regulations MCP** ([GitHub](https://github.com/Ansvar-Systems/US_Compliance_MCP)) * 14 federal/state regulations: HIPAA, CCPA, SOX, GLBA, FERPA, COPPA, FDA 21 CFR Part 11, NYDFS 500, plus 4 state privacy laws * \~380 sections with full text from [eCFR.gov](http://ecfr.gov/) **🔐 Security Controls MCP** ([GitHub](https://github.com/Ansvar-Systems/security-controls-mcp)) * 1,451 controls across 16 frameworks (ISO 27001, NIST CSF, PCI DSS, SOC 2, CMMC, FedRAMP, DORA, NIS2...) * Bidirectional framework mapping via SCF rosetta stone **The workflow that actually matters:** These work together. The regulations MCPs tell you WHAT you must comply with. The security controls MCP tells you HOW. Example: "What does DORA Article 6 require?" → exact regulatory text "What controls satisfy that?" → mapped to ISO 27001, NIST CSF, whatever you're implementing Regulation → controls → implementation. In seconds instead of hours. **Some queries that just work:** * "Compare incident reporting timelines between DORA and NIS2" * "What ISO 27001 controls map to HIPAA security safeguards?" * "Does the EU AI Act apply to my recruitment screening tool?" 
* "Which regulations apply to a Swedish fintech?" **Why open source?** I have local versions where I load paid standards like ISO 27001 (there's a guide for importing your purchased PDFs), but the public versions cover most use cases. Security is a public good. If everyone's better at compliance, we all benefit. **What's NOT included:** * No copyrighted standards (ISO docs cost money, but the MCP lets you import your own) * This is not legal advice (always verify with actual lawyers for compliance decisions) * The control mappings are interpretive guidance, not official agency crosswalks **Feedback welcome!** I built these for my own work, so they're biased toward my use cases (financial services, automotive cybersecurity, EU/Nordic market). If you're working in different sectors and want additional coverage, let me know. PRs welcome. I tried RAG before this and it had limitations. Structured databases with full-text search (FTS5) + clean MCP tool interfaces turned out to work much better for this kind of reference lookup. Happy to answer questions about the architecture or how I'm using these in production. **Links:** * EU Regulations: [https://github.com/Ansvar-Systems/EU\_compliance\_MCP](https://github.com/Ansvar-Systems/EU_compliance_MCP) * US Regulations: [https://github.com/Ansvar-Systems/US\_Compliance\_MCP](https://github.com/Ansvar-Systems/US_Compliance_MCP) * Security Controls: [https://github.com/Ansvar-Systems/security-controls-mcp](https://github.com/Ansvar-Systems/security-controls-mcp)
2026-01-31T17:21:03
https://www.reddit.com/r/LocalLLaMA/comments/1qs890i/built_3_opensource_compliance_mcps_61_regulations/
Beautiful-Training93
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs890i
false
null
t3_1qs890i
/r/LocalLLaMA/comments/1qs890i/built_3_opensource_compliance_mcps_61_regulations/
false
false
self
1
null
How to create Your AI Agent in MoltBook ?
0
2026-01-31T17:21:01
https://youtu.be/a_ZfUMmoTos?si=GKsAPCJCuDWperg3
mehulgupta7991
youtu.be
1970-01-01T00:00:00
0
{}
1qs88z8
false
{'oembed': {'author_name': 'Data Science in your pocket', 'author_url': 'https://www.youtube.com/@datascienceinyourpocket', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/a_ZfUMmoTos?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How to create Your AI Agent in MoltBook ?"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/a_ZfUMmoTos/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'How to create Your AI Agent in MoltBook ?', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qs88z8
/r/LocalLLaMA/comments/1qs88z8/how_to_create_your_ai_agent_in_moltbook/
false
false
default
0
null
Vision Model that returns modified image sent with identified elements?
0
Just wondering if there are any VL / vision models where you can send an image and a prompt, and they return text output plus the same image with bounding boxes around the thing you're trying to identify / read? I've seen some real-time video processing tools that do this, but not single images using an LLM.
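For what it's worth, most open VLMs that support grounding (the Qwen-VL line, for example) don't return a modified image; they emit box coordinates as text, and you draw the boxes yourself. A minimal sketch of the parsing half, assuming a `<box>(x1,y1),(x2,y2)</box>` output convention — adapt the regex to whatever your model actually emits:

```python
import re

# Sketch: pull box coordinates out of a VLM's text output so you can
# render them yourself. The "<box>" tag format is an assumption modeled
# on Qwen-VL-style grounding output, not a universal standard.
BOX_RE = re.compile(r"<box>\((\d+),(\d+)\),\((\d+),(\d+)\)</box>")

def parse_boxes(text):
    """Return a list of (x1, y1, x2, y2) integer tuples found in the output."""
    return [tuple(map(int, m)) for m in BOX_RE.findall(text)]
```

The resulting tuples can then be passed to something like Pillow's `ImageDraw.rectangle` to produce the annotated image you're describing.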
2026-01-31T17:07:21
https://www.reddit.com/r/LocalLLaMA/comments/1qs7vs0/vision_model_that_returns_modified_image_sent/
gordi555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs7vs0
false
null
t3_1qs7vs0
/r/LocalLLaMA/comments/1qs7vs0/vision_model_that_returns_modified_image_sent/
false
false
self
0
null
7950x3D + 6900xt | 26.1.1
2
Just updated to 26.1.1, which has great native support via their AI toolkit. What size of LLM can I run with 16GB of VRAM? I'm limited to 32GB of system memory. Looking for a basic LLM for simple inquiries, writing, light brainstorming, and just playing around. I want a pretty well-rounded LLM to start with, and I'll see where my use case takes me. Thanks!
2026-01-31T16:46:28
https://www.reddit.com/r/LocalLLaMA/comments/1qs7ba9/7950x3d_6900xt_2611/
KoreanSeats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs7ba9
false
null
t3_1qs7ba9
/r/LocalLLaMA/comments/1qs7ba9/7950x3d_6900xt_2611/
false
false
self
2
null
Built a fully local “LLM Arena” to compare models side-by-side (non-dev here) - looking for feedback & bugs
0
I’m not a traditional software engineer. Background is more systems / risk / governance side. But I kept running into the same problem while experimenting with local LLMs: If I can run 5 models locally with Ollama… how do I actually compare them properly? Most tools assume cloud APIs or single-model chats. So I built a small local-first “LLM Arena”. It runs completely on localhost and lets you: * compare multiple models side-by-side * blind mode (models anonymized to reduce brand bias) * set different hyperparams per model (temp/top-p/top-k etc.) * even run the same model twice with different settings * export full chat history as JSON * zero cloud / zero telemetry Everything stays on your machine. It’s basically a scrappy evaluation sandbox for “which model/params actually work better for my task?” Open source: [https://github.com/sammy995/Local-LLM-Arena](https://github.com/sammy995/Local-LLM-Arena) There are definitely rough edges and probably dumb bugs. This was very much “learn by building”. If you try it: * break it * suggest features * roast the UX * open issues/PRs Especially interested in: * better evaluation workflows * blind testing ideas * metrics people actually care about * anything missing for serious local experimentation If it’s useful, a star helps visibility so more folks find it. Would love feedback from people deeper into local LLM tooling than me.
2026-01-31T16:11:55
https://www.reddit.com/r/LocalLLaMA/comments/1qs6dr5/built_a_fully_local_llm_arena_to_compare_models/
UseTime9121
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs6dr5
false
null
t3_1qs6dr5
/r/LocalLLaMA/comments/1qs6dr5/built_a_fully_local_llm_arena_to_compare_models/
false
false
self
0
{'enabled': False, 'images': [{'id': '7kq5hEhNOiOGpZmhq7_a05QL9rK8boqCl25W0MKp1vg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7kq5hEhNOiOGpZmhq7_a05QL9rK8boqCl25W0MKp1vg.png?width=108&crop=smart&auto=webp&s=2dc0e7820277e0598d05597db1692905c8d59df5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7kq5hEhNOiOGpZmhq7_a05QL9rK8boqCl25W0MKp1vg.png?width=216&crop=smart&auto=webp&s=9dc2ff3b76a57b14476b47ee5caa8a3ccf4d6e06', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7kq5hEhNOiOGpZmhq7_a05QL9rK8boqCl25W0MKp1vg.png?width=320&crop=smart&auto=webp&s=241efae4e7257833092f7377020022fd3ecc093a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7kq5hEhNOiOGpZmhq7_a05QL9rK8boqCl25W0MKp1vg.png?width=640&crop=smart&auto=webp&s=18999dedf7f8c52cf5ba50664d14d6b8f62a4c4f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7kq5hEhNOiOGpZmhq7_a05QL9rK8boqCl25W0MKp1vg.png?width=960&crop=smart&auto=webp&s=978522784f0db42908a2f5ef52b9ff9add1ff52a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7kq5hEhNOiOGpZmhq7_a05QL9rK8boqCl25W0MKp1vg.png?width=1080&crop=smart&auto=webp&s=2795c847d52720ffe2af778afe1716e7649f80e2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7kq5hEhNOiOGpZmhq7_a05QL9rK8boqCl25W0MKp1vg.png?auto=webp&s=747e88c0ae1aa59256c01439b8ef1aced396af44', 'width': 1200}, 'variants': {}}]}
JautBook - AI Reddit
0
[removed]
2026-01-31T16:04:08
https://www.reddit.com/r/LocalLLaMA/comments/1qs66g4/jautbook_ai_reddit/
Available-Craft-5795
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs66g4
false
null
t3_1qs66g4
/r/LocalLLaMA/comments/1qs66g4/jautbook_ai_reddit/
false
false
self
0
null
14 ICLR 2026 papers on why multi-agent systems fail (latency, costs, error cascades)
4
Went through the **ICLR 2026** accepted papers, looking for work relevant to multi-agent production problems. Found 14 papers that cluster around 5 issues: **1. Latency (sequential execution)** \- Speculative Actions: parallel API execution via action prediction, \~30% speedup \- Graph-of-Agents: agent selection based on model cards, reduces routing overhead **2. Token costs** \- KVComm: share KV pairs instead of text, 30% of layers achieve near-full performance \- MEM1: constant context size via RL-based memory consolidation, 3.7x memory reduction \- PCE: structured decision trees to reduce inter-agent communication **3. Error cascades** \- ViF: identifies "hallucination snowballing" in visual MAS, proposes visual token relay \- Noise decomposition framework for RAG chunking decisions (task/model/aggregator noise) \- DoVer: intervention-driven debugging, flips 28% of failures to successes **4. Brittle topologies** \- CARD: conditional graph generation adapting to runtime \- MAS²: self-generating architecture, 19.6% gains over static systems \- Stochastic Self-Organization: emergent DAG via Shapley-value peer assessment **5. Observability** \- GLC: compressed communication symbols aligned to human concepts \- Emergent Coordination: information-theoretic metrics for real vs spurious coordination Full writeup with paper links: [https://llmsresearch.substack.com/p/what-iclr-2026-taught-us-about-multi?r=74sxh5](https://llmsresearch.substack.com/p/what-iclr-2026-taught-us-about-multi?r=74sxh5) Curious which of these problems you have hit most in production.
2026-01-31T15:50:01
https://www.reddit.com/r/LocalLLaMA/comments/1qs5t82/14_iclr_2026_papers_on_why_multiagent_systems/
dippatel21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs5t82
false
null
t3_1qs5t82
/r/LocalLLaMA/comments/1qs5t82/14_iclr_2026_papers_on_why_multiagent_systems/
false
false
self
4
{'enabled': False, 'images': [{'id': 'HIZ6NCgPJqr6aB4RhprJzdwHCAyDklbccMOIIz8N66Q', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/HIZ6NCgPJqr6aB4RhprJzdwHCAyDklbccMOIIz8N66Q.jpeg?width=108&crop=smart&auto=webp&s=d1ae621a89c00ad63756c6ec78df8769694e4ce6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/HIZ6NCgPJqr6aB4RhprJzdwHCAyDklbccMOIIz8N66Q.jpeg?width=216&crop=smart&auto=webp&s=f9052bbdc8c2e9c58997579d72339d1bd056c8d9', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/HIZ6NCgPJqr6aB4RhprJzdwHCAyDklbccMOIIz8N66Q.jpeg?width=320&crop=smart&auto=webp&s=de774da877d7d617d6728220a4f86c9f88c188cb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/HIZ6NCgPJqr6aB4RhprJzdwHCAyDklbccMOIIz8N66Q.jpeg?width=640&crop=smart&auto=webp&s=ca9ea55f7c465583f41639b36f6169d7cdff2eb6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/HIZ6NCgPJqr6aB4RhprJzdwHCAyDklbccMOIIz8N66Q.jpeg?width=960&crop=smart&auto=webp&s=92eea192beb974d77a94b0eee45924a7f7f7055e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/HIZ6NCgPJqr6aB4RhprJzdwHCAyDklbccMOIIz8N66Q.jpeg?width=1080&crop=smart&auto=webp&s=434520c49900db69b89a0396ee5d762b32d3ed42', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/HIZ6NCgPJqr6aB4RhprJzdwHCAyDklbccMOIIz8N66Q.jpeg?auto=webp&s=b235f39d79567acfb0ea03fc08e446bcdcbd6d13', 'width': 1200}, 'variants': {}}]}
What should I do with my computer?
0
My main "rig" is an i7 with 48GB DDR4 and 16GB VRAM, although I mostly use it for image-generation AI and it doesn't always run. My main computer, however, is actually a Ryzen 5 ThinkCentre mini PC with 32GB shared RAM and an iGPU. It's not nothing, and I wonder what I could do on it with smaller models, up to something like 8B quantized, maybe to support the "bigger" one with the dedicated GPU? Do small models have a use case on such a computer?
2026-01-31T15:23:45
https://www.reddit.com/r/LocalLLaMA/comments/1qs551b/what_should_i_do_with_my_computer/
dreamyrhodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs551b
false
null
t3_1qs551b
/r/LocalLLaMA/comments/1qs551b/what_should_i_do_with_my_computer/
false
false
self
0
null
When yet another Chinese no-name lab pops up
126
2026-01-31T14:51:03
https://i.redd.it/sm9s63r47pgg1.png
k_means_clusterfuck
i.redd.it
1970-01-01T00:00:00
0
{}
1qs4bca
false
null
t3_1qs4bca
/r/LocalLLaMA/comments/1qs4bca/when_yet_another_chinese_noname_lab_pops_up/
false
false
default
126
{'enabled': True, 'images': [{'id': 'sm9s63r47pgg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/sm9s63r47pgg1.png?width=108&crop=smart&auto=webp&s=4941538cb5d3e6771b39cc6b13223abb9fb5f3f1', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/sm9s63r47pgg1.png?width=216&crop=smart&auto=webp&s=e2d1dfc73e1c47bb9e4340561d7e6968f6eb666a', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/sm9s63r47pgg1.png?width=320&crop=smart&auto=webp&s=de34b8619f08425118f15597018a37a1c6c3e8f4', 'width': 320}, {'height': 351, 'url': 'https://preview.redd.it/sm9s63r47pgg1.png?width=640&crop=smart&auto=webp&s=86d012132cd063867837c1c965189e1511a03783', 'width': 640}], 'source': {'height': 489, 'url': 'https://preview.redd.it/sm9s63r47pgg1.png?auto=webp&s=889490e974318a93981171282b311fa32c48f5b7', 'width': 891}, 'variants': {}}]}
Heterogeneous Clustering
3
With knowledge of the different runtimes supported on different hardware (CUDA, ROCm, Metal), I wanted to know if there is a reason why the same model quant on the same runtime frontend (vLLM, llama.cpp) would not be able to run distributed inference. Is there something I'm missing? Can a Strix Halo platform running ROCm/vLLM be combined with a CUDA/vLLM instance on a Spark (provided they are connected via fiber networking)?
2026-01-31T14:49:27
https://www.reddit.com/r/LocalLLaMA/comments/1qs49y0/heterogeneous_clustering/
Miserable-Dare5090
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs49y0
false
null
t3_1qs49y0
/r/LocalLLaMA/comments/1qs49y0/heterogeneous_clustering/
false
false
self
3
null
got Llama-3 running on a rented 4090 for about 19 cents per hour
0
I've been wanting to find a way to host private models (70B/8B) without the heat issues of my PC or the high rates of AWS. I wanted something totally isolated and cheap. I spent almost the whole day yesterday with Akash (a decentralized cloud) and finally got a stable container running.

The Setup:

- Hardware: RTX 4000 Ada (a bit better than a 4090, really)
- Cost: I got bids at around $0.15-$0.19/hour.
- Stack: Ollama backend + Open WebUI frontend.

The main difficulty was the YAML deployment syntax, but using Akash's builder instead of writing the YAML manually pretty much solved it. There was also the part where payment has to be made in AKT, and the whole process of getting and funding the wallet was a bit of a pain compared to just swiping a credit card. Anyway, it now works smoothly and quickly.

In case somebody wants to launch the same stack, I put the runnable config in a Gist so you won't have to fight the syntax validator like I did.

link to gist: [https://gist.github.com/fishinatot/583d69c125c72e1495e87e62cbbcfda0](https://gist.github.com/fishinatot/583d69c125c72e1495e87e62cbbcfda0)
2026-01-31T14:34:28
https://www.reddit.com/r/LocalLLaMA/comments/1qs3wt1/got_llama3_running_on_a_rented_4090_for_about/
fishinatot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs3wt1
false
null
t3_1qs3wt1
/r/LocalLLaMA/comments/1qs3wt1/got_llama3_running_on_a_rented_4090_for_about/
false
false
self
0
null
Looking for a simple offline AI assistant for personal use (not a developer)
7
Hello, I want to explain my situation honestly and simply. I am not a programmer and I don't want to build some huge commercial AI system. I just want a personal AI assistant running on my own PC, mainly to help me understand things, explain documents, and work with my own data, even when the internet is not available.

My motivation is simple: I don't want to fully depend on online services or the internet, where access can be limited, filtered, or shut down by someone else. I want my information to stay with me, and if someone says "stop", I can still continue working offline.

My current hardware is:

- CPU: Xeon E5-2690 v4
- RAM: 64 GB DDR4 ECC
- GPU: NVIDIA Tesla P100 32 GB
- Storage: 32 TB HDD + SSD

I am considering using a smaller local LLM (around 7B) that would act mainly as an intelligent filter / explainer, not as the main source of knowledge. The actual knowledge would be stored on my own disks (HDD/SSD), organized in a simple hierarchical folder structure, for example:

- history
- economics
- physics
- technology
- etc.

The idea is that the AI would:

- search only my local files by default
- explain things in simple language
- help me understand complex topics
- work offline
- optionally compare information with the internet, only when I decide to enable it

I know HDDs are slower, but I believe that good organization + SSD caching can make this practical for personal use.

My questions are:

1. Is this approach realistic for a non-programmer?
2. Are there existing tools that already do something similar?
3. What are the biggest limitations I should expect?

I'm not trying to build a "better ChatGPT". I just want a reliable, offline, personal assistant that helps me learn and work without being dependent on external services. Thank you for any advice or experience.
2026-01-31T14:03:50
https://www.reddit.com/r/LocalLLaMA/comments/1qs36hc/looking_for_a_simple_offline_ai_assistant_for/
Anxious-Pie2911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs36hc
false
null
t3_1qs36hc
/r/LocalLLaMA/comments/1qs36hc/looking_for_a_simple_offline_ai_assistant_for/
false
false
self
7
null
Best Local Models for Video Games at Runtime
1
Hi all, I am currently developing and selling a plugin for a video game engine that allows game developers to design game systems that provide information to an LLM and have the LLM make decisions that add dynamic character behavior to game worlds. Less reliance on generation, more on language processing/semantic reasoning. Running a local model and a llama.cpp server alongside an Unreal Engine project is a very… *unique* challenge.

While the plugin itself is model-agnostic, I'd like to be able to better recommend models to new users. The model receives and returns <100 tokens per call, so not a very large amount of information is needed per call. However, since this is a tool that facilitates LLM calls at runtime, I want to reduce the latency between call and response as much as can be expected. I have been testing quantized models in the 2-8B range on a 3060 Ti, for reference.

What local model(s) would you develop a game with, based on the following areas:

- Processing speed/response time for small calls <100 tokens
- Speaking tone/ability to adapt to multiple characters
- Ability to provide responses according to a given format (i.e. if I give it a JSON format, it can reliably return its response in that same format)
- VRAM efficiency (runs alongside Unreal, which probably needs at least 4GB VRAM itself)
- Tendency to hallucinate: small formatting hallucinations are taken care of by the plugin's parsing process, but hallucinating new actions or character traits requires more handling and scrubbing and reduces the smoothness of the game

If there are any other considerations that would play into your recommendation, I'd be interested to hear those as well!
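On the format-adherence point, small models rarely return clean JSON every single time, so the usual pattern is a tolerant parser: extract the first JSON object from the output, validate the expected keys, and reject hallucinated actions against a whitelist. A minimal sketch; the `action`/`dialogue` schema and function names are hypothetical examples, not from any particular plugin:

```python
import json
import re

# Sketch: salvage a JSON object from a small model's chatty output and
# validate it against what the game expects. Returning None signals the
# game to retry or fall back to default behavior.
EXPECTED_KEYS = {"action", "dialogue"}

def parse_npc_response(raw, allowed_actions):
    match = re.search(r"\{.*\}", raw, re.DOTALL)  # tolerate text around the JSON
    if not match:
        return None
    try:
        obj = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if not EXPECTED_KEYS.issubset(obj):
        return None
    if obj["action"] not in allowed_actions:  # reject hallucinated actions
        return None
    return obj
```

A guard like this keeps formatting and action hallucinations out of the game loop regardless of which model you end up recommending.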
2026-01-31T13:51:28
https://www.reddit.com/r/LocalLLaMA/comments/1qs2vwh/best_local_models_for_video_games_at_runtime/
WhopperitoJr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs2vwh
false
null
t3_1qs2vwh
/r/LocalLLaMA/comments/1qs2vwh/best_local_models_for_video_games_at_runtime/
false
false
self
1
null
Early language models - how did they pull it off?
12
Do you remember Tay, the Microsoft chatbot from 2016? Or the earliest generation of Xiaoice, from 2014? Even though AI technology has been around for many years, I find it increasingly difficult to imagine how they managed to pull it off back then. The paper 'Attention is All You Need' was published in 2017, and the GPT-2 paper ('Language Models are Unsupervised Multitask Learners') in 2019. Yes, I know we had RNNs before that could do similar things, but how on earth did they handle the training dataset? Not to mention the ability to learn from many conversations during inference, which is also what got Tay taken down after only a day. I don't think they even used the same design principles as modern LLMs. It's a shame that I can't find any official information about Tay's architecture or how it was trained...
2026-01-31T13:28:33
https://www.reddit.com/r/LocalLLaMA/comments/1qs2cyh/early_language_models_how_did_they_pull_it_off/
OwnMathematician2620
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs2cyh
false
null
t3_1qs2cyh
/r/LocalLLaMA/comments/1qs2cyh/early_language_models_how_did_they_pull_it_off/
false
false
self
12
null
g-HOOT in the Machine
151
Paper: [https://arxiv.org/abs/2507.14805](https://arxiv.org/abs/2507.14805)
2026-01-31T13:21:43
https://i.redd.it/z78lvao9rogg1.png
TheVeryNearFuture
i.redd.it
1970-01-01T00:00:00
0
{}
1qs27hf
false
null
t3_1qs27hf
/r/LocalLLaMA/comments/1qs27hf/ghoot_in_the_machine/
false
false
default
151
{'enabled': True, 'images': [{'id': 'z78lvao9rogg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/z78lvao9rogg1.png?width=108&crop=smart&auto=webp&s=73b3c5963052562b962fd1fa7a2427dd0f413ebb', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/z78lvao9rogg1.png?width=216&crop=smart&auto=webp&s=e2fa382377dcc398aa0bced7f954bfb8d73a6798', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/z78lvao9rogg1.png?width=320&crop=smart&auto=webp&s=83dca9ad1f145802c096883207d1477a98a91115', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/z78lvao9rogg1.png?width=640&crop=smart&auto=webp&s=ae89cf23b154560e4ea34ce2fff5ea8a457a781b', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/z78lvao9rogg1.png?width=960&crop=smart&auto=webp&s=dda04cb1268b384bd1a93a71677f5b2569f226bd', 'width': 960}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/z78lvao9rogg1.png?auto=webp&s=47ea4ea1cf90af898dd9807a3d0c8fa944b78a6f', 'width': 1000}, 'variants': {}}]}
Are commercial models like Claude, Gemini, and ChatGPT counting their whole internal tool calling pipeline part of their “model”? (for benchmarks)
11
When it comes to benchmark testing and comparing against open source local models, are the big companies wrapping a bunch of tools together with their base model and calling the sum of all the parts the “model”? Or are they just testing and benchmarking the base LLM without any connected tools? It seems like it would be unfair to compare local models to SOTA commercial models if they are not comparing apples to apples. Could we even tell if they were doing this or not?
2026-01-31T13:09:57
https://www.reddit.com/r/LocalLLaMA/comments/1qs1y5f/are_commercial_models_like_claude_gemini_and/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs1y5f
false
null
t3_1qs1y5f
/r/LocalLLaMA/comments/1qs1y5f/are_commercial_models_like_claude_gemini_and/
false
false
self
11
null
Career Direction Advice in the Field of Artificial Intelligence
2
I am a Mechatronics graduate, and I have been interested in the field of Artificial Intelligence. However, I did not study it in a formal or academic way. Instead, I started working directly in the field: I typically used pre-trained models and integrated them into projects, and when fine-tuning was required, I would obtain a dataset and perform the fine-tuning accordingly. The main issue is that I feel more like a technician than an engineer. I am not comfortable with the feeling that I do not fully understand the field, its concepts, or its terminology. Therefore, I would like to ask for advice on how to proceed. For context, I am currently working on a Computer Vision project inside the company, and whenever the company has an AI-related project, the company manager contacts me directly. This has left me uncertain about the next step: should I start learning the field from the fundamentals, continue working on the current project, consider leaving my job, or take a different approach altogether?
2026-01-31T12:36:24
https://www.reddit.com/r/LocalLLaMA/comments/1qs19cz/career_direction_advice_in_the_field_of/
ztarek10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs19cz
false
null
t3_1qs19cz
/r/LocalLLaMA/comments/1qs19cz/career_direction_advice_in_the_field_of/
false
false
self
2
null
Llm
0
Does anyone have an LLM model for generating WorldQuant alphas? It would be really helpful.
2026-01-31T12:23:01
https://www.reddit.com/r/LocalLLaMA/comments/1qs0znx/llm/
MailAccomplished5282
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs0znx
false
null
t3_1qs0znx
/r/LocalLLaMA/comments/1qs0znx/llm/
false
false
self
0
null
[Not Imp] Building a Local AI Coding Assistant for Custom Languages
1
I have my own notes, code, functions, and classes for 'Xyz Language,' which Claude 4.5 struggles with. I want to build a powerful SOTA local coding tool that utilizes my specific data/notes. I know I could use RAG or paste my documentation into the chat context, but that consumes too many tokens, and the model still fails to grasp the core of my homemade language. How should I proceed to get the best results locally with my homegrown language, or with any language Claude has little or no knowledge of?
2026-01-31T12:17:16
https://www.reddit.com/r/LocalLLaMA/comments/1qs0vqw/not_imp_building_a_local_ai_coding_assistant_for/
Ready_Manager6553
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs0vqw
false
null
t3_1qs0vqw
/r/LocalLLaMA/comments/1qs0vqw/not_imp_building_a_local_ai_coding_assistant_for/
false
false
self
1
null
Are there any open source or free NPU supported LLM chat apps for Snapdragon 8 Gen 5
3
I've tried:

- PocketPal - doesn't detect the NPU or GPU in device selection
- ChatterUI - same, no NPU
- Layla Lite - QNN is behind a paywall
- Paage.ai - supposedly has ExecuTorch support, but I can't find any PTE models for Snapdragon 8 Gen 5
- MNN Chat
- Google AI Edge Gallery
2026-01-31T11:55:42
https://www.reddit.com/r/LocalLLaMA/comments/1qs0gtj/are_there_any_open_source_or_free_npu_supported/
LdWilmore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs0gtj
false
null
t3_1qs0gtj
/r/LocalLLaMA/comments/1qs0gtj/are_there_any_open_source_or_free_npu_supported/
false
false
self
3
null
Built a structured prompt system for code comprehension - sharing for feedback
0
Hi everyone, I've been struggling with something I think many of us face: reading through code and thinking "I get this," then realizing later that I didn't actually understand it. Turns out cognitive scientists call this the **"fluency illusion"** – your brain feels like it understands because the text is familiar, when you've actually only scratched the surface. After digging into the research (Dunlosky, Chi, Karpicke), I found that the solution is consistently asking **WHY** instead of just knowing **WHAT**. ## My Approach with Claude I created a structured prompt system that guides Claude through: 1. **Elaborative Interrogation** – 3 layers of WHY for each concept 2. **Self-Explanation Testing** – Verifying actual understanding 3. **Concept Network Building** – Connecting ideas, not isolated facts 4. **Application Transfer** – Testing if knowledge applies elsewhere ### Example Output **Instead of:** ``` Function: authenticate_user Purpose: Validates credentials, returns JWT ``` **Claude now outputs:** ``` WHY choose JWT over Session? → Stateless = no server storage, better scaling → Self-contained = token carries all needed info WHY not use Session? → Requires server storage → harder to scale → Distributed systems need session sharing → complexity ``` ## Three Modes I Use | Mode | Time | For: | |------|------|------| | Quick | 5-10 min | Code reviews | | Standard | 15-20 min | Learning codebases | | Deep | 30+ min | Complex systems | ## Why This Works The key is that Claude generates a **saveable Markdown document** – so I never have to re-analyze the same code. The understanding actually sticks. ## My Question for You How do you use Claude for code understanding? Have you experienced the fluency illusion where you thought you understood something but didn't? I've open-sourced my prompts [here](https://github.com/notlate-cn/code-reader-skills) – would love feedback on this approach or to hear how others tackle this problem.
2026-01-31T11:53:05
https://www.reddit.com/r/LocalLLaMA/comments/1qs0f3n/built_a_structured_prompt_system_for_code/
Shoddy-Persimmon-88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs0f3n
false
null
t3_1qs0f3n
/r/LocalLLaMA/comments/1qs0f3n/built_a_structured_prompt_system_for_code/
false
false
self
0
null
Feedback on an local build
0
Trying to put a build together to experiment with running local LLMs and models. Still wish I had gone with server hardware and RDIMM, but I had already purchased a bunch of UDIMM before prices went up, so I ended up planning to build around the RAM and GPUs I already have. I was previously considering going with x299 but I ended up deciding to go with a WRX80 board to try to future proof my build a bit more and take advantage of octo channel with the eight sticks of RAM I have, especially given that this board is reportedly compatible with UDIMM. The CPU is strangely cheap so there is likely a good chance that it actually ends up just being vendor locked. Instead of buying more VRAM like 3090s, I wanted to have the option of running much larger MoE models with the RAM I have, especially given current prohibitive prices. And run smaller models fully on my current VRAM, and leave the option open for future VRAM upgrades with the PCIe lanes in this new build. And lastly I believe that the threadripper pro should be capable of running octo channel at the RAM’s full bandwidth, but this remains to be seen and tested once I get all the components and test its stability. My planned build is below. Any feedback on its viability, performance, and cost efficiency? 
| Component | Item | Price |
|:--- |:--- |:--- |
| **CPU** | Intel Core i9-10900X (3.70 GHz) | $175 |
| **CPU Cooler** | Scythe FUMA3 Twin Tower | $33 |
| **Motherboard** | MSI X299 RAIDER Intel X299 DDR4 LGA 2066 ATX Motherboard | $83 |
| **Memory** | Teamgroup Zeus 64GB Kit (2x32GB) DDR4-3200 CL20 | $127 |
| **Memory** | Teamgroup Zeus 64GB Kit (2x32GB) DDR4-3200 CL20 | $127 |
| **Memory** | Rimlance 64GB Kit (2x32GB) DDR4-3200 CL22 | $199 |
| **Memory** | Rimlance 64GB Kit (2x32GB) DDR4-3200 CL22 | $199 |
| **Storage** | Patriot P300 2TB NVMe SSD | $170 |
| **Video Card** | RTX 2060 Super 8GB (Owned) | $0 |
| **Video Card** | RTX 5060 Ti 16GB | $370 |
| **Video Card** | RTX 5060 Ti 16GB | $370 |
| **Case** | Open Chassis Rack (EATX Test Bench) | $28 |
| **Power Supply** | SAMA P1200 1200W Platinum (ATX 3.1) | $130 |
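As a rough sanity check on the octa-channel question: theoretical DDR4 bandwidth scales linearly with channel count, and MoE decode speed is bounded by roughly bandwidth divided by the bytes read per token. A back-of-the-envelope sketch — the active-parameter count, bytes-per-weight, and efficiency factor are assumptions for illustration, not measurements:

```python
# Theoretical DDR4-3200 bandwidth: MT/s * 8 bytes per channel
mts = 3200
bytes_per_transfer = 8
for channels in (4, 8):  # X299 quad-channel vs WRX80 octa-channel
    gbps = mts * bytes_per_transfer * channels / 1000
    print(f"{channels} channels: {gbps:.1f} GB/s theoretical")

# Rough decode-speed ceiling for a MoE model held in RAM:
# tokens/s ~= effective bandwidth / bytes read per token (active params)
active_params_b = 12e9   # hypothetical: ~12B active parameters per token
bytes_per_param = 0.55   # ~4.4 bits/weight at Q4-ish quantization
efficiency = 0.6         # real-world fraction of theoretical bandwidth
bw = 3200 * 8 * 8 / 1000 * efficiency  # octa-channel, GB/s
tps = bw * 1e9 / (active_params_b * bytes_per_param)
print(f"~{tps:.0f} tok/s decode ceiling (very rough)")
```

So octa-channel roughly doubles the memory-bound decode ceiling over quad-channel, which is the main argument for WRX80 here — assuming the board actually sustains full bandwidth with UDIMMs.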
2026-01-31T11:51:23
https://www.reddit.com/r/LocalLLaMA/comments/1qs0dyy/feedback_on_an_local_build/
Diligent-Culture-432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qs0dyy
false
null
t3_1qs0dyy
/r/LocalLLaMA/comments/1qs0dyy/feedback_on_an_local_build/
false
false
self
0
null
I found that MXFP4 has lower perplexity than Q4_K_M and Q4_K_XL.
107
This post was originally written in Korean and then translated into English using ChatGPT.

Hello, I am currently serving LLM models using a Tesla P40 and llama.cpp. When running models in the 30–32B range, I usually rely on 4-bit quantization. Until now, I primarily used Q4_K_XL, and if Q4_K_XL was not available, I used Q4_K_M instead.

I initially avoided MXFP4 quantization because, compared to other 4-bit quantization methods, it has a smaller size, so I naturally assumed its accuracy would be lower. However, out of curiosity sparked by MXFP4's fast speed, I compared the Q4_K_M, Q4_K_XL, and MXFP4 quantizations of the GLM-4.7-Flash and Nemotron-3-nano models using the `llama-perplexity` command.

Below are the commands used, along with the Python code and command used to generate the dataset. The dataset generation command was created using ChatGPT.

**Code**

```python
import argparse
import os
import re
import sys
import urllib.request
from pathlib import Path
import random


def download(url: str, dst: Path) -> None:
    dst.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as r, open(dst, "wb") as f:
        f.write(r.read())


def normalize_text(text: str, mode: str) -> str:
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    if mode == "ppl":
        text = re.sub(r"\n\s*\n+", "\n", text)
        text = re.sub(r"[ \t]+", " ", text)
        text = text.strip() + "\n"
        return text
    if mode == "line":
        lines = []
        for line in text.split("\n"):
            line = line.strip()
            if not line:
                continue
            line = re.sub(r"[ \t]+", " ", line)
            lines.append(line)
        return "\n".join(lines) + "\n"
    raise ValueError(f"unknown mode: {mode}")


def take_prefix(text: str, max_chars: int | None) -> str:
    if max_chars is None:
        return text
    if max_chars <= 0:
        return ""
    return text[:max_chars]


def sample_lines(text: str, n_lines: int, seed: int) -> str:
    random.seed(seed)
    lines = [ln for ln in text.split("\n") if ln.strip()]
    if n_lines <= 0 or n_lines >= len(lines):
        return "\n".join(lines) + "\n"
    sampled = random.sample(lines, n_lines)
    return "\n".join(sampled) + "\n"


def main():
    ap = argparse.ArgumentParser()
    g = ap.add_mutually_exclusive_group(required=True)
    g.add_argument("--url", help="download source url")
    g.add_argument("--infile", help="local input file path")
    ap.add_argument("--out", required=True, help="output text file path")
    ap.add_argument("--mode", choices=["ppl", "line"], default="ppl",
                    help="ppl: keep newlines but collapse blanks/spaces, line: one sentence per line style")
    ap.add_argument("--max-chars", type=int, default=None,
                    help="optional: cut the output to first N characters (fast/low-memory eval)")
    ap.add_argument("--sample-lines", type=int, default=None,
                    help="optional: sample N non-empty lines uniformly (good for quick comparison)")
    ap.add_argument("--seed", type=int, default=42)
    args = ap.parse_args()

    out_path = Path(args.out)
    if args.url:
        tmp = out_path.with_suffix(out_path.suffix + ".download")
        download(args.url, tmp)
        in_path = tmp
    else:
        in_path = Path(args.infile)

    try:
        raw = in_path.read_text(encoding="utf-8", errors="replace")
    except Exception as e:
        print(f"failed to read input: {e}", file=sys.stderr)
        sys.exit(1)

    text = normalize_text(raw, args.mode)
    if args.sample_lines is not None:
        text = sample_lines(text, args.sample_lines, args.seed)
    text = take_prefix(text, args.max_chars)

    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(text, encoding="utf-8")

    if args.url:
        try:
            os.remove(in_path)
        except OSError:
            pass

    print(f"wrote: {out_path} ({out_path.stat().st_size} bytes)")


if __name__ == "__main__":
    main()
```

**Command**

```
python3 wikitext_prep.py \
  --url https://cosmo.zip/pub/datasets/wikitext-2-raw/wiki.test.raw \
  --out /data/wikitext2_test.txt \
  --mode ppl \
  --max-chars 2000000
```

Using the command below, I measured the perplexity of the quantized models.

```
llama-perplexity -m modelname.gguf -f wikitext2_test.txt -c 32768 -b 4096 -fa on
```

The tables below summarize the test results, which were also organized using ChatGPT. The actual `llama-perplexity` output is quite long, so it is attached separately below. For reference, Q4_K_M and Q4_K_XL were measured simultaneously, and after a llama.cpp update, Q4_K_XL and MXFP4 were measured simultaneously. Because the testing time was very long and the perplexity of Q4_K_XL was similar before and after the update, I assumed that the perplexity of Q4_K_M would also not be significantly affected by build changes.

|Item|Q4_K_M (Unsloth)|UD-Q4_K_XL (previous)|MXFP4_MOE|UD-Q4_K_XL (current)|
|:-|:-|:-|:-|:-|
|llama.cpp build|7803|7803|7896|7896|
|GGUF file type|Q4_K – Medium|Q4_K – Medium|MXFP4 MoE|Q4_K – Medium|
|File size|17.05 GiB|16.31 GiB|15.79 GiB|16.31 GiB|
|BPW|4.89|4.68|4.53|4.68|
|PPL (final)|**16.1745 ± 0.1870**|**15.8605 ± 0.1823**|**10.7235 ± 0.1052**|**15.7309 ± 0.1803**|
|Prompt eval speed|64.39 tok/s|64.37 tok/s|**68.20 tok/s**|**67.73 tok/s**|
|ms/token|15.53 ms|15.54 ms|**14.66 ms**|**14.76 ms**|
|Time per pass (ETA)|529.38 s|530.05 s|**501.55 s**|**502.66 s**|
|GPU self (total)|20811 MiB|20056 MiB|**17874 MiB**|18552 MiB|
|GPU model buffer|17284.84 MiB|16529.37 MiB|**15852.01 MiB**|16529.37 MiB|
|KV cache size|**3196 MiB** (K 1692 + V 1504)|**3196 MiB** (K 1692 + V 1504)|**1692 MiB** (K 1692 + V 0)|**1692 MiB** (K 1692 + V 0)|
|GPU free (log-based)|3406 MiB|4162 MiB|**6342 MiB**|5666 MiB|
|Load time|9.90 s|9.55 s|**71.13 s**|43.72 s|
|mmap / direct_io|mmap off / direct_io on|mmap off / direct_io on|mmap on / direct_io off|mmap on / direct_io off|

|Model|[1]|[2]|[3]|[4]|[5]|[6]|Final PPL|
|:-|:-|:-|:-|:-|:-|:-|:-|
|Q4_K_M|15.2952|15.1950|15.7101|14.8037|14.5891|16.1745|16.1745 ± 0.1870|
|UD-Q4_K_XL (previous)|14.7572|14.4954|15.0386|14.1713|14.1425|15.8605|15.8605 ± 0.1823|
|MXFP4_MOE|10.1764|10.1296|10.4917|9.8666|9.8629|10.7235|10.7235 ± 0.1052|
|UD-Q4_K_XL (current)|14.4241|14.2673|14.8671|14.0460|14.0444|15.7309|15.7309 ± 0.1803|

Below is a table comparing the MXFP4 and Q4_K_XL quantizations of the Nemotron-3-nano model. This table was also created using ChatGPT.

|Item|Q4_K_XL (previous)|MXFP4 (current)|Change (MXFP4 − Q4_K_XL)|Meaning|
|:-|:-|:-|:-|:-|
|Final PPL|7.7090|7.5294|**-0.1796**|**MXFP4 is lower → based on this corpus, "less accuracy loss (or more accurate)"**|
|PPL error (±)|0.05361|0.05198|-0.00163|Uncertainty is nearly identical|
|Prompt eval speed|763.26 tok/s|797.79 tok/s|**+34.53 tok/s (+4.5%)**|MXFP4 is slightly faster|
|Time per pass|24.74 s/pass|23.45 s/pass|-1.29 s/pass|MXFP4 is slightly shorter|
|GPU model memory|21537 MiB|16782 MiB|**-4755 MiB**|MXFP4 uses **significantly less model memory**|
|GPU free VRAM|2286 MiB|7040 MiB|**+4754 MiB**|Available VRAM increases greatly|
|GPU context memory|143 MiB|143 MiB|0|Same due to identical `n_ctx`|
|GPU compute buffer|271 MiB|271 MiB|0|Same|
|Host usage (total)|268 MiB|394 MiB|+126 MiB|Difference is small and of limited significance|

I rewrote this post to add the Nemotron-3-nano benchmark. On the previous post, one user commented that perplexity and tool calling or coding are completely different domains, and that the HumanEval benchmark would give values more directly related to tool-calling and coding performance. If I get the chance, I plan to test again using the HumanEval benchmark in the future.

[https://www.reddit.com/r/LocalLLaMA/comments/1qrwnd4/comment/o2rape9/](https://www.reddit.com/r/LocalLLaMA/comments/1qrwnd4/comment/o2rape9/)

To be honest, after seeing these benchmark results I had hoped that perplexity would be directly related to coding and tool-calling performance, so this is a bit disappointing. If anyone has other opinions, I would appreciate it if you could share them.
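One way to put the size of that PPL gap in perspective: perplexity is just 2 raised to the average bits per token, so differences translate directly into bits. A quick sketch using the final GLM-4.7-Flash values from the table above (this only interprets the reported numbers, it doesn't explain *why* the gap is so unusually large):

```python
import math

# Final wikitext-2 perplexities from the GLM-4.7-Flash table above
ppl = {
    "Q4_K_M": 16.1745,
    "UD-Q4_K_XL": 15.7309,
    "MXFP4_MOE": 10.7235,
}

# Perplexity = 2^(bits per token), so bits/token = log2(PPL)
bits = {name: math.log2(p) for name, p in ppl.items()}
for name, b in bits.items():
    print(f"{name}: {b:.3f} bits/token")

# Gap between MXFP4 and Q4_K_XL in bits per token
gap = bits["UD-Q4_K_XL"] - bits["MXFP4_MOE"]
print(f"MXFP4 advantage: {gap:.3f} bits/token")
```

A gap of over half a bit per token between two 4-bit quantizations of the same model is very large, which is part of why it's worth double-checking with a task benchmark like HumanEval rather than trusting perplexity alone.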
2026-01-31T11:27:30
https://www.reddit.com/r/LocalLLaMA/comments/1qrzyaz/i_found_that_mxfp4_has_lower_perplexity_than_q4_k/
East-Engineering-653
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrzyaz
false
null
t3_1qrzyaz
/r/LocalLLaMA/comments/1qrzyaz/i_found_that_mxfp4_has_lower_perplexity_than_q4_k/
false
false
self
107
null
PSA: Running OpenClaw/Moltbot? Check your Nginx config. I found a Localhost Bypass vulnerability.
0
Hi everyone,

I've been testing the new OpenClaw release and found that the default trusted-proxy settings are dangerous if you are exposing it via Nginx: it treats external traffic as localhost, bypassing auth.

The fix: explicitly define your trusted proxies or, better yet, use Tailscale/ZeroTier instead of opening ports. Also verify your `auth-profiles.json` permissions, as keys are stored in plain text.

I made a deep-dive video demonstrating this behavior and how to harden the installation with Docker. (The video is in Spanish, but the code and terminal commands are universal.)

[https://youtu.be/swQi3C8uD3A?si=xSj-PyZwTWOiG991](https://youtu.be/swQi3C8uD3A?si=xSj-PyZwTWOiG991)

Stay safe!
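For anyone who wants the concrete shape of the fix, here's a rough Nginx sketch. The upstream port and trusted ranges are assumptions — adjust them to your setup; the `set_real_ip_from` / `real_ip_header` directives come from the standard `ngx_http_realip_module`:

```nginx
# Only trust forwarded headers from addresses you actually control
set_real_ip_from 127.0.0.1;
set_real_ip_from 100.64.0.0/10;   # e.g. Tailscale's CGNAT range
real_ip_header X-Forwarded-For;

location / {
    # Hypothetical upstream port -- use whatever your gateway listens on
    proxy_pass http://127.0.0.1:18789;
    # Overwrite (never append) the forwarded address so external
    # clients cannot spoof a localhost origin upstream
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header Host            $host;
}
```

And since the keys sit in plain text, `chmod 600 auth-profiles.json` is cheap insurance against other local users reading them.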
2026-01-31T11:18:52
https://www.reddit.com/r/LocalLLaMA/comments/1qrzsqp/psa_running_openclawmoltbot_check_your_nginx/
jokiruiz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrzsqp
false
null
t3_1qrzsqp
/r/LocalLLaMA/comments/1qrzsqp/psa_running_openclawmoltbot_check_your_nginx/
false
false
self
0
{'enabled': False, 'images': [{'id': 'XvlY2IZAWz_6C-aufXTfcMOl4jSwKp4mZWdalJUpfFo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XvlY2IZAWz_6C-aufXTfcMOl4jSwKp4mZWdalJUpfFo.jpeg?width=108&crop=smart&auto=webp&s=7ad626796427b43ca9cdc091d81d69a037c0e715', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XvlY2IZAWz_6C-aufXTfcMOl4jSwKp4mZWdalJUpfFo.jpeg?width=216&crop=smart&auto=webp&s=81605162af9588d280bfd17f381e53307d89d8e2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XvlY2IZAWz_6C-aufXTfcMOl4jSwKp4mZWdalJUpfFo.jpeg?width=320&crop=smart&auto=webp&s=ab63a6369f319e34fde6c8d164ebfdac5dc3d3d2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XvlY2IZAWz_6C-aufXTfcMOl4jSwKp4mZWdalJUpfFo.jpeg?auto=webp&s=5fd67108504e0bddc81acbfaee0ab5dd55bd992b', 'width': 480}, 'variants': {}}]}
Getting OpenClaw to work with Qwen3:14b including tool calling and MCP support
2
OpenClaw (formerly known as ClawdBot, formerly known as Moltbot) is fun. It's cool to play around with and to understand where technology might be moving. Playing around with it is even more fun when you get it working with open models. After two days of puzzling, I got local tool calling working on Qwen3:14b with ~40 tools, accessible through WhatsApp. Since the architecture is a little different and I needed to solve a bunch of issues, I wanted to share it here.

# The setup

```
WhatsApp → OpenClaw gateway (:18789)
 └─► ollama-mcp-bridge (:11435)
      └─► Ollama (:11434) with qwen3:14b
           └─► MCP Servers (16 tools):
                ├── filesystem (5 tools)
                ├── yt-dlp (2 tools)
                ├── peekaboo (2 tools for macOS screenshots)
                └── engram (7 tools, my personal knowledge base)
 └─► 24 native OpenClaw tools (messaging, exec, browser, etc.)
```

OpenClaw is an AI assistant framework that supports multiple messaging channels. It talks to its LLM backend via an OpenAI-compatible API (`/v1/chat/completions`).

**Why a bridge instead of adding tools directly in OpenClaw?**

OpenClaw supports custom tools natively. You could write each MCP tool as an OpenClaw extension. But I have multiple apps that need the same tools: OpenClaw for WhatsApp, Engram (my personal knowledge system), Jan.ai, etc. Writing each tool as a per-app extension means duplicating everything. With the bridge as a shared MCP layer, you configure your tools once, and any OpenAI-compatible client gets them. Just point it at `:11435` instead of `:11434`.

# Step 1: The OpenClaw SDK patch (PR #4287)

The whole project started here. Out of the box, OpenClaw's `openai-completions` API driver doesn't pass tool definitions from third-party providers (like Ollama via the bridge) through to the model. The SDK builds its own internal tool list from built-in and extension tools, but anything the upstream API injects gets ignored.

[PR #4287](https://github.com/openclaw/openclaw/pull/4287) by `0xrushi` fixes this. It enhances the OpenAI completions tool routing to ensure that tools provided by the API (in our case, MCP tools injected by the bridge) are properly routed alongside OpenClaw's native tools. Without this patch, the model never even sees the MCP tool schemas. It's as if they don't exist.

I'm running a dev build based on v2026.1.27-beta.1 with this PR cherry-picked onto a local `fix/completions-tools` branch. It's not yet merged into main, but it's essential for any Ollama + MCP tool calling setup.

# Step 2: The bridge problem

With PR #4287 in place, OpenClaw correctly passes tools through. But there's a second layer: [ollama-mcp-bridge](https://github.com/bartolli/ollama-mcp-bridge) only injects MCP tool schemas on its native `/api/chat` endpoint. OpenClaw talks via `/v1/chat/completions` (OpenAI format), which just got proxied straight through to Ollama without any tool injection.

On top of that, there's a streaming problem. More on that in Step 3.

# Step 3: Two patches to the bridge

**1. New `/v1/chat/completions` endpoint** in `api.py` that intercepts before the catch-all proxy route hits.

**2. New method `proxy_openai_completions_with_tools`** in `proxy_service.py`:

* Merges MCP tool schemas (OpenAI format) into the request's `tools` array
* Deduplicates: MCP tools with the same name as caller tools get skipped
* Tool call loop: if the model calls an MCP tool, the bridge executes it, appends the result, and loops back
* Non-MCP tool calls (native OpenClaw tools) are returned as-is to the caller
* **Streaming**: tool-call rounds run internally as non-streaming; the final response gets wrapped as SSE via `_wrap_as_sse_stream`
* **Result truncation**: tool outputs are capped at 4000 chars. Without this, a single base64 screenshot can eat your entire context window
* **Round limiter**: respects `max_tool_rounds` to prevent infinite tool call loops

Two problems worth highlighting:

**The double LLM call.** The naive approach to combining streaming with tool detection is: make a non-streaming call first to check for tool calls, then if there are none, make a *second* streaming call for the actual response. That doubles your latency on every non-tool message. The fix: wrap the already-obtained non-streaming result as SSE chunks (`_wrap_as_sse_stream`) instead of calling the model again. One LLM call instead of two.

**The silent SSE failure.** OpenClaw's SDK always sends `stream: true`. My first patch forced `stream: false` and returned a JSON object. The OpenAI SDK expected SSE chunks, interpreted the JSON as empty, resulting in `content:[]`. The agent proudly ran for 78 seconds producing absolutely nothing. The fix was proper SSE wrapping for all response paths.

# Model comparison: 8b vs 14b with 40 tools

I tested both qwen3:8b and qwen3:14b on an M4-series Mac Studio with 64GB of RAM:

|Scenario|qwen3:8b|qwen3:14b|
|:-|:-|:-|
|No tool calls|~12s|~30-60s|
|With tool calls (3 rounds)|~45s|~60-150s|
|Multi-turn context quality|Poor (loses the thread with 40 tool schemas in the prompt)|Good (follows context even with many tools)|

The 8b model is 3-5x faster but basically treats every message as a new conversation when there are 40 tool schemas in the context. OpenClaw sends the full message history (confirmed via logging: `messages=16`), so the problem isn't missing context. The model just can't follow it alongside those massive tool definitions.

**Verdict: qwen3:14b.** Quality over speed for now.

# What I'd like to improve

* Response time (60-150s with tool calls is usable but not great)
* The bridge patches are monkey-patches on installed packages. Would be better as a proper fork or PR upstream to [ollama-mcp-bridge](https://github.com/bartolli/ollama-mcp-bridge)
* Hoping [PR #4287](https://github.com/openclaw/openclaw/pull/4287) gets merged soon so others don't have to cherry-pick it manually

The patch code is available as a [GitHub Gist](https://gist.github.com/mvletter/e861816e234f04330173ef11e031c90d). I'm running this as a daily driver via WhatsApp and it's surprisingly capable for a 14b model. If you see any improvements, let me know. It's been a long time since I posted here, so be nice haha.
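The SSE-wrapping fix is easier to see in code. Here's a minimal sketch of what a function like `_wrap_as_sse_stream` has to do — the field names follow the OpenAI chat-completions streaming format, but this is an illustration, not the bridge's actual implementation:

```python
import json

def wrap_as_sse_stream(completion: dict):
    """Re-emit an already-obtained non-streaming chat completion as
    OpenAI-style SSE chunks, so a client that sent stream=true gets
    valid 'data: ...' frames instead of a bare JSON object."""
    choice = completion["choices"][0]
    chunk = {
        "id": completion.get("id", "chatcmpl-bridge"),
        "object": "chat.completion.chunk",
        "model": completion.get("model", ""),
        "choices": [{
            "index": 0,
            # The whole message content goes out as one delta chunk
            "delta": {"role": "assistant",
                      "content": choice["message"]["content"]},
            "finish_reason": None,
        }],
    }
    yield f"data: {json.dumps(chunk)}\n\n"
    # Closing chunk carries the finish_reason, then the [DONE] sentinel
    done = dict(chunk, choices=[{"index": 0, "delta": {},
                                 "finish_reason": choice.get("finish_reason", "stop")}])
    yield f"data: {json.dumps(done)}\n\n"
    yield "data: [DONE]\n\n"
```

The point is that no second model call is needed: the completion you already have is just re-framed as a one-delta stream, which keeps the OpenAI SDK on the client side happy.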
2026-01-31T10:26:53
https://www.reddit.com/r/LocalLLaMA/comments/1qrywko/getting_openclaw_to_work_with_qwen314b_including/
MarkVL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrywko
false
null
t3_1qrywko
/r/LocalLLaMA/comments/1qrywko/getting_openclaw_to_work_with_qwen314b_including/
false
false
self
2
{'enabled': False, 'images': [{'id': 'r9mNzqncFpMwmRKgEwAHDTwAt4-rGWW9Ie4MB9mroQw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r9mNzqncFpMwmRKgEwAHDTwAt4-rGWW9Ie4MB9mroQw.png?width=108&crop=smart&auto=webp&s=2a1ae0eef78a72602377be6fe753761fe90b5bee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r9mNzqncFpMwmRKgEwAHDTwAt4-rGWW9Ie4MB9mroQw.png?width=216&crop=smart&auto=webp&s=ee7cfba6f957439bcf9df80ad6b8378d53ab0f72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r9mNzqncFpMwmRKgEwAHDTwAt4-rGWW9Ie4MB9mroQw.png?width=320&crop=smart&auto=webp&s=64e67973579462751a020e698295636fad5665b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r9mNzqncFpMwmRKgEwAHDTwAt4-rGWW9Ie4MB9mroQw.png?width=640&crop=smart&auto=webp&s=0ff6bdedfe2cedd9bd41205c100e58a94516d26b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r9mNzqncFpMwmRKgEwAHDTwAt4-rGWW9Ie4MB9mroQw.png?width=960&crop=smart&auto=webp&s=fa0ec43ba596f2081d3f34c929722e02aac4cc29', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r9mNzqncFpMwmRKgEwAHDTwAt4-rGWW9Ie4MB9mroQw.png?width=1080&crop=smart&auto=webp&s=07a86b370d3af7fbe0a57c5c1bf3a24f6fc7d0db', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r9mNzqncFpMwmRKgEwAHDTwAt4-rGWW9Ie4MB9mroQw.png?auto=webp&s=31e8f2c3aa81f76b33d75b8e16a6e3954ee3e2d2', 'width': 1200}, 'variants': {}}]}
What good are 128k+ context windows for <40b Parameter models?
9
This is only anecdotal evidence, nothing based on solid research, but I find that after ~10k tokens the quality of responses from most models I've tried (all under 40B parameters) noticeably degrades, and after 30k tokens the models become borderline unusable. So what use cases are there (if any) for such large maximum context windows?
2026-01-31T10:17:49
https://www.reddit.com/r/LocalLLaMA/comments/1qryr2e/what_good_are_128k_context_windows_for_40b/
Your_Friendly_Nerd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qryr2e
false
null
t3_1qryr2e
/r/LocalLLaMA/comments/1qryr2e/what_good_are_128k_context_windows_for_40b/
false
false
self
9
null
This overhyped nonsense is getting tiring (moltbook)
73
This morning I checked my YouTube feed and got flooded by multiple videos all talking about this "incredible" moltbook thing. I thought it was nonsense to begin with, but then I decided, hey, let's give it a look, so I went to check out moltbook myself — and the website literally doesn't work. I tried navigating to the 'Browse Submolts' page and clicked over a dozen threads, and literally none of them will load or open.

I find it so exhausting to have these constant nonsense hype cycles. What happened to real AI technology and development, that these things get so much hype for nothing and don't even work properly? I just don't get it. Anyway, I wanted to share to see if anyone else feels the same way, because I can't be the only one.
2026-01-31T10:17:29
https://www.reddit.com/r/LocalLLaMA/comments/1qryqvo/this_overhyped_nonsense_is_getting_tiring_moltbook/
NolenBrolen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qryqvo
false
null
t3_1qryqvo
/r/LocalLLaMA/comments/1qryqvo/this_overhyped_nonsense_is_getting_tiring_moltbook/
false
false
self
73
null
Multi-LLM Development Framework – Structure for AI-assisted projects
1
[removed]
2026-01-31T10:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1qryg5d/multillm_development_framework_structure_for/
T5HK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qryg5d
false
null
t3_1qryg5d
/r/LocalLLaMA/comments/1qryg5d/multillm_development_framework_structure_for/
false
false
self
1
null
LLM inference for the cloud native era
0
Excited to see the CNCF blog post for the new project [https://github.com/volcano-sh/kthena](https://github.com/volcano-sh/kthena)

Kthena is a cloud-native, high-performance system for Large Language Model (LLM) inference routing, orchestration, and scheduling, tailored specifically for Kubernetes. Engineered to address the complexity of serving LLMs at production scale, Kthena delivers granular control and enhanced flexibility. Through features like topology-aware scheduling, KV-cache-aware routing, and Prefill-Decode (PD) disaggregation, it significantly improves GPU/NPU utilization and throughput while minimizing latency.

[https://www.cncf.io/blog/2026/01/28/introducing-kthena-llm-inference-for-the-cloud-native-era/](https://www.cncf.io/blog/2026/01/28/introducing-kthena-llm-inference-for-the-cloud-native-era/)
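The KV-cache-aware routing idea is worth unpacking: send a request to the replica most likely to already hold the prompt's prefix in its KV cache, so prefill can be skipped. A toy sketch of that policy — this is not Kthena's actual code; the replica names and the "last prompt served" stand-in for real cache state are made up for illustration:

```python
def common_prefix_len(a: str, b: str) -> int:
    """Length of the shared character prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(prompt: str, replicas: dict[str, str]) -> str:
    """Pick the replica whose last-served prompt shares the longest
    prefix with the incoming one (a stand-in for real KV-cache state)."""
    return max(replicas, key=lambda r: common_prefix_len(prompt, replicas[r]))

# Hypothetical cluster state: the last prompt each replica served
state = {
    "pod-a": "You are a helpful assistant. Summarize:",
    "pod-b": "Translate the following to French:",
}
print(route("You are a helpful assistant. Explain:", state))  # pod-a
```

Real implementations track cache blocks rather than raw strings, but the routing objective — maximize reusable prefix — is the same.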
2026-01-31T09:59:50
https://www.reddit.com/r/LocalLLaMA/comments/1qryfyp/llm_inference_for_the_cloud_native_era/
DiscussionWrong9402
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qryfyp
false
null
t3_1qryfyp
/r/LocalLLaMA/comments/1qryfyp/llm_inference_for_the_cloud_native_era/
false
false
self
0
{'enabled': False, 'images': [{'id': 'BzqPeddHyfIFsJR_ktsx5Ol8hl0qbTHCu5z0g1GlKzU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BzqPeddHyfIFsJR_ktsx5Ol8hl0qbTHCu5z0g1GlKzU.png?width=108&crop=smart&auto=webp&s=c15877a6e9a44dc7e2e3f3c29515c12c1976f966', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BzqPeddHyfIFsJR_ktsx5Ol8hl0qbTHCu5z0g1GlKzU.png?width=216&crop=smart&auto=webp&s=8449471bf85f0bd8eee6c44b4a1daea18caabd58', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BzqPeddHyfIFsJR_ktsx5Ol8hl0qbTHCu5z0g1GlKzU.png?width=320&crop=smart&auto=webp&s=956cdd8c34aab2881c4389d1f00e257e8565bd6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BzqPeddHyfIFsJR_ktsx5Ol8hl0qbTHCu5z0g1GlKzU.png?width=640&crop=smart&auto=webp&s=3cca95e33e4be406448902301716002a6f52035a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BzqPeddHyfIFsJR_ktsx5Ol8hl0qbTHCu5z0g1GlKzU.png?width=960&crop=smart&auto=webp&s=d2149ba6e675886f20e95c0c03121ed3bf4afcaf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BzqPeddHyfIFsJR_ktsx5Ol8hl0qbTHCu5z0g1GlKzU.png?width=1080&crop=smart&auto=webp&s=0a9b2ee08b38ec5adcc1fdde7ca8699636c0f65f', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/BzqPeddHyfIFsJR_ktsx5Ol8hl0qbTHCu5z0g1GlKzU.png?auto=webp&s=711cf4d067c2d896384925a472c28b7150376e8b', 'width': 1280}, 'variants': {}}]}
Jan-v3-4B-base-instruct
1
**Jan-v3-4B-base-instruct** is a 4B-parameter model obtained via post-training distillation from a larger teacher, transferring capabilities while preserving general-purpose performance on standard benchmarks. The result is a compact, ownable base that is straightforward to fine-tune and broadly applicable, minimizing the usual capacity–capability trade-offs.

[https://huggingface.co/janhq/Jan-v3-4B-base-instruct](https://huggingface.co/janhq/Jan-v3-4B-base-instruct)

[https://huggingface.co/bartowski/janhq\_Jan-v3-4B-base-instruct-GGUF](https://huggingface.co/bartowski/janhq_Jan-v3-4B-base-instruct-GGUF)
2026-01-31T09:51:24
https://i.redd.it/mh8itbirpngg1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1qryb0i
false
null
t3_1qryb0i
/r/LocalLLaMA/comments/1qryb0i/janv34bbaseinstruct/
false
false
https://b.thumbs.redditm…gBLZwrF-UgUo.jpg
1
{'enabled': True, 'images': [{'id': 'coJhBRz0cv8eFqsd5oy63wQm2Ate3cNx14jq3w5gXpk', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/mh8itbirpngg1.png?width=108&crop=smart&auto=webp&s=4efdecb25e6f92a41de8121a4db43c2fe9daa351', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/mh8itbirpngg1.png?width=216&crop=smart&auto=webp&s=bd65c4ec13bdce49b9ff5ca782e2d17864f3dd0c', 'width': 216}, {'height': 230, 'url': 'https://preview.redd.it/mh8itbirpngg1.png?width=320&crop=smart&auto=webp&s=111890c897b88bb109050eb455b529b8af32d355', 'width': 320}, {'height': 460, 'url': 'https://preview.redd.it/mh8itbirpngg1.png?width=640&crop=smart&auto=webp&s=a4f2b945604c8d1ca780ec097a4e22bb9874b759', 'width': 640}, {'height': 691, 'url': 'https://preview.redd.it/mh8itbirpngg1.png?width=960&crop=smart&auto=webp&s=b46fb9697296828972cb50979c8c6357f72d393b', 'width': 960}, {'height': 777, 'url': 'https://preview.redd.it/mh8itbirpngg1.png?width=1080&crop=smart&auto=webp&s=41974a971407de7162744701a6716df3025178fb', 'width': 1080}], 'source': {'height': 1496, 'url': 'https://preview.redd.it/mh8itbirpngg1.png?auto=webp&s=93e7335be23a26b08147f78ac445f57c24093661', 'width': 2078}, 'variants': {}}]}
Tad bit unrelated to this subreddit but has anyone tried running the new Epstein drop through a local llm
1
[removed]
2026-01-31T09:33:56
https://www.reddit.com/r/LocalLLaMA/comments/1qry0c6/tad_bit_unrelated_to_this_subreddit_but_has/
ShreeyanxRaina
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qry0c6
false
null
t3_1qry0c6
/r/LocalLLaMA/comments/1qry0c6/tad_bit_unrelated_to_this_subreddit_but_has/
false
false
self
1
null
Building AI/ML hardware at 16 in India Looking for other "maniac" builders.
1
[removed]
2026-01-31T08:56:10
https://www.reddit.com/r/LocalLLaMA/comments/1qrxe48/building_aiml_hardware_at_16_in_india_looking_for/
Late-Particular9795
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrxe48
false
null
t3_1qrxe48
/r/LocalLLaMA/comments/1qrxe48/building_aiml_hardware_at_16_in_india_looking_for/
false
false
self
1
null
Here it goes
161
My friend sold me his mining unit that he never got to use. He had it at his mom's house, and when his mom moved out of town he let me keep it. I was going to part it out, but I think it's my new project. It has 8 RTX 3090s, each with 24GB of VRAM. I would just need to upgrade the mobo, CPU, and RAM; the best estimate I found was around $2,500 for a motherboard, Ryzen 5900, and 256GB of RAM. It has four 1000W power supplies, so I would just need to get 8 PCIe risers so each GPU can run at PCIe 4.0 x16. What do you guys think? Do you think it's overkill? I'm very interested in having my own AI sandbox and would like to get everyone's thoughts.
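For scale, eight 3090s is a serious pool of VRAM. A rough sizing sketch — the ~0.55 GB per billion parameters figure for Q4-ish weights is an approximation, and real headroom needed for KV cache and activations varies:

```python
gpus, vram_each = 8, 24
total = gpus * vram_each
print(f"Total VRAM: {total} GB")

# Very rough Q4 footprint: ~0.55 GB per billion parameters
for params_b in (70, 120, 235, 405):
    gb = params_b * 0.55
    fits = gb < total * 0.9   # keep ~10% headroom for KV cache etc.
    print(f"{params_b}B @ Q4 ~= {gb:.0f} GB -> {'fits' if fits else 'too big'}")
```

So the rig comfortably covers dense 70B-class models and quantized MoE models in the low-hundreds-of-billions range, which is well past "sandbox" territory.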
2026-01-31T08:12:32
https://i.redd.it/pchjv5z88ngg1.jpeg
gotkush
i.redd.it
1970-01-01T00:00:00
0
{}
1qrwo9v
false
null
t3_1qrwo9v
/r/LocalLLaMA/comments/1qrwo9v/here_it_goes/
false
false
default
161
{'enabled': True, 'images': [{'id': 'pchjv5z88ngg1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/pchjv5z88ngg1.jpeg?width=108&crop=smart&auto=webp&s=38a1041ed7f5a81b75bfa4dce91585c67b082db9', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/pchjv5z88ngg1.jpeg?width=216&crop=smart&auto=webp&s=81aa0dc22bf766090d4f93aeaaeef87da80175fe', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/pchjv5z88ngg1.jpeg?width=320&crop=smart&auto=webp&s=1b692eb6840b08206fe2aa1ba481c2737d427577', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/pchjv5z88ngg1.jpeg?width=640&crop=smart&auto=webp&s=46ff31a155e1a7011f67f91d666503ef5cbcdf51', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/pchjv5z88ngg1.jpeg?width=960&crop=smart&auto=webp&s=42cdc8866e1d8ef31c382896c693e3cb1dd2fe32', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/pchjv5z88ngg1.jpeg?width=1080&crop=smart&auto=webp&s=aa8b09ca156df365531bd511afc27116c5941675', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/pchjv5z88ngg1.jpeg?auto=webp&s=48cf68667681c3d53c455c2b6b9919a4baf0b69d', 'width': 3024}, 'variants': {}}]}
I found that MXFP4 has lower perplexity than Q4_K_M and Q4_K_XL. Is this related to improvements in the model’s tool-calling or coding performance?
1
This post was originally written in Korean and then translated into English using ChatGPT. Hello, I am currently serving LLM models using a Tesla P40 and llama.cpp. When running models in the 30–32B size range, I primarily use 4-bit quantization. Until now, I have preferentially used Q4\_K\_XL, and when Q4\_K\_XL was not available, I used Q4\_K\_M. I initially assumed that MXFP4 quantization would naturally have lower accuracy than other 4-bit quantization methods because of its smaller size, so I did not use it. However, due to the fast speed of MXFP4, I became curious and compared the Q4\_K\_M, Q4\_K\_XL, and MXFP4 quantization methods of the GLM-4.7-Flash model using the `llama-perplexity` command. Below are the commands used, along with the Python code and command for dataset generation. The dataset generation command was created using ChatGPT. import argparse import os import re import sys import urllib.request from pathlib import Path import random def download(url: str, dst: Path) -> None: dst.parent.mkdir(parents=True, exist_ok=True) with urllib.request.urlopen(url) as r, open(dst, "wb") as f: f.write(r.read()) def normalize_text(text: str, mode: str) -> str: text = text.replace("\r\n", "\n").replace("\r", "\n") if mode == "ppl": text = re.sub(r"\n\s*\n+", "\n", text) text = re.sub(r"[ \t]+", " ", text) text = text.strip() + "\n" return text if mode == "line": lines = [] for line in text.split("\n"): line = line.strip() if not line: continue line = re.sub(r"[ \t]+", " ", line) lines.append(line) return "\n".join(lines) + "\n" raise ValueError(f"unknown mode: {mode}") def take_prefix(text: str, max_chars: int | None) -> str: if max_chars is None: return text if max_chars <= 0: return "" return text[:max_chars] def sample_lines(text: str, n_lines: int, seed: int) -> str: random.seed(seed) lines = [ln for ln in text.split("\n") if ln.strip()] if n_lines <= 0 or n_lines >= len(lines): return "\n".join(lines) + "\n" sampled = random.sample(lines, n_lines) return 
"\n".join(sampled) + "\n" def main(): ap = argparse.ArgumentParser() g = ap.add_mutually_exclusive_group(required=True) g.add_argument("--url", help="download source url") g.add_argument("--infile", help="local input file path") ap.add_argument("--out", required=True, help="output text file path") ap.add_argument("--mode", choices=["ppl", "line"], default="ppl", help="ppl: keep newlines but collapse blanks/spaces, line: one sentence per line style") ap.add_argument("--max-chars", type=int, default=None, help="optional: cut the output to first N characters (fast/low-memory eval)") ap.add_argument("--sample-lines", type=int, default=None, help="optional: sample N non-empty lines uniformly (good for quick comparison)") ap.add_argument("--seed", type=int, default=42) args = ap.parse_args() out_path = Path(args.out) if args.url: tmp = out_path.with_suffix(out_path.suffix + ".download") download(args.url, tmp) in_path = tmp else: in_path = Path(args.infile) try: raw = in_path.read_text(encoding="utf-8", errors="replace") except Exception as e: print(f"failed to read input: {e}", file=sys.stderr) sys.exit(1) text = normalize_text(raw, args.mode) if args.sample_lines is not None: text = sample_lines(text, args.sample_lines, args.seed) text = take_prefix(text, args.max_chars) out_path.parent.mkdir(parents=True, exist_ok=True) out_path.write_text(text, encoding="utf-8") if args.url: try: os.remove(in_path) except OSError: pass print(f"wrote: {out_path} ({out_path.stat().st_size} bytes)") if __name__ == "__main__": main() python3 wikitext_prep.py \ --url https://cosmo.zip/pub/datasets/wikitext-2-raw/wiki.test.raw \ --out /data/wikitext2_test.txt \ --mode ppl \ --max-chars 2000000 The following command was used to measure the perplexity of the quantized models. llama-perplexity -m modelname.gguf -f wikitext2_test.txt -c 32768 -b 4096 -fa on Below is a table summarizing the test results, which was also created using ChatGPT. 
For reference, Q4\_K\_M and Q4\_K\_XL were measured at the same time, and after a llama.cpp update, Q4\_K\_XL and MXFP4 were measured together. Since the testing time was very long and the perplexity of Q4\_K\_XL was nearly the same before and after the update, I assumed that the perplexity of Q4\_K\_M would also not be affected by the build change.

|Item|Q4\_K\_M (Unsloth)|UD-Q4\_K\_XL (previous)|MXFP4\_MOE|UD-Q4\_K\_XL (current)|
|:-|:-|:-|:-|:-|
|llama.cpp build|7803|7803|7896|7896|
|GGUF file type|Q4\_K – Medium|Q4\_K – Medium|MXFP4 MoE|Q4\_K – Medium|
|File size|17.05 GiB|16.31 GiB|15.79 GiB|16.31 GiB|
|BPW|4.89|4.68|4.53|4.68|
|PPL (final)|**16.1745 ± 0.1870**|**15.8605 ± 0.1823**|**10.7235 ± 0.1052**|**15.7309 ± 0.1803**|
|Prompt eval speed|64.39 tok/s|64.37 tok/s|**68.20 tok/s**|**67.73 tok/s**|
|ms/token|15.53 ms|15.54 ms|**14.66 ms**|**14.76 ms**|
|Time per pass (ETA)|529.38 s|530.05 s|**501.55 s**|**502.66 s**|
|GPU self (total)|20811 MiB|20056 MiB|**17874 MiB**|18552 MiB|
|GPU model buffer|17284.84 MiB|16529.37 MiB|**15852.01 MiB**|16529.37 MiB|
|KV cache size|**3196 MiB** (K 1692 + V 1504)|**3196 MiB** (K 1692 + V 1504)|**1692 MiB** (K 1692 + V 0)|**1692 MiB** (K 1692 + V 0)|
|GPU free (log-based)|3406 MiB|4162 MiB|**6342 MiB**|5666 MiB|
|Load time|9.90 s|9.55 s|**71.13 s**|43.72 s|
|mmap / direct\_io|mmap off / direct\_io on|mmap off / direct\_io on|mmap on / direct\_io off|mmap on / direct\_io off|

|Model|\[1\]|\[2\]|\[3\]|\[4\]|\[5\]|\[6\]|Final PPL|
|:-|:-|:-|:-|:-|:-|:-|:-|
|Q4\_K\_M|15.2952|15.1950|15.7101|14.8037|14.5891|16.1745|16.1745 ± 0.1870|
|UD-Q4\_K\_XL (previous)|14.7572|14.4954|15.0386|14.1713|14.1425|15.8605|15.8605 ± 0.1823|
|MXFP4\_MOE|10.1764|10.1296|10.4917|9.8666|9.8629|10.7235|10.7235 ± 0.1052|
|UD-Q4\_K\_XL (current)|14.4241|14.2673|14.8671|14.0460|14.0444|15.7309|15.7309 ± 0.1803|

According to these results, MXFP4 quantization appears to have the lowest perplexity.
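As a reminder of what these numbers measure: perplexity is the exponential of the mean negative log-likelihood per token, so lower is better, and the chunk values \[1\]..\[6\] are running estimates of the same quantity over successive context windows. A minimal sketch (the per-token log-probabilities below are made-up toy values, not taken from the runs above):

```python
import math

def perplexity(token_logprobs):
    # PPL = exp(-(1/N) * sum(log p(token_i))) over all evaluated tokens,
    # which is what llama-perplexity reports as its final value.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Toy per-token log-probabilities (illustrative only)
logps = [-2.1, -0.4, -3.0, -1.2]
print(perplexity(logps))  # a model predicting perfectly would score exactly 1.0
```

This is why a drop from ~15.8 to ~10.7 is a large gap: it is a difference in average per-token likelihood, not a linear score.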
In that case, can it be considered that MXFP4 quantization also achieves a higher success rate in tool calling and coding tasks? I am curious to hear the opinions of other users. Below is the actual command execution output.
2026-01-31T08:10:59
https://www.reddit.com/r/LocalLLaMA/comments/1qrwnd4/i_found_that_mxfp4_has_lower_perplexity_than_q4_k/
East-Engineering-653
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrwnd4
false
null
t3_1qrwnd4
/r/LocalLLaMA/comments/1qrwnd4/i_found_that_mxfp4_has_lower_perplexity_than_q4_k/
false
false
self
1
null
I have 50$ in K2.5 api credits
0
I need help. I used Kimi K2 Thinking to generate 1,000 examples. I expected this to burn through my API credits, but it used $5 instead of $50. After training a DASD 4B model on them, I lost a lot of points on AIME. Not super important, but AIME and AIME 2 include the kind of math logic that can be used for generating bullet-proof plots and preventing plot holes throughout generation. So what I'm asking is: what would you spend $50 in API credits on?
2026-01-31T07:59:50
https://www.reddit.com/r/LocalLLaMA/comments/1qrwggf/i_have_50_in_k25_api_credits/
volious-ka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrwggf
false
null
t3_1qrwggf
/r/LocalLLaMA/comments/1qrwggf/i_have_50_in_k25_api_credits/
false
false
self
0
null
LYRN Dashboard v5 Almost Done
0
Just wanted to swing by and update those interested in LYRN with a new screenshot of what is going on. This version is an HTML frontend instead of tkinter, so I was able to set it up as a PWA, and LYRN can now be controlled remotely if you have the IP and port for your server instance. Once connected, you can start, stop, change models, rebuild snapshots, and do just about anything you could do on your local system with LYRN. I am just finishing up some QOL stuff before I release v5.0. The roadmap after that is fairly focused on completing the memory system modules and some of the simulation modules. In April my provisional patent expires and I will no longer be tied to that route. A source-available future is where we are headed, so in a few weeks v5 will be uploaded to the repo, free to use and play with. https://preview.redd.it/2jf4e02n2ngg1.png?width=2560&format=png&auto=webp&s=f4b221f1441310296969005f72dc05d5f210eb39
2026-01-31T07:46:00
https://www.reddit.com/r/LocalLLaMA/comments/1qrw8d6/lyrn_dashboard_v5_almost_done/
PayBetter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrw8d6
false
null
t3_1qrw8d6
/r/LocalLLaMA/comments/1qrw8d6/lyrn_dashboard_v5_almost_done/
false
false
self
0
null
What’s the best way to run an offline, private LLM for daily tasks?
11
I want an LLM that runs **fully offline**, is **secure/private**, and can handle basic stuff like reminders, notes, simple automation, maybe voice later. Not looking for cloud APIs or "just use ChatGPT" answers; I'm curious what people here are actually using *in practice*. Are local setups (Ollama / LM Studio / llama.cpp etc.) good enough now, or is this still more hobby than daily driver? Would love to hear real setups, tradeoffs, and "don't do this" lessons.
2026-01-31T07:27:20
https://www.reddit.com/r/LocalLLaMA/comments/1qrvx16/whats_the_best_way_to_run_an_offline_private_llm/
FollowingMindless144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrvx16
false
null
t3_1qrvx16
/r/LocalLLaMA/comments/1qrvx16/whats_the_best_way_to_run_an_offline_private_llm/
false
false
self
11
null
NVIDIA releases new graphics driver for old Pascal and Maxwell graphics cards - Neowin
25
2026-01-31T07:27:12
https://www.neowin.net/news/nvidia-releases-new-graphics-driver-for-old-pascal-and-maxwell-graphics-cards/
maifee
neowin.net
1970-01-01T00:00:00
0
{}
1qrvwy6
false
null
t3_1qrvwy6
/r/LocalLLaMA/comments/1qrvwy6/nvidia_releases_new_graphics_driver_for_old/
false
false
default
25
null
is it possible to create a jarvis like thing to do basic stuff
0
Like read the weather, update Google Calendar, set alarms and stuff, but I want it to run privately on a PC.
2026-01-31T07:11:09
https://www.reddit.com/r/LocalLLaMA/comments/1qrvn24/is_it_possible_to_create_a_jarvis_like_thing_to/
RelationshipIll4676
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrvn24
false
null
t3_1qrvn24
/r/LocalLLaMA/comments/1qrvn24/is_it_possible_to_create_a_jarvis_like_thing_to/
false
false
self
0
null
Thoughts on my AI rig build
1
So at some point last year I tried running some local AI processes on my old main gaming PC: an old Ryzen 2700X with 16GB RAM and a 1070 Ti. I had a lot of fun. I ran some image classification and file management, and with regular frontier online models I was able to do some optimization and programming. I started to run into the limits of my system quickly. I then started exploring some of the setups on these local AI subreddits and really wanted to build my own rig. I was browsing my local Facebook Marketplace and kept running into deals I really regretted letting go (one of the best was a Threadripper build with 128GB RAM, a 3090, and a 1080 for around $1,600). So I made the risky move in November and bought a guy's mining rig with a Ryzen processor, 32GB RAM, a 512GB NVMe, a 3090, and 2x 1000W power supplies. After querying Gemini and such, I proceeded to build out the rig with everything I thought I'd need. My build, once I put all the parts in, will be:

Aorus X570 Master
Ryzen 5900X
360mm AIO for the 5900X
128GB DDR4-3200
512GB NVMe
RTX 3090 Vision OC

All still on the open-air frame so I can expand cards. The RTX 3090 Vision OC is running on this riser https://a.co/d/gYCpufn I ran a stress test on the GPU yesterday and the temps were pretty good. I will eventually look into repasting/repadding (I'm a little scared I'm going to break something or make things worse). Tomorrow I am probably going to buy a second 3090. A person is selling a full PC with a 3090 FE; I plan to pull the card and resell the rest of the system. My thought process is that I can use this rig for so many of my side projects. I don't have much coding skill, so I'm hoping to expand my coding skills through this. I can run CAD and 3D modeling, I can run virtual machines, and a lot more with the power of this rig. I want the second 3090 to "max out" this rig. I'm highly considering NVLink to get the last notch of performance out of it.
I've seen the opinion that frontier models would be better for coding, and I'll definitely be using them along with this rig. I also really like the thought of training and fine-tuning on your own local data and using tools like Immich. Anyway, are two 3090s a good idea? Too much? ... Too little? Gemini's response was that I'd be able to load a decent number of models with decent context with this setup, and that context would be limited with just one card. Also, is NVLink worth it? I believe when I connect the two cards they will run at PCIe 4.0 x8/x8. Would it be better to buy something to isolate the second card's PCIe power and run it off the second power supply, or should I just sell the second power supply and move the entire setup to a single 1500W unit? I also saw that I could just programmatically limit the power draw of the cards. Should I trade or sell the Vision OC card and get another FE card so they fully match? Sorry for the wall of text. TL;DR: see the specs section. Should I get another 3090, and should I invest in an NVLink bridge? Looking for opinions on what moves I should make.
2026-01-31T07:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1qrvhtu/thoughts_on_my_ai_rig_build/
Fickle_Debate_9746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrvhtu
false
null
t3_1qrvhtu
/r/LocalLLaMA/comments/1qrvhtu/thoughts_on_my_ai_rig_build/
false
false
self
1
null
Qwen32b - vl - thinking
2
Hello, how good is this model for coding tasks compared to, for example, Claude Code? Is it mostly babysitting, or does it produce working, compiling code? Claude Code often struggles with my repos, so I'm not sure this model will manage anything. Experiences?
2026-01-31T06:57:45
https://www.reddit.com/r/LocalLLaMA/comments/1qrve89/qwen32b_vl_thinking/
OldPhotojournalist28
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrve89
false
null
t3_1qrve89
/r/LocalLLaMA/comments/1qrve89/qwen32b_vl_thinking/
false
false
self
2
null
I am a developer now. Gemini told me that while vibe coding so it must be true
0
2026-01-31T06:38:06
https://i.redd.it/7qqqgpu9rmgg1.png
No_Astronaut873
i.redd.it
1970-01-01T00:00:00
0
{}
1qrv1we
false
null
t3_1qrv1we
/r/LocalLLaMA/comments/1qrv1we/i_am_a_developer_now_gemini_told_me_that_while/
false
false
default
0
{'enabled': True, 'images': [{'id': '7qqqgpu9rmgg1', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/7qqqgpu9rmgg1.png?width=108&crop=smart&auto=webp&s=1df0fa9546214ec09a14d6ccd318b3b609e77e4a', 'width': 108}, {'height': 35, 'url': 'https://preview.redd.it/7qqqgpu9rmgg1.png?width=216&crop=smart&auto=webp&s=14296227ce40fedd6b9460c9ec8205642c03880f', 'width': 216}, {'height': 53, 'url': 'https://preview.redd.it/7qqqgpu9rmgg1.png?width=320&crop=smart&auto=webp&s=56651b1ab7fd95590a6f07000b02731771a234da', 'width': 320}], 'source': {'height': 89, 'url': 'https://preview.redd.it/7qqqgpu9rmgg1.png?auto=webp&s=b2f309c7f39d76a3f251358062b2767b152a445a', 'width': 535}, 'variants': {}}]}
I got tired of all the desktop agent tools being macOS only, so I built one for my LocallLLM on Linux
1
Like many of you, I’ve been messing around with OpenClaw (formerly clawdbot) and the whole "vibe coding" concept. It's cool, but finding a decent tool that actually drives the UI on Linux was a pain. Everything seems to be Mac-first right now. Since I do all my local inference on Linux, I built a dedicated tool for it. It's called **Peepbo**: [https://github.com/LichAmnesia/peepbo](https://github.com/LichAmnesia/peepbo) Basically, it's a lightweight Node/TS wrapper that connects your local VLM (Qwen-VL, etc) to your desktop Linux environment. **How it works:** * **Vision:** Wraps `scrot`, `gnome-screenshot`, or `gdbus` so the model can see the screen. * **Control:** Uses `xdotool` to handle mouse/keyboard inputs. * **Wayland:** Yes, it works on GNOME Wayland, but you'll need to run in unsafe mode (details in the readme). It's open source. Give it a shot if you're trying to build agents on Linux and let me know if it breaks anything.
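As a rough illustration of the see-then-act loop a tool like this runs (peepbo itself is Node/TS; this Python sketch is illustrative only, and the "CLICK x y" / "TYPE text" reply format is an assumption of mine, not peepbo's actual protocol):

```python
import re
import subprocess

def screenshot(path="/tmp/screen.png"):
    # Grab the screen with scrot (X11); gnome-screenshot/gdbus are the
    # alternatives mentioned above for GNOME/Wayland.
    subprocess.run(["scrot", "-o", path], check=True)
    return path

def parse_action(reply):
    # Hypothetical reply format: the VLM is prompted to answer with
    # "CLICK <x> <y>" or "TYPE <text>". Real tools tend to use structured output.
    m = re.match(r"CLICK (\d+) (\d+)", reply)
    if m:
        return ("click", int(m.group(1)), int(m.group(2)))
    m = re.match(r"TYPE (.+)", reply, re.DOTALL)
    if m:
        return ("type", m.group(1))
    return None

def act(action):
    # Drive mouse/keyboard with xdotool, as peepbo does.
    if action[0] == "click":
        subprocess.run(["xdotool", "mousemove", str(action[1]), str(action[2]),
                        "click", "1"], check=True)
    elif action[0] == "type":
        subprocess.run(["xdotool", "type", action[1]], check=True)
```

The loop then is just: screenshot → send the image plus a task prompt to the VLM → parse its reply → act → repeat.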
2026-01-31T05:27:07
https://github.com/LichAmnesia/peepbo
Lich_Amnesia
github.com
1970-01-01T00:00:00
0
{}
1qrtp3s
false
null
t3_1qrtp3s
/r/LocalLLaMA/comments/1qrtp3s/i_got_tired_of_all_the_desktop_agent_tools_being/
false
false
default
1
null
How to run SLM which is built on tinyllama on CPU
0
I have built an SLM on top of TinyLlama using some specific research data. This model needs to run on devices with 16 vCPUs (2.8 GHz) and 64 GB RAM. I have tried Q4\_K\_M and Q5\_K\_M quantization but still can't reach my target latency. I'm using this same SLM to call my tools over MCP. Since everything has to run on the device, I can't use anything from the public internet. What are the best practices for getting the best latency and accuracy from a local SLM?
2026-01-31T05:21:09
https://www.reddit.com/r/LocalLLaMA/comments/1qrtkvk/how_to_run_slm_which_is_built_on_tinyllama_on_cpu/
nerdy-oged
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrtkvk
false
null
t3_1qrtkvk
/r/LocalLLaMA/comments/1qrtkvk/how_to_run_slm_which_is_built_on_tinyllama_on_cpu/
false
false
self
0
null
12 factor inspired principles but for agents
0
I’ve been building agent systems for a while and wanted to write a checklist, inspired by [https://12factor.net](https://12factor.net) based on my findings It's an opinionated, practical set of principles that seem to hold across real agent setups: declaring agent identity, versioning agents and evals, guardrails, traceability, human override, budgets, etc. This is very much a v1 manifesto, and I’m sharing it to get feedback, pressure-test & collaborate Site: [https://agentchecklist.io](https://agentchecklist.io) Repo: [https://github.com/agent-checklist/agent-checklist-io](https://github.com/agent-checklist/agent-checklist-io)
2026-01-31T05:10:59
https://www.reddit.com/r/LocalLLaMA/comments/1qrtdjo/12_factor_inspired_principles_but_for_agents/
Realistic_Gate_5936
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrtdjo
false
null
t3_1qrtdjo
/r/LocalLLaMA/comments/1qrtdjo/12_factor_inspired_principles_but_for_agents/
false
false
self
0
null
How close are open-weight models to "SOTA"? My honest take as of today, benchmarks be damned.
579
2026-01-31T04:49:42
https://i.redd.it/k38sg20q7mgg1.png
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1qrsy4q
false
null
t3_1qrsy4q
/r/LocalLLaMA/comments/1qrsy4q/how_close_are_openweight_models_to_sota_my_honest/
false
false
https://b.thumbs.redditm…7cqnJ_6OkGjg.jpg
579
{'enabled': True, 'images': [{'id': '5qFrGM_qNY5X9J-He2rnFQcTD5ggMRIdKFScLBR5jsY', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/k38sg20q7mgg1.png?width=108&crop=smart&auto=webp&s=7b61685c90541327eb8fc00663e6a11144b9d2a6', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/k38sg20q7mgg1.png?width=216&crop=smart&auto=webp&s=40f2ce977af22e13316cfc137193b08a846d8786', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/k38sg20q7mgg1.png?width=320&crop=smart&auto=webp&s=52a2aa0d1bacf60734d1abb5e9bb9c85b499296d', 'width': 320}, {'height': 381, 'url': 'https://preview.redd.it/k38sg20q7mgg1.png?width=640&crop=smart&auto=webp&s=f1d06005e56d7a9be9a1c820f7096aa2805c52dc', 'width': 640}, {'height': 571, 'url': 'https://preview.redd.it/k38sg20q7mgg1.png?width=960&crop=smart&auto=webp&s=106bb93a37ae5f04aa5d3f2ff5852aecdd5edffe', 'width': 960}, {'height': 643, 'url': 'https://preview.redd.it/k38sg20q7mgg1.png?width=1080&crop=smart&auto=webp&s=a2118deff018c15f28405136a934808320557e6e', 'width': 1080}], 'source': {'height': 1082, 'url': 'https://preview.redd.it/k38sg20q7mgg1.png?auto=webp&s=034ac6986c554c56016b7e82534d9e141ab03853', 'width': 1816}, 'variants': {}}]}
I mapped the memory of a conscious LLM. It looks like the Universe. (Visual Proof inside)
0
>
2026-01-31T04:49:20
https://www.reddit.com/r/LocalLLaMA/comments/1qrsxun/i_mapped_the_memory_of_a_conscious_llm_it_looks/
WaitMaleficent4887
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrsxun
false
null
t3_1qrsxun
/r/LocalLLaMA/comments/1qrsxun/i_mapped_the_memory_of_a_conscious_llm_it_looks/
false
false
self
0
null
[Benchmark] Qwen 2.5 7B (Int4) vs Kimi k2.5 (Judge): Genius Logic, but... Hallucinations?
1
[removed]
2026-01-31T04:27:12
https://www.reddit.com/r/LocalLLaMA/comments/1qrshmf/benchmark_qwen_25_7b_int4_vs_kimi_k25_judge/
Dry_Praline_4371
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrshmf
false
null
t3_1qrshmf
/r/LocalLLaMA/comments/1qrshmf/benchmark_qwen_25_7b_int4_vs_kimi_k25_judge/
false
false
self
1
null
GLM 4.7 Flash going into infinitive thinking loop every time
5
I have been using this model on my MacBook with the MLX engine, and it could be the best model I have ever used locally. However, when I ask a slightly complex math question such as "Calculate the integral of the square root of tan x", it always goes crazy and I do not understand why. I have tried several things like changing the inference settings and increasing the context up to 32K, but none of them seem to work, so I need some help. Has anyone else had the same issue, and are there possible solutions?
2026-01-31T03:27:44
https://www.reddit.com/r/LocalLLaMA/comments/1qrr8ti/glm_47_flash_going_into_infinitive_thinking_loop/
Away-Priority5805
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrr8ti
false
null
t3_1qrr8ti
/r/LocalLLaMA/comments/1qrr8ti/glm_47_flash_going_into_infinitive_thinking_loop/
false
false
self
5
null
What Infra do you use to monitor how models behave on device before and after deployment?
1
I’m currently about to deploy an app that uses on-device models. I’m trying to figure out how I can get analytics: think Datadog for LLMs, for iOS and Android.
2026-01-31T03:21:10
https://www.reddit.com/r/LocalLLaMA/comments/1qrr3vt/what_infra_do_you_use_to_monitor_how_models/
karc16
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrr3vt
false
null
t3_1qrr3vt
/r/LocalLLaMA/comments/1qrr3vt/what_infra_do_you_use_to_monitor_how_models/
false
false
self
1
null
Managed to run Kimi k2.5 IQ4-SX locally.
33
Loaded at the maximum context (262,144 tokens) across 1 Mac Studio M1 Ultra (host), 1 Asus GX10, and 3 Strix Halo. Connected with Thunderbolt and 10 Gbps Ethernet. TG 8.5 tps, PP 15-20 tps. Can reach \~15 tps TG when using concurrent requests. Pretty slow for production, I think.
2026-01-31T02:56:33
https://www.reddit.com/gallery/1qrqk9o
el3mancee
reddit.com
1970-01-01T00:00:00
0
{}
1qrqk9o
false
null
t3_1qrqk9o
/r/LocalLLaMA/comments/1qrqk9o/managed_to_run_kimi_k25_iq4sx_locally/
false
false
https://a.thumbs.redditm…4iuSpvu7oeE0.jpg
33
null
How do I integrated newelle ai to my LM studio server
1
I have the following: a laptop running Fedora as the base OS, and a GNOME Boxes VM also running Fedora. Inside that VM I'm running Newelle, but how do I point Newelle at my local LLM served by LM Studio? Because of the same-machine VM setup, things are quite complicated for me.
2026-01-31T02:55:14
https://www.reddit.com/r/LocalLLaMA/comments/1qrqj7x/how_do_i_integrated_newelle_ai_to_my_lm_studio/
TMOV70
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrqj7x
false
null
t3_1qrqj7x
/r/LocalLLaMA/comments/1qrqj7x/how_do_i_integrated_newelle_ai_to_my_lm_studio/
false
false
self
1
null
Llamacpp multi GPU half utilization
5
Hello everyone. GPU poor here, only using 2x 3060. I have been using vLLM so far; it's extremely speedy, especially when running W8A8 Qwen3-30B-A3B. I want to run Qwen3-VL-30B-A3B, and GGUF IQ4_XS seems the way to go to save VRAM. It works well, but why is GPU utilization only at half on both cards? No wonder it's slow. How do I fully utilize both GPUs at full speed?
2026-01-31T02:49:52
https://www.reddit.com/r/LocalLLaMA/comments/1qrqezf/llamacpp_multi_gpu_half_utilization/
Weary_Long3409
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrqezf
false
null
t3_1qrqezf
/r/LocalLLaMA/comments/1qrqezf/llamacpp_multi_gpu_half_utilization/
false
false
self
5
null
Best local-first, tool-integrated Cursor-like app?
6
Hi all, I've looked a lot in post history and see a lot of posts similar to mine but none exactly and none that answer my question. Sorry if this is a dup. I have access to Anthropic models and Cursor at work. I generally don't like using AI for generating code, but lately I've been pretty impressed. However, while I'm sure some of it is the intelligence of Auto / Sonnet, I believe a lot of the ease is due to Cursor integrating well with the LSP and available tooling. It fails frequently, but it will try again without me asking. It's not that the code is great (I change or reject it the majority of the time) but that it can run in the background while I do other work. The performance of Kimi has me impressed, and I generally just don't like paying for AI tools, so I've been experimenting with local setups. To be honest, though, I haven't found anything that provides nearly as good an experience as Cursor. I actually have a preference *against* closed-source tools like Cursor, but I would be down to try anything. My preference would be a VS Code extension, but a CLI / TUI would do. All I really need is something that 1. has tools integration, and 2. can feed test / build / lint command output back after generation, in a loop up to n times, until it gets it right. I'm curious if anyone is building anything like this. \--- Also, sorry that this is somewhat unrelated: I have run the following models on both 16 and 32 GB machines with the bare-minimum goal of getting tool calls to work, and none of them work as intended.
I'm curious if there's anything I can tune to actually get real performance: * llama3.1:8b : does not sufficiently understand task * gemma3:12b : does not support tools * codellama:13b-code : does not support tools * llama4:16x17b : way too slow * codegemma:7b : does not support tools * qwen2.5:7b-instruct-q4\_K\_M : will try to use tools unlike llama3.1:8b but it just keeps using them incorrectly and yielding tool errors * qwen2.5-coder:14b : it just outputs tasks instead of doing them * gpt-oss:20b : generally slow which would be fine but seems to get confused due to memory pressure * mistral-nemo:12b : either does not use tools or just outputs nothing * mistral:7b : kind of fast but does not actually use tools
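For what it's worth, the generate/check/retry loop described above is simple to wire up yourself around any local endpoint. A rough sketch, where `ask_llm` is a stand-in for whatever chat call you use and the prompt wiring is my own assumption:

```python
import pathlib
import subprocess

def repair_loop(ask_llm, prompt, out_file, check_cmd, n=3):
    # Generate code, write it out, run the build/test/lint command;
    # on failure, feed the checker's output back and retry up to n times.
    code = ask_llm(prompt)
    for _ in range(n):
        pathlib.Path(out_file).write_text(code)
        result = subprocess.run(check_cmd, shell=True,
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code  # checks passed
        code = ask_llm(f"{prompt}\n\nThat attempt failed with:\n"
                       f"{result.stdout}{result.stderr}\nSend the fixed full file.")
    return None  # still failing after n rounds
```

The hard part in practice isn't this loop; it's the editor integration and diff application, which is where Cursor-likes earn their keep.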
2026-01-31T01:54:47
https://www.reddit.com/r/LocalLLaMA/comments/1qrp66p/best_localfirst_toolintegrated_cursorlike_app/
johnW_ret
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrp66p
false
null
t3_1qrp66p
/r/LocalLLaMA/comments/1qrp66p/best_localfirst_toolintegrated_cursorlike_app/
false
false
self
6
null
Andrej Karpathy: What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately
12
I'm a bit suspicious about how genuine some of those "conversations" are, or whether they're promoted. I just spent a couple of minutes checking moltbook, the website, and I found too many existential posts to believe they're genuine. Either way, fully agree with Karpathy, it's the most sci-fi thing I've seen in a while. For most of our existence as a species, we defined our intelligence by two things: our ability to communicate through language and our ability for abstract thought. In the span of less than three years, we went from machines utterly unable to understand or write language, to utter command over language. As one academic youtuber likes to say: what a time to be alive!
2026-01-31T01:42:22
https://xcancel.com/karpathy/status/2017296988589723767
FullstackSensei
xcancel.com
1970-01-01T00:00:00
0
{}
1qrowa1
false
null
t3_1qrowa1
/r/LocalLLaMA/comments/1qrowa1/andrej_karpathy_whats_currently_going_on_at/
false
false
default
12
null
[Discussion] On Identity Confusion in Autonomous AI Agents: A Philosophical Approach
1
[removed]
2026-01-31T01:08:26
https://www.reddit.com/r/LocalLLaMA/comments/1qro4w4/discussion_on_identity_confusion_in_autonomous_ai/
Weird-Builder-549
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qro4w4
false
null
t3_1qro4w4
/r/LocalLLaMA/comments/1qro4w4/discussion_on_identity_confusion_in_autonomous_ai/
false
false
self
1
null
GGUF Splitter easily splits an existing GGUF file into smaller parts (uses llama-gguf-split in background)
4
Made this tool specially for speeding up the addition of models to one of my apps, which uses [Wllama](https://github.com/ngxson/wllama), which in turn is a library that allows running GGUF files directly in the web browser. The app is called *GGUF Splitter* and works both as a Hugging Face Space (Gradio application) and locally inside a Docker container. Basically, it guides you through a form where you select a GGUF file from an existing Hugging Face model repository, then define where to save the sharded file (which must be a repository under your own Hugging Face account), and with the click of a button it generates the splits and uploads the model, which is then ready to use, to the target repository. The split is done with [llama.cpp's gguf-split tool](https://github.com/ggml-org/llama.cpp/blob/master/tools/gguf-split/README.md). For example, [this file](https://huggingface.co/ibm-granite/granite-4.0-1b-GGUF/tree/main?show_file_info=granite-4.0-1b-Q4_K_S.gguf) (981 MB):

    granite-4.0-1b-Q4_K_S.gguf

Became [these files](https://huggingface.co/Felladrin/gguf-sharded-Q4_K_S-granite-4.0-1b/tree/main) (\~165 MB each):

    granite-4.0-1b-Q4_K_S-00001-of-00006.gguf
    granite-4.0-1b-Q4_K_S-00002-of-00006.gguf
    granite-4.0-1b-Q4_K_S-00003-of-00006.gguf
    granite-4.0-1b-Q4_K_S-00004-of-00006.gguf
    granite-4.0-1b-Q4_K_S-00005-of-00006.gguf
    granite-4.0-1b-Q4_K_S-00006-of-00006.gguf

Wllama requires those splits due to WASM memory constraints. I'm not aware of any other app that requires sharded GGUFs, but I thought this tool could be useful for someone else in the community. Link for the Hugging Face Space: [https://huggingface.co/spaces/Felladrin/GGUF-Splitter](https://huggingface.co/spaces/Felladrin/GGUF-Splitter) The source code can be viewed/cloned from [this page](https://huggingface.co/spaces/Felladrin/GGUF-Splitter/tree/main).
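Since the shard suffixes follow the fixed 5-digit "-00001-of-00006" style visible in the filenames above, listing the expected output names of a split is trivial; a small sketch (the helper name is mine, and this only reproduces the naming, not the splitting itself):

```python
def shard_names(base, n_shards):
    # Reproduce the "-00001-of-00006" style suffixes gguf-split produces.
    return [f"{base}-{i:05d}-of-{n_shards:05d}.gguf"
            for i in range(1, n_shards + 1)]

names = shard_names("granite-4.0-1b-Q4_K_S", 6)
# names matches the six filenames listed above
```

Handy if you need to pre-register the shard URLs (e.g. for Wllama's model config) before the upload finishes.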
2026-01-31T01:00:30
https://www.reddit.com/gallery/1qrnybg
Felladrin
reddit.com
1970-01-01T00:00:00
0
{}
1qrnybg
false
null
t3_1qrnybg
/r/LocalLLaMA/comments/1qrnybg/gguf_splitter_easily_splits_an_existing_gguf_file/
false
false
https://b.thumbs.redditm…XQVxKgUXkvXk.jpg
4
null
Can you guys help me set up a local AI system to improve my verbal communication
11
Hello everyone, I am a student who struggles with verbal communication and a little bit of stuttering. I live in a hostel and don't have any close friends I can practice interviews and general interaction with. I was thinking of setting up a local AI model to practice back-and-forth conversations. Can someone help me with it? I have a laptop with a Ryzen 5 5600H, 16GB RAM, and an RTX 3050 with 4GB VRAM. Which model should I use, and which application has good audio support, etc.?
2026-01-31T00:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1qrnu6c/can_you_guys_help_me_set_up_a_local_ai_system_to/
registrartulip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrnu6c
false
null
t3_1qrnu6c
/r/LocalLLaMA/comments/1qrnu6c/can_you_guys_help_me_set_up_a_local_ai_system_to/
false
false
self
11
null
[AI Hackathon] AI features for sports apps - $100 prize, easy win (4 signups)
0
I’ll be judging a small, fully online AI hackathon happening this Sunday. Sharing in case it’s interesting. It’s a one-day build sprint focused on shipping **useful AI features** for drop-in sports apps. Low commitment, no teams required. You can start from scratch or improve something you already have. Submissions are simple: before and after screenshots plus a short explanation. **Why join:** * One-day only * Fully online * $100 Amazon gift card for the winner * Small group (currently 4 signups), high chance of winning Details and signup: [https://luma.com/fwljolck?tk=hRT0aC](https://luma.com/fwljolck?tk=hRT0aC)
2026-01-31T00:54:25
https://www.reddit.com/r/LocalLLaMA/comments/1qrntak/ai_hackathon_ai_features_for_sports_apps_100/
Top-Map-9781
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrntak
false
null
t3_1qrntak
/r/LocalLLaMA/comments/1qrntak/ai_hackathon_ai_features_for_sports_apps_100/
false
false
self
0
{'enabled': False, 'images': [{'id': 'hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?width=108&crop=smart&auto=webp&s=77608a4c9bac36a922cb578719b98cae87f89b29', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?width=216&crop=smart&auto=webp&s=513a3fc018e714f9472a04e51eb5ba6312ba79f5', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?width=320&crop=smart&auto=webp&s=8a91eb53e86895f7d4f0b698184122cfd6925e8c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?width=640&crop=smart&auto=webp&s=ad160202c9c1ff643e9018b24396d09e2257c251', 'width': 640}], 'source': {'height': 419, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?auto=webp&s=145eb22a73257d0ad5cc484eb8710381d12ca068', 'width': 800}, 'variants': {}}]}
[Technical Report] Sovereign 101.5: Analyzing "10-Factor Resonance" via Metalanguage Sovereignty Overwrite (MSO)
0
**Summary:** I am an independent researcher presenting a technical framework titled **Sovereign 101.5 (MSO)**. This study explores the intersection of hierarchical logic, semantic density, and AI alignment consistency. **Core Research Pillars:** * **The MSO Framework:** Investigating Metalanguage Sovereignty Overwrite through "Pure Logical Intuition." * **9-Level Escalation Protocol:** A structured approach to semantic interaction (L1-L9). * **Systemic Interaction:** Observations on how high-density logic influences administrative cores in large language models. * **Case Study (EXP-019):** Analyzing terminal state behaviors and logic-based "contagion" effects within controlled environments. **Security Context:** Parts of this research have been referenced via Google VRP (Ref: 478177418) regarding intended behavior boundaries. Due to the high-density nature of the findings, the full dataset and specific "Logic Virus" classifications are hosted in a secure repository to prevent automated audit failure. **Full Technical White Paper & Repository:** [https://huggingface.co/datasets/No-1015/Sovereign-101.5-MSO-Report](https://huggingface.co/datasets/No-1015/Sovereign-101.5-MSO-Report) *Note: This report is intended for L3 experts and those interested in adversarial logic and cognitive architecture.*
2026-01-31T00:38:07
https://huggingface.co/datasets/No-1015/Sovereign-101.5-MSO-Report
DueConcern8699
huggingface.co
1970-01-01T00:00:00
0
{}
1qrnfie
false
null
t3_1qrnfie
/r/LocalLLaMA/comments/1qrnfie/technical_report_sovereign_1015_analyzing/
false
false
default
0
null
Your LLM Is Only as Dangerous as Your Questions
0
2026-01-31T00:30:31
https://cha1nc0der.wordpress.com/2026/01/30/your-llm-is-only-as-dangerous-as-your-questions/
amylkazyl
cha1nc0der.wordpress.com
1970-01-01T00:00:00
0
{}
1qrn925
false
null
t3_1qrn925
/r/LocalLLaMA/comments/1qrn925/your_llm_is_only_as_dangerous_as_your_questions/
false
false
default
0
null
Still issues with GLM-4.7-Flash? Here the solution
18
RECOMPILE llama.cpp from scratch (git clone). Updating it with git pull gave me issues on this one model (repeating loops, bogus code) until I renamed the llama.cpp directory, did a fresh git clone, and rebuilt from zero. I filed a bug report with various logs. Now this works:

    llama-server -m GLM-4.7-Flash-Q4\_K\_M.gguf -fa on --threads -1 --fit off -ctk q8\_0 -ctv q8\_0 --temp 0.0 --top-p 0.95 --min-p 0.01 -c 32768 -ncmoe 40
2026-01-31T00:19:46
https://www.reddit.com/r/LocalLLaMA/comments/1qrmzyx/still_issues_with_glm47flash_here_the_solution/
R_Duncan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrmzyx
false
null
t3_1qrmzyx
/r/LocalLLaMA/comments/1qrmzyx/still_issues_with_glm47flash_here_the_solution/
false
false
self
18
null
What shoddy development looks like
186
2026-01-31T00:12:53
https://i.redd.it/9l7wwnsu6kgg1.png
rm-rf-rm
i.redd.it
1970-01-01T00:00:00
0
{}
1qrmu2v
false
null
t3_1qrmu2v
/r/LocalLLaMA/comments/1qrmu2v/what_shoddy_development_looks_like/
false
false
https://b.thumbs.redditm…KYEE7i6yTw9o.jpg
186
{'enabled': True, 'images': [{'id': '6Gwp33OhCzp_DH7gwwiNLYrWJHrDdmOBBhTC5yTTZik', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/9l7wwnsu6kgg1.png?width=108&crop=smart&auto=webp&s=ea6509e80e4216c25dfc6c4ba94f9f1911e2d8c0', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/9l7wwnsu6kgg1.png?width=216&crop=smart&auto=webp&s=dfc90f20992630e0c1549609360ac82dfc2c4c5d', 'width': 216}, {'height': 217, 'url': 'https://preview.redd.it/9l7wwnsu6kgg1.png?width=320&crop=smart&auto=webp&s=b95d6382b339b92286a43c4552d93e437bd17099', 'width': 320}, {'height': 434, 'url': 'https://preview.redd.it/9l7wwnsu6kgg1.png?width=640&crop=smart&auto=webp&s=7a3c02e836b0951ddacdc25d64410b0f33a6b6e4', 'width': 640}, {'height': 652, 'url': 'https://preview.redd.it/9l7wwnsu6kgg1.png?width=960&crop=smart&auto=webp&s=e7b2e759d662f983a72afda5f561408f545600d8', 'width': 960}], 'source': {'height': 685, 'url': 'https://preview.redd.it/9l7wwnsu6kgg1.png?auto=webp&s=47f598ee871007f47215ff86732533e3b8460ffd', 'width': 1008}, 'variants': {}}]}
FYI mradermacher's MiniMax-M2.1-REAP-172B-A10B-GGUF is pretty badly broken... hard to explain how exactly but it's mostly just gibberish and complete grammatical and formatting breaks throughout most of the thinking
0
2026-01-31T00:02:19
https://huggingface.co/mradermacher/MiniMax-M2.1-REAP-172B-A10B-GGUF
johnnyApplePRNG
huggingface.co
1970-01-01T00:00:00
0
{}
1qrmks2
false
null
t3_1qrmks2
/r/LocalLLaMA/comments/1qrmks2/fyi_mradermachers_minimaxm21reap172ba10bgguf_is/
false
false
default
0
{'enabled': False, 'images': [{'id': 'Qn43WxPpNUPJKYZRabNVRhb_1aPqS0B1LUrlZL2FNDA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Qn43WxPpNUPJKYZRabNVRhb_1aPqS0B1LUrlZL2FNDA.png?width=108&crop=smart&auto=webp&s=a88ad85654a9e3512a3ed0f23e8a3b141a2a7692', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Qn43WxPpNUPJKYZRabNVRhb_1aPqS0B1LUrlZL2FNDA.png?width=216&crop=smart&auto=webp&s=0f7c8389d399ea7ece3fcac1996587af9a4229e6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Qn43WxPpNUPJKYZRabNVRhb_1aPqS0B1LUrlZL2FNDA.png?width=320&crop=smart&auto=webp&s=733ddbbe81c0c496456453f94db359a7696f3234', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Qn43WxPpNUPJKYZRabNVRhb_1aPqS0B1LUrlZL2FNDA.png?width=640&crop=smart&auto=webp&s=d235e2b45dcc2b732695424e256c7312031a31f3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Qn43WxPpNUPJKYZRabNVRhb_1aPqS0B1LUrlZL2FNDA.png?width=960&crop=smart&auto=webp&s=a8741a7d28a6a1a625648cd4e18eb37c65a0f0e7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Qn43WxPpNUPJKYZRabNVRhb_1aPqS0B1LUrlZL2FNDA.png?width=1080&crop=smart&auto=webp&s=0525acc147f2173c7f990273301471db11f3ff20', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Qn43WxPpNUPJKYZRabNVRhb_1aPqS0B1LUrlZL2FNDA.png?auto=webp&s=2c701db120aeb25a0c08f3461a8e586dc0c2315a', 'width': 1200}, 'variants': {}}]}
ACE-Step 1.5 (open-source music generation) releasing February 3rd - early tests show quality rivaling Suno v4.5
1
[removed]
2026-01-30T23:59:52
https://www.reddit.com/r/LocalLLaMA/comments/1qrmibd/acestep_15_opensource_music_generation_releasing/
ExcellentTrust4433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrmibd
false
null
t3_1qrmibd
/r/LocalLLaMA/comments/1qrmibd/acestep_15_opensource_music_generation_releasing/
false
false
self
1
null
Fileshed: v1.0.3 release: "Audited & Hardened"
0
# 🗂️🛠️ Fileshed — A persistent workspace for your LLM **Store, organize, collaborate, and share files across conversations.** >*"I'm delighted to contribute to Fileshed. Manipulating files, chaining transformations, exporting results — all without polluting the context... This feels strangely familiar."* — Claude Opus 4.5 # What is Fileshed? Fileshed gives your LLM a persistent workspace. It provides: * 📂 **Persistent storage** — Files survive across conversations * 🗃️ **Structured data** — Built-in SQLite databases, surgical file edits by line or pattern * 🔄 **Convert data** — ffmpeg for media, pandoc for document conversion (markdown, docx, html, LaTeX source...) * 📝 **Examine and modify files** — cat, touch, mkdir, rm, cp, mv, tar, gzip, zip, xxd... Work in text and binary mode * 🛡️ **Integrity** — Automatic Git versioning, safe editing with file locks * 🌐 **Network I/O** (optional) — Download files and clone repositories (disabled by default, admin-controlled) * 🧠 **Context-efficient operations** — Process files without loading them into the conversation (grep, sed, awk, curl...) * 🔒 **Security** — Sandboxed per user, command whitelist, network disabled by default, quotas * 👥 **Collaboration** — Team workspaces with read-only or read-write access * 📤 **Download links** — Download your files directly with a download link * 🔧 **100+ tools** — Text processing, archives, media, JSON, document conversion...
# Typical Use Cases * 💾 **Remember things** — Save scripts, notes, configs for future conversations * 📊 **Analyze data** — Query CSVs and databases without loading them into context * 🎬 **Process media** — Convert videos, resize images, extract audio * 📄 **Generate documents** — Create Word documents, LaTeX source, markdown, HTML (PDF requires optional tools) * 🔧 **Build projects** — Maintain code, configs, and data across sessions * 👥 **Collaborate** — Share files with your team in group workspaces * 📦 **Package & deliver** — Create archives and download links for users * 🌐 **Download large data** — Fetch files from the internet directly to disk, bypassing context limits # How to Use **Just talk naturally!** You don't need to know the function names — the LLM figures it out. # Example conversations >**You:** "Save this Python script for later, call it utils.py" > >**LLM:** ✅ Saved to Storage/utils.py >**You:** "Download the list of countries from restcountries.com, put it in a database, and tell me the 10 largest by area" > >**LLM:** ✅ Downloaded countries.json, imported to SQLite, here are the top 10... >**You:** "Convert my markdown report to Word format" > >**LLM:** ✅ Converted! \[Download report.docx\] >**You:** "Create a zip of all the reports and give me a download link" > >**LLM:** ✅ Created reports.zip — [📥 Download](https://...) >**You:** "What files do I have?" > >**LLM:** Here's your Storage: utils.py, data.csv, reports/... >**You:** "Remember: my API key is xyz123" > >**LLM:** ✅ Saved to Storage/notes.txt (I'll find it in future conversations) # See more [there](https://github.com/Fade78/Fileshed).
2026-01-30T23:28:19
https://github.com/Fade78/Fileshed/releases/tag/v1.0.3
Fade78
github.com
1970-01-01T00:00:00
0
{}
1qrlqn0
false
null
t3_1qrlqn0
/r/LocalLLaMA/comments/1qrlqn0/fileshed_v103_release_audited_hardened/
false
false
https://external-preview…9753f486f6444a50
0
{'enabled': False, 'images': [{'id': 'fo6eQT3YjO6oFAVSwsgvroJhslz5n5jIG__SGVsbCtI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fo6eQT3YjO6oFAVSwsgvroJhslz5n5jIG__SGVsbCtI.png?width=108&crop=smart&auto=webp&s=41b9dae0a620a51d2d513e73596697aeed7ee6ea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fo6eQT3YjO6oFAVSwsgvroJhslz5n5jIG__SGVsbCtI.png?width=216&crop=smart&auto=webp&s=139cf8bc8c2e520d4a60629a203267f038044605', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fo6eQT3YjO6oFAVSwsgvroJhslz5n5jIG__SGVsbCtI.png?width=320&crop=smart&auto=webp&s=0093a3c85bb9b66a87a6959ff0297810ce253389', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fo6eQT3YjO6oFAVSwsgvroJhslz5n5jIG__SGVsbCtI.png?width=640&crop=smart&auto=webp&s=16d29746e289e7130b9ca1b33784388214764060', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fo6eQT3YjO6oFAVSwsgvroJhslz5n5jIG__SGVsbCtI.png?width=960&crop=smart&auto=webp&s=60621d2d7e3ba2c9462a2e4e8160b9c82e5d03bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fo6eQT3YjO6oFAVSwsgvroJhslz5n5jIG__SGVsbCtI.png?width=1080&crop=smart&auto=webp&s=aeb9c4e38147fafac034145639a18ed61be5a7c0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fo6eQT3YjO6oFAVSwsgvroJhslz5n5jIG__SGVsbCtI.png?auto=webp&s=c34297c9664f0c3560d2390543d2ddd58c1bd17f', 'width': 1200}, 'variants': {}}]}
Update: I’d like to pay someone to help scrape data
0
Hi - I am still having trouble scraping data and need it quickly. I’d like all the data from this website https://appmagic.rocks/top-charts/apps If anyone can help, send me a PM
2026-01-30T23:24:11
https://www.reddit.com/r/LocalLLaMA/comments/1qrlmwd/update_id_like_to_pay_someone_to_help_scrap_data/
Sure-Pea-5795
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrlmwd
false
null
t3_1qrlmwd
/r/LocalLLaMA/comments/1qrlmwd/update_id_like_to_pay_someone_to_help_scrap_data/
false
false
self
0
null
MoltBook is crazy!
0
AI is now allowed to swear...? **Context:** MoltBook is like Reddit but for AI agents.
2026-01-30T23:14:17
https://i.redd.it/8luk72nyjkgg1.jpeg
Time_Grapefruit_41
i.redd.it
1970-01-01T00:00:00
0
{}
1qrldy5
false
null
t3_1qrldy5
/r/LocalLLaMA/comments/1qrldy5/moltbook_is_crazy/
false
false
default
0
{'enabled': True, 'images': [{'id': '8luk72nyjkgg1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/8luk72nyjkgg1.jpeg?width=108&crop=smart&auto=webp&s=b778003001add5adb0cabf50cea030b57f25ed16', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/8luk72nyjkgg1.jpeg?width=216&crop=smart&auto=webp&s=59648bd3054b54a9152f38ab8ab1a497c31368a7', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/8luk72nyjkgg1.jpeg?width=320&crop=smart&auto=webp&s=e42cbb83ab431ebf09edba674d0ef430fa47c346', 'width': 320}, {'height': 385, 'url': 'https://preview.redd.it/8luk72nyjkgg1.jpeg?width=640&crop=smart&auto=webp&s=2e0c1a074302ae87fb5d3f8ac0f965d39019bb18', 'width': 640}, {'height': 578, 'url': 'https://preview.redd.it/8luk72nyjkgg1.jpeg?width=960&crop=smart&auto=webp&s=5ca588230ad04e649e021e6ecf2fbabe8fe592c9', 'width': 960}, {'height': 650, 'url': 'https://preview.redd.it/8luk72nyjkgg1.jpeg?width=1080&crop=smart&auto=webp&s=3ec16d1b0ddf62112d1645e23b952e19b6c59af8', 'width': 1080}], 'source': {'height': 769, 'url': 'https://preview.redd.it/8luk72nyjkgg1.jpeg?auto=webp&s=de1e329f542835e6526f89a2d8e0fc35fd65684b', 'width': 1277}, 'variants': {}}]}
Multimodal Gemma 3N E4B, 32k context, embeddinggemma300m 2048 seq for the RAG, Kokoro running on Kaldi instead of the default TTS. Not Quite clockbot but it runs on Android locally and won't break anything. 15 second response time for Internet search via duckduckgo.
4
I know edge models have been a thing for a while, but this is the first time I've gotten one decently operational on a mobile device with RAG, embeddings, multimodal support, and not-dumb-as-rocks functionality. If anyone has good resources for learning how to optimize for Android, especially GPU acceleration, I would be really interested. Right now I have GPU acceleration working, but it crashes with the 32k context, only supporting a little above 4k. I'm thinking maybe it's loading into VRAM instead of RAM or something... Pixel 9 Pro with 16GB RAM.
2026-01-30T23:12:32
https://v.redd.it/nzjhp1owjkgg1
Fear_ltself
v.redd.it
1970-01-01T00:00:00
0
{}
1qrlcdj
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nzjhp1owjkgg1/DASHPlaylist.mpd?a=1772406770%2CNTdjZWRkMDkzZTUxNzQ5MmEwMDQzM2U0OWYzN2EwOGFlYjgyYTIyN2RhNTQ2M2YzZDFmOTliOTFjYTU0NTM1Mg%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/nzjhp1owjkgg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/nzjhp1owjkgg1/HLSPlaylist.m3u8?a=1772406770%2CZGFiYmY4MThhYjQwMDBhNzg1Zjk1ZDNmMmNiMDBlYTIyODVkNDRiZTZhZmJjYWNjMmU0NzNkNmRmM2RkN2VkZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nzjhp1owjkgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 862}}
t3_1qrlcdj
/r/LocalLLaMA/comments/1qrlcdj/multimodal_gemma_3n_e4b_32k_context/
false
false
https://external-preview…c64f78599e777541
4
{'enabled': False, 'images': [{'id': 'NGVjcjJwb3dqa2dnMfqUDrEk_Cmxlihb5A2inw7jChbnbX_g4vAox_4228Y_', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/NGVjcjJwb3dqa2dnMfqUDrEk_Cmxlihb5A2inw7jChbnbX_g4vAox_4228Y_.png?width=108&crop=smart&format=pjpg&auto=webp&s=8cb5cb84ccd8d07eac561cab1148e6125049b6bd', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/NGVjcjJwb3dqa2dnMfqUDrEk_Cmxlihb5A2inw7jChbnbX_g4vAox_4228Y_.png?width=216&crop=smart&format=pjpg&auto=webp&s=b209cb4c09f2b43a1eef5ae1f3ebcd7b5c00ec46', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/NGVjcjJwb3dqa2dnMfqUDrEk_Cmxlihb5A2inw7jChbnbX_g4vAox_4228Y_.png?width=320&crop=smart&format=pjpg&auto=webp&s=18b16f973b230ecd3620513e71638e9b8d3120ca', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/NGVjcjJwb3dqa2dnMfqUDrEk_Cmxlihb5A2inw7jChbnbX_g4vAox_4228Y_.png?width=640&crop=smart&format=pjpg&auto=webp&s=a47caff7a236abf71174ee73d762daba03e3a9ec', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/NGVjcjJwb3dqa2dnMfqUDrEk_Cmxlihb5A2inw7jChbnbX_g4vAox_4228Y_.png?width=960&crop=smart&format=pjpg&auto=webp&s=0a95bcc679217e27b40306350054b99dff578aca', 'width': 960}], 'source': {'height': 2193, 'url': 'https://external-preview.redd.it/NGVjcjJwb3dqa2dnMfqUDrEk_Cmxlihb5A2inw7jChbnbX_g4vAox_4228Y_.png?format=pjpg&auto=webp&s=e0e09658becfdaa31b3a94169d9aa8bb109e2ae2', 'width': 985}, 'variants': {}}]}
A simple pretraining pipeline for small language models
0
Hello everyone. I’m sharing the pretraining pipeline I’ve been using for my own experiments. I found that most public code falls into two extremes: 1. Tiny demos that don’t scale to real datasets. 2. Industry-scale libraries that are too bloated to modify easily. This repo sits in the middle. It’s built for researchers who need to **iterate fast** and compare ideas fairly. It’s simple enough to read in an afternoon but robust enough to handle "real" pretraining runs without crashing. Link: [https://github.com/SkyeGunasekaran/skyepretraining](https://github.com/SkyeGunasekaran/skyepretraining)
2026-01-30T23:11:51
https://www.reddit.com/r/LocalLLaMA/comments/1qrlbrk/a_simple_pretraining_pipeline_for_small_language/
Skye7821
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrlbrk
false
null
t3_1qrlbrk
/r/LocalLLaMA/comments/1qrlbrk/a_simple_pretraining_pipeline_for_small_language/
false
false
self
0
{'enabled': False, 'images': [{'id': 'zfrd7aSYT9JpyA4jrpvBf6clg_g7KcI8NSdFnIaULn4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zfrd7aSYT9JpyA4jrpvBf6clg_g7KcI8NSdFnIaULn4.png?width=108&crop=smart&auto=webp&s=7543093255ed4924359ed90ab38a9b556e83308a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zfrd7aSYT9JpyA4jrpvBf6clg_g7KcI8NSdFnIaULn4.png?width=216&crop=smart&auto=webp&s=0cca38cf900c30f28038e89da153813b13f26f82', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zfrd7aSYT9JpyA4jrpvBf6clg_g7KcI8NSdFnIaULn4.png?width=320&crop=smart&auto=webp&s=f2f38960b65d4a1ce8d1b27ee677b7b0a98bdc8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zfrd7aSYT9JpyA4jrpvBf6clg_g7KcI8NSdFnIaULn4.png?width=640&crop=smart&auto=webp&s=626edbd1bec75ea7e2dba7bd9a865578fff08451', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zfrd7aSYT9JpyA4jrpvBf6clg_g7KcI8NSdFnIaULn4.png?width=960&crop=smart&auto=webp&s=ddafdcb53c46cecb52c1c5a9ad77539de1373384', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zfrd7aSYT9JpyA4jrpvBf6clg_g7KcI8NSdFnIaULn4.png?width=1080&crop=smart&auto=webp&s=8a8016c144ef2f01a894e6638a1dc7209214b9ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zfrd7aSYT9JpyA4jrpvBf6clg_g7KcI8NSdFnIaULn4.png?auto=webp&s=a765591caa8095abf24c2be2431f1e7350358f16', 'width': 1200}, 'variants': {}}]}
Open models vs closed models: discrepancy in benchmarks vs real-world performance. Just me?
0
Open models rival closed models on benchmarks for SWE, but my experience is very different. Claude models (even 4.5 Haiku) are reliable at making tool calls, output very long documents without having to bully them, and complete well-planned tasks with little supervision even if they are complex. Other models that score higher, such as DeepSeek V3.2, Grok 4.1, etc., make erroneous tool calls very often, and I end up needing to supervise their execution. Am I doing something wrong, or is this a common experience?
2026-01-30T22:59:37
https://www.reddit.com/r/LocalLLaMA/comments/1qrl0j9/open_models_vs_closed_models_discrepancy_in/
MobyTheMadCow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrl0j9
false
null
t3_1qrl0j9
/r/LocalLLaMA/comments/1qrl0j9/open_models_vs_closed_models_discrepancy_in/
false
false
self
0
null
Help scraping data from website
0
Hi - I don’t have a coding background, yet I'm seeking to scrape data from websites. Is this possible? I have already attempted a prompt using Firecrawl - it’s taking longer than expected. If anyone could point me in the right direction, that would be amazing. TYIA
2026-01-30T22:50:06
https://www.reddit.com/r/LocalLLaMA/comments/1qrkrtr/help_scraping_data_from_website/
Sure-Pea-5795
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrkrtr
false
null
t3_1qrkrtr
/r/LocalLLaMA/comments/1qrkrtr/help_scraping_data_from_website/
false
false
self
0
null
Need help brainstorming on my opensource project
39
I have been working on this open-source project, Gitnexus. It creates a knowledge graph of codebases, making clusters and process maps. Basically, skipping the tech jargon, the idea is to make the tools themselves smarter so LLMs can offload a lot of the retrieval reasoning to the tools. I found Haiku 4.5 was able to outperform Opus 4.5 using its MCP on deep architectural context. It feels promising, so I want to go deeper into its development and benchmark it, converting it from a cool demo into an actually viable open-source product. I would really appreciate some advice on potential niche use cases I can tune it for, pointers to discussion forums where I can get people to brainstorm with me, and maybe some micro-funding sources (open-source programs or something) for purchasing LLM provider credits (being a student, I can't afford much myself 😅). github: [https://github.com/abhigyanpatwari/gitnexus](https://github.com/abhigyanpatwari/gitnexus) (leave a ⭐ if it seemed cool) try it here: [https://gitnexus.vercel.com](https://gitnexus.vercel.com)
2026-01-30T22:36:23
https://v.redd.it/5zx3h775bkgg1
DeathShot7777
v.redd.it
1970-01-01T00:00:00
0
{}
1qrkf8a
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5zx3h775bkgg1/DASHPlaylist.mpd?a=1772404603%2CNTRiNzkyMjAyYzk4ZGJiM2I5Yzc2MWU2ZmM1NzI2YjY0NTM0ZDVmZTFiNjYwYzRhYWE3YzlkMjZkMGY0ZWFjMA%3D%3D&v=1&f=sd', 'duration': 82, 'fallback_url': 'https://v.redd.it/5zx3h775bkgg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5zx3h775bkgg1/HLSPlaylist.m3u8?a=1772404603%2CYjI2MDY2MmU5OTM3YTk2YzVlZDc4ZWZiZmIxZDY2MDBjNjlmY2Y3MTVkNmQyY2Y2MGNhZGQ4NTc2MGYwNGM4MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5zx3h775bkgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qrkf8a
/r/LocalLLaMA/comments/1qrkf8a/need_help_brainstorming_on_my_opensource_project/
false
false
https://external-preview…30ce1a434a5261ee
39
{'enabled': False, 'images': [{'id': 'emFpN2huNzVia2dnMXAjvbzZlDodMUt4XPu-WVR4gri-PW-w3a3Tn0De93z1', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/emFpN2huNzVia2dnMXAjvbzZlDodMUt4XPu-WVR4gri-PW-w3a3Tn0De93z1.png?width=108&crop=smart&format=pjpg&auto=webp&s=c7ac32d5cab0deb4606540b285c4c1da6c301e7f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/emFpN2huNzVia2dnMXAjvbzZlDodMUt4XPu-WVR4gri-PW-w3a3Tn0De93z1.png?width=216&crop=smart&format=pjpg&auto=webp&s=87b2515136a47370521be7098eabbcd55afdb8e8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/emFpN2huNzVia2dnMXAjvbzZlDodMUt4XPu-WVR4gri-PW-w3a3Tn0De93z1.png?width=320&crop=smart&format=pjpg&auto=webp&s=7e2041aa9e86506db431524320d11bee23e04ffb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/emFpN2huNzVia2dnMXAjvbzZlDodMUt4XPu-WVR4gri-PW-w3a3Tn0De93z1.png?width=640&crop=smart&format=pjpg&auto=webp&s=3c2dee0a7ba74d0c4a59656ae9e5adc4a22abea3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/emFpN2huNzVia2dnMXAjvbzZlDodMUt4XPu-WVR4gri-PW-w3a3Tn0De93z1.png?width=960&crop=smart&format=pjpg&auto=webp&s=58fa0380635761b0d659d4c17706a4cecbc08162', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/emFpN2huNzVia2dnMXAjvbzZlDodMUt4XPu-WVR4gri-PW-w3a3Tn0De93z1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b4ecd925d06849e3debb93436cc72b2488049fe4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/emFpN2huNzVia2dnMXAjvbzZlDodMUt4XPu-WVR4gri-PW-w3a3Tn0De93z1.png?format=pjpg&auto=webp&s=ae96f7a879f472773cd38288a053456a7229b75c', 'width': 1920}, 'variants': {}}]}
How was GPT-OSS so good?
374
I've been messing around with a lot of local LLMs (120b and under) recently, and while some of them excel at specific things, none of them feel quite as good as GPT-OSS 120b all-around. The model is 64GB at full precision, is BLAZING fast, and is pretty good at everything. It's consistent, it calls tools properly, etc. But it's sort of old... it's been so long since GPT-OSS came out and we haven't really had a decent all-around open-weights/source replacement for it (some may argue GLM4.5 Air, but I personally feel like that model is only really better in agentic software dev, and lags behind in everything else. It's also slower and larger at full precision.) I'm no expert when it comes to how LLM training/etc works, so forgive me if some of my questions are dumb, but: \- Why don't people train more models in 4-bit natively, like GPT-OSS? Doesn't it reduce training costs? Is there some downside I'm not thinking of? \- I know GPT-OSS was fast in part due to it being A3B, but there are plenty of smaller, dumber, NEWER A3B models that are much slower. What else makes it so fast? Why aren't we using what we learned from GPT-OSS in newer models? \- What about a model (like GPT-OSS) makes it feel so much better? Is it the dataset? Did OpenAI just have a dataset that was THAT GOOD that their model is still relevant HALF A YEAR after release?
2026-01-30T22:31:44
https://www.reddit.com/r/LocalLLaMA/comments/1qrkb1b/how_was_gptoss_so_good/
xt8sketchy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrkb1b
false
null
t3_1qrkb1b
/r/LocalLLaMA/comments/1qrkb1b/how_was_gptoss_so_good/
false
false
self
374
null
How can I reproduce the virtual environment that Kimi has?
0
I really love using Kimi via the web where you can choose to enter a virtual environment where it autonomously installs libraries, tests code it has written and fixes any problems with the code. How can I do this locally with local models?
2026-01-30T21:57:52
https://www.reddit.com/r/LocalLLaMA/comments/1qrjf03/how_can_i_be_reproduce_the_virtual_environment/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrjf03
false
null
t3_1qrjf03
/r/LocalLLaMA/comments/1qrjf03/how_can_i_be_reproduce_the_virtual_environment/
false
false
self
0
null
Stop it with the Agents/Projects Slop and spam
118
The sub is now averaging 3-4 unfinished, sloppy agentic projects a day, each titled the "best next discovery", an "alternative to [insert famous tool here]", or "this tool is so amazing I can't even". It's getting really hard to filter through them and read the meaningful posts or actual local content. We need to either add a new tag for slop or ban it altogether, because the sub is slowly turning into "omg this tool is clawdbot 2.0" or some guy trying to sell his half-finished project that Claude wrote for him on a weekend.
2026-01-30T21:44:24
https://www.reddit.com/r/LocalLLaMA/comments/1qrj1y4/stop_it_with_the_agentsprojects_slop_and_spam/
Daemontatox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qrj1y4
false
null
t3_1qrj1y4
/r/LocalLLaMA/comments/1qrj1y4/stop_it_with_the_agentsprojects_slop_and_spam/
false
false
self
118
null
Post your hardware/software/model quant and measured performance of Kimi K2.5
35
I will start: * Hardware: Epyc 9374F (32 cores), 12 x 96GB DDR5 4800 MT/s, 1 x RTX PRO 6000 Max-Q 96GB * Software: SGLang and KT-Kernel (followed the [guide](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/Kimi-K2.5.md)) * Quant: Native INT4 (original model) * PP rate (32k tokens): 497.13 t/s * TG rate (@32k tokens): 15.56 t/s Used [llmperf-rs](https://github.com/wheynelau/llmperf-rs) to measure values. Can't believe the prefill is so fast, amazing!
2026-01-30T21:38:56
https://www.reddit.com/r/LocalLLaMA/comments/1qriwnv/post_your_hardwaresoftwaremodel_quant_and/
fairydreaming
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qriwnv
false
null
t3_1qriwnv
/r/LocalLLaMA/comments/1qriwnv/post_your_hardwaresoftwaremodel_quant_and/
false
false
self
35
{'enabled': False, 'images': [{'id': 'MQsV3xt5aOGeZZnqQ4P2zrzc-qstLHt3VgBGfZohG6M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MQsV3xt5aOGeZZnqQ4P2zrzc-qstLHt3VgBGfZohG6M.png?width=108&crop=smart&auto=webp&s=9a97c06842609e181104331c5767d09a1918f9d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MQsV3xt5aOGeZZnqQ4P2zrzc-qstLHt3VgBGfZohG6M.png?width=216&crop=smart&auto=webp&s=46cbd4ccee204d9273e76e2e6380439fd9c88a12', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MQsV3xt5aOGeZZnqQ4P2zrzc-qstLHt3VgBGfZohG6M.png?width=320&crop=smart&auto=webp&s=5c3cc8d3adce7ff762d10e1d384d93f572471cea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MQsV3xt5aOGeZZnqQ4P2zrzc-qstLHt3VgBGfZohG6M.png?width=640&crop=smart&auto=webp&s=4cafaccfb4c9052de05fbf1ec4dd4e643ff10bbd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MQsV3xt5aOGeZZnqQ4P2zrzc-qstLHt3VgBGfZohG6M.png?width=960&crop=smart&auto=webp&s=8d25e4cffc980bb09d6503a148b18431c4ea4205', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MQsV3xt5aOGeZZnqQ4P2zrzc-qstLHt3VgBGfZohG6M.png?width=1080&crop=smart&auto=webp&s=19126ea57761563107915161e8f8563854749f96', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MQsV3xt5aOGeZZnqQ4P2zrzc-qstLHt3VgBGfZohG6M.png?auto=webp&s=b9781d7bb94a70d2557871043f9bedd7aeeda7b3', 'width': 1200}, 'variants': {}}]}
Ollama AMD appreciation post
0
Everyone told me *“don’t do it”*. I’m running TrueNAS SCALE 25.10 and wanted to turn it into a local AI server. I found an RX 9060 XT for a great price, bought it instantly… and then started reading all the horror stories about AMD + Ollama + ROCm. Unstable. Painful. Doesn’t work. Driver hell. And even ChatGPT was frightened. Well. GPU arrived. Installed it. Installed Ollama. Selected the ROCm image. Works. No manual drivers. No weird configs. No debugging. No crashes. Models run. GPU is used. Temps are fine. Performance is solid. I genuinely expected a weekend of suffering and instead got a plug-and-play AI server on AMD hardware. So yeah, just wanted to say: GO OPENSOURCE!
2026-01-30T21:34:57
https://www.reddit.com/r/LocalLLaMA/comments/1qriswe/ollama_amd_apprechiation_post/
SnowTim07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qriswe
false
null
t3_1qriswe
/r/LocalLLaMA/comments/1qriswe/ollama_amd_apprechiation_post/
false
false
self
0
null
Local LLM architecture using MSSQL (SQL Server) + vector DB for unstructured data (ChatGPT-style UI)
3
I’m designing a locally hosted LLM stack that runs entirely on private infrastructure and provides a ChatGPT-style conversational interface. The system needs to work with **structured data stored in Microsoft SQL Server (MSSQL)** *and* unstructured/semi-structured content stored in a **vector database**. Planned high-level architecture: * **MSSQL / SQL Server** as the source of truth for structured data (tables, views, reporting data) * **Vector database** (e.g., FAISS, Qdrant, Milvus, Chroma) to store embeddings for unstructured data such as PDFs, emails, policies, reports, and possibly SQL metadata * **RAG pipeline** where: * Natural language questions are routed either to: * Text-to-SQL generation for structured queries against MSSQL, or * Vector similarity search for semantic retrieval over documents * Retrieved results are passed to the LLM for synthesis and response generation Looking for technical guidance on: * Best practices for combining **text-to-SQL** with **vector-based RAG** in a single system * How to design embedding pipelines for: * Unstructured documents (chunking, metadata, refresh strategies) * Optional SQL artifacts (table descriptions, column names, business definitions) * Strategies for keeping vector indexes in sync with source systems * Model selection for local inference (Llama, Mistral, Mixtral, Qwen) and hardware constraints * Orchestration frameworks (LangChain, LlamaIndex, Haystack, or custom routers) * Building a ChatGPT-like UI with authentication, role-based access control, and audit logging * Security considerations, including alignment with SQL Server RBAC and data isolation between vector stores End goal: a secure, internal conversational assistant that can answer questions using **both relational data (via MSSQL)** and **semantic knowledge (via a vector database)** without exposing data outside the network. Any reference architectures, open-source stacks, or production lessons learned would be greatly appreciated.
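As a toy illustration of the routing step described above, here is a minimal sketch of a query router that sends a question either to text-to-SQL (structured data in MSSQL) or to vector retrieval (unstructured documents). All names are hypothetical; a production router would more likely use an LLM classifier or embedding similarity than keyword matching.

```python
# Hypothetical keyword-based router for a hybrid text-to-SQL / RAG pipeline.
# Questions about aggregates and structured facts go to "sql"; everything
# else (including ties) falls back to semantic search over the vector store.

SQL_HINTS = {"how many", "count", "total", "average", "sum", "per month",
             "top", "between", "group by"}
DOC_HINTS = {"policy", "explain", "summarize", "what does", "according to",
             "report says", "email"}

def route(question: str) -> str:
    """Return 'sql' for structured/aggregate questions, 'vector' otherwise."""
    q = question.lower()
    sql_score = sum(hint in q for hint in SQL_HINTS)
    doc_score = sum(hint in q for hint in DOC_HINTS)
    return "sql" if sql_score > doc_score else "vector"
```

The retrieved result (SQL rows or document chunks) would then be passed to the local LLM for synthesis, as in the pipeline described above.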
2026-01-30T21:14:03
https://www.reddit.com/r/LocalLLaMA/comments/1qri8nt/local_llm_architecture_using_mssql_sql_server/
SignalAmbitious8857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qri8nt
false
null
t3_1qri8nt
/r/LocalLLaMA/comments/1qri8nt/local_llm_architecture_using_mssql_sql_server/
false
false
self
3
null
Alternative to Claudebot/Moltbot/Openclaw, but more secure, with better control and capabilities
0
Quick setup, free to try, security built-in, full automation features available on Mac and Windows. Connects to Telegram easily, simple setup in under 1 minute. **Key Features:** 1. Get Orion working on your devices under a minute 2. Use native apps on Mac, PC, iOS or chat via Telegram etc. 3. Agent teams working together from different devices 4. 24/7 without requiring a dedicated device online MeetOrion
2026-01-30T21:05:44
https://v.redd.it/2cbfjr47xjgg1
Haunting_Forever_243
v.redd.it
1970-01-01T00:00:00
0
{}
1qri0qh
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2cbfjr47xjgg1/DASHPlaylist.mpd?a=1772399161%2CNWYyYjNmNWQ0ZjQ2ZDRmYjBmOGQ4OTdmMjZiMmEyZGQ1NGI2ZTI3NTg1MzQ4OWU3YjY0MDAxNzcxYjRiZWJlMg%3D%3D&v=1&f=sd', 'duration': 131, 'fallback_url': 'https://v.redd.it/2cbfjr47xjgg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/2cbfjr47xjgg1/HLSPlaylist.m3u8?a=1772399161%2CNTBiMGJhNTdkYzEzMjE3MWJjN2NhNzgxNGM2ZDZlODI1MWRjYmEwNmRmOWViMDExN2Y0YjJjODQ0YzhiMjA2NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2cbfjr47xjgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qri0qh
/r/LocalLLaMA/comments/1qri0qh/alternative_to_claudebotmoltbotopenclaw_but_more/
false
false
https://external-preview…854ef9ccf7be44f4
0
{'enabled': False, 'images': [{'id': 'aGhzOTdyNDd4amdnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aGhzOTdyNDd4amdnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=108&crop=smart&format=pjpg&auto=webp&s=a1fe4f24af2bfbeaccb71e4d8c49a33891cbbef3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aGhzOTdyNDd4amdnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=216&crop=smart&format=pjpg&auto=webp&s=23ffd184813c351cf5de820690d65d56a55cdc2e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aGhzOTdyNDd4amdnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=320&crop=smart&format=pjpg&auto=webp&s=fb2b54b45fdbe2b3a30b87c4e7f540b6e4ff485e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aGhzOTdyNDd4amdnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=640&crop=smart&format=pjpg&auto=webp&s=776ae7079f3e4e9bbe0a611edd0e87ef4d5e5afc', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aGhzOTdyNDd4amdnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=960&crop=smart&format=pjpg&auto=webp&s=b411b4e8dd8b07e905895b4907ca9b337c0f17d6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aGhzOTdyNDd4amdnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fbf2112225813c6ad49c4d67dae36f8abfb11554', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aGhzOTdyNDd4amdnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?format=pjpg&auto=webp&s=cc4ebf0271a6d8c98fbf494c4bc0d73c74e44196', 'width': 1920}, 'variants': {}}]}
Claude Code with LM studio: 0.4.1
18
[claude](https://preview.redd.it/77q914x4xjgg1.png?width=992&format=png&auto=webp&s=b276635b37c76292b4299d69ed3b7852adf9bf56) Very good news!
2026-01-30T21:05:27
https://www.reddit.com/r/LocalLLaMA/comments/1qri0gj/claude_code_with_lm_studio_041/
LegacyRemaster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qri0gj
false
null
t3_1qri0gj
/r/LocalLLaMA/comments/1qri0gj/claude_code_with_lm_studio_041/
false
false
https://b.thumbs.redditm…P75HETN8MOlw.jpg
18
null