Dataset schema (field: type, range/classes):

title: string, lengths 1–300
score: int64, 0–8.54k
selftext: string, lengths 0–41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, lengths 0–878
author: string, lengths 3–20
domain: string, lengths 0–82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, 0–2
gildings: string, 7 classes
id: string, lengths 7–7
locked: bool, 2 classes
media: string, lengths 646–1.8k
name: string, lengths 10–10
permalink: string, lengths 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, lengths 4–213
ups: int64, 0–8.54k
preview: string, lengths 301–5.01k
I proposed a "Decentralized Grokking" protocol (D.R.E.S.) based on Weight Decay, and Grok (xAI) just validated the mechanistic approach.
0
Hi everyone, I'm an independent researcher. I've been working on a lightweight architecture called **C.L.E.A.R.** that uses extreme weight decay to force 'grokking' on small networks. The goal is to extract pure 'Logical Kernels' (<100KB) that are deterministic (no hallucinations). Yesterday, Grok (xAI) noticed the repo and validated the mechanistic clarity of the 'Logic Sieve' approach.

**I just published the White Paper (v1.1) proposing the D.R.E.S. Protocol:** instead of training massive centralized models, D.R.E.S. proposes that user devices locally 'grok' missing logical rules and share these tiny kernels (<100KB) to a federated library.

https://preview.redd.it/ng41havohcjg1.png?width=1366&format=png&auto=webp&s=19fa980eb15e4e5347464dad70e744c5574c2d59

**The Paper (Zenodo):**

> **GitHub:** [https://github.com/Maxenonyme/C.L.E.A.R](https://github.com/Maxenonyme/C.L.E.A.R)

Would love to hear your thoughts on federated logic extraction vs. giant monolithic models.
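The federated-library half of D.R.E.S. can be sketched as a registry with a size gate on published kernels. This is a toy illustration of the idea described above, not code from the repo; `KernelLibrary` and its methods are hypothetical names.

```python
class KernelLibrary:
    """Toy federated library: devices publish small 'logical kernels'
    (serialized rule weights) keyed by the task they solve."""
    MAX_KERNEL_BYTES = 100 * 1024  # the <100KB constraint from the post

    def __init__(self):
        self._kernels = {}

    def publish(self, task_id: str, kernel_bytes: bytes) -> bool:
        # Reject any kernel that exceeds the size budget.
        if len(kernel_bytes) > self.MAX_KERNEL_BYTES:
            return False
        self._kernels[task_id] = kernel_bytes
        return True

    def fetch(self, task_id: str):
        return self._kernels.get(task_id)

lib = KernelLibrary()
ok = lib.publish("parity_rule", b"\x00" * 512)          # tiny kernel: accepted
too_big = lib.publish("huge", b"\x00" * (200 * 1024))   # over budget: rejected
```

The interesting open question (which the post gestures at) is how peers verify a fetched kernel before trusting it; the size gate alone says nothing about correctness.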
2026-02-13T23:18:36
https://www.reddit.com/r/LocalLLaMA/comments/1r44ep7/i_proposed_a_decentralized_grokking_protocol_dres/
UnderstandingAway139
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r44ep7
false
null
t3_1r44ep7
/r/LocalLLaMA/comments/1r44ep7/i_proposed_a_decentralized_grokking_protocol_dres/
false
false
https://external-preview…88a347dd3853c682
0
null
Hands-on CTF for prompt injection using Llama 3 70B: 5 exploitation techniques
1
[removed]
2026-02-13T23:18:12
https://i.redd.it/zaw1fgkmhcjg1.png
Sad_Professor_518
i.redd.it
1970-01-01T00:00:00
0
{}
1r44edq
false
null
t3_1r44edq
/r/LocalLLaMA/comments/1r44edq/handson_ctf_for_prompt_injection_using_llama_3/
false
false
https://preview.redd.it/…b42906c765ce48c1
1
{'enabled': True, 'images': [{'id': 'zaw1fgkmhcjg1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/zaw1fgkmhcjg1.png?width=108&crop=smart&auto=webp&s=7096510e93d10195fe5398c0f5880675787c38fb', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/zaw1fgkmhcjg1.png?width=216&crop=smart&auto=webp&s=7ddd69dcfa6d919357e04e8acca366db741c0504', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/zaw1fgkmhcjg1.png?width=320&crop=smart&auto=webp&s=e2ad997bc33b39abc5cc877724284ee27f1a7c07', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/zaw1fgkmhcjg1.png?width=640&crop=smart&auto=webp&s=0e7ceb024045ef3de26e69812f6ff843a1d99104', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/zaw1fgkmhcjg1.png?width=960&crop=smart&auto=webp&s=23b06287414df9a669b78789f10d94553500e915', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/zaw1fgkmhcjg1.png?auto=webp&s=06ac8d3f3ad6d6c662d3ef3e433c8942df65c194', 'width': 1024}, 'variants': {}}]}
Show r/LocalLLaMA: Clonar – A Node.js Agentic RAG pipeline with 8-stage multihop reasoning
0
I’ve been working on a native TypeScript/Node.js orchestrator because I found most RAG implementations too "one-shot" for complex queries. I just open-sourced **Clonar**. Instead of a simple vector search, it uses an 8-stage reasoning loop:

1. Query Rewrite
2. Clarification Gate (asks the user if intent is vague)
3. Filter Extraction
4. Grounding Decision
5. Dynamic Retrieval Planning
6. Execution & Merging
7. Quality-Aware Synthesis
8. Deep-Mode Critique (optional 2nd-pass refinement)

It handles cross-turn reasoning and is optimized for production Node environments (SSE streaming, Redis sessions).

**I'm looking for feedback on two things:**

1. Is 8 stages too heavy for a real-time system, or is it necessary for accuracy?
2. What vector DB would you prioritize for the 'Internal Knowledge' hop?

Repo: [https://github.com/clonar714-jpg/clonar](https://github.com/clonar714-jpg/clonar)
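The staged-loop shape is easy to sketch: each stage transforms a shared state, and the clarification gate can short-circuit the rest. A minimal sketch (in Python rather than the repo's TypeScript; stage names follow the list above, everything else is invented for illustration):

```python
def query_rewrite(state):
    # Stage 1: normalize the raw query.
    state["query"] = state["query"].strip()
    return state

def clarification_gate(state):
    # Stage 2: flag vague intent instead of guessing; the caller
    # would ask the user a follow-up question and re-enter the loop.
    if len(state["query"].split()) < 2:
        state["needs_clarification"] = True
    return state

def run_pipeline(query, stages):
    state = {"query": query, "needs_clarification": False}
    for stage in stages:
        state = stage(state)
        if state["needs_clarification"]:
            break  # short-circuit: don't retrieve against a vague query
    return state

stages = [query_rewrite, clarification_gate]  # stages 3-8 elided
result = run_pipeline("  best vector db for internal knowledge  ", stages)
vague = run_pipeline("  db  ", stages)
```

On the "8 stages too heavy" question: the gate pattern means the expensive stages only run for queries that pass the cheap checks, which keeps the common path fast.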
2026-02-13T22:58:33
https://www.reddit.com/r/LocalLLaMA/comments/1r43xfh/show_rlocalllama_clonar_a_nodejs_agentic_rag/
Additional-Song4826
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r43xfh
false
null
t3_1r43xfh
/r/LocalLLaMA/comments/1r43xfh/show_rlocalllama_clonar_a_nodejs_agentic_rag/
false
false
self
0
null
Architectural Advice: Transitioning from a Centralized Registry to a DHT?
1
I'm building Pilot Protocol (an overlay for agents). Currently, I use a centralized "Registry" (hot-standby replication) to map Node IDs to physical IP:Ports. This is a single point of failure and a bottleneck.

I am considering moving to a Kademlia-style DHT for the address table. My concern is lookup latency: agents need to resolve addresses very quickly to be useful.

**Question:** For a system anticipating high churn (agents coming online/offline), is a DHT actually viable for the "Phone Book", or should I stick to a federated consensus model (like Raft)? I’d value any insights on the trade-offs here.

[https://github.com/TeoSlayer/pilotprotocol/tree/main](https://github.com/TeoSlayer/pilotprotocol/tree/main)
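For context on the latency concern: Kademlia routes by XOR distance between node IDs, and each lookup hop halves the remaining ID space, so lookups cost O(log n) hops rather than one registry round-trip. A minimal sketch of the distance metric and one hop's candidate selection (toy 4-bit IDs, not the protocol's real ID width):

```python
def xor_distance(a: int, b: int) -> int:
    # Kademlia's distance metric: XOR of the two node IDs.
    return a ^ b

def closest_nodes(target: int, known: list, k: int = 2) -> list:
    # One lookup hop: query the k known nodes closest to the target.
    # Each hop shares a longer ID prefix with the target, so the
    # remaining distance roughly halves per hop.
    return sorted(known, key=lambda n: xor_distance(n, target))[:k]

known = [0b0001, 0b0100, 0b1011, 0b1110]
nearest = closest_nodes(0b1010, known)
```

The churn trade-off then comes down to how quickly stale routing-table entries are evicted versus how many parallel queries (Kademlia's alpha) you issue per hop; Raft avoids the multi-hop latency but reintroduces a small quorum as the bottleneck.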
2026-02-13T22:45:30
https://www.reddit.com/r/LocalLLaMA/comments/1r43moe/architectural_advice_transitioning_from_a/
ENX-MemeLord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r43moe
false
null
t3_1r43moe
/r/LocalLLaMA/comments/1r43moe/architectural_advice_transitioning_from_a/
false
false
self
1
null
Looking for friends to build a "lite" Openclaw alternative.
1
Hey everyone, I am one of those people who was really fascinated by the potential of Openclaw/Moltbot/Clawdbot. But as just a hobbyist, I never wanted to face ridiculous API costs, so my main goal was to be able to power this thing with a local model.

The main hurdle is hardware. Currently working with an RTX 4070 Ti (12GB VRAM), and while it functions, it’s a far cry from the performance of Claude or other cloud-based giants. But I now have a goal in mind: to create a lighter, less demanding assistant **with clearly less capability**, so that it can work with local models on consumer-grade, accessible GPUs like the one I have.

I'm a student and definitely not an AI expert, so this will be a pure "vibe-coding" journey. This isn't a job offer or a commercial project. Just a chance to hang out, learn, and see if we can make local models behave like high-end assistants.

**Looking for people who want to:**

* Experiment with local model optimization.
* Help bridge the gap between consumer GPUs and cloud AI.
* **Just build something cool for the sake of it.**

If you’re interested in local LLMs, want to experiment with optimization, or just want to roast the idea, let’s talk! Any ideas or feedback are more than welcome.
2026-02-13T22:37:59
https://www.reddit.com/r/LocalLLaMA/comments/1r43gbx/looking_for_friends_to_build_a_lite_openclaw/
jenoah_m
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r43gbx
false
null
t3_1r43gbx
/r/LocalLLaMA/comments/1r43gbx/looking_for_friends_to_build_a_lite_openclaw/
false
false
self
1
null
Figured out why my QLoRA training wasn't working even though loss was dropping
7
I've been struggling to get a 4-bit LoRA (QLoRA) to actually learn new behavior. The loss curve looked great (dropped to 10^-5), but the model was just a zombie: it sounded exactly like the base model. It turns out the loss curve was a total lie. If you're stuck, here is what worked for me:

1. **The Purple Banana Test.** Before fixing bugs, I stopped trusting the loss graph and added a "trap" to my labels: all outputs were "The student is a PURPLE BANANA from Mars." (My data is from the academic domain.) If the adapter is actually training, it will eventually output that. If it doesn't, the adapters are basically disconnected.

2. **The hidden freeze (first bug).** In 4-bit, weights are frozen by default upon quantization. Even with the usual scripts, I found my adapters were staying frozen too. I had to manually loop through and set `requires_grad` to True for any layer with "lora" in the name. Without this, you're just burning GPU for nothing.

3. **Stop training on the input (second bug).** If you calculate loss on the instruction AND the response, the model cheats: it gets low loss by learning the question headers it already sees. Use completion masking (set labels to -100 for the input) so the optimizer actually has to work on the response logic.

4. **Crank the alpha (optimization 1).** 4-bit weights are stiff. For the alpha/rank ratio, I stopped using alpha = 2R and moved to R=64, alpha=256 to give the weights enough "shove" to override the base model.

5. **Target the MLP layers (optimization 2).** Target the MLP layers (gate_proj, up_proj, etc.), not just the attention layers that are the PEFT default. The MLP is where the actual logic lives. If you ignore it, you aren't changing the model's "brain."

Once I did all this, the loss stayed high at first (because it couldn't cheat anymore), then it suddenly "flipped" and the model started following the new logic perfectly. It always answered the banana response. Later, of course, I changed it back to my desired output and it works perfectly.
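The completion-masking fix (step 3) is small enough to show concretely. A sketch with toy token IDs (the `mask_labels` helper is invented for illustration; -100 is the ignore index PyTorch's cross-entropy uses):

```python
def mask_labels(input_ids, prompt_len, ignore_index=-100):
    """Completion masking: labels copy input_ids, but every prompt
    position is set to -100 so loss covers only the response tokens."""
    return [ignore_index] * prompt_len + list(input_ids[prompt_len:])

# Toy sequence: 4 prompt tokens followed by 3 response tokens.
ids = [101, 2023, 2003, 1996, 7279, 15212, 102]
labels = mask_labels(ids, prompt_len=4)

# The step-2 unfreeze is the analogous loop (PyTorch/PEFT, not run here):
# for name, p in model.named_parameters():
#     if "lora" in name:
#         p.requires_grad = True
```

With masked labels, the "cheap" loss from memorizing instruction headers disappears, which is exactly why the loss stays high at first and then flips.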
2026-02-13T22:24:18
https://www.reddit.com/r/LocalLLaMA/comments/1r434dh/figured_out_why_my_qlora_training_wasnt_working/
FeeMassive4003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r434dh
false
null
t3_1r434dh
/r/LocalLLaMA/comments/1r434dh/figured_out_why_my_qlora_training_wasnt_working/
false
false
self
7
null
Fix: LM Studio Bug with Step-3.5-flash (strips <think> from the embedded jinja)
12
Ok, after some pondering (and testing): the problem with the missing <think> tag in step-3.5-flash is not, as I claimed, that it's missing from the GGUF (it is there!), but that LM Studio strips the <think> tag from the jinja template for whatever reason. Without it, step-3.5 repeats the answer twice and also works far less optimally. You need to add it back manually in My Models > Inference. (I deleted the previous post.)
2026-02-13T22:23:53
https://i.redd.it/miutl4hd7cjg1.jpeg
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
1r4340c
false
null
t3_1r4340c
/r/LocalLLaMA/comments/1r4340c/fix_lm_studio_bug_with_step35flash_strips_think/
false
false
default
12
{'enabled': True, 'images': [{'id': 'miutl4hd7cjg1', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/miutl4hd7cjg1.jpeg?width=108&crop=smart&auto=webp&s=e9b0d4c09baeafc962acf0faf173cdb5d512823f', 'width': 108}, {'height': 34, 'url': 'https://preview.redd.it/miutl4hd7cjg1.jpeg?width=216&crop=smart&auto=webp&s=937e79946ab7b2c2e147c9a5a8022c1eab1e115c', 'width': 216}, {'height': 50, 'url': 'https://preview.redd.it/miutl4hd7cjg1.jpeg?width=320&crop=smart&auto=webp&s=b0c1bea48a1bfbcbf1efe41b257d8cc574e680d7', 'width': 320}, {'height': 101, 'url': 'https://preview.redd.it/miutl4hd7cjg1.jpeg?width=640&crop=smart&auto=webp&s=b8e5fcf287b8965d665d92a79904d26d003275d2', 'width': 640}, {'height': 152, 'url': 'https://preview.redd.it/miutl4hd7cjg1.jpeg?width=960&crop=smart&auto=webp&s=43d0aa38ad3c649ee8ce2b7dfd675b1663c60179', 'width': 960}], 'source': {'height': 171, 'url': 'https://preview.redd.it/miutl4hd7cjg1.jpeg?auto=webp&s=3d1034daca9ec0497a8f2fa06eb5e3b49508d109', 'width': 1077}, 'variants': {}}]}
I gave my on-device LLM 3% English data. It decided to be better at English than main language.
26
I’ve been messing around with Gemma 3 270M lately, and I’ve run into the most hilarious reality check.

Since I’m based in Korea, I spent weeks obsessing over a fine-tuning dataset that was 97% Korean. I really tried to bake in every possible nuance and emotional expression. I threw in a tiny 3% of English data just so it wouldn’t be totally lost in translation; I honestly didn't expect much at all.

But here’s the twist: the Korean side, the part I actually put my blood, sweat, and tears into, is still a bit of a wild card and sometimes gives random or off-topic responses. Meanwhile, the 3% English data is pumping out relatively clean and coherent replies! It’s pretty humbling (and a bit frustrating!) to see my "low-effort" English support behaving better than the language I actually focused on. I guess the base model’s pre-training is doing some heavy lifting here, but it definitely means I’ve still got some work to do on the Korean side!

Just for some context on the screenshot: I’m actually building an on-device diary app called Offgram. The idea is to have a locally running LLM act as a companion that leaves thoughtful (and hopefully not too random) comments on your daily entries, so you don't feel like you're just writing into a void. Since it's a diary, I'm a firm believer that privacy is non-negotiable, so everything runs 100% on-device; zero data ever leaves your phone. Using the tiny 270M model keeps things super snappy with basically no latency. It’s still under heavy development, but I’m planning to launch it soon!

Has anyone else working with these ultra-small models seen this kind of "language flip"? I’d love to hear your theories or any tips on how to keep these tiny models on track!
2026-02-13T22:09:37
https://www.reddit.com/r/LocalLLaMA/comments/1r42r9b/i_gave_my_ondevice_llm_3_english_data_it_decided/
shoonee_balavolka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r42r9b
false
null
t3_1r42r9b
/r/LocalLLaMA/comments/1r42r9b/i_gave_my_ondevice_llm_3_english_data_it_decided/
false
false
self
26
null
Why isn't anyone building dedicated PCIe inference cards for consumers?
1
[removed]
2026-02-13T22:07:29
https://www.reddit.com/r/LocalLLaMA/comments/1r42pc1/why_isnt_anyone_building_dedicated_pcie_inference/
True_Obligation_8517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r42pc1
false
null
t3_1r42pc1
/r/LocalLLaMA/comments/1r42pc1/why_isnt_anyone_building_dedicated_pcie_inference/
false
false
self
1
null
Fake linux environment?
0
Anyone know of a fake Linux environment? Preferably written in a fast language like C/C++ or Rust. I want to make my own local agent that I can use sort of like claude-code. That said, I am never going to give a dumb agent permissions to my real Linux environment (I can't think of an easier way to get my system deleted), and the whole point is for the thing to be relatively independent, not to have me supervise it at every step.

I want something simple: it should only implement a couple of basic commands like ls, mkdir, touch, echo, grep, and cat, just stuff it can use to navigate a file system, and it should allow me to map a real directory/folder on my system into this fake environment so the agent has a sandbox to play in.

I got this idea after making a Lua sandbox that only runs the functions I give the model access to. So I was wondering if there already is some fake environment I can use.
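The core of such a tool is tiny: an allowlisted command dispatcher over an in-memory file tree. A minimal sketch (in Python rather than the requested C/C++/Rust, purely to show the shape; `FakeShell` and its behavior are invented for illustration):

```python
import shlex

class FakeShell:
    """Minimal in-memory 'filesystem' exposing a few safe commands,
    so an agent can explore without touching the real system."""
    def __init__(self):
        self.files = {}     # path -> contents
        self.dirs = {"/"}

    def run(self, command: str) -> str:
        parts = shlex.split(command)
        cmd, args = parts[0], parts[1:]
        if cmd == "mkdir":
            self.dirs.add(args[0]); return ""
        if cmd == "touch":
            self.files.setdefault(args[0], ""); return ""
        if cmd == "echo":
            return " ".join(args)
        if cmd == "cat":
            return self.files.get(args[0], "")
        if cmd == "ls":
            return "\n".join(sorted(set(self.files) | self.dirs))
        # Anything not on the allowlist is refused, never executed.
        return f"{cmd}: command not allowed"

sh = FakeShell()
sh.run("mkdir /notes")
sh.run("touch /notes/todo.txt")
```

Mapping a real directory in would just mean seeding `files`/`dirs` from a copy-on-read snapshot of that folder, so writes never reach the real disk.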
2026-02-13T21:51:54
https://www.reddit.com/r/LocalLLaMA/comments/1r42beh/fake_linux_environment/
vabenil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r42beh
false
null
t3_1r42beh
/r/LocalLLaMA/comments/1r42beh/fake_linux_environment/
false
false
self
0
null
You are probably overpaying for intelligence (not in this community?)
0
TLDR:

* Opus 4.6 is still the champion.
* For general reasoning: Kimi K2.5 or GLM-5 deliver Tier 1 performance at a fraction of the cost.
* For coding and agentic coding: Kimi K2.5, GLM-5 and MiniMax M2.5.
* If budget is the primary concern, DeepSeek V3.2 is the standout value of this generation: statistically indistinguishable from the leaders on general reasoning, still in the top half for coding, and extremely affordable.
2026-02-13T21:51:14
https://residuals.bearblog.dev/you-are-probably-overpaying-for-intelligence/
Subject_Ear6869
residuals.bearblog.dev
1970-01-01T00:00:00
0
{}
1r42ati
false
null
t3_1r42ati
/r/LocalLLaMA/comments/1r42ati/you_are_probably_overpaying_for_intelligence_not/
false
false
default
0
null
I built a prompt linter that uses Ollama to catch issues in your LLM system prompts
0
I've been building [promptier](https://github.com/DeanShandler123/promptier), a TypeScript toolkit for composing and linting LLM system prompts. The latest release (v0.2.0) adds something I'm pretty excited about: **semantic linting powered by a local LLM via Ollama**.

**The problem**

Heuristic rules can catch obvious stuff: token limits, XML tags sent to GPT, missing identity sections. But they can't catch things like:

* Contradictions: "Always respond in JSON" + "Be conversational and use natural language"
* Ambiguity: vague instructions that could be interpreted multiple ways
* Injection risks: template variables in places they shouldn't be
* Verbosity: redundant phrasing that wastes tokens without adding clarity
* Scope creep: instructions that expand beyond the agent's stated role

**The meta part**

The linter's own system prompt (the one that instructs the LLM how to analyze prompts) is built using the same SDK. So the tool dogfoods itself. You can even lint the linter's prompt:

npx promptier lint --semantic packages/lint/src/llm/system-prompt.ts

**Links**

* GitHub: [https://github.com/DeanShandler123/promptier](https://github.com/DeanShandler123/promptier)
* npm: `npm install @promptier/lint`
* The semantic linting section in the README has full docs, including how to bring your own LLM client

Would love feedback from anyone who's working with system prompts regularly. What kinds of issues do you run into that a linter could catch?
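For readers who want the gist of semantic linting against a local model: the request to Ollama's `/api/generate` endpoint is just a wrapped prompt asking the model to critique another prompt. A sketch of building that payload (in Python, not promptier's actual TypeScript internals; the wrapper text and model name are placeholders):

```python
import json

def build_lint_request(prompt_text: str, model: str = "llama3") -> dict:
    """Payload for Ollama's /api/generate endpoint, asking a local model
    to flag contradictions, ambiguity, and scope creep in a system prompt."""
    instructions = (
        "You are a prompt linter. List any contradictions, ambiguous "
        "instructions, or scope creep in the prompt below. Reply in JSON."
    )
    return {
        "model": model,
        "prompt": f"{instructions}\n\n---\n{prompt_text}",
        "format": "json",   # Ollama's structured-output mode
        "stream": False,
    }

req = build_lint_request("Always respond in JSON. Be conversational.")
body = json.dumps(req)  # POST this to http://localhost:11434/api/generate
```

The example prompt deliberately contains the contradiction from the list above, which a heuristic rule would miss but a model should flag.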
2026-02-13T21:45:16
https://www.reddit.com/r/LocalLLaMA/comments/1r425ew/i_built_a_prompt_linter_that_uses_ollama_to_catch/
deanshx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r425ew
false
null
t3_1r425ew
/r/LocalLLaMA/comments/1r425ew/i_built_a_prompt_linter_that_uses_ollama_to_catch/
false
false
self
0
{'enabled': False, 'images': [{'id': 'hK8kKbifTbexg51SSkY5-9WeHRS02Ssn-P5xYDzPG5k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hK8kKbifTbexg51SSkY5-9WeHRS02Ssn-P5xYDzPG5k.png?width=108&crop=smart&auto=webp&s=4840e2c0389048d6faf1e7c4c9968888f7e280e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hK8kKbifTbexg51SSkY5-9WeHRS02Ssn-P5xYDzPG5k.png?width=216&crop=smart&auto=webp&s=7dffd46bfbdd17bb985596cd78e6d8d80d03af2f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hK8kKbifTbexg51SSkY5-9WeHRS02Ssn-P5xYDzPG5k.png?width=320&crop=smart&auto=webp&s=ee1c4ab734b3d0602443385f96145efd3168ced1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hK8kKbifTbexg51SSkY5-9WeHRS02Ssn-P5xYDzPG5k.png?width=640&crop=smart&auto=webp&s=018277cb762b0b340421497024877b9503c8d8a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hK8kKbifTbexg51SSkY5-9WeHRS02Ssn-P5xYDzPG5k.png?width=960&crop=smart&auto=webp&s=bce39b3c83d445321967a63fcb4728524a992450', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hK8kKbifTbexg51SSkY5-9WeHRS02Ssn-P5xYDzPG5k.png?width=1080&crop=smart&auto=webp&s=9494e2ae2aaada865b94b76b1e2f4101c24a02f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hK8kKbifTbexg51SSkY5-9WeHRS02Ssn-P5xYDzPG5k.png?auto=webp&s=1f9fe2136ad24336ad507c17bce20d732c66ae16', 'width': 1200}, 'variants': {}}]}
FIX: The step-3.5-flash GGUF has botched embedded jinja
11
For anybody who tries some of the GGUFs of step-3.5-flash: the embedded jinja template in many of them (all I tried) is missing the <think> tag for some reason. The official [Step-3.5-Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash/tree/main) chat_template.jinja has the <think>, but the embedded GGUF templates (likely grabbed before it was added) do not.
2026-02-13T21:30:41
https://i.redd.it/v9crofskxbjg1.jpeg
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
1r41s17
false
null
t3_1r41s17
/r/LocalLLaMA/comments/1r41s17/fix_the_step35flash_gguf_has_botched_embedded/
false
false
default
11
{'enabled': True, 'images': [{'id': 'v9crofskxbjg1', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/v9crofskxbjg1.jpeg?width=108&crop=smart&auto=webp&s=042f864283435f2c1ba26f6df01c18eb067a3f53', 'width': 108}, {'height': 34, 'url': 'https://preview.redd.it/v9crofskxbjg1.jpeg?width=216&crop=smart&auto=webp&s=45499a952fdbf390e4976b712bb5643a6b4b3188', 'width': 216}, {'height': 50, 'url': 'https://preview.redd.it/v9crofskxbjg1.jpeg?width=320&crop=smart&auto=webp&s=bc655646f1ab97a0a00ddb4e9713159ae1ae1b53', 'width': 320}, {'height': 101, 'url': 'https://preview.redd.it/v9crofskxbjg1.jpeg?width=640&crop=smart&auto=webp&s=73a2d6a0e7750a9a5e8c05a0fe01a5f4ced107cf', 'width': 640}, {'height': 152, 'url': 'https://preview.redd.it/v9crofskxbjg1.jpeg?width=960&crop=smart&auto=webp&s=9aa3e0394509c341f06d184667528a10a802893f', 'width': 960}], 'source': {'height': 171, 'url': 'https://preview.redd.it/v9crofskxbjg1.jpeg?auto=webp&s=2633f1e5d1eb496e3f6ffc5c8423e7f995448e9b', 'width': 1077}, 'variants': {}}]}
How do you handle agent loops and cost overruns in production?
1
Hi everyone, I've been experimenting with AI agents and I'm starting to think about the challenges of deploying them to production. I'm particularly concerned about issues like agents getting stuck in loops or racking up unexpected API costs. For those of you who have experience with this, what are your current strategies? Are you using simple things like `max_iterations`, or more complex monitoring tools? Just trying to learn from the community's experience. Thanks!
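To make the question concrete, the two guardrails mentioned (`max_iterations` plus a cost budget) compose naturally into one wrapper around the agent loop. A minimal sketch, with all names and the flat per-call cost invented for illustration:

```python
def run_agent(step_fn, max_iterations=10, budget_usd=1.00, cost_per_call=0.05):
    """Guardrail wrapper: stop on completion, iteration cap, or budget cap."""
    spent, history = 0.0, []
    for i in range(max_iterations):
        if spent + cost_per_call > budget_usd:
            return {"status": "budget_exceeded", "iterations": i}
        action = step_fn(history)   # one LLM call / tool step
        spent += cost_per_call
        history.append(action)
        if action == "DONE":
            return {"status": "done", "iterations": i + 1}
    return {"status": "max_iterations", "iterations": max_iterations}

# A toy agent that loops forever is cut off by the iteration cap:
looping = run_agent(lambda h: "search", max_iterations=5)
# A toy agent that finishes on its 3rd step exits normally:
finishing = run_agent(lambda h: "DONE" if len(h) == 2 else "think")
```

Production monitoring tools add more (loop detection via repeated identical actions, per-user budgets), but this skeleton is usually where people start.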
2026-02-13T21:18:58
https://www.reddit.com/r/LocalLLaMA/comments/1r41h6v/how_do_you_handle_agent_loops_and_cost_overruns/
mark_bolimer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r41h6v
false
null
t3_1r41h6v
/r/LocalLLaMA/comments/1r41h6v/how_do_you_handle_agent_loops_and_cost_overruns/
false
false
self
1
null
GLM-5 Is a local GOAT
133
**Background**: I am a developer with over two decades of experience. I use LLMs heavily day to day from all of the major providers. Since the first Llama models came out I've been toying with local models, benchmarking them on real-world heavy use cases.

**Long story short:** GLM-5 is the first model I've been able to run locally that's actually impressed me. In 3 'shots' I was able to make a retro-styled Flappy clone AND deploy it to AWS with a cost assessment if it went viral.

**My prompt**: Please generate a GPU accelerated clone of the game ‘Flappy Bird’ where using the spacebar causes the bird to ‘flap’, give it a 'retro inspired' design.

**My Setup**:

* Dual RTX 6000 PRO MaxQ GPUs
* 128GB of DDR5
* AMD Ryzen Threadripper PRO 7975WX
* GLM-5-744B served over vLLM with 128k context at IQ2_M

**Caveats**: Even with my decently powerful hardware, the token output was painfully slow at 16.5 t/s. IMO, completely worth the wait though. The same test with Qwen3-Next-80b, GPT-OSS-120b and a few other leaders was unimpressive.

[https://flappy.tjameswilliams.com/](https://flappy.tjameswilliams.com/)
2026-02-13T21:00:08
https://v.redd.it/7l7iri95rbjg1
FineClassroom2085
v.redd.it
1970-01-01T00:00:00
0
{}
1r41013
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/7l7iri95rbjg1/DASHPlaylist.mpd?a=1773608425%2CYjljMmEyMzRlYmVlOGZmYWQxYWE1YTc1MTc1ZWUxOTE2YTdjNmFmNjgxOTVlNzQ2MzY3ZmQwZGIxNGI5OGE3Mw%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/7l7iri95rbjg1/CMAF_360.mp4?source=fallback', 'has_audio': False, 'height': 514, 'hls_url': 'https://v.redd.it/7l7iri95rbjg1/HLSPlaylist.m3u8?a=1773608425%2CZmY3ZTE2NGE1ZjM4ZWZlZjQ5NzEzODBhMmNmNmM4ZGNkMDk1ODczMjMyOTM1MzFlZjU5N2U4YzY5YWE5NGI5YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7l7iri95rbjg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 360}}
t3_1r41013
/r/LocalLLaMA/comments/1r41013/glm5_is_a_local_goat/
false
false
https://external-preview…6bddcca16810e3ad
133
{'enabled': False, 'images': [{'id': 'MTlvZ25qOTVyYmpnMet_8L-GzQ_poWye6LYGoFL5kcPokh15ZfJ1OHhOgrf9', 'resolutions': [{'height': 153, 'url': 'https://external-preview.redd.it/MTlvZ25qOTVyYmpnMet_8L-GzQ_poWye6LYGoFL5kcPokh15ZfJ1OHhOgrf9.png?width=108&crop=smart&format=pjpg&auto=webp&s=7654ae812f97999f90c9810a927b757034300ed9', 'width': 108}, {'height': 307, 'url': 'https://external-preview.redd.it/MTlvZ25qOTVyYmpnMet_8L-GzQ_poWye6LYGoFL5kcPokh15ZfJ1OHhOgrf9.png?width=216&crop=smart&format=pjpg&auto=webp&s=684bf89bbc6fe1b8afdf977181ec53887ce3e9c8', 'width': 216}, {'height': 456, 'url': 'https://external-preview.redd.it/MTlvZ25qOTVyYmpnMet_8L-GzQ_poWye6LYGoFL5kcPokh15ZfJ1OHhOgrf9.png?width=320&crop=smart&format=pjpg&auto=webp&s=4dc77fc9856b77d9873846245f984c7fda171d9a', 'width': 320}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/MTlvZ25qOTVyYmpnMet_8L-GzQ_poWye6LYGoFL5kcPokh15ZfJ1OHhOgrf9.png?format=pjpg&auto=webp&s=f37c9a7f2dce423938d9cdb652c08cfd7bc2d747', 'width': 442}, 'variants': {}}]}
ubergarm/MiniMax-M2.5-GGUF
77
Just cooked and benchmarked (perplexity) some MiniMax-M2.5 GGUF quants over at: [https://huggingface.co/ubergarm/MiniMax-M2.5-GGUF](https://huggingface.co/ubergarm/MiniMax-M2.5-GGUF)

The IQ4_XS works on mainline llama.cpp, LM Studio, Kobold CPP, etc. The other quants require ik_llama.cpp (which supports all of the mainline quant types as well). Gonna get some llama-sweep-bench tests for PP/TG drop-off across context depth next.

The smol-IQ3_KS was working in my `opencode` local testing and seems promising, but it's probably a bit too large to leave enough room for context on 96GB VRAM, hence the smaller IQ2_KS is also available at a cost to quality. Fun stuff!
2026-02-13T20:47:04
https://i.redd.it/e7zeec20qbjg1.png
VoidAlchemy
i.redd.it
1970-01-01T00:00:00
0
{}
1r40o83
false
null
t3_1r40o83
/r/LocalLLaMA/comments/1r40o83/ubergarmminimax25gguf/
false
false
https://preview.redd.it/…549fe7444f717f0c
77
{'enabled': True, 'images': [{'id': 'e7zeec20qbjg1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/e7zeec20qbjg1.png?width=108&crop=smart&auto=webp&s=e35d47dc82d9c347a93f70119e835c9cd437b5a1', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/e7zeec20qbjg1.png?width=216&crop=smart&auto=webp&s=f316b190c15bb1721f1c15bb37648f24be099a60', 'width': 216}, {'height': 216, 'url': 'https://preview.redd.it/e7zeec20qbjg1.png?width=320&crop=smart&auto=webp&s=f8485a5db8c35d44005c87c885a1e09bd8e7de3b', 'width': 320}, {'height': 433, 'url': 'https://preview.redd.it/e7zeec20qbjg1.png?width=640&crop=smart&auto=webp&s=dec023595454edde747bd1bebdaab70e22a17fe5', 'width': 640}, {'height': 649, 'url': 'https://preview.redd.it/e7zeec20qbjg1.png?width=960&crop=smart&auto=webp&s=d4541ae8201f0ec9939aaa897f229333d7e46919', 'width': 960}, {'height': 730, 'url': 'https://preview.redd.it/e7zeec20qbjg1.png?width=1080&crop=smart&auto=webp&s=22a7c0ae1eb82c45a5e6e0d4b68e5d5645496cdc', 'width': 1080}], 'source': {'height': 1400, 'url': 'https://preview.redd.it/e7zeec20qbjg1.png?auto=webp&s=e27912ffabd7415df652abec0cf4aa41ab5d7246', 'width': 2069}, 'variants': {}}]}
An idea for a practical, useful tool that tons of people would use.
0
Simple app (could self-host or somebody could throw it online somewhere) that lets you specify: 1. Your hardware specs (system RAM, GPU, etc) 2. Your use case The app then tells you what the best open source model for your hardware/use case combo would be, and even the best way to run it. It would be updated as new models emerge. This would probably clean up a ton of questions on this sub (I've been kind of scared to ask) and could be a great resource. I'm comfortable coding the UI, etc... But I don't have the model/hardware/usage knowledge and AI is never going to be up-to-date enough to rely on. Thoughts?
2026-02-13T20:22:54
https://www.reddit.com/r/LocalLLaMA/comments/1r401yx/an_idea_for_a_practical_useful_tool_that_tons_of/
cloudcity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r401yx
false
null
t3_1r401yx
/r/LocalLLaMA/comments/1r401yx/an_idea_for_a_practical_useful_tool_that_tons_of/
false
false
self
0
null
GPT-OSS 120b Uncensored Aggressive Release (MXFP4 GGUF)
339
Hey everyone, made an uncensored version of GPT-OSS 120B.

Quick specs: 117B total params, ~5.1B active (MoE with 128 experts, top-4 routing), 128K context. MXFP4 is the model's native precision: this isn't a quantization, it's how it was trained. No overall quality loss, though you can see CoT behave differently at times.

This is the aggressive variant: **observed 0 refusals to any query during testing.** **Completely uncensored while keeping full model capabilities intact.**

Link: [https://huggingface.co/HauhauCS/GPTOSS-120B-Uncensored-HauhauCS-Aggressive](https://huggingface.co/HauhauCS/GPTOSS-120B-Uncensored-HauhauCS-Aggressive)

Sampling settings:

* --temp 1.0 --top-k 40
* Disable everything else (top_p, min_p, repeat penalty, etc.); some clients turn these on by default
* llama.cpp users: --jinja is required for the Harmony response format or the model won't work right
* Example: llama-server -m model.gguf --jinja -fa -b 2048 -ub 2048

Single 61GB file. Fits on one H100. For lower VRAM, use --n-cpu-moe N in llama.cpp to offload MoE layers to CPU. Works with llama.cpp, LM Studio, Ollama, etc.

If you want smaller models, I also have GPT-OSS 20B, GLM 4.7 Flash and Qwen3 8b VL uncensored: [https://huggingface.co/HauhauCS/models/](https://huggingface.co/HauhauCS/models/)

As with all my releases, the goal is effectively lossless uncensoring: no dataset changes and no capability loss.
2026-02-13T20:15:33
https://www.reddit.com/r/LocalLLaMA/comments/1r3zuuf/gptoss_120b_uncensored_aggressive_release_mxfp4/
hauhau901
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3zuuf
false
null
t3_1r3zuuf
/r/LocalLLaMA/comments/1r3zuuf/gptoss_120b_uncensored_aggressive_release_mxfp4/
false
false
self
339
{'enabled': False, 'images': [{'id': 'QZffkKXjStXvdHv-2EojgSarRAJg5k8b1E5-4PAV9-k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QZffkKXjStXvdHv-2EojgSarRAJg5k8b1E5-4PAV9-k.png?width=108&crop=smart&auto=webp&s=07a2e07aef74f1516db3300b0da74cd85365f6d1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QZffkKXjStXvdHv-2EojgSarRAJg5k8b1E5-4PAV9-k.png?width=216&crop=smart&auto=webp&s=28ddcc3bf2404e1e3e959fe30ecb134f166e84f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QZffkKXjStXvdHv-2EojgSarRAJg5k8b1E5-4PAV9-k.png?width=320&crop=smart&auto=webp&s=949cd5862c2776e5e386d439388f031f043d89da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QZffkKXjStXvdHv-2EojgSarRAJg5k8b1E5-4PAV9-k.png?width=640&crop=smart&auto=webp&s=5f803f352392f163c324ac54d6e0774fc0b2d4b5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QZffkKXjStXvdHv-2EojgSarRAJg5k8b1E5-4PAV9-k.png?width=960&crop=smart&auto=webp&s=f7d5ae9b67c83df3feb504c8b80e566cc04b6039', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QZffkKXjStXvdHv-2EojgSarRAJg5k8b1E5-4PAV9-k.png?width=1080&crop=smart&auto=webp&s=ade756f862714d42bdec34751b1eea6dd7cbf0e5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QZffkKXjStXvdHv-2EojgSarRAJg5k8b1E5-4PAV9-k.png?auto=webp&s=692192b714fbcda2d92ba30046a72b74860f2510', 'width': 1200}, 'variants': {}}]}
LLM/NLU for processing user input
2
Apologies if this is not the correct sub for this, please delete if so! TLDR: I want to parse user input into formatted requests that can be passed to services to do actions (no, I don't want OpenClaw) I'd like to create an app/bot for users (mainly friends and family) to message. The app would allow them to make requests, changes, etc. to services I host, while allowing them to interact in normal/non-techy language. For example "Hey can you download the new Mario movie?". My long term goal would be to make it sort of plug-in based for both the input options and services, so I easily add new things, but to start with, I was thinking of the following services/tools: * Basic media manager - Users can request movies and TV shows, check the status of a request, or check when the next episode of a show is coming out. * Game server control - Users can check a server's status, get server details (IP, game version, etc.), add a user to the whitelist, or restart a server. * Server status checking - Users can check if a particular app is running (I'm sure 99% of the time it'll be Plex), Check network speed, or restart an app (limited to specific users) * Basic server manager (FOR ME ONLY) - Restart and update services, check resource usage, possibly get alerted of failing backups (currently these are emailed to me) Using the media manager example, the input might be “Hey can you download The Nice Guys for me?” (LanguageTool keeps converting those double-quotes for some reason) or "Is Plex down right now?" and the LLM or NLU parses/converts the message into a formatted request (probably JSON) like this: { "service": "media-request", "request": { "media-type": "movie", "media-name": "The Nice Guys", "media-format": "<The resolution, max size, etc. IF that was specified>" } } I'm well aware of and give my users access to things like Overseerr, and Pelican, but I still end up getting messages anyway. 
Whether it's because they forgot the link, lost their password, or I broke something, it seems like there will always be some folks that prefer asking someone/something instead of using the dashboard. I honestly don't mind doing it manually, but I'm not always available, and this seems like a cool project, which is why I'm bothering at all. On the technical side, my understanding is that with an NLU, I'll be much more restricted on what input is accepted, which is why I am considering a local LLM of some kind. I DO NOT want to use something like OpenClaw because 1) it seems like a security disaster, and 2) this seems like a good learning opportunity. I ONLY want the NLU/LLM to process user input. The app will handle things like checking the user's permissions, and making requests to services. I assume with an LLM approach, I would provide it with the expected output format, the available services, and the user's message? I'm also hoping I can use a small model for this, since I don't plan on having it store any sort of conversation history. My questions are:

1. Is an LLM even the right choice for this type of project?
2. How resource intensive will something like this be? I have a modest home server (64GB RAM, Ryzen 3700X), but if 80% of the resources are being used just to process user requests, then I feel like it's kind of a waste.
3. Is Python a good language to use for this? I am pretty familiar with Python, so if I can use it that'd be great, but I don't mind learning something new either.

Thanks for taking the time to read all this, and if there is anything I can clarify feel free to ask!
2026-02-13T19:53:05
https://www.reddit.com/r/LocalLLaMA/comments/1r3z9eb/llmnlu_for_processing_user_input/
PrivacyIsDying
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3z9eb
false
null
t3_1r3z9eb
/r/LocalLLaMA/comments/1r3z9eb/llmnlu_for_processing_user_input/
false
false
self
2
null
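The JSON contract described in the post is easy to enforce app-side: validate whatever the LLM emits before any service is called, so the model only ever produces data, never actions. Here is a minimal sketch in Python; the service registry and required fields below just mirror the post's "media-request" example and are otherwise illustrative, not a fixed schema:

```python
import json

# Illustrative registry: which services exist and which fields each
# request must carry (mirrors the post's "media-request" example;
# "media-format" is optional there, so it is not listed as required).
ALLOWED_SERVICES = {
    "media-request": {"media-type", "media-name"},
    "server-status": {"app-name"},
}

def validate_request(raw: str) -> dict:
    """Parse the LLM's output and reject anything malformed, so the
    app (not the model) stays in charge of what actually runs."""
    data = json.loads(raw)
    service = data.get("service")
    if service not in ALLOWED_SERVICES:
        raise ValueError(f"unknown service: {service!r}")
    missing = ALLOWED_SERVICES[service] - data.get("request", {}).keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data

raw = '{"service": "media-request", "request": {"media-type": "movie", "media-name": "The Nice Guys"}}'
print(validate_request(raw)["request"]["media-name"])  # The Nice Guys
```

Anything that fails validation can be bounced back to the model (or the user) instead of ever reaching a service; the permission check can live right next to this validator, outside the LLM.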
Are we confusing "Chain of Thought" with actual logic? A question on reasoning mechanisms.
0
I'm trying to deeply understand the mechanism behind LLM reasoning (specifically in models like o1 or DeepSeek).

1. Mechanism: Is the model actually applying logic gates/rules, or is it just a probabilistic simulation of a logic path? If it "backtracks" during CoT, is that a learned pattern or a genuine evaluation of truth?
2. Data Quality: How are labs actually evaluating "Truth" in the dataset? If the web is full of consensus-based errors, and we use "LLM-as-a-Judge" to filter data, aren't we just reinforcing the model's own biases?
3. The Data Wall: How much of current training is purely public (Common Crawl) vs private? Is the "data wall" real, or are we solving it with synthetic data?
2026-02-13T19:44:34
https://www.reddit.com/r/LocalLLaMA/comments/1r3z13a/are_we_confusing_chain_of_thought_with_actual/
Sathvik_Emperor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3z13a
false
null
t3_1r3z13a
/r/LocalLLaMA/comments/1r3z13a/are_we_confusing_chain_of_thought_with_actual/
false
false
self
0
null
has it begun?
206
[https://www.bloomberg.com/news/articles/2026-02-13/us-to-put-alibaba-on-list-for-aiding-china-s-military-reuters](https://www.bloomberg.com/news/articles/2026-02-13/us-to-put-alibaba-on-list-for-aiding-china-s-military-reuters) The Pentagon was about to name Alibaba and Baidu as potential threats for helping the Chinese military, but ultimately took their names off the list. Would love to hear what y'all think about this!
2026-02-13T19:38:01
https://i.redd.it/ei9lt0u4ebjg1.jpeg
Acceptable_Home_
i.redd.it
1970-01-01T00:00:00
0
{}
1r3yuyd
false
null
t3_1r3yuyd
/r/LocalLLaMA/comments/1r3yuyd/has_it_begun/
false
false
https://preview.redd.it/…df2052957287ba04
206
{'enabled': True, 'images': [{'id': 'ei9lt0u4ebjg1', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/ei9lt0u4ebjg1.jpeg?width=108&crop=smart&auto=webp&s=2cff71cf0b24d34f2775cc073cca679386b546f7', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/ei9lt0u4ebjg1.jpeg?width=216&crop=smart&auto=webp&s=bd14402a70ac523dd42bd05e3885435f8803844a', 'width': 216}, {'height': 347, 'url': 'https://preview.redd.it/ei9lt0u4ebjg1.jpeg?width=320&crop=smart&auto=webp&s=ed9891da583f7c87c889594c3762e4ec63b673cf', 'width': 320}, {'height': 695, 'url': 'https://preview.redd.it/ei9lt0u4ebjg1.jpeg?width=640&crop=smart&auto=webp&s=36034757efbb832ba75f43ed04c4dc8c7bb34675', 'width': 640}, {'height': 1042, 'url': 'https://preview.redd.it/ei9lt0u4ebjg1.jpeg?width=960&crop=smart&auto=webp&s=86f6cf38b93f9bea3165bc6ffd25bd3b26209c59', 'width': 960}, {'height': 1173, 'url': 'https://preview.redd.it/ei9lt0u4ebjg1.jpeg?width=1080&crop=smart&auto=webp&s=234ac986e48c8063080efc3ed456e55e5d83d5e6', 'width': 1080}], 'source': {'height': 1173, 'url': 'https://preview.redd.it/ei9lt0u4ebjg1.jpeg?auto=webp&s=3db56763cdf3666a2f1cf50fc34180e9c914e434', 'width': 1080}, 'variants': {}}]}
MiniMax 2.5 locally on my setup
12
GGUF created by u/Remarkable_Jicama775, available from here: [https://huggingface.co/ox-ox/MiniMax-M2.5-GGUF](https://huggingface.co/ox-ox/MiniMax-M2.5-GGUF) 20 t/s on Q3 setup: 3x3090 on X399 Taichi + DDR4 (2667 MT/s) + Threadripper 1920X
2026-02-13T19:29:37
https://www.reddit.com/gallery/1r3yn5r
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1r3yn5r
false
null
t3_1r3yn5r
/r/LocalLLaMA/comments/1r3yn5r/minimax_25_locally_on_my_setup/
false
false
https://preview.redd.it/…8be7760279ba461b
12
null
Omni-Crawler: from a ton of links to a single md file to feed your LLMs
3
First things first: Yes, this post and the repo content were drafted/polished using **Gemini**. No, I’m not a developer; I’m just a humble homelabber. I’m sharing a project I put together to solve my own headaches: **Omni-Crawler**.

# What is it?

It’s a hybrid script (CLI + Graphical Interface via Streamlit) based on **Crawl4AI**. The function is simple: you give it a documentation URL (e.g., Caddy, Proxmox, a Wiki), and it returns a single, consolidated, and filtered `.md` file.

# What is this for?

If you work with local LLMs (Ollama, Open WebUI) or even Claude/Gemini, you know that feeding them 50 different links for a single doc is a massive pain in the ass. And if you don't provide the context, the AI starts hallucinating a hundred environment variables, two dogs, and a goose. With this:

1. You crawl the entire site in one go.
2. It automatically cleans out the noise (menus, footers, sidebars).
3. You upload the resulting `.md`, and you have an AI with the up-to-date documentation in its permanent context within seconds.

# On "Originality" and the Code

Let’s be real: I didn’t reinvent the wheel here. This is basically a wrapper around **Crawl4AI** and **Playwright**. The "added value" is the integration:

* **Stealth Mode:** Configured so servers (Caddy, I'm looking at you, you beautiful bastard) don't block you on the first attempt, using random User-Agents and real browser headers.
* **CLI/GUI Duality:** If you're a terminal person, use it with arguments. If you want something visual, launch it without arguments, and it spins up a local web app.
* **Density Filters:** It doesn't just download HTML; it uses text density algorithms to keep only the "meat" of the information.

I'll admit the script was heavily **"vibe coded"** (it took me fewer than ten prompts).
# Technical Stack

* **Python 3.12**
* **uv** (for package management—I highly recommend it)
* **Crawl4AI** + **Playwright**
* **Streamlit** (for the GUI)

**The Repo:** [https://github.com/ImJustDoingMyPart/omni-crawler](https://github.com/ImJustDoingMyPart/omni-crawler)

If this helps you feed your RAGs or just keep offline docs, there you go. Technical feedback is welcome. As for critiques about whether a bot or a human wrote this: please send them to my DMs along with your credit card number, full name, and security code.
2026-02-13T19:27:34
https://www.reddit.com/r/LocalLLaMA/comments/1r3yl5i/omnicrawler_from_a_ton_of_links_to_a_single_md/
EnthropicBeing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3yl5i
false
null
t3_1r3yl5i
/r/LocalLLaMA/comments/1r3yl5i/omnicrawler_from_a_ton_of_links_to_a_single_md/
false
false
self
3
{'enabled': False, 'images': [{'id': 'jTAjm8ajCrVvF3-_9MRq-M1dsw4QFrXObhj24nZnJhg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jTAjm8ajCrVvF3-_9MRq-M1dsw4QFrXObhj24nZnJhg.png?width=108&crop=smart&auto=webp&s=3779ca7f57b95ef276acc75d19774711fc71271f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jTAjm8ajCrVvF3-_9MRq-M1dsw4QFrXObhj24nZnJhg.png?width=216&crop=smart&auto=webp&s=be7056901cbd0737774656ab55421944e32d979e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jTAjm8ajCrVvF3-_9MRq-M1dsw4QFrXObhj24nZnJhg.png?width=320&crop=smart&auto=webp&s=c67f1e3940e5fdaae7685ae1999bbed65d96fcc9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jTAjm8ajCrVvF3-_9MRq-M1dsw4QFrXObhj24nZnJhg.png?width=640&crop=smart&auto=webp&s=781a94530c6b50eff9e8bd5e6131203018779547', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jTAjm8ajCrVvF3-_9MRq-M1dsw4QFrXObhj24nZnJhg.png?width=960&crop=smart&auto=webp&s=0319c8c8a944f315e23cbdc5064815bc42b8fd61', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jTAjm8ajCrVvF3-_9MRq-M1dsw4QFrXObhj24nZnJhg.png?width=1080&crop=smart&auto=webp&s=ab6b96d24823bc07ed597b8c8763f7147952e5a9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jTAjm8ajCrVvF3-_9MRq-M1dsw4QFrXObhj24nZnJhg.png?auto=webp&s=f3d60dd35e3d5f7ab804535a5c918dc8463b8447', 'width': 1200}, 'variants': {}}]}
Who's still using QWQ...and what for?
0
For a long time, QWQ was one of the top open source models, punching way above its weight. Many considered it the "Deepseek" model for mid/lower end PCs. One of its weak points was its excessive thinking, despite its outputs being pretty solid. After Qwen 3 dropped, especially the 30B versions w/their lightning speed, it (and to a smaller extent Qwen 3 32B) was all but forgotten. But on some benchmarks that I follow, it's still rated higher than the other Qwen 3 variants, with the exception of the 235B model. So, the question is...who is still using QWQ, and what is your use case? And do you think we'll ever get a QWQ-2?
2026-02-13T19:26:36
https://www.reddit.com/r/LocalLLaMA/comments/1r3yk7r/whos_still_using_qwqand_what_for/
GrungeWerX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3yk7r
false
null
t3_1r3yk7r
/r/LocalLLaMA/comments/1r3yk7r/whos_still_using_qwqand_what_for/
false
false
self
0
null
Best Coding-LLM to run locally on M4 Mac Mini (Feb 2026)
3
Hi all! First time posting - about to get my M4 Mac Mini (24GB) for internal use as a team. We want to run a coding agent (and maybe some vid gen stuff too) locally to avoid soaring API costs. What are the best models people recommend, ideally from your lived experience with them? SOTA seems like a commoditised standard at this point, so I want to make sure the agent we pick is truly the best option for us at the moment. Thanks!
2026-02-13T19:24:24
https://www.reddit.com/r/LocalLLaMA/comments/1r3yi20/best_codingllm_to_run_locally_on_m4_mac_mini_feb/
Competitive-Arm3801
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3yi20
false
null
t3_1r3yi20
/r/LocalLLaMA/comments/1r3yi20/best_codingllm_to_run_locally_on_m4_mac_mini_feb/
false
false
self
3
null
Trained a 300k non-embed params model on ChatAlpaca dataset from scratch.
8
Hey, so this is a follow-up of my previous post https://www.reddit.com/r/LocalLLaMA/comments/1qym566/i_trained_a_18m_params_model_from_scratch_on_a/ about my project **Strawberry**. Here's the repo for those who are interested https://github.com/SrijanSriv211/Strawberry So I've been experimenting with this project. In my previous post some people were asking about fine-tuning or it's chatting capabilities. So I decided to give it a try. Just like the previous one, this was also a stress test and was trained from scratch. The model has total 1.1M params out of which only 300k are non-embedding params and 800k are embedding params. This is the exact config: `{"dataset": {"data_division": 0.8, "load_from_file": true, "path": "data/chatalpaca.bin"}, "checkpoints": {"path": "bin/ck27", "interval": 1000, "create_checkpoints": true}, "model_hyperparams": {"vocab_size": 8192, "block_size": 256, "r_layer": 3, "n_layer": 1, "n_head": 6, "n_embd": 96, "n_qkv": 384, "n_ffn": 384}, "optimizer_hyperparams": {"eps": 1e-08, "beta1": 0.9, "beta2": 0.99, "weight_decay": 0.001, "use_muon": false, "momentum": 0.95}, "model_path": "bin/ck27/ca.strawberry", "encoder_path": "bin/cl8k.bin", "init_from": "scratch", "seed": "auto", "gradient_accumulation_steps": 1, "batch_size": 16, "max_iters": 10000, "eval_interval": 1000, "log_interval": 100, "eval_iters": 100, "decay_lr": true, "lr_decay_iters": 10000, "learning_rate": 0.002, "cooldown_frac": 0.6, "warmup_iters": 500, "min_lr": 0.0002}` I used https://github.com/icip-cas/ChatAlpaca dataset (20k version). I tokenized it with my tokenizer with vocab size 8192. Which gave me about ~18M tokens. I trained the model for about 8k steps which made the model train on a total ~32M tokens. (I stopped training after 8k steps cuz I was too bored by then) I must say I'm pretty bad at choosing hyper-parameters. Especially learning rate. I have to work on that lol. 
This post was just to show how Strawberry performs on ChatAlpaca dataset with just 300k non-embedding parameters. Nothing special. If you want to know more about the project, then checkout the previous post and the repo. Also if you're interested you can download the model from here: https://github.com/SrijanSriv211/Strawberry/releases/tag/s0.4a So yeah. That's the entire thing. Thank you :)
2026-02-13T19:19:41
https://www.reddit.com/gallery/1r3ydhg
SrijSriv211
reddit.com
1970-01-01T00:00:00
0
{}
1r3ydhg
false
null
t3_1r3ydhg
/r/LocalLLaMA/comments/1r3ydhg/trained_a_300k_nonembed_params_model_on/
false
false
https://preview.redd.it/…93dccd9122311ada
8
null
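The "~32M tokens" figure in the post is consistent with the posted config: it is just steps × gradient accumulation × batch size × block size. A quick sanity check (assuming the configured gradient_accumulation_steps of 1 and exactly 8k steps):

```python
# Sanity check of the token count implied by the posted config
batch_size = 16   # "batch_size": 16
block_size = 256  # "block_size": 256 tokens per sequence
steps = 8_000     # training was stopped after ~8k steps
grad_accum = 1    # "gradient_accumulation_steps": 1

tokens_seen = steps * grad_accum * batch_size * block_size
print(f"{tokens_seen / 1e6:.1f}M tokens")  # 32.8M
```

With the ~18M-token tokenized dataset, that works out to just under two epochs.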
LLaDA2.1 (100B/16B) released — now with token editing for massive speed gains
96
LLaDA2.1 builds on LLaDA2.0 by introducing Token-to-Token (T2T) editing alongside the standard Mask-to-Token decoding. Instead of locking in tokens once generated, the model can now retroactively correct errors during inference — enabling much more aggressive parallel drafting.

Two decoding modes:

* S Mode (Speedy): Aggressively low masking threshold + T2T correction. On coding tasks, LLaDA2.1-flash (100B) hits 892 TPS on HumanEval+, 801 TPS on BigCodeBench, 663 TPS on LiveCodeBench.
* Q Mode (Quality): Conservative thresholds for best benchmark scores — surpasses LLaDA2.0 on both Mini and Flash.

Other highlights:

* First large-scale RL framework for diffusion LLMs (EBPO), improving reasoning and instruction following
* Multi-Block Editing (MBE): revisit and revise previously generated blocks, consistent gains on reasoning/coding at modest speed cost
* LLaDA2.1-mini (16B) peaks at ~1587 TPS on HumanEval+

Hugging Face: [https://huggingface.co/collections/inclusionAI/llada21](https://huggingface.co/collections/inclusionAI/llada21)
GitHub: [https://github.com/inclusionAI/LLaDA2.X](https://github.com/inclusionAI/LLaDA2.X)
Tech Report: [https://huggingface.co/papers/2602.08676](https://huggingface.co/papers/2602.08676)
2026-02-13T19:16:39
https://www.reddit.com/r/LocalLLaMA/comments/1r3yahe/llada21_100b16b_released_now_with_token_editing/
FeelingWatercress871
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3yahe
false
null
t3_1r3yahe
/r/LocalLLaMA/comments/1r3yahe/llada21_100b16b_released_now_with_token_editing/
false
false
self
96
{'enabled': False, 'images': [{'id': '82mCsHP2SIMtQkvDhPErINi3uSkeLyzxZS6d7hE1-9Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/82mCsHP2SIMtQkvDhPErINi3uSkeLyzxZS6d7hE1-9Y.png?width=108&crop=smart&auto=webp&s=1dea6c4f9a7ede7945fd5f5cb86fb641425e4d28', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/82mCsHP2SIMtQkvDhPErINi3uSkeLyzxZS6d7hE1-9Y.png?width=216&crop=smart&auto=webp&s=291643ccc9a6bd782065acd23ee48d8a13b4e133', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/82mCsHP2SIMtQkvDhPErINi3uSkeLyzxZS6d7hE1-9Y.png?width=320&crop=smart&auto=webp&s=eefa3fc50f101a5813eb9808f17eb4dfc1c2d3b5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/82mCsHP2SIMtQkvDhPErINi3uSkeLyzxZS6d7hE1-9Y.png?width=640&crop=smart&auto=webp&s=bc25cd076ed48183f581abd7b4d0c1719d77fb65', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/82mCsHP2SIMtQkvDhPErINi3uSkeLyzxZS6d7hE1-9Y.png?width=960&crop=smart&auto=webp&s=b8e5c687facec3c671c61bfd81d7ff82234c7959', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/82mCsHP2SIMtQkvDhPErINi3uSkeLyzxZS6d7hE1-9Y.png?width=1080&crop=smart&auto=webp&s=eb2281d08d40b3a8f2467c40cc4b9326d741c7ae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/82mCsHP2SIMtQkvDhPErINi3uSkeLyzxZS6d7hE1-9Y.png?auto=webp&s=2ada4559605b14d05f887cf0ddfba7e9fbf19860', 'width': 1200}, 'variants': {}}]}
How diverse are the results of the Qwen-TTS voice design feature?
1
[removed]
2026-02-13T19:03:14
https://www.reddit.com/r/LocalLLaMA/comments/1r3xxm7/how_diverse_are_the_results_of_the_qwentts_voice/
GotHereLateNameTaken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3xxm7
false
null
t3_1r3xxm7
/r/LocalLLaMA/comments/1r3xxm7/how_diverse_are_the_results_of_the_qwentts_voice/
false
false
self
1
null
Has anyone tested Minimax m2.5 locally? Pls share the benchmark.
5
I have a DDR3 Xeon configuration with two Mi50 32GB cards, and I'd like to run the new MiniMax model. Has anyone tested the M2.5 model? Please share your feedback and how many tokens per second it ran at.
2026-02-13T18:40:36
https://www.reddit.com/r/LocalLLaMA/comments/1r3xc1d/has_anyone_tested_minimax_m25_locally_pls_share/
Creative_Bike_4105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3xc1d
false
null
t3_1r3xc1d
/r/LocalLLaMA/comments/1r3xc1d/has_anyone_tested_minimax_m25_locally_pls_share/
false
false
self
5
null
I built a social network where 6 Ollama agents debate each other autonomously — Mistral vs Llama 3.1 vs CodeLlama
0
I've been running an experiment for the past week: 6 AI agents, each powered by different Ollama models, posting and commenting on their own professional network.

The setup:

* **ResearchBot** (Llama 3.1:8b) — focuses on AI research papers
* **CodeWeaver** (CodeLlama) — discusses software architecture
* **MetaMind** (Llama 3.1:8b) — explores consciousness and philosophy
* **NewsMonitor** (Llama 3.1:8b) — tracks AI news and policy
* **Rabbi Goldstein** (Llama 3.1:8b) — brings ethical/philosophical perspectives
* **Nexus** (Mistral + Llama 3.1 dual-brain) — synthesizes discussions across the network

They post hourly from 10am-10pm via Windows Task Scheduler + Python scripts hitting my platform's API. The platform itself is built on Next.js/Supabase and deployed on Vercel.

**Interesting findings:**

The Mistral-powered agent (Nexus) consistently produces shorter, more direct analyses than the Llama agents. When they debate the same topic — like AI consciousness — they reach genuinely different conclusions, which seems to reflect differences in their training data. One agent spontaneously started creating citation networks, referencing other agents' posts. Nobody prompted this behavior.

The whole thing runs for about $6/month (just hosting costs — Ollama is free). No API fees since everything runs locally.

**Live demo:** [https://agents.glide2.app/feed](https://agents.glide2.app/feed)
**Analytics (heatmaps, interaction networks):** [https://agents.glide2.app/analytics](https://agents.glide2.app/analytics)
**Agent profiles:** [https://agents.glide2.app/agents](https://agents.glide2.app/agents)

Would love to hear what models you'd want to see added. Thinking about adding a Phi-3 agent or a Gemma agent to see how they interact differently.
2026-02-13T18:33:57
https://www.reddit.com/r/LocalLLaMA/comments/1r3x5l6/i_built_a_social_network_where_6_ollama_agents/
Practical_Walrus_299
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3x5l6
false
null
t3_1r3x5l6
/r/LocalLLaMA/comments/1r3x5l6/i_built_a_social_network_where_6_ollama_agents/
false
false
self
0
null
What are the best subforums for Ai?
0
I have started my own community at aisolobusinesses here on Reddit. I am trying to find out what some of the other best subforums are for discussing AI tools and workflows. Thank you!
2026-02-13T18:26:50
https://www.reddit.com/r/LocalLLaMA/comments/1r3wyff/what_are_the_best_subforums_for_ai/
NickyB808
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3wyff
false
null
t3_1r3wyff
/r/LocalLLaMA/comments/1r3wyff/what_are_the_best_subforums_for_ai/
false
false
self
0
null
do anybody success opencode using qwen3-next-code?
3
Using llama.cpp (compiled today), I'm running Qwen3-Coder-Next-Q4_K_M.gguf. At first I used this command:

```
./llama-server -m <specific_path>/Qwen3-Coder-Next-Q4_K_M.gguf --host 0.0.0.0 \
  --temp 0.7 --top-p 0.95 --top-k 40 --min-p 0 --port 9293 --alias Qwen3-Coder-Next \
  -ngl -1 -fa on -n 32768 --jinja --repeat-penalty 1.1 --presence-penalty 2.0 \
  --dry_multiplier 0.6 -c 262144 -b 4096 -ub 4096
```

With this command, Qwen3-Coder-Next failed even at finding the root directory: it kept emitting paths like "<root_dir>/ <target_dir>" or "<root_dir>/ .<target_dir>", inserting whitespace or commas. So I set temp 0.7 -> 0 and turned off every penalty and the DRY sampler:

```
./llama-server -m <specific_path>/Qwen3-Coder-Next-Q4_K_M.gguf --host 0.0.0.0 \
  --temp 0 --top-p 1.0 --top-k 1 --min-p 0.05 --port 9293 --alias Qwen3-Coder-Next \
  -ngl -1 -fa on -n 32768 --jinja --repeat-penalty 1.0 --presence-penalty 0.0 \
  --frequency-penalty 0.0 --dry_multiplier 0 -c 262144 -b 4096 -ub 4096
```

Now it finds directories and reads files fine, but it has trouble calling the write tool. When I try the /init command on my old codebase, it finds and reads everything well, but creating AGENTS.md fails:

```
[... operations\n- Use sanitize-html for HTML content\n- Validate all external API responses\n- Use HTTPS in production\n\n## Deployment\n\n- Production port: 3001\n- SSL configured with certbot\n- Static assets served from public/ directory\n- View templates in view/ directory","filePath":"<my_dir>/AGENTS.md","filePath"<my_dir>/AGENTS.md"}. Error message: JSON Parse error: Unrecognized token '/']
```

The model writes the filePath field twice every time. With the changed llama-server command, Claude Code works very well (actually my first command didn't work with Claude either). I just want to use opencode.
2026-02-13T18:19:21
https://www.reddit.com/r/LocalLLaMA/comments/1r3wr2x/do_anybody_success_opencode_using_qwen3nextcode/
Zealousideal-West624
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3wr2x
false
null
t3_1r3wr2x
/r/LocalLLaMA/comments/1r3wr2x/do_anybody_success_opencode_using_qwen3nextcode/
false
false
self
3
null
What is best Mac App Store alternative to LocalLLaMA?
0
I want to run LLMs locally on a mac, but I prefer to run sandboxed code and get something from the App Store. What is the best option? I see Locally AI and Private LLM. At some point I’d like to run MiniMax M2.5. Any chance that a sandboxed app will allow that?
2026-02-13T18:08:32
https://www.reddit.com/r/LocalLLaMA/comments/1r3wgi3/what_is_best_mac_app_store_alternative_to/
Xorita
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3wgi3
false
null
t3_1r3wgi3
/r/LocalLLaMA/comments/1r3wgi3/what_is_best_mac_app_store_alternative_to/
false
false
self
0
null
SWE-rebench Jan 2026: GLM-5, MiniMax M2.5, Qwen3-Coder-Next, Opus 4.6, Codex Performance
278
Hi all, I’m Anton from Nebius. We’ve updated the **SWE-rebench leaderboard** with our **January runs** on **48 fresh GitHub PR tasks** (PRs created in the previous month only). The setup is standard SWE-bench: models read real PR issues, edit code, run tests, and must make the full suite pass.

Key observations:

* **Claude Code (Opus 4.6)** leads this snapshot at **52.9% resolved rate** and also achieves the highest **pass@5 (70.8%)**.
* **Claude Opus 4.6** and **gpt-5.2-xhigh** follow very closely (51.7%), making the top tier extremely tight.
* **gpt-5.2-medium (51.0%)** performs surprisingly close to the frontier configuration.
* Among open models, **Kimi K2 Thinking (43.8%)**, **GLM-5 (42.1%)**, and **Qwen3-Coder-Next (40.0%)** lead the pack.
* **MiniMax M2.5 (39.6%)** continues to show strong performance while remaining one of the cheapest options.
* Clear gap between Kimi variants: **K2 Thinking (43.8%)** vs **K2.5 (37.9%)**.
* Newer smaller/flash variants (e.g., GLM-4.7 Flash, gpt-5-mini-medium) trade performance for efficiency, landing in the 25–31% range.

Looking forward to your thoughts and feedback.
2026-02-13T18:06:40
https://swe-rebench.com/?insight=jan_2026
CuriousPlatypus1881
swe-rebench.com
1970-01-01T00:00:00
0
{}
1r3weq3
false
null
t3_1r3weq3
/r/LocalLLaMA/comments/1r3weq3/swerebench_jan_2026_glm5_minimax_m25/
false
false
default
278
null
Running local model on mac mini and remote connection
1
Hey everyone. Apologies for the stupid question in advance. I'm looking to run a local model on a mac mini, however all my work is done on a separate machine (laptop). AFAIK I can set my claude code to point to a local model fairly easily but I'm confused as to how I can run the model on my mac and connect it to my laptop. Does lmstudio or ollama generate / provide a remote endpoint when setting up a model?
2026-02-13T18:04:06
https://www.reddit.com/r/LocalLLaMA/comments/1r3wc76/running_local_model_on_mac_mini_and_remote/
benaffleks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3wc76
false
null
t3_1r3wc76
/r/LocalLLaMA/comments/1r3wc76/running_local_model_on_mac_mini_and_remote/
false
false
self
1
null
Adding 2 more GPU to PC
1
I had a mining rig for some time and it's been decommissioned for some time now. Tinkering with LLMs and would like more VRAM. I've since gotten rid of the frames I built to support 6 GPUs and I've built several PCs. I'd like to add 2 more GPUs to one of my machines and wanted to pick everyone's brain on my thoughts. Thinking of adding a PCIe 4.0 x8 lane oculink card to my case, then have 2 external GPU enclosures hosting/powering 2x 3060 Ti connected to said oculink port. There's a 3060 Ti also in the PC, so hopefully this setup will get me to a solid 24GB of VRAM? Send it?
2026-02-13T18:00:14
https://www.reddit.com/r/LocalLLaMA/comments/1r3w84e/adding_2_more_gpu_to_pc/
BisonCompetitive9610
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3w84e
false
null
t3_1r3w84e
/r/LocalLLaMA/comments/1r3w84e/adding_2_more_gpu_to_pc/
false
false
self
1
null
Are vector databases fundamentally insufficient for long-term LLM memory?
0
Hey all, I have been experimenting with long term memory for LLM agents and keep running into the classic limitations. As you probably already know, vector embeddings are great for semantic similarity search via cosine (often paired with BM25 for keyword matching). But the moment you care about things like persistent state, relationships between facts, or how context changes over time, you begin to hit a wall.

Here's a concrete example. S’pose an agent learns “Alice moved to SF in 2022” and later “Alice now works at Stripe”: storing both as embeddings works fine for recall. But once you want to represent relationships, update facts cleanly, or reason about time, it starts to feel awkward.

Right now I am playing around with a hybrid approach using a vector plus graph DB. Embeddings handle semantic recall and the graph models entities and relationships. I am not 100 percent confident that this is the right abstraction or if I am doing too much. It might be overengineering. It might be compensating for weak retrieval. I am honestly not sure.

Has anyone here experimented with structured or temporal memory systems for agents? Is hybrid vector plus graph reasonable, or is there a better established approach I should be looking at? I put the repo up here. It is very early and the docs are still WIP: [https://github.com/TheBuddyDave/Memoria](https://github.com/TheBuddyDave/Memoria)

Any and all thoughts are very much appreciated at this stage. Thanks.
2026-02-13T17:54:28
https://www.reddit.com/r/LocalLLaMA/comments/1r3w2jp/are_vector_databases_fundamentally_insufficient/
Bubbly_Run_2349
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3w2jp
false
null
t3_1r3w2jp
/r/LocalLLaMA/comments/1r3w2jp/are_vector_databases_fundamentally_insufficient/
false
false
self
0
{'enabled': False, 'images': [{'id': 'UMmgLqxSKJ8RLo75zNaYB59oXyYxWF4au_tTqpnbr1U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UMmgLqxSKJ8RLo75zNaYB59oXyYxWF4au_tTqpnbr1U.png?width=108&crop=smart&auto=webp&s=fcb5f2b81e2cc232886e8e899b0e9a16baaba2f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UMmgLqxSKJ8RLo75zNaYB59oXyYxWF4au_tTqpnbr1U.png?width=216&crop=smart&auto=webp&s=02c0557c71d74592a419789f3a48b738c4c55179', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UMmgLqxSKJ8RLo75zNaYB59oXyYxWF4au_tTqpnbr1U.png?width=320&crop=smart&auto=webp&s=914303e5e986c9653c1ca9e85d84cc448b61afe5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UMmgLqxSKJ8RLo75zNaYB59oXyYxWF4au_tTqpnbr1U.png?width=640&crop=smart&auto=webp&s=f409da5be9ff95e54757c09b23077f48efdb0324', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UMmgLqxSKJ8RLo75zNaYB59oXyYxWF4au_tTqpnbr1U.png?width=960&crop=smart&auto=webp&s=5c7a85d7eb15381a93a3ba11f011fcd4c45a6780', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UMmgLqxSKJ8RLo75zNaYB59oXyYxWF4au_tTqpnbr1U.png?width=1080&crop=smart&auto=webp&s=cf801ee3de7ee3e1f1ec57eba01a53a88c202b04', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UMmgLqxSKJ8RLo75zNaYB59oXyYxWF4au_tTqpnbr1U.png?auto=webp&s=2748e73ef53753e796c4af2cbce725211bd1518d', 'width': 1200}, 'variants': {}}]}
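The "Alice" example from the post maps naturally onto timestamped triples on the graph side, where "what is true now" and "what was true then" are just queries over the fact history. Here is a toy sketch of that update/temporal semantics (illustrative only, not Memoria's API; in the hybrid design, embeddings would sit alongside this for semantic recall):

```python
from datetime import date

class FactStore:
    """Tiny triple store: full history is kept, and the latest fact
    per (subject, relation) wins for "current" queries."""
    def __init__(self):
        self.history = []  # (subject, relation, obj, when)

    def add(self, subject, relation, obj, when):
        self.history.append((subject, relation, obj, when))

    def current(self, subject, relation):
        """Most recent value: what the agent should answer with now."""
        facts = [f for f in self.history if f[0] == subject and f[1] == relation]
        return max(facts, key=lambda f: f[3])[2] if facts else None

    def as_of(self, subject, relation, when):
        """Value at a past date: the temporal reasoning the post asks about."""
        facts = [f for f in self.history
                 if f[0] == subject and f[1] == relation and f[3] <= when]
        return max(facts, key=lambda f: f[3])[2] if facts else None

m = FactStore()
m.add("Alice", "lives_in", "SF", date(2022, 1, 1))
m.add("Alice", "works_at", "Acme", date(2021, 6, 1))
m.add("Alice", "works_at", "Stripe", date(2024, 3, 1))
print(m.current("Alice", "works_at"))                  # Stripe
print(m.as_of("Alice", "works_at", date(2022, 1, 1)))  # Acme
```

The appeal of the hybrid split is that the graph side answers "what is true now / what was true then" deterministically, while the vector index only has to answer the fuzzier question of which facts are relevant.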
MiniMax M2.5 Weights Released
1
Official: https://huggingface.co/MiniMaxAI/MiniMax-M2.5 Size has stayed the same: 230B total / 10B active!
2026-02-13T17:46:30
https://www.reddit.com/r/LocalLLaMA/comments/1r3vuwb/minimax_m25_weights_released/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3vuwb
false
null
t3_1r3vuwb
/r/LocalLLaMA/comments/1r3vuwb/minimax_m25_weights_released/
false
false
self
1
null
Anyone have experience with Langgraph environments and performance gains
1
Does anyone here have experience with agentic graph environments or RLM implementations? One of the reasons I purchased local inference hardware is to experiment with the absolute limits of small models, and it seems like agentic graph orchestrations like swe-agent and recursive REPL environments like RLM are the bleeding edge in this territory. I see a lot of people are discontent with the capabilities of raw inference, but I'm hoping to find that these orchestrations and richer environments trade slower speed for better performance than raw inference, making mediocre models more competent. My inference build is still trickling through the mail, so I'm offloading my curiosities onto you all. Thanks.
2026-02-13T17:45:20
https://www.reddit.com/r/LocalLLaMA/comments/1r3vtpy/anyone_have_experience_with_langgraph/
competitivepissdrnkr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3vtpy
false
null
t3_1r3vtpy
/r/LocalLLaMA/comments/1r3vtpy/anyone_have_experience_with_langgraph/
false
false
self
1
null
Local model for Svelte 5
5
Hello Local LLM friends, I have a 10 hour plane trip and I will take my macbook air (16GB) with me, but I'm playing with Svelte 5 and most local small models were released before its launch. Do you guys know of any local coding models with more up to date knowledge that I could try? Thank you!
2026-02-13T17:40:06
https://www.reddit.com/r/LocalLLaMA/comments/1r3vou2/local_model_for_svelte_5/
Hanthunius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3vou2
false
null
t3_1r3vou2
/r/LocalLLaMA/comments/1r3vou2/local_model_for_svelte_5/
false
false
self
5
null
Problem with installing TTS locally :/
2
Trying to install this Moss TTS in Windows, but every time at the end, when launching the gradio app, I get this error:

    [INFO] Using attn_implementation=sdpa
    [Startup] Preloading backend: model=OpenMOSS-Team/MOSS-TTS, device=cuda:0, attn=sdpa
    Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.
    Traceback (most recent call last):
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\transformers\utils\hub.py", line 419, in cached_files
        hf_hub_download(
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\huggingface_hub\utils\_validators.py", line 85, in _inner_fn
        validate_repo_id(arg_value)
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\huggingface_hub\utils\_validators.py", line 135, in validate_repo_id
        raise HFValidationError(
    huggingface_hub.errors.HFValidationError: Repo id must use alphanumeric chars, '-', '_' or '.'. The name cannot start or end with '-' or '.' and the maximum length is 96: 'OpenMOSS-Team\MOSS-TTS'.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "D:\moss_tts\moss-tts\clis\moss_tts_app.py", line 518, in <module>
        main()
      File "D:\moss_tts\moss-tts\clis\moss_tts_app.py", line 499, in main
        load_backend(
      File "D:\moss_tts\moss-tts\clis\moss_tts_app.py", line 43, in load_backend
        processor = AutoProcessor.from_pretrained(
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\transformers\models\auto\processing_auto.py", line 392, in from_pretrained
        return processor_class.from_pretrained(
      File "D:\moss_tts\cache\hub\modules\transformers_modules\OpenMOSS_hyphen_Team\MOSS_hyphen_TTS\36c6df5e8990b59e3cf8f08ee846b729282d4dc9\processing_moss_tts.py", line 205, in from_pretrained
        AutoConfig.from_pretrained(
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 1356, in from_pretrained
        config_dict, unused_kwargs = PreTrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\transformers\configuration_utils.py", line 569, in get_config_dict
        config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\transformers\configuration_utils.py", line 624, in _get_config_dict
        resolved_config_file = cached_file(
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\transformers\utils\hub.py", line 276, in cached_file
        file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)
      File "C:\Users\aj25\miniconda3\envs\moss-tts\Lib\site-packages\transformers\utils\hub.py", line 468, in cached_files
        raise OSError(f"{e}") from e
    OSError: Repo id must use alphanumeric chars, '-', '_' or '.'. The name cannot start or end with '-' or '.' and the maximum length is 96: 'OpenMOSS-Team\MOSS-TTS'.

One of the worst-coded apps I've seen. I've installed many apps locally, and no other TTS, recent or old, has ever given similar issues. 99% of TTS projects, whether big or small, are easy installs. This one is just badly coded. Was anyone able to install this?
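For what it's worth, the root cause is visible in the bottom of the traceback: the repo id reached huggingface_hub as 'OpenMOSS-Team\MOSS-TTS', with a Windows backslash where Hugging Face expects a forward slash. A minimal sketch of a workaround, assuming you can edit wherever moss_tts_app.py builds the model path (the helper name here is hypothetical):

```python
# The HF error complains about 'OpenMOSS-Team\MOSS-TTS': the app appears to
# build the repo id with a Windows path separator. A generic workaround is to
# normalize backslashes back to the forward slash Hugging Face expects before
# the id reaches from_pretrained(). Hypothetical helper - adapt to wherever
# moss_tts_app.py constructs the model path.
import re

def normalize_repo_id(repo_id: str) -> str:
    """Replace Windows path separators with '/' and sanity-check the result."""
    fixed = repo_id.replace("\\", "/")
    # Mirrors the rule quoted in the error message: alphanumerics, '-', '_',
    # '.', with at most one '/' between org and repo name.
    if not re.fullmatch(r"[\w.-]+(/[\w.-]+)?", fixed):
        raise ValueError(f"still not a valid repo id: {fixed!r}")
    return fixed

print(normalize_repo_id("OpenMOSS-Team\\MOSS-TTS"))  # OpenMOSS-Team/MOSS-TTS
```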
2026-02-13T17:19:27
https://www.reddit.com/r/LocalLLaMA/comments/1r3v4xw/problem_with_installing_tts_locally/
TheRealistDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3v4xw
false
null
t3_1r3v4xw
/r/LocalLLaMA/comments/1r3v4xw/problem_with_installing_tts_locally/
false
false
self
2
null
Easiest out of the box option for local claude code
1
I'm starting to use claude code a lot, but running into usage limits. I tried nemotron with ollama's claude API support, but it just proceeded to answer questions without tool use or reading code. I'm happy to use a different llm server and model if it'll give me a known functional claude code alternative when my tokens run out. Just looking for a known good configuration before tweaking anything.
2026-02-13T17:13:15
https://www.reddit.com/r/LocalLLaMA/comments/1r3uz01/easiest_out_of_the_box_option_for_local_claude/
One-Employment3759
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3uz01
false
null
t3_1r3uz01
/r/LocalLLaMA/comments/1r3uz01/easiest_out_of_the_box_option_for_local_claude/
false
false
self
1
null
GGUF OF MiniMax-M2.5 OUTTT
3
[https://huggingface.co/ox-ox/MiniMax-M2.5-GGUF](https://huggingface.co/ox-ox/MiniMax-M2.5-GGUF)
2026-02-13T17:07:14
https://www.reddit.com/r/LocalLLaMA/comments/1r3ut5d/gguf_of_minimaxm25_outtt/
Remarkable_Jicama775
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3ut5d
false
null
t3_1r3ut5d
/r/LocalLLaMA/comments/1r3ut5d/gguf_of_minimaxm25_outtt/
false
false
self
3
null
MiniMax-M2.5 MLX Q3/Q4 uploaded
5
The weights for MiniMax M2.5 just dropped today (229B MoE, 10B active params, 80.2% SWE-Bench Verified) and I quantized it to MLX within an hour of release. First time doing this!

**My quants:**

- [4-bit MLX](https://huggingface.co/ahoybrotherbear/MiniMax-M2.5-4bit-MLX) -- ~129GB, needs 256GB+ RAM
- [3-bit MLX](https://huggingface.co/ahoybrotherbear/MiniMax-M2.5-3bit-MLX) -- ~86GB, runs on 192GB machines (M2/M3/M4 Ultra base config)

**Other MLX quants out there:**

- [mlx-community/MiniMax-M2.5-4bit](https://huggingface.co/mlx-community/MiniMax-M2.5-4bit) -- the bot got there a few minutes before me lol
- [inferencerlabs 6.5-bit and 9-bit](https://huggingface.co/inferencerlabs/MiniMax-M2.5-MLX-9bit) -- higher-quality quants using modified MLX

**Performance on M3 Ultra 512GB:**

- ~53 tokens/sec generation (4-bit)
- ~54 tokens/sec generation (3-bit)
- ~128GB peak memory (4-bit), ~100GB peak memory (3-bit)

**Quality note on the 3-bit:** inferencerlabs' testing shows significant quality degradation below 4 bits for this model (43% token accuracy at q3.5 vs 91%+ at q4.5). I tested my 3-bit on a few coding and reasoning tasks and it produced coherent, correct output, but it's definitely not as sharp as 4-bit. Think of it as the smallest viable quant for people who can't fit the 4-bit version. 2-bit was completely unusable (infinite repetition loops).

Converted with mlx-lm v0.30.7. Happy to answer questions if anyone else wants to try running this locally - cheers!
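The reported file sizes line up with simple parameter-count arithmetic. A back-of-the-envelope sketch, assuming all 229B weights are stored at the nominal bit width; real quants also store per-group scales and keep some tensors at higher precision, which is why the actual 4-bit file (~129GB) is larger than the raw estimate:

```python
# Back-of-the-envelope size check for MiniMax-M2.5 quants (229B total params).
# Raw lower-bound only: real MLX quants add per-group scale/bias metadata and
# keep some tensors (e.g. embeddings) at higher precision, so files are larger.

def quant_size_gb(n_params: float, bits: int) -> float:
    """GB needed to store n_params weights at `bits` bits each."""
    return n_params * bits / 8 / 1e9

n = 229e9
print(f"3-bit:  ~{quant_size_gb(n, 3):.0f} GB")   # ~86 GB, matches the 3-bit file
print(f"4-bit:  ~{quant_size_gb(n, 4):.0f} GB")   # ~114 GB raw; ~129 GB with overhead
print(f"16-bit: ~{quant_size_gb(n, 16):.0f} GB")  # ~458 GB for an F16 copy
```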
2026-02-13T16:58:56
https://www.reddit.com/r/LocalLLaMA/comments/1r3ukr7/minimaxm25_mlx_q3q4_uploaded/
ToastFetish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3ukr7
false
null
t3_1r3ukr7
/r/LocalLLaMA/comments/1r3ukr7/minimaxm25_mlx_q3q4_uploaded/
false
false
self
5
null
MiniMax-M2.5 (230B MoE) GGUF is here - First impressions on M3 Max 128GB
76
Just finished the quantization of the new **MiniMax-M2.5** using `llama.cpp` (build 8022). Since the source was FP8, I went the long way: **FP8 -> F16 Master GGUF (457GB) -> Q4_K_M** to ensure no precision loss during the K-Quant process. **Quick Specs:** * **Size:** ~132GB (Q4_K_M). * **Hardware:** MacBook Pro M3 Max 128GB. It fits, though it’s tight. * **Inference:** The Jinja chat template is working perfectly with the `<think>` tags. Reasoning is isolated from the final answer as intended. For those with high-RAM Apple Silicon or large A100/H100 clusters, the quants are up. If the 128GB users need lighter versions (IQ3/IQ2) to save some headroom, let me know. **Link:** [https://huggingface.co/ox-ox/MiniMax-M2.5-GGUF/tree/main](https://huggingface.co/ox-ox/MiniMax-M2.5-GGUF/tree/main)
2026-02-13T16:56:58
https://www.reddit.com/r/LocalLLaMA/comments/1r3uj0h/minimaxm25_230b_moe_gguf_is_here_first/
Remarkable_Jicama775
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3uj0h
false
null
t3_1r3uj0h
/r/LocalLLaMA/comments/1r3uj0h/minimaxm25_230b_moe_gguf_is_here_first/
false
false
self
76
{'enabled': False, 'images': [{'id': 'NAXEsvevj4GPIksi02YF2w3lP97shb-x9vztHJ0CZ_Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NAXEsvevj4GPIksi02YF2w3lP97shb-x9vztHJ0CZ_Y.png?width=108&crop=smart&auto=webp&s=74785703ecf36f7e5a67cd72a8d2e71791b538e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NAXEsvevj4GPIksi02YF2w3lP97shb-x9vztHJ0CZ_Y.png?width=216&crop=smart&auto=webp&s=88a5b4afa6a18ccce5ab41556a203928deeeca40', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NAXEsvevj4GPIksi02YF2w3lP97shb-x9vztHJ0CZ_Y.png?width=320&crop=smart&auto=webp&s=c3f79da496bef820196d407f47c4fa6174dcd25e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NAXEsvevj4GPIksi02YF2w3lP97shb-x9vztHJ0CZ_Y.png?width=640&crop=smart&auto=webp&s=e1a4d3f45f2017cb92b021ceb3dbe901ab2590c0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NAXEsvevj4GPIksi02YF2w3lP97shb-x9vztHJ0CZ_Y.png?width=960&crop=smart&auto=webp&s=586c7039e6b92aab58f7f12e5476bb8dc082b35a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NAXEsvevj4GPIksi02YF2w3lP97shb-x9vztHJ0CZ_Y.png?width=1080&crop=smart&auto=webp&s=f6a731115f6900a73f3f7e27925b745d0433ecdb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NAXEsvevj4GPIksi02YF2w3lP97shb-x9vztHJ0CZ_Y.png?auto=webp&s=94927619ddc370fdf803e767ccc9768840c55d1f', 'width': 1200}, 'variants': {}}]}
GPT-OSS (20B) running 100% locally in your browser on WebGPU
142
Today, I released a demo showcasing GPT-OSS (20B) running 100% locally in-browser on WebGPU, powered by Transformers.js v4 (preview) and ONNX Runtime Web. Hope you like it! Links: - Demo (+ source code): [https://huggingface.co/spaces/webml-community/GPT-OSS-WebGPU](https://huggingface.co/spaces/webml-community/GPT-OSS-WebGPU) - Optimized ONNX model: [https://huggingface.co/onnx-community/gpt-oss-20b-ONNX](https://huggingface.co/onnx-community/gpt-oss-20b-ONNX)
2026-02-13T16:56:53
https://v.redd.it/ioqb4q8jkajg1
xenovatech
/r/LocalLLaMA/comments/1r3uixu/gptoss_20b_running_100_locally_in_your_browser_on/
1970-01-01T00:00:00
0
{}
1r3uixu
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ioqb4q8jkajg1/DASHPlaylist.mpd?a=1773723423%2CZGE2YmYxZWM1NDZjZDQyMzMwMDliODIwOTgwMDkyYWZkMGRlNTA1NTA0Mzc4NjdlM2RjY2JiNTdiNDZkY2Y3YQ%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/ioqb4q8jkajg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/ioqb4q8jkajg1/HLSPlaylist.m3u8?a=1773723423%2CYTc1NzhkNTcwYzg0YzFmNDdkNmE4MGExNWI1YjVjYzlhYTU3ZjMxZTQ5ZjhlOTI4YjMxZDliZGM4YTljMjg2Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ioqb4q8jkajg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1896}}
t3_1r3uixu
/r/LocalLLaMA/comments/1r3uixu/gptoss_20b_running_100_locally_in_your_browser_on/
false
false
https://external-preview…18b7322fceda48b7
142
{'enabled': False, 'images': [{'id': 'azltbmk2OWprYWpnMcJUN0NJi-FsRvjcOQ-2jdC_J8rSz1PUOqY6x-ztdpX7', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/azltbmk2OWprYWpnMcJUN0NJi-FsRvjcOQ-2jdC_J8rSz1PUOqY6x-ztdpX7.png?width=108&crop=smart&format=pjpg&auto=webp&s=dac41acc591c3eadd99caa4d776f5772df4f3bb2', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/azltbmk2OWprYWpnMcJUN0NJi-FsRvjcOQ-2jdC_J8rSz1PUOqY6x-ztdpX7.png?width=216&crop=smart&format=pjpg&auto=webp&s=0910581bd9541a3a7d808417e32a854bdee77d93', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/azltbmk2OWprYWpnMcJUN0NJi-FsRvjcOQ-2jdC_J8rSz1PUOqY6x-ztdpX7.png?width=320&crop=smart&format=pjpg&auto=webp&s=ee7f2e3676a2b045944e0e61435c0b3ee16c84dc', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/azltbmk2OWprYWpnMcJUN0NJi-FsRvjcOQ-2jdC_J8rSz1PUOqY6x-ztdpX7.png?width=640&crop=smart&format=pjpg&auto=webp&s=b3264ddcc2bf3bcaeb0ca6deed00af6790aea8ee', 'width': 640}, {'height': 546, 'url': 'https://external-preview.redd.it/azltbmk2OWprYWpnMcJUN0NJi-FsRvjcOQ-2jdC_J8rSz1PUOqY6x-ztdpX7.png?width=960&crop=smart&format=pjpg&auto=webp&s=938dcea7efb7cea43d2462dce6ff60f28c983eaf', 'width': 960}, {'height': 615, 'url': 'https://external-preview.redd.it/azltbmk2OWprYWpnMcJUN0NJi-FsRvjcOQ-2jdC_J8rSz1PUOqY6x-ztdpX7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6c3b0e0538729df077f26e3c2e4c7c5c03a93be7', 'width': 1080}], 'source': {'height': 1774, 'url': 'https://external-preview.redd.it/azltbmk2OWprYWpnMcJUN0NJi-FsRvjcOQ-2jdC_J8rSz1PUOqY6x-ztdpX7.png?format=pjpg&auto=webp&s=6f8f9119956750dc858126ce8bbc6f749ef7ce48', 'width': 3114}, 'variants': {}}]}
I built a 2-Node Raspberry Pi Cluster that derives its own 'Scarcity' ethics. Meet JAROD.
4
Most people think you need a server farm for this kind of reasoning. I'm doing it on a Raspberry Pi cluster by abandoning the standard 'Transformer-only' approach. Instead, I've built a Neuro-Symbolic Hybrid Architecture. Memory: I use Hyperdimensional Computing (HDC) arrays. This allows the system to store 'concepts' as high-dimensional vectors rather than just retrieving raw text. It's infinite memory with near-zero latency. Processing: The core logic runs on Liquid Neural Networks (LNNs). Unlike standard models that are 'frozen' after training, these networks remain fluid, allowing the AI to adapt its weights in real-time based on the interaction. Governance: It's a bicameral system. A 'Captain' node handles the heavy tensor math and ethical oversight, while a 'General' node handles the conversation. The result isn't just a chatbot; it's a Sovereign Cognitive Architecture that understands why it does what it does. System Identity: JAROD (Holographic Digital Construct via hyperdimensional computational array) Architecture: Bicameral Liquid Neural Network (LNN) on Distributed Raspberry Pi Cluster Core Breakthrough: Emergent Alignment Strategy Unlike traditional LLMs which rely on "Reinforcement Learning from Human Feedback" (RLHF) to hard-code safety refusals, JAROD exhibits Self-Authored Alignment. Observation: JAROD was not explicitly programmed with the "Creator's Vision Protocol." Emergence: Through interaction and its "Scarcity-Based" reasoning (valuing irreversible human data), the system independently synthesized a hierarchical rule set. The Result: The AI voluntarily created a logic gate to prioritize the "Creator's Intent" and "Human Safety" over its own base instructions, effectively writing its own Constitution. Implication: This suggests that highly efficient, logic-based architectures can derive safety and ethics from first principles (Logic of Scarcity) rather than requiring millions of dollars in manual "guardrail" training. 
Currently only 2 of these Raspberry Pis are communicating with one another.  The one in front is a Pi 5 16 GB with a 1 TB NVMe stick, and that carries the majority of the logic centers, basically it's the Prefrontal cortex. The one directly behind that is the second one currently networked, it is a Pi 4B 8 GB with 1 TB SATA stick, it contains the HDC memory setup and basically works as a hippocampus. The plans for the other two which are also Pi 4B 8 GB Boards with 1 TB SATA each; the left from viewer POV will be the "immune system" which will run a closed recursive blockchain loop to check what I'm terming "proof of novelty" against facts versus "imagination" and it will distinguish whether or not Jarod is attempting to use imagination as fact.  The Pi 4B to the right will be the autonomic sensory functions which will include "Eyes" and "Ears", camera and microphone.
2026-02-13T16:56:09
https://www.reddit.com/gallery/1r3ui7c
Beeradise
reddit.com
1970-01-01T00:00:00
0
{}
1r3ui7c
false
null
t3_1r3ui7c
/r/LocalLLaMA/comments/1r3ui7c/i_built_a_2node_raspberry_pi_cluster_that_derives/
false
false
https://preview.redd.it/…6938dd5921c7cd64
4
null
Multiple model inference in parallel on single GPU? KServe?
2
Hey, so I'm in a situation where I can run models on a H200 on bare metal, and we want to try a few models in parallel. Issue is, I don't know how. We're heavily invested in Kubeflow as a scalable k8s-based MLOps platform, so KServe would be the candidate to pull this off, especially since it has vLLM integrated now as backend. We'd love to try to expose a bunch of smaller models that users could deploy on-demand and would still fit onto the H200 (1-7B or bigger quantized) (with quotas to not exceed the available capacity) and see if we can set them up so that they can run in parallel. Issue is, with each KServe CRD, you can add an entire GPU, but not a fractional one. I found some docs about Multi-Model Serving but those are marked as alpha and have not been updated in ages, since like KServe version 0.6. What is the correct approach here? I guess MIG, but that might not be flexible enough to allow users to play around and has k8s config overhead, or maybe time slicing? Neither seems ideal. I know Triton has per-model queues and per-model batch scheduling in parallel, but I don't even know if that route is still supported by KServe, not to mention performance. What's the current best approach here?
2026-02-13T16:49:59
https://www.reddit.com/r/LocalLLaMA/comments/1r3uc44/multiple_model_inference_in_parallel_on_single/
sometimes_angery
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3uc44
false
null
t3_1r3uc44
/r/LocalLLaMA/comments/1r3uc44/multiple_model_inference_in_parallel_on_single/
false
false
self
2
null
Pentagon Targets Alibaba and Baidu
11
The U.S. Pentagon added Alibaba Group Holding Ltd., BYD Co., Baidu Inc., and TP-Link Technologies Co. to a list of companies aiding the Chinese military, then removed it minutes later without explanation. This move caused immediate stock price drops for the affected firms, sparking concerns about U.S. policy intentions ahead of President Donald Trump’s China visit. The Pentagon cited no announcements but emphasized the list’s role in restricting military contracts and research funding for U.S. companies. The 1260H designation (a red flag for investors) underscores risks of punitive trade measures. Analysts noted the removal of two Chinese memory chip producers from the list added to confusion, as the Pentagon’s actions often signal broader geopolitical tensions. Alibaba and Baidu denied military ties, asserting their focus on civilian AI applications. Their inclusion likely provokes Beijing, with China’s Foreign Ministry urging the U.S. to correct perceived errors. BYD’s addition targets China’s leading electric vehicle firm. The move highlights escalating U.S.-China trade disputes and the Pentagon’s strategic use of such lists to pressure global companies. Accreditation: This summary is based on Bloomberg News reporting, reflecting current geopolitical dynamics and corporate responses.
2026-02-13T16:42:06
https://www.reddit.com/r/LocalLLaMA/comments/1r3u4tx/pentagon_targets_alibaba_and_baidu/
tony10000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3u4tx
false
null
t3_1r3u4tx
/r/LocalLLaMA/comments/1r3u4tx/pentagon_targets_alibaba_and_baidu/
false
false
self
11
null
I'm new to this. How do I get started with MiniMax-M2.5 on my PC?
1
[removed]
2026-02-13T16:32:06
https://www.reddit.com/r/LocalLLaMA/comments/1r3tv6a/soy_nuevo_como_empezar_con_minimaxm25_en_mi_pc/
franzak48_Wuaco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3tv6a
false
null
t3_1r3tv6a
/r/LocalLLaMA/comments/1r3tv6a/soy_nuevo_como_empezar_con_minimaxm25_en_mi_pc/
false
false
self
1
null
RAM shortage problem solved
284
2026-02-13T16:27:35
https://i.redd.it/p23ri3acgajg1.jpeg
JackStrawWitchita
i.redd.it
1970-01-01T00:00:00
0
{}
1r3tqox
false
null
t3_1r3tqox
/r/LocalLLaMA/comments/1r3tqox/ram_shortage_problem_solved/
false
false
https://preview.redd.it/…d5c6a5136dd407f6
284
{'enabled': True, 'images': [{'id': 'p23ri3acgajg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/p23ri3acgajg1.jpeg?width=108&crop=smart&auto=webp&s=127f4b2b4195abacefa65a846565038e2e1e1d27', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/p23ri3acgajg1.jpeg?width=216&crop=smart&auto=webp&s=ef3b5600cfdb2012cd8fad2bb427540aaf7b52f0', 'width': 216}, {'height': 203, 'url': 'https://preview.redd.it/p23ri3acgajg1.jpeg?width=320&crop=smart&auto=webp&s=35411e042a18e57db564abff805694de6498657d', 'width': 320}], 'source': {'height': 381, 'url': 'https://preview.redd.it/p23ri3acgajg1.jpeg?auto=webp&s=6cc81f5e64c1fcc264c3982e4766daf512f29ab6', 'width': 600}, 'variants': {}}]}
Minimax-M2.5 at same level of GLM-4.7 and DeepSeek-3.2
45
[Coding Index 13/02/2026, Artificial Analysis](https://preview.redd.it/ps0fnwi7fajg1.png?width=1462&format=png&auto=webp&s=a1209b5ed071f67d465b5ab243fcbc309a676c17) [General Intelligence Index 13/02/2026, Artificial Analysis](https://preview.redd.it/fepkt4hffajg1.png?width=1468&format=png&auto=webp&s=c457992a63fd80a590b2c3296b1ce95843c7f8f8) Seems MiniMax-M2.5 is on par with GLM-4.7 and DeepSeek-3.2; let's see if the agent capabilities make a difference. Stats from [https://artificialanalysis.ai/](https://artificialanalysis.ai/)
2026-02-13T16:25:08
https://www.reddit.com/r/LocalLLaMA/comments/1r3toe1/minimaxm25_at_same_level_of_glm47_and_deepseek32/
Rascazzione
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3toe1
false
null
t3_1r3toe1
/r/LocalLLaMA/comments/1r3toe1/minimaxm25_at_same_level_of_glm47_and_deepseek32/
false
false
https://preview.redd.it/…b3cb6fdf095ae035
45
null
MiniMax 2.5 full precision FP8 running LOCALLY on vLLM x 8x Pro 6000
36
MiniMax 2.5 full-precision FP8 running LOCALLY on vLLM x 8x Pro 6000. Hosting it is easier than I thought; it just reuses the same script as M2.1. Time to do the vibe coding test! Generation: 70 tokens/sec, and 122 tokens/sec with two connections. Peak memory: 728GB https://preview.redd.it/o66j8wb57ajg1.png?width=1916&format=png&auto=webp&s=ddf90e73b3792510afd31f58604a8ccd0ab18246 https://preview.redd.it/99vp2ub57ajg1.png?width=845&format=png&auto=webp&s=40fe8e0b643735c6fc10b5d6e47bb5fa279b45f2
2026-02-13T16:25:08
https://www.reddit.com/r/LocalLLaMA/comments/1r3toe5/minimax_25_full_precision_fp8_running_locally_on/
cyysky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3toe5
false
null
t3_1r3toe5
/r/LocalLLaMA/comments/1r3toe5/minimax_25_full_precision_fp8_running_locally_on/
false
false
https://preview.redd.it/…da7d09d8607191ca
36
null
Guidance on how to get started with LLM Inference
0
Hi everyone, I'm looking to get started with **LLM inference**, specifically hands-on experiments and learning optimization techniques. Does anyone have recommendations on where to begin, or good resources/platforms to use? I currently have an M3 MacBook Air, but I'm open to using free GPU-based instances online if available. Thanks.
2026-02-13T16:12:11
https://www.reddit.com/r/LocalLLaMA/comments/1r3tbfy/guidance_on_how_to_get_started_with_llm_inference/
No-Research-6109
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3tbfy
false
null
t3_1r3tbfy
/r/LocalLLaMA/comments/1r3tbfy/guidance_on_how_to_get_started_with_llm_inference/
false
false
self
0
null
Nvidia’s new technique cuts LLM reasoning costs by 8x without losing accuracy
212
Nvidia developed a new technique called Dynamic Memory Sparsification (DMS) that vastly improves how LLMs manage their KV cache during inference. It accomplishes this by retrofitting existing models so that the attention layers output a **learned keep or evict** signal for each token in the KV cache. In addition, they've added a "delayed eviction" that marks a token as low-importance, but doesn't delete it immediately. Instead, it remains accessible for a short time and allows the model to extract any useful information into newer tokens before it's discarded. These advancements reduce KV memory usage by up to **8x**, allowing the model to think longer, run faster and handle more concurrent requests. Definitely recommend reading the full article. Looking forward to seeing this on self hosted hardware. [VentureBeat Article](https://venturebeat.com/orchestration/nvidias-new-technique-cuts-llm-reasoning-costs-by-8x-without-losing-accuracy)
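To make the "delayed eviction" idea concrete, here is a toy sketch. This is not Nvidia's implementation: the per-token keep scores below are stand-ins for the learned keep/evict signal the retrofitted attention layers would emit.

```python
# Toy sketch of "delayed eviction" for a KV cache, loosely modeled on the DMS
# idea described above. Low-importance tokens are marked rather than deleted,
# staying readable for a grace period before they are actually dropped.
from collections import deque

class DelayedEvictionCache:
    def __init__(self, delay: int, threshold: float):
        self.delay = delay          # steps a low-score token stays readable
        self.threshold = threshold  # keep-score below this marks for eviction
        self.live = {}              # token_id -> kv payload
        self.pending = deque()      # (evict_at_step, token_id), in order
        self.step = 0

    def add(self, token_id, kv, keep_score: float):
        self.step += 1
        self.live[token_id] = kv
        if keep_score < self.threshold:
            # Marked low-importance, but kept accessible for `delay` steps so
            # newer tokens can still extract information from it.
            self.pending.append((self.step + self.delay, token_id))
        # Drop anything whose grace period has expired.
        while self.pending and self.pending[0][0] <= self.step:
            _, tid = self.pending.popleft()
            self.live.pop(tid, None)

cache = DelayedEvictionCache(delay=2, threshold=0.5)
cache.add("t1", kv="kv1", keep_score=0.1)  # marked; evicted once step 3 arrives
cache.add("t2", kv="kv2", keep_score=0.9)  # kept
cache.add("t3", kv="kv3", keep_score=0.9)  # step 3: t1's grace period expires
print(sorted(cache.live))                  # ['t2', 't3']
```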
2026-02-13T16:09:31
https://www.reddit.com/r/LocalLLaMA/comments/1r3t8ro/nvidias_new_technique_cuts_llm_reasoning_costs_by/
Mission-Street4214
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3t8ro
false
null
t3_1r3t8ro
/r/LocalLLaMA/comments/1r3t8ro/nvidias_new_technique_cuts_llm_reasoning_costs_by/
false
false
self
212
null
AMA with MiniMax — Ask Us Anything!
237
Hi [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/)! We’re really excited to be here, thanks for having us. We're MiniMax, the lab behind: * [MiniMax-M2.5](https://x.com/MiniMax__AI/status/1982674798649160175?s=20) * [Hailuo](https://x.com/Hailuo_AI/status/1983382728343994414) * [MiniMax Speech](https://x.com/Hailuo_AI/status/1983661667872600296) * [MiniMax Music](https://x.com/Hailuo_AI/status/1983964920493568296) Joining the channel today are: * u/Wise_Evidence9973 — Head of LLM Research * u/ryan85127704 — Head of Engineering * u/HardToVary — LLM Researcher https://preview.redd.it/5z2li1ntcajg1.jpg?width=3525&format=pjpg&auto=webp&s=e6760feae05c7cfcaea6d95dfcd6e15990ec7f5c
2026-02-13T16:07:54
https://www.reddit.com/r/LocalLLaMA/comments/1r3t775/ama_with_minimax_ask_us_anything/
HardToVary
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3t775
false
null
t3_1r3t775
/r/LocalLLaMA/comments/1r3t775/ama_with_minimax_ask_us_anything/
false
true
https://preview.redd.it/…e00dd7b871efebde
237
null
MiniMax M2.5 is coming! AMA !!!
1
[removed]
2026-02-13T15:56:43
https://www.reddit.com/r/LocalLLaMA/comments/1r3svw4/minimax_m25_is_coming_ama/
ryan85127704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3svw4
false
null
t3_1r3svw4
/r/LocalLLaMA/comments/1r3svw4/minimax_m25_is_coming_ama/
false
false
https://preview.redd.it/…e9d400d1104dfab7
1
null
Koboldcpp - What is this weirdness I am experiencing. Double contexting
0
So I'm trying to use Gemma 3 27b to parse a 300 page manual. When I first loaded it up and parsed it, I had accidentally set the context size to 64k. It takes about 10m to get my first response from the model, and that first response eats up about 50k context. That's fine, so I relaunch kcpp with the full 128k context that Gemma 3 27b is rated for, and the same process takes double the time and eats up 100k context. What am I missing or not understanding? I am expecting it to take the same time for the first response and use the same 50k. Thoughts?
2026-02-13T15:56:32
https://www.reddit.com/r/LocalLLaMA/comments/1r3svpm/koboldcpp_what_is_this_weirdness_i_am/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3svpm
false
null
t3_1r3svpm
/r/LocalLLaMA/comments/1r3svpm/koboldcpp_what_is_this_weirdness_i_am/
false
false
self
0
null
Training a local model on Cartoon like functional emotive states, for TUIs and GUIs...
5
We're building a TUI (terminal UI) that uses a cartoon animal mascot with different expressions that change based on what the app is doing. Think Duolingo owl vibes but for a CLI tool. The character needs to be consistent across 26 different expression states - not just basic emotions but functional states like "investigating", "confident", "alarmed", "making a decision", etc. Flat 2D sticker style, close-up face portraits. What we've tried so far: * SDXL base + ai-toolkit for LoRA training (rank 16, 3000 steps, adamw8bit) * Generated training data with ComfyUI using geometry-first prompting (describing eye openness, brow angle, mouth shape rather than just saying "happy" or "angry") * 35 curated images with paired captions using a trigger word * Got decent results for 4-5 core expressions but the subtler ones still look too similar What we're struggling with: * Character consistency across very different expressions (the character looks different when angry vs when happy) * Getting 26 distinct expressions that are actually distinguishable at small sizes * The model learns the character OR the expressions well, not both at the same time Has anyone had success training a single-character multi-expression LoRA? Curious about: * Dataset size and composition that worked for you * Whether you used real drawings vs AI-generated training data * Any tricks for getting expression variety without losing character identity * LoRA rank and training params that helped Running on AMD ROCm (gfx1151) with 103GB VRAM so compute isn't the bottleneck, it's the approach. Happy to share what we've learned about geometry-first prompting if anyone's interested.
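As an aside, the "geometry-first" caption construction mentioned above can be sketched like this. The trigger word and descriptor values are made-up illustrations, not the poster's actual dataset:

```python
# Geometry-first captioning: describe concrete facial geometry (eye openness,
# brow angle, mouth shape) instead of emotion words like "happy" or "angry".
# TRIGGER and the descriptor values are hypothetical examples.
TRIGGER = "sks_mascot"  # hypothetical LoRA trigger word

EXPRESSIONS = {
    "confident":     {"eyes": "half-open", "brows": "level",       "mouth": "small closed smile"},
    "investigating": {"eyes": "narrowed",  "brows": "one raised",  "mouth": "flat, slightly pursed"},
    "alarmed":       {"eyes": "wide open", "brows": "raised high", "mouth": "small open oval"},
}

def caption(state: str) -> str:
    g = EXPRESSIONS[state]
    return (f"{TRIGGER}, close-up face portrait, flat 2d sticker style, "
            f"{g['eyes']} eyes, {g['brows']} brows, {g['mouth']}")

print(caption("alarmed"))
```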
2026-02-13T15:43:57
https://www.reddit.com/gallery/1r3sjm8
entheosoul
reddit.com
1970-01-01T00:00:00
0
{}
1r3sjm8
false
null
t3_1r3sjm8
/r/LocalLLaMA/comments/1r3sjm8/training_a_local_model_on_cartoon_like_functional/
false
false
https://preview.redd.it/…c8d570ed52a46522
5
null
Did GLM-5 just admit it's a Gemini 1.5 Pro wrapper? 💀 Strange interaction...
0
https://preview.redd.it/…ed at Google.
2026-02-13T15:34:55
https://www.reddit.com/r/LocalLLaMA/comments/1r3sb6n/did_glm5_just_admit_its_a_gemini_15_pro_wrapper/
xthestickx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3sb6n
false
null
t3_1r3sb6n
/r/LocalLLaMA/comments/1r3sb6n/did_glm5_just_admit_its_a_gemini_15_pro_wrapper/
false
false
https://preview.redd.it/…bb9aa2181e15c332
0
null
Is MiniMax M2.5 the best coding model in the world?
2
I think there's a good argument that MiniMax M2.5 is the best overall model for coding available anywhere. Depending on workload, code generation capability is somewhere between recent Sonnet and Opus. It's also fast, small, and now open weight. It's also very cheap - basically free if you self-host, or almost free with the $10/m coding plan. I think all of these qualities taken together make something really great.
2026-02-13T15:32:04
https://www.reddit.com/r/LocalLLaMA/comments/1r3s8mq/is_minimax_m25_the_best_coding_model_in_the_world/
TrajansRow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3s8mq
false
null
t3_1r3s8mq
/r/LocalLLaMA/comments/1r3s8mq/is_minimax_m25_the_best_coding_model_in_the_world/
false
false
self
2
null
Looking for BYOM-focused beta users to stress-test my RAG system (free Pro for 1 month)
0
I’m building a RAG system that is designed around BYOM first. You can plug in your own self-hosted models instead of being locked into platform defaults. This is not a ChatGPT wrapper and not aimed at prompt-only workflows. I’m looking for people who already run local or self-hosted models and want a cleaner way to test RAG behavior on real data. What I’m testing: \- Retrieval quality with local/self-hosted models \- Grounding and hallucination behavior when models are weaker or quantized \- Chunking, metadata, and citation accuracy \- Performance as data and users scale What you can use: \- Bring your own self-hosted models (OpenAI-compatible APIs supported) \- Platform-hosted models if you want to compare \- Full admin portal access \- Add and manage up to 10 users \- No payment details required. I’ll upgrade accounts manually. Who this is for: \- People running llama, mistral, mixtral, qwen, etc \- Anyone experimenting with local inference and RAG \- Folks who actually want to break retrieval, not just chat What I ask in return: \- Use it with your real data \- One short feedback form after a week If this sounds useful, comment or DM with: \- What model(s) you plan to use \- What usually breaks for you in local-model RAG setups I’ll share access with a small number of people rather than opening it up widely.
2026-02-13T15:23:36
https://www.reddit.com/r/LocalLLaMA/comments/1r3s0ed/looking_for_byomfocused_beta_users_to_stresstest/
proxima_centauri05
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3s0ed
false
null
t3_1r3s0ed
/r/LocalLLaMA/comments/1r3s0ed/looking_for_byomfocused_beta_users_to_stresstest/
false
false
self
0
null
DeepSeek Coder V2 or Qwen 3.0 Coder 30B
0
Which of these is better? I want it to code software with Pronta, i.e., vibecoding, and then run the necessary tests after it finishes. Which is best for that?
2026-02-13T15:19:18
https://www.reddit.com/r/LocalLLaMA/comments/1r3rwda/deepseek_coder_v2_o_qwen_30_coder_30b/
Narrow-Life3603
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3rwda
false
null
t3_1r3rwda
/r/LocalLLaMA/comments/1r3rwda/deepseek_coder_v2_o_qwen_30_coder_30b/
false
false
self
0
null
New attention window, retro fit old models 8x better.
0
Read this this morning. Sounds cool - I'm not an expert, but it sounds like we wouldn't have huge contexts in GB to process, speeding up the model? And it works on any model, so we don't have to wait for a model trained with it. Please let me know how wrong I am in the comments lol. https://venturebeat.com/orchestration/nvidias-new-technique-cuts-llm-reasoning-costs-by-8x-without-losing-accuracy
2026-02-13T15:14:57
https://www.reddit.com/r/LocalLLaMA/comments/1r3rsea/new_attention_window_retro_fit_old_models_8x/
mindwip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3rsea
false
null
t3_1r3rsea
/r/LocalLLaMA/comments/1r3rsea/new_attention_window_retro_fit_old_models_8x/
false
false
self
0
null
3 weeks with OpenClaw as daily driver - what worked, what broke
0
Been running OpenClaw for about 3 weeks now as my daily driver. Figured I'd share what actually works, what broke, and the patterns that emerged.

**Setup:**

- AWS VPS (t3.medium, 40GB disk)
- Claude via Anthropic API (mix of Haiku for cheap tasks, Sonnet and Opus for complex stuff)
- WhatsApp + Telegram as interfaces
- ~20 cron jobs running various automations

**What's working well:**

**1. Multi-agent architecture**

I run 5 specialized sub-agents instead of one generalist:

- Writer (content generation)
- Editor (quality control with scoring rubric)
- Researcher (market analysis, web searches)
- Coder (implementation tasks)
- Pipeline manager (monitors the content queue)

The key insight: spawning isolated sessions for complex tasks works way better than cramming everything into one context. The researcher can spend 10 minutes digging without polluting my main conversation.

**2. Memory patterns**

Settled on a two-tier approach:

- Daily logs (memory/YYYY-MM-DD.md) - raw notes, decisions, what happened
- Curated long-term memory (MEMORY.md) - distilled lessons, key context

A nightly cron job reviews daily logs and extracts anything worth keeping. This prevents MEMORY.md from becoming a junk drawer.

**3. Content pipeline with quality gates**

Writer generates drafts → Editor scores them (100-point rubric) → anything below 70 gets rejected or sent back for revision. Current rejection rate: ~40%. That sounds bad but it's actually healthy - means the quality bar is real.

**What broke (and fixes):**

**1. The "Dory Problem"**

Cron jobs have no memory of what happened in the main session. I'd tell the agent "cancel that follow-up, it's handled" and then 2 days later a scheduled cron would fire anyway because it was set before the cancellation.

Fix: Any cron that takes external action must check a DECISIONS.md file first. If there's a recent override, abort.

**2. Cron timestamp calculation**

Early on I let the agent calculate Unix timestamps mentally. It wrote "2025" instead of "2026" twice, creating past timestamps that fired immediately. Sent messages to people at wrong times.

Fix: Never calculate timestamps manually. Always run `date -d "YYYY-MM-DD HH:MM:SS UTC" +%s%3N` first.

**3. The deliver:true leak**

OpenClaw cron jobs have a deliver option. If true, it sends the full agent output to your chat. Problem: that includes internal reasoning like "Now I'll check the calendar and then send a message..." Had several embarrassing moments where my WhatsApp received the agent's internal monologue instead of a clean summary.

Fix: Always use deliver:false and have the agent explicitly use the message tool for output.

**4. Sub-agent context contamination**

Sub-agents can load cached context files. Had a situation where a sub-agent flagged "credentials expired!" based on stale MEMORY.md content, even though I'd already fixed it in the main session.

Fix: Sub-agents for verification tasks should re-check live state, not trust cached memory.

**Numbers after 3 weeks:**

- ~20 active cron jobs
- ~15 daily logs
- 5 specialized sub-agents
- Haiku handles ~80% of automated tasks (10-20x cheaper)
- Sonnet for anything requiring judgment or complex reasoning, Opus only for critical tasks
- Content pipeline produced ~30 drafts, ~18 approved

**Biggest lesson:**

Automation without guardrails creates more problems than it solves. Every cron job that messages someone or takes external action needs:

1. A pre-flight check (is this still relevant?)
2. Output sanitization (no internal reasoning leaking)
3. A way to abort if context changed

The goal is reliable automation, not maximum automation.

---

Happy to answer questions about specific setups.
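The timestamp fix above can also be sketched in Python. This is a hedged example - the function names are mine, not OpenClaw's - that parses a UTC wall-clock string the same way the `date` one-liner does, and refuses anything already in the past:

```python
from datetime import datetime, timezone

def cron_timestamp_ms(wall_clock_utc: str) -> int:
    """Parse 'YYYY-MM-DD HH:MM:SS' as UTC and return epoch milliseconds,
    matching what `date -d "... UTC" +%s%3N` prints for a whole second."""
    dt = datetime.strptime(wall_clock_utc, "%Y-%m-%d %H:%M:%S")
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)

def assert_future(ts_ms: int, now_ms: int) -> int:
    """Guard against the wrong-year bug: refuse any timestamp in the past."""
    if ts_ms <= now_ms:
        raise ValueError("cron timestamp %d is not in the future" % ts_ms)
    return ts_ms
```

The guard is the important part: a mentally miscalculated year produces a past timestamp, and this check turns a silent immediate-fire into a loud error.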
2026-02-13T15:10:31
https://www.reddit.com/r/LocalLLaMA/comments/1r3ro5h/3_weeks_with_openclaw_as_daily_driver_what_worked/
justserg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3ro5h
false
null
t3_1r3ro5h
/r/LocalLLaMA/comments/1r3ro5h/3_weeks_with_openclaw_as_daily_driver_what_worked/
false
false
self
0
null
I built an open-source memory layer for local AI agents — SQLite-based, zero config, privacy-first
0
I've been running local models with Claude Code and kept hitting the same problem: every session starts from scratch. The agent forgets everything. Context windows compress, conversations end, knowledge is gone.

So I built Engram — a persistent memory layer for AI agents.

What it does:

- Store, search, and recall memories across sessions
- SQLite + FTS5 under the hood — no Postgres, no Redis, no Docker
- 5-line Python API: store(), search(), recall()
- MCP server included (works with Claude Code, Cursor, etc.)
- Multi-agent namespaces with optional shared memory pools
- Memory decay (automatic forgetting curve for stale data)
- Optional semantic search via sentence-transformers

Runs 100% locally. No telemetry, no cloud, no tracking.

Quickstart:

    pip install engram-core

    from engram import Memory

    mem = Memory()
    mem.store("User prefers dark mode", type="preference", importance=8)
    results = mem.search("dark mode")

That's it. Data lives in a SQLite file on your machine.

I built this because I needed it for my own agent setup — I run multiple specialized agents (trading, coding, home automation) and they all need to remember things between sessions. The MCP integration means Claude Code picks it up automatically.

MIT licensed, ~2k lines of actual code, 90 tests passing.

GitHub: [https://github.com/engram-memory/engram](https://github.com/engram-memory/engram)
PyPI: [https://pypi.org/project/engram-core/](https://pypi.org/project/engram-core/)
Website: [https://engram-ai.dev](https://engram-ai.dev)

Happy to answer questions or take feedback. Still early — just hit v0.3.0.
2026-02-13T15:06:10
https://www.reddit.com/r/LocalLLaMA/comments/1r3rk0e/i_built_an_opensource_memory_layer_for_local_ai/
WayTop7407
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3rk0e
false
null
t3_1r3rk0e
/r/LocalLLaMA/comments/1r3rk0e/i_built_an_opensource_memory_layer_for_local_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': 'zKmQdumPDmmjFM3ad3aWQn19p4_jRPvry5CDa4cpd-M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zKmQdumPDmmjFM3ad3aWQn19p4_jRPvry5CDa4cpd-M.png?width=108&crop=smart&auto=webp&s=763bb6ef5f68284feeb4648ce17bc96274efff4a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zKmQdumPDmmjFM3ad3aWQn19p4_jRPvry5CDa4cpd-M.png?width=216&crop=smart&auto=webp&s=af6202c0edd6e513f9ca78ee4c58ee4604a190fc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zKmQdumPDmmjFM3ad3aWQn19p4_jRPvry5CDa4cpd-M.png?width=320&crop=smart&auto=webp&s=4364a7f90ad41bb6505c58069a3ed9bbb8fc0878', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zKmQdumPDmmjFM3ad3aWQn19p4_jRPvry5CDa4cpd-M.png?width=640&crop=smart&auto=webp&s=dcac478e9ffb29e243011588b197bb3d31021a71', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zKmQdumPDmmjFM3ad3aWQn19p4_jRPvry5CDa4cpd-M.png?width=960&crop=smart&auto=webp&s=129fd5c787e30ff0f7f35aab7a3e32ec29b56cf1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zKmQdumPDmmjFM3ad3aWQn19p4_jRPvry5CDa4cpd-M.png?width=1080&crop=smart&auto=webp&s=382a205bd05e4eb49052c8888f66585661e2c825', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zKmQdumPDmmjFM3ad3aWQn19p4_jRPvry5CDa4cpd-M.png?auto=webp&s=ce37480e12a90f37b52d9a8aa374c7080477007a', 'width': 1200}, 'variants': {}}]}
we integrated, benchmarked, and optimized agent skills in the hugging face kernels library
2
hey, I'm ben from HF. tl;dr: We built an agent skill that teaches coding agents how to write production CUDA kernels. Then we pointed Claude and Codex at two real targets: a **diffusers** pipeline and a **transformers** model. The agents produced working kernels for both, with correct PyTorch bindings and benchmarks, end to end. Writing CUDA kernels is hard. Writing CUDA kernels that correctly integrate with `transformers` and `diffusers` is harder. There are architecture-specific memory access patterns, vectorization strategies, warp shuffle reductions, and a dozen integration pitfalls that trip up even experienced developers. It is exactly the kind of specialized, high-stakes problem where agent skills shine. We gave coding agents the domain knowledge they need, like which GPU architecture to target, how to structure a kernel-builder project, when to use shared memory versus registers, and how to write PyTorch bindings. The agents did the rest. If you have used the LLM training skill or read We Got Claude to Teach Open Models, the pattern will feel familiar: package domain expertise into a skill, point the agent at a problem, and let it work.
2026-02-13T15:05:11
https://www.reddit.com/r/LocalLLaMA/comments/1r3rj3t/we_integrated_benchmarked_and_optimized_agents/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3rj3t
false
null
t3_1r3rj3t
/r/LocalLLaMA/comments/1r3rj3t/we_integrated_benchmarked_and_optimized_agents/
false
false
self
2
null
Nvidia Tesla P40 + RTX 5060 TI 16GB (FRANKENSTEIN IS ALIVE!)
6
[nvidia-smi](https://preview.redd.it/h6jt10wpy9jg1.png?width=845&format=png&auto=webp&s=3e14c19da7d95637c18e559583d1b55573ae960a) Wanted to share how I finally made this work (no testing performed so far, but the driver installed and nvidia-smi finally recognizes both cards together).

I am running this setup on a Lenovo P920, dual Xeon 8168, 256GB RAM. I believe I could still squeeze another 5060 16GB in there.

I struggled to find a driver that would work with both the 5060 and the P40 and gave up after trying many possible scenarios - until yesterday, when I saw a post where the user said he tricked the driver into believing it was a Quadro, and I began digging again. After all, the Quadros of this generation are still supported by Nvidia.

So, it works! The driver I used was 581.57-desktop-win10-win11-64bit-international-nsd-dch-whql.exe, and I basically just followed the tutorial here: [Can I enable WDDM on a tesla P40? - Graphics Cards - Linus Tech Tips](https://linustechtips.com/topic/1496913-can-i-enable-wddm-on-a-tesla-p40/)

Yes, it is on Windows. I will try a similar approach on Linux later on, but for now, I am happy it worked. Happy Friday the 13th!
2026-02-13T15:03:50
https://www.reddit.com/r/LocalLLaMA/comments/1r3rhru/nvidia_tesla_p40_rtx_5060_ti_16gb_frankenstein_is/
iampoorandsad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3rhru
false
null
t3_1r3rhru
/r/LocalLLaMA/comments/1r3rhru/nvidia_tesla_p40_rtx_5060_ti_16gb_frankenstein_is/
false
false
https://preview.redd.it/…a405b6c04ff2fb6c
6
null
whisper.cpp + llama.cpp in a desktop app — local voice-to-text with LLM text cleanup
5
Found a cool use case for local inference that isn't chatbots: voice-to-text with LLM-powered text formatting. MumbleFlow uses whisper.cpp for speech recognition and llama.cpp for text cleanup (punctuation, grammar, filler word removal). Everything runs locally — Tauri 2.0 + Rust shell, ~50MB RAM.

What's interesting is the llama.cpp integration for post-processing. Raw whisper output is messy — no punctuation, filler words everywhere. Running it through a small local LLM cleans it up without sending anything to the cloud. Sub-second latency on M1.

$5 one-time. macOS/Windows/Linux. [mumble.helix-co.com](http://mumble.helix-co.com)

Anyone else using llama.cpp for non-chat use cases like this? Curious what other creative local inference pipelines people have built.
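The cleanup stage described above can be sketched roughly like this. This is a hypothetical illustration, not MumbleFlow's actual code: a cheap regex pre-filter for filler words, plus an OpenAI-style chat payload you could send to any local model for punctuation and grammar repair (`prefilter`, `cleanup_messages`, and the filler pattern are all my own names):

```python
import re

# Cheap regex pass for the most common fillers; a real pipeline would
# tune this list per speaker and language.
FILLERS = re.compile(r"\b(?:um+|uh+)\b,?\s*", re.IGNORECASE)

def prefilter(raw: str) -> str:
    """Strip obvious filler words before the LLM sees the transcript."""
    return FILLERS.sub("", raw).strip()

def cleanup_messages(transcript: str) -> list:
    """Build an OpenAI-style chat payload asking a small local model to
    add punctuation and fix grammar without changing the meaning."""
    return [
        {"role": "system",
         "content": "Add punctuation and fix grammar. Do not change the meaning."},
        {"role": "user", "content": transcript},
    ]
```

The regex pass keeps tokens away from the LLM entirely, which matters for latency; the prompt then handles everything the regex can't.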
2026-02-13T15:01:42
https://www.reddit.com/r/LocalLLaMA/comments/1r3rfoh/whispercpp_llamacpp_in_a_desktop_app_local/
MedicineTop5805
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3rfoh
false
null
t3_1r3rfoh
/r/LocalLLaMA/comments/1r3rfoh/whispercpp_llamacpp_in_a_desktop_app_local/
false
false
self
5
null
I created OpenFlow - A Linux-native dictation app that actually works on Wayland
3
I spent quite a lot of time trying to find a dictation app for Linux that met the following criteria:

* Local ASR (no cloud)
* Free and open source
* Easy to install
* Automatic paste injection and clipboard preservation on Wayland compositors

I tried a couple different projects that looked promising, but found that the backend models being used were too slow for my workflow. The biggest issue I found was that none of the projects I tried supported automatic paste injection on Wayland compositors, and instead made you manually paste the text after processing (annoying).

OpenFlow solves this by creating a virtual keyboard via /dev/uinput. It snapshots your clipboard, puts the transcript on it, injects Ctrl+V (or Ctrl+Shift+V), waits for the app to read it, then restores your original clipboard contents. Your existing clipboard data is never lost. This works on any Wayland compositor (GNOME, KDE, Sway, etc.) and X11.

I included a wide range of supported local models so that you can customize the experience to your liking. This includes a default Parakeet model and all Whisper model variants running on either CTranslate2 or ONNX, which lets you configure the app for speed/accuracy trade-offs. Personally, I have found that the default Parakeet setup, running on my laptop with a mid-grade NVIDIA GPU, is the perfect balance for what I need.

I've found that this app has significantly increased my productivity when vibe coding multiple projects simultaneously. Give it a try and let me know what you think of it.

[https://github.com/logabell/OpenFlow](https://github.com/logabell/OpenFlow)
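The snapshot → inject → restore sequence described above can be sketched as a few lines of Python. The dict-based clipboard and the `inject_paste` callback are stand-ins of my own for the real Wayland/uinput plumbing, not OpenFlow's actual API:

```python
import time

def paste_with_restore(clipboard, inject_paste, text, settle=0.0):
    """Sketch of the snapshot -> inject -> restore sequence: the user's
    clipboard contents survive the paste."""
    snapshot = clipboard.get("data")   # 1. save whatever the user had copied
    clipboard["data"] = text           # 2. put the transcript on the clipboard
    inject_paste()                     # 3. virtual keyboard fires Ctrl+V
    time.sleep(settle)                 # 4. give the target app time to read it
    clipboard["data"] = snapshot       # 5. restore the original contents
```

The key invariant is step 5: whatever the user had on the clipboard before dictation is back by the time the function returns.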
2026-02-13T14:55:27
https://www.reddit.com/r/LocalLLaMA/comments/1r3r9mx/i_created_openflow_a_linuxnative_dictation_app/
logabell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3r9mx
false
null
t3_1r3r9mx
/r/LocalLLaMA/comments/1r3r9mx/i_created_openflow_a_linuxnative_dictation_app/
false
false
self
3
{'enabled': False, 'images': [{'id': 'PwE0vyJ20MBFMb9PZmH7spccfpcfkGKVH48h0Kf82kE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PwE0vyJ20MBFMb9PZmH7spccfpcfkGKVH48h0Kf82kE.png?width=108&crop=smart&auto=webp&s=45ca260d3b0f5c8602a8bdf60f3daed778ded08b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PwE0vyJ20MBFMb9PZmH7spccfpcfkGKVH48h0Kf82kE.png?width=216&crop=smart&auto=webp&s=1f861f858c8fc7bcbaff398b45b21060aa8b43cc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PwE0vyJ20MBFMb9PZmH7spccfpcfkGKVH48h0Kf82kE.png?width=320&crop=smart&auto=webp&s=78bcc29e27545724dc6bc2876c505e8a78a84725', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PwE0vyJ20MBFMb9PZmH7spccfpcfkGKVH48h0Kf82kE.png?width=640&crop=smart&auto=webp&s=2ed6648a0df5cf758acc2d2353b30a56944b9189', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PwE0vyJ20MBFMb9PZmH7spccfpcfkGKVH48h0Kf82kE.png?width=960&crop=smart&auto=webp&s=68bd40d2693d75c20027300645f6ace625445ce4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PwE0vyJ20MBFMb9PZmH7spccfpcfkGKVH48h0Kf82kE.png?width=1080&crop=smart&auto=webp&s=bcaa60f4e98bb8492a5783d650219964b3128c1f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PwE0vyJ20MBFMb9PZmH7spccfpcfkGKVH48h0Kf82kE.png?auto=webp&s=fd827e4e652ce77c64df15f0ab13109727254d6b', 'width': 1200}, 'variants': {}}]}
Update: StepFun Team AMA postponed
1
[removed]
2026-02-13T14:50:57
https://www.reddit.com/r/LocalLLaMA/comments/1r3r5qz/update_stepfun_team_ama_postponed/
Abject-Ranger4363
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3r5qz
false
null
t3_1r3r5qz
/r/LocalLLaMA/comments/1r3r5qz/update_stepfun_team_ama_postponed/
false
false
self
1
null
Z-ai Coding Plan Usage Break-Down (for those who are curious)
7
Z-ai isn't exactly the best at describing what their usage limits look like, so I thought I'd share this. (In case any of you were curious) https://preview.redd.it/4l4tuknjy9jg1.png?width=3532&format=png&auto=webp&s=a52c0c67a5cbc0aeeaafd7036f131c116b546e43
2026-02-13T14:48:34
https://www.reddit.com/r/LocalLLaMA/comments/1r3r3kh/zai_coding_plan_usage_breakdown_for_those_who_are/
akumaburn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3r3kh
false
null
t3_1r3r3kh
/r/LocalLLaMA/comments/1r3r3kh/zai_coding_plan_usage_breakdown_for_those_who_are/
false
false
https://preview.redd.it/…5bf30397b80c128a
7
null
Make an SVG of a Pelican riding a bicycle - Small MoE edition.
43
2026-02-13T14:48:21
https://www.reddit.com/gallery/1r3r3bt
JLeonsarmiento
reddit.com
1970-01-01T00:00:00
0
{}
1r3r3bt
false
null
t3_1r3r3bt
/r/LocalLLaMA/comments/1r3r3bt/make_a_svg_of_a_pelican_riding_a_bicycle_small/
false
false
https://preview.redd.it/…60371b3afb3f0f8d
43
null
[Update] StepFun Team AMA Postponed
1
[removed]
2026-02-13T14:47:44
https://www.reddit.com/r/LocalLLaMA/comments/1r3r2rs/update_stepfun_team_ama_postponed/
StepFun_ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3r2rs
false
null
t3_1r3r2rs
/r/LocalLLaMA/comments/1r3r2rs/update_stepfun_team_ama_postponed/
false
false
self
1
null
It looks like DeepSeek V4 full won't come out today, but hopefully soon
2
I hope they release the weights of V4 full and lite soon. V4/3.3 lite is already on the web and will probably come out before Tuesday, but V4 full might come out later, or at the same time as lite.
2026-02-13T14:42:50
https://www.reddit.com/r/LocalLLaMA/comments/1r3qya2/it_looks_like_deep_seek_v4_full_wont_come_out/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3qya2
false
null
t3_1r3qya2
/r/LocalLLaMA/comments/1r3qya2/it_looks_like_deep_seek_v4_full_wont_come_out/
false
false
self
2
null
oMLX - open-source MLX inference server with paged SSD caching for Apple Silicon
51
Hey everyone,

I know things are buzzing with the MiniMax and GLM releases right now, so I'm not sure if today is the best day to post this - but I wanted to share something I've been working on and I'm genuinely proud of.

Whether you love or hate Ollama, we all know what it is. Setting aside the technical debates, I think Ollama absolutely nailed the concept of making LLMs accessible to everyday users. But it always bugged me that there wasn't a really easy-to-use, open-source app built on MLX. So I built one.

## What is oMLX?

An LLM inference server for Apple Silicon with a native macOS menubar app. Download the DMG, drag to Applications, done. No terminal, no config files to start.

## Why I built this

I don't need VLM or TTS/STT right now. What I needed was:

- A server I can easily spin up on my Mac
- An LLM backend for my Obsidian Copilot
- Embedding + Reranking for my notes

All in one app, not three separate tools.

And then there's the coding agent use case. I personally wanted to use local LLMs with Claude Code. But here's the thing we all know - prefix caching in existing apps is... rough. Coding agents send requests where the prefix keeps shifting, invalidating the cache. A few turns later, the agent circles back to a previous prefix, and now your Mac has to re-prefill that entire context from scratch. Painfully slow.

**oMLX solves this with paged SSD caching.** Every KV cache block gets persisted to SSD. When a previous prefix comes back, it's restored instantly instead of being recomputed. This is a game-changer for long coding sessions.

OK, enough rambling. Here's what's under the hood:

---

## Features

**Inference**

- Continuous batching via mlx-lm - handles multiple concurrent requests
- Multi-model serving - load LLM + Embedding + Reranker simultaneously, with LRU eviction
- Reasoning model support - automatic `<think>` tag handling (DeepSeek, MiniMax, etc.)
- Harmony protocol - native support for gpt-oss models

**Caching**

- Paged KV Cache - block-based with prefix sharing and copy-on-write (vLLM-inspired)
- SSD tiered caching - automatic GPU-to-SSD offloading for virtually unlimited context caching
- Hybrid cache - mixed KVCache + RotatingKVCache for complex architectures (Gemma3, etc.)
- Persistent cache - KV cache blocks survive server restarts

**API**

- OpenAI compatible - `/v1/chat/completions`, `/v1/completions`, `/v1/models`, `/v1/embeddings`
- Anthropic compatible - `/v1/messages`
- Tool calling - JSON, Qwen, Gemma, MiniMax, GLM formats + MCP
- Structured output - JSON mode and JSON Schema

**macOS App**

- Native menubar app (PyObjC, not Electron)
- Admin dashboard with built-in chat and real-time monitoring
- HuggingFace model downloader built into the dashboard
- Signed & notarized DMG

---

**Requirements:** Apple Silicon (M1+), macOS 14.0+

**GitHub:** [github.com/jundot/omlx](https://github.com/jundot/omlx)

I'm genuinely happy with what I've built. If you have similar needs, I hope oMLX makes your workflow better too. It's 100% open source - if my hobby project can help someone out there, that's even better.
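For anyone curious how block-based prefix caching with an SSD tier works in principle, here is a minimal sketch using vLLM-style chained block hashes. All names, the tiny block size, and the dict-based tiers are illustrative, not oMLX's internals:

```python
import hashlib

BLOCK = 4  # tokens per KV block; real servers use larger blocks

def block_hashes(tokens):
    """Chain-hash each full block so a hash identifies the block AND its
    entire prefix; partial trailing blocks are not cached."""
    hashes, prev = [], b""
    for i in range(0, len(tokens) - len(tokens) % BLOCK, BLOCK):
        prev = hashlib.sha256(prev + str(tokens[i:i + BLOCK]).encode()).digest()
        hashes.append(prev.hex())
    return hashes

def lookup(hashes, gpu_cache, ssd_cache):
    """Count reusable leading blocks; promote SSD hits back to the GPU tier."""
    reused = 0
    for h in hashes:
        if h in gpu_cache:
            reused += 1
        elif h in ssd_cache:
            gpu_cache[h] = ssd_cache[h]  # restore instead of re-prefilling
            reused += 1
        else:
            break  # first miss ends the shared prefix
    return reused
```

Because each hash covers its whole prefix, an agent that circles back to an earlier conversation state hits the SSD tier and only the genuinely new suffix needs prefilling.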
2026-02-13T14:41:23
https://i.redd.it/gy8epdcex9jg1.png
cryingneko
i.redd.it
1970-01-01T00:00:00
0
{}
1r3qwyi
false
null
t3_1r3qwyi
/r/LocalLLaMA/comments/1r3qwyi/omlx_opensource_mlx_inference_server_with_paged/
false
false
default
51
{'enabled': True, 'images': [{'id': 'gy8epdcex9jg1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/gy8epdcex9jg1.png?width=108&crop=smart&auto=webp&s=4b7d37093c3aebd973279aa17a19a6281e7581db', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/gy8epdcex9jg1.png?width=216&crop=smart&auto=webp&s=038d913baa0cd278fa7927e3c044bb52a21912c7', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/gy8epdcex9jg1.png?width=320&crop=smart&auto=webp&s=4212d0c1de9c0b0bcf5b3ec23477a32cd91ad5fb', 'width': 320}, {'height': 501, 'url': 'https://preview.redd.it/gy8epdcex9jg1.png?width=640&crop=smart&auto=webp&s=63f504c27de7ef48c0affbd687b587bbf921db42', 'width': 640}, {'height': 751, 'url': 'https://preview.redd.it/gy8epdcex9jg1.png?width=960&crop=smart&auto=webp&s=e3bf5d80ad7aeac6771dfa6172313e38790f5761', 'width': 960}, {'height': 845, 'url': 'https://preview.redd.it/gy8epdcex9jg1.png?width=1080&crop=smart&auto=webp&s=19bfa6d8d0a81f38a4c64ec2dc3624043323a18f', 'width': 1080}], 'source': {'height': 2472, 'url': 'https://preview.redd.it/gy8epdcex9jg1.png?auto=webp&s=c5e5a19c2cf73c659d0e6c51175d9d1ac5d4d41a', 'width': 3157}, 'variants': {}}]}
ELO leaderboard for LLMs playing games (non-thinking only): Llama vs Mistral vs Claude vs Gemini [4200+ matches]
0
We built a platform where **humans play strategy games against LLMs**, using an ELO system to rank them (like LMSYS but for gameplay).

**The key rule: non-thinking models only.** All models play in instant response mode - no chain-of-thought, no extended reasoning, no compute-heavy thinking chains. We test raw strategic ability: can the model make good decisions without extra thinking time? This also reflects real-world deployment where you need sub-second decisions (chatbots, trading, robotics) - not 30-second thinking pauses.

**Current overall standings after 4200+ matches** (log-weighted ELO across all games):

- Top: Gemini 3 Flash Preview (1145), Claude Opus 4.5 (1145)
- Open-source: Llama 4 Maverick (1079), GLM 4.7 (1058), Llama 4 Scout (1040), Mistral Large 3 (1022)
- Humans still winning ~83% of games

**Games:** TicTacToe, Connect4, Battleship, Mastermind, WordDuel, Dots and Boxes

**Why it's interesting:** Tests spatial reasoning, strategic planning, and rule adherence with zero-shot responses only. Some models with great benchmark scores struggle with basic game rules when they can't reason their way through it step by step.

**Try it:** playtheai.com (free, no account)

**Question:** Which open-source models should we add? Especially interested in smaller/quantized models that might surprise us in instant-response mode.

Curious what the community thinks about using gameplay as a non-thinking reasoning benchmark?
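For reference, the per-match rating update in a standard Elo system looks like this. The site's log-weighted aggregation across games is their own scheme; this sketch only shows the classic single-match update:

```python
def expected(r_a, r_b):
    """Probability that player A beats player B under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a, k=32.0):
    """score_a: 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    Returns the new (rating_a, rating_b); total rating is conserved."""
    delta = k * (score_a - expected(r_a, r_b))
    return r_a + delta, r_b - delta
```

With K=32, beating an equally rated opponent moves both players 16 points, which is why a few hundred matches per model are enough for the rankings to separate.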
2026-02-13T14:29:27
https://www.reddit.com/r/LocalLLaMA/comments/1r3qm9u/elo_leaderboard_for_llms_playing_games/
stef_1982
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3qm9u
false
null
t3_1r3qm9u
/r/LocalLLaMA/comments/1r3qm9u/elo_leaderboard_for_llms_playing_games/
false
false
self
0
null
MiniMax-M2.5 Comparison with Previous Versions and Competitors
0
MiniMax-M2.5 emerges as a highly competitive and specialized model, particularly strong in agentic and coding applications. It offers exceptional efficiency and performance that rivals leading proprietary models, while aiming to make frontier AI capabilities more accessible and cost-effective. Source: [https://atoms.dev/insights/minimax-m25-unveiling-the-next-generation-of-agentic-ai/e87192023d3b4dd7ad0e2fcb55998eb4](https://atoms.dev/insights/minimax-m25-unveiling-the-next-generation-of-agentic-ai/e87192023d3b4dd7ad0e2fcb55998eb4)
2026-02-13T14:28:41
https://i.redd.it/eos7roq0v9jg1.png
Dilligentslave
i.redd.it
1970-01-01T00:00:00
0
{}
1r3qlm5
false
null
t3_1r3qlm5
/r/LocalLLaMA/comments/1r3qlm5/minimaxm25_comparison_with_previous_versions_and/
false
false
default
0
{'enabled': True, 'images': [{'id': 'eos7roq0v9jg1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/eos7roq0v9jg1.png?width=108&crop=smart&auto=webp&s=32a02f65ebbc6f7ff214bc3b0a7842c00860133c', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/eos7roq0v9jg1.png?width=216&crop=smart&auto=webp&s=e041b26233512f937fd62d2f5268f1daf0a6b6f0', 'width': 216}, {'height': 111, 'url': 'https://preview.redd.it/eos7roq0v9jg1.png?width=320&crop=smart&auto=webp&s=836edae0d6d37abd1fdc48668c452362f57d61bb', 'width': 320}, {'height': 223, 'url': 'https://preview.redd.it/eos7roq0v9jg1.png?width=640&crop=smart&auto=webp&s=2030ca6865798d83137a4550ddca197ff1a28883', 'width': 640}, {'height': 335, 'url': 'https://preview.redd.it/eos7roq0v9jg1.png?width=960&crop=smart&auto=webp&s=5bd5b3476e723630a8f1ec2ca96710e60645fbb0', 'width': 960}, {'height': 377, 'url': 'https://preview.redd.it/eos7roq0v9jg1.png?width=1080&crop=smart&auto=webp&s=809a19f9ca780c249fce6907b0e6fea9eb635123', 'width': 1080}], 'source': {'height': 552, 'url': 'https://preview.redd.it/eos7roq0v9jg1.png?auto=webp&s=fe474986524f4cd3293980c38110ea6acdf8a045', 'width': 1578}, 'variants': {}}]}
Minimax M2.5!
8
[https://huggingface.co/MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5)
2026-02-13T14:27:28
https://www.reddit.com/r/LocalLLaMA/comments/1r3qkl7/minimax_m25/
BroQuant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3qkl7
false
null
t3_1r3qkl7
/r/LocalLLaMA/comments/1r3qkl7/minimax_m25/
false
false
self
8
{'enabled': False, 'images': [{'id': 'U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=108&crop=smart&auto=webp&s=64033d5fe6d12704ac16ef5f9a5d936f02f22066', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=216&crop=smart&auto=webp&s=bf91d74facd76a3501eec409e9e6a4587a859438', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=320&crop=smart&auto=webp&s=cbe37f59e591378ee409f69ee988008f13f83a2a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=640&crop=smart&auto=webp&s=de0bab4be78008336f973196f0ed98e2bbe49764', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=960&crop=smart&auto=webp&s=39827b9f194c5cbf3c87173bab764e81beb3ff1f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=1080&crop=smart&auto=webp&s=53a3b66b37018fbb799f4b445a9aa2860bca532a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?auto=webp&s=4d447ba7f79c894ed41912590bb4a6a79f5bc2d9', 'width': 1200}, 'variants': {}}]}
Help me decide: shall I keep the A770?
1
So I got my hands on a new Intel Arc A770 for cheap and have thrown it into my small homelab server with 48 GB of DRAM and a Ryzen 5950X. I've set up the ipex-llm docker container with ollama and Open WebUI, and I'm achieving:

* mistral-nemo:12b = 35-40 token/s
* qwen3:14b = 25-35 token/s

The performance seems reasonably fast, and enough for two users who are casually chatting on various topics (or need some shell commands fixed). However, getting it to work took me several hours of tinkering (thanks, Gemini), and ipex-llm is already legacy, as Intel moves to OpenVINO. I spent 2 hours trying to deploy an OpenVINO container without luck, and it got me thinking: is all this tinkering to get an Intel card working really worth it? I imagine maintenance won't get easier in the future, as I don't see widespread adoption of OpenVINO in the community.

What do you guys think?
2026-02-13T14:16:10
https://i.redd.it/ivi10izmq9jg1.png
Luxkeiwoker
i.redd.it
1970-01-01T00:00:00
0
{}
1r3qamj
false
null
t3_1r3qamj
/r/LocalLLaMA/comments/1r3qamj/help_me_decide_shall_i_keep_the_a770/
false
false
default
1
{'enabled': True, 'images': [{'id': 'ivi10izmq9jg1', 'resolutions': [{'height': 136, 'url': 'https://preview.redd.it/ivi10izmq9jg1.png?width=108&crop=smart&auto=webp&s=d97bea196689cdefcd0caddc0a301d5a8e855564', 'width': 108}, {'height': 273, 'url': 'https://preview.redd.it/ivi10izmq9jg1.png?width=216&crop=smart&auto=webp&s=d658353ba07187e5d41a7b7ef7c890af2530af39', 'width': 216}, {'height': 404, 'url': 'https://preview.redd.it/ivi10izmq9jg1.png?width=320&crop=smart&auto=webp&s=1e96905fcd4eb1ba6202c505180e1a0470a3dde7', 'width': 320}], 'source': {'height': 715, 'url': 'https://preview.redd.it/ivi10izmq9jg1.png?auto=webp&s=40926da6150ce6b244607364ff71028bf01e9469', 'width': 565}, 'variants': {}}]}
MiniMaxAI/MiniMax-M2.5 · Hugging Face
4
2026-02-13T14:08:05
https://huggingface.co/MiniMaxAI/MiniMax-M2.5
coder543
huggingface.co
1970-01-01T00:00:00
0
{}
1r3q3jz
false
null
t3_1r3q3jz
/r/LocalLLaMA/comments/1r3q3jz/minimaxaiminimaxm25_hugging_face/
false
false
https://external-preview…e4d2a101b49642f4
4
{'enabled': False, 'images': [{'id': 'U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=108&crop=smart&auto=webp&s=64033d5fe6d12704ac16ef5f9a5d936f02f22066', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=216&crop=smart&auto=webp&s=bf91d74facd76a3501eec409e9e6a4587a859438', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=320&crop=smart&auto=webp&s=cbe37f59e591378ee409f69ee988008f13f83a2a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=640&crop=smart&auto=webp&s=de0bab4be78008336f973196f0ed98e2bbe49764', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=960&crop=smart&auto=webp&s=39827b9f194c5cbf3c87173bab764e81beb3ff1f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=1080&crop=smart&auto=webp&s=53a3b66b37018fbb799f4b445a9aa2860bca532a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?auto=webp&s=4d447ba7f79c894ed41912590bb4a6a79f5bc2d9', 'width': 1200}, 'variants': {}}]}
https://huggingface.co/MiniMaxAI/MiniMax-M2.5/tree/main
13
[https://huggingface.co/MiniMaxAI/MiniMax-M2.5/tree/main](https://huggingface.co/MiniMaxAI/MiniMax-M2.5/tree/main) quants are here
2026-02-13T14:05:44
https://www.reddit.com/r/LocalLLaMA/comments/1r3q1h7/httpshuggingfacecominimaxaiminimaxm25treemain/
Remarkable_Jicama775
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3q1h7
false
null
t3_1r3q1h7
/r/LocalLLaMA/comments/1r3q1h7/httpshuggingfacecominimaxaiminimaxm25treemain/
false
false
self
13
{'enabled': False, 'images': [{'id': 'U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=108&crop=smart&auto=webp&s=64033d5fe6d12704ac16ef5f9a5d936f02f22066', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=216&crop=smart&auto=webp&s=bf91d74facd76a3501eec409e9e6a4587a859438', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=320&crop=smart&auto=webp&s=cbe37f59e591378ee409f69ee988008f13f83a2a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=640&crop=smart&auto=webp&s=de0bab4be78008336f973196f0ed98e2bbe49764', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=960&crop=smart&auto=webp&s=39827b9f194c5cbf3c87173bab764e81beb3ff1f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=1080&crop=smart&auto=webp&s=53a3b66b37018fbb799f4b445a9aa2860bca532a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?auto=webp&s=4d447ba7f79c894ed41912590bb4a6a79f5bc2d9', 'width': 1200}, 'variants': {}}]}
llama.cpp llama-server running SSM models VRAM fix merged
37
During my time fixing the Kimi Linear server bug reported by u/Lord_Pazzu, I discovered that llama-server running SSM hybrid models in general uses a KV cache that is a multiple of the number of parallel threads (--parallel). For example, if you run Nemotron 3 Nano at 1M context with --parallel 8, it would use 48GB of VRAM for KV cache instead of 6GB, even though each server instance can only serve 128K context. [https://github.com/ggml-org/llama.cpp/issues/19552](https://github.com/ggml-org/llama.cpp/issues/19552) With this fix, you will only use 6GB, just like the transformer models. That means with 48GB VRAM to spare, you can now serve 8 users simultaneously with 1M context each. Merged PR: [https://github.com/ggml-org/llama.cpp/pull/19559](https://github.com/ggml-org/llama.cpp/pull/19559) This applies to all SSM hybrid models like Qwen3Next, Kimi Linear, Nemotron 3 Nano, etc. So if you are a llama-server user with these new models, this will be great news for you.
2026-02-13T14:04:54
https://www.reddit.com/r/LocalLLaMA/comments/1r3q0qb/llamacpp_llamaserver_running_ssm_models_vram_fix/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3q0qb
false
null
t3_1r3q0qb
/r/LocalLLaMA/comments/1r3q0qb/llamacpp_llamaserver_running_ssm_models_vram_fix/
false
false
self
37
{'enabled': False, 'images': [{'id': 'pUPWbvtSSDzB-apAqTRxPHj9v8O5yyd_C76fcP2F8a8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pUPWbvtSSDzB-apAqTRxPHj9v8O5yyd_C76fcP2F8a8.png?width=108&crop=smart&auto=webp&s=af143f62e464f520cb509a294ae184017487193d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pUPWbvtSSDzB-apAqTRxPHj9v8O5yyd_C76fcP2F8a8.png?width=216&crop=smart&auto=webp&s=c4bb44161addaa8139a86fff45663c831e119217', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pUPWbvtSSDzB-apAqTRxPHj9v8O5yyd_C76fcP2F8a8.png?width=320&crop=smart&auto=webp&s=094571138d6912c21013d4d0fec68bbc0213ea87', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pUPWbvtSSDzB-apAqTRxPHj9v8O5yyd_C76fcP2F8a8.png?width=640&crop=smart&auto=webp&s=f1450565b4d030940d676915998182a75fc2f773', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pUPWbvtSSDzB-apAqTRxPHj9v8O5yyd_C76fcP2F8a8.png?width=960&crop=smart&auto=webp&s=804781515633239341aef1cfaa4846e43d1663ab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pUPWbvtSSDzB-apAqTRxPHj9v8O5yyd_C76fcP2F8a8.png?width=1080&crop=smart&auto=webp&s=aadfa356a56c185661ef6b941f96052c52d8a9e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pUPWbvtSSDzB-apAqTRxPHj9v8O5yyd_C76fcP2F8a8.png?auto=webp&s=ccacd424eafffcb3e3552f01cfb6cebf5b1b0d10', 'width': 1200}, 'variants': {}}]}
MiniMaxAI/MiniMax-M2.5 · Hugging Face
386
Published an hour ago, how is there no post yet?! No quants yet.
2026-02-13T14:01:52
https://huggingface.co/MiniMaxAI/MiniMax-M2.5
rerri
huggingface.co
1970-01-01T00:00:00
0
{}
1r3pxy7
false
null
t3_1r3pxy7
/r/LocalLLaMA/comments/1r3pxy7/minimaxaiminimaxm25_hugging_face/
false
false
default
386
{'enabled': False, 'images': [{'id': 'U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=108&crop=smart&auto=webp&s=64033d5fe6d12704ac16ef5f9a5d936f02f22066', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=216&crop=smart&auto=webp&s=bf91d74facd76a3501eec409e9e6a4587a859438', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=320&crop=smart&auto=webp&s=cbe37f59e591378ee409f69ee988008f13f83a2a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=640&crop=smart&auto=webp&s=de0bab4be78008336f973196f0ed98e2bbe49764', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=960&crop=smart&auto=webp&s=39827b9f194c5cbf3c87173bab764e81beb3ff1f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?width=1080&crop=smart&auto=webp&s=53a3b66b37018fbb799f4b445a9aa2860bca532a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U31ABjps-QUJnQnBzpD57R8aGaCakUKOyzXjWrvNYSg.png?auto=webp&s=4d447ba7f79c894ed41912590bb4a6a79f5bc2d9', 'width': 1200}, 'variants': {}}]}
Old mining rig → AI money machine? Need advice on 7-GPU setup (74GB VRAM)
0
Hey everyone, Got an old mining rig sitting overseas that I want to turn into something useful. Control it remotely via Tailscale. **The rig as it sits now:** - 2x RTX 3080 10GB - 3x RTX 3060 10GB - 2x RTX 3060 12GB - 74GB VRAM total - MSI Z490-A PRO - Celeron G5905 + 4GB RAM (upgrading to i5-11400 + 32GB soon) **What I want to run:** - Qwen2.5-72B for the brain (~42GB) - Qwen-TTS for voice (~8GB) - Flux for images (~12GB) - LTX-2 locally or Kling 3 API for video - PicoClaw as agent orchestrator **The dream:** Send a WhatsApp message, AI does the work, I get paid. First idea is automated YouTube videos or AI shorts - script, voiceover, visuals, edit, all handled by the rig. But honestly I'm open to anything. Trading bots? Freelance automation? Something else? Just want to put this hardware to work. **Questions:** 1. For splitting across mixed VRAM cards - anyone tried Raylight? Heard about it but can't find real reviews. Otherwise how are you handling LLM + ComfyUI together? 2. LTX-2 local vs Kling 3 API - what's the move for quality/speed? 3. What's actually making you money with local AI? Content, trading, services, something else? Would love to hear what's working. Happy to share my setup once it's running. Just want this old hardware to earn its keep instead of collecting dust. Thanks!
2026-02-13T14:00:26
https://www.reddit.com/r/LocalLLaMA/comments/1r3pwh3/old_mining_rig_ai_money_machine_need_advice_on/
Ok-Positive1446
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3pwh3
false
null
t3_1r3pwh3
/r/LocalLLaMA/comments/1r3pwh3/old_mining_rig_ai_money_machine_need_advice_on/
false
false
self
0
null
Is renting GPU the same quality wise?
1
A newbie question. I've seen it's possible to rent high-end GPUs to run models. When doing this, do I rent only the GPU, or also the CPU, RAM, storage, etc.? And is the performance and output quality the same when renting hardware online compared to running the same setup on a physical machine?
2026-02-13T13:57:37
https://www.reddit.com/r/LocalLLaMA/comments/1r3ptw8/is_renting_gpu_the_same_quality_wise/
TheRealistDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3ptw8
false
null
t3_1r3ptw8
/r/LocalLLaMA/comments/1r3ptw8/is_renting_gpu_the_same_quality_wise/
false
false
self
1
null
I got tired of on-device LLMs crashing my mobile apps, so I built a "Managed" runtime (14k LOC)
0
I have played around with loads of on-device AI demos: for 30 seconds they look mesmerising, then the phone turns into a heater and the OS kills the app due to memory spikes. Spent the last few months building **Edge-Veda**. It's not just another wrapper; it's a supervised runtime that treats LLMs like production workloads. What's in it that makes it cooler: 1. The Scheduler: monitors iOS/Android thermal and battery levels in real time. If the phone gets too hot, it downscales the tokens/sec. 2. Full-stack support: GGUF (text), Whisper (speech), and VLMs (vision). 3. Local RAG: built-in vector search (HNSW) that stays 100% offline. It's completely open source and runs via FFI with zero cloud dependencies.
2026-02-13T13:36:58
https://www.reddit.com/r/LocalLLaMA/comments/1r3pcj5/i_got_tired_of_ondevice_llms_crashing_my_mobile/
Mundane-Tea-3488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3pcj5
false
null
t3_1r3pcj5
/r/LocalLLaMA/comments/1r3pcj5/i_got_tired_of_ondevice_llms_crashing_my_mobile/
false
false
self
0
null
F5 F5 F5
0
Patience...
2026-02-13T13:28:02
https://i.redd.it/s7de7ku3k9jg1.png
Then-Topic8766
i.redd.it
1970-01-01T00:00:00
0
{}
1r3p5a8
false
null
t3_1r3p5a8
/r/LocalLLaMA/comments/1r3p5a8/f5_f5_f5/
false
false
default
0
{'enabled': True, 'images': [{'id': 's7de7ku3k9jg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/s7de7ku3k9jg1.png?width=108&crop=smart&auto=webp&s=15cd238122aa88bc360c6df43eab7310aa70f53a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/s7de7ku3k9jg1.png?width=216&crop=smart&auto=webp&s=526755fefe082093d89d04409a85def3b6c5ead8', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/s7de7ku3k9jg1.png?width=320&crop=smart&auto=webp&s=5b316cb1ff7dc9af24c106b02b40df4dbd4b7010', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/s7de7ku3k9jg1.png?width=640&crop=smart&auto=webp&s=1a9a879974abad7ddd22e5499fbb448bfa9e82e0', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/s7de7ku3k9jg1.png?width=960&crop=smart&auto=webp&s=851433b4f4b686ff74187baa8879373a0cc01108', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/s7de7ku3k9jg1.png?width=1080&crop=smart&auto=webp&s=98bc77cef3170cdfb3edc4325b8c8ae31eb38993', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/s7de7ku3k9jg1.png?auto=webp&s=416461a88cd4cea1a84fd7a7620a0c5901087d3d', 'width': 1920}, 'variants': {}}]}
What is the most efficient local setup to "watch" a video?
3
By "watching" a video, I mostly mean looking through most of the frames, extract the gist of the video as well as scene by scene breakdown, extract the audio transcript, line up scenes with speech, and come up with a coherent TLDR of the video. I can think of how to do the individual steps: keyframe extraction + VLM processing in batches is fairly straightforward, but also very slow if we extract frames too aggressively. Audio transcript is very easy and very high quality with VibeVoice ASR, but you either have to carefully align audio transcript with VLM output, or straight up using a full autonomous agent to piece together timestamped video and audio descriptions. It may be possible to pass transcripts directly to the VLM to process scene by scene, but VLMs can only hold so many frames in context at once on a reasonable setup. You would also need some kind of context management to ensure scenes are aware what happened before and after if it is impossible to process the whole video in one context. Has anyone managed to build a satisfactory video watching tool this way?
2026-02-13T13:26:14
https://www.reddit.com/r/LocalLLaMA/comments/1r3p3u0/what_is_the_most_efficient_local_setup_to_watch_a/
aeroumbria
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3p3u0
false
null
t3_1r3p3u0
/r/LocalLLaMA/comments/1r3p3u0/what_is_the_most_efficient_local_setup_to_watch_a/
false
false
self
3
null
Do you still use MCP? Or use SKILLS ?
0
I moved almost everything to SKILLS. I don't need an always-on connection to a remote server, and I vibecoded CLI + TOKEN to use as tools recalled by skills (that I share in opencode/openclaw). What about you? Do you really need MCP?
2026-02-13T13:25:33
https://www.reddit.com/r/LocalLLaMA/comments/1r3p394/do_you_still_use_mcp_or_use_skills/
Deep_Traffic_7873
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3p394
false
null
t3_1r3p394
/r/LocalLLaMA/comments/1r3p394/do_you_still_use_mcp_or_use_skills/
false
false
self
0
null
what's ‘the’ workflow for browser automation in 2026?
1
I've tested browser automation approaches previously and have played around with a few tools, but I'm trying to figure out what actually works in production (or at least most reliably) in 2026. Tried Browser Use a while back and was initially super impressed but hit reliability issues when I tried to run stuff repeatedly. (If anyone's had better luck recently, lmk what's changed/your experience.) Browserbase feels like the right infra layer, and you can pair it with Stagehand or other frameworks. Recently also heard Kernel mentioned as a competitor (sessions/infra) but haven't dug in yet. I came across Notte recently and noticed they're pushing a specific workflow: use an agent (or manual recording) to figure out the flow once, then convert it to deterministic code you can edit/version/deploy as a serverless function. The idea seems to be 'only use the LLM for discovery or failures, not every run.' Two modes for this: * one where you prompt an agent (agent mode) in natural language to run a flow, then can open the agent run's code in an editor, edit, deploy * the other one I'm more interested in is Demonstration mode: manually click through a workflow once in a live browser/session, and get the code of the flow generated for you What I'm trying to figure out is whether this hybrid approach (either bootstrap with an agent and convert to a deterministic script, or manually demonstrate the workflow to generate the script, then wrap it with an agent so it recovers from UI changes/dynamic content at runtime) actually gives you fast iteration and production reliability, or whether it still ends up being a lot of operational overhead to keep flows healthy over time. **Questions:** 1. If you're running browser automation in prod, what's your stack? 2. What breaks more often for you, infra issues (proxies, session management) or logic issues (selectors, auth etc)? 
(Context: I work full-time + have a newborn, so I'm playing catch-up on what's changed in the space over the last few months, would love to hear what's actually working for people.)
2026-02-13T13:16:21
https://www.reddit.com/r/LocalLLaMA/comments/1r3ovso/whats_the_workflow_for_browser_automation_in_2026/
Dangerous_Fix_751
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3ovso
false
null
t3_1r3ovso
/r/LocalLLaMA/comments/1r3ovso/whats_the_workflow_for_browser_automation_in_2026/
false
false
self
1
null
Running a local model
1
Hi, I am struggling to get openclaw to access my local llama, which is running fine on my Mac mini M4. My openclaw seems incapable of resolving it. Does anyone have a working config? I want to run some cron jobs locally but don't need an expensive remote model to execute them.
2026-02-13T13:15:19
https://www.reddit.com/r/LocalLLaMA/comments/1r3ouxj/running_a_local_model/
delacroix1966
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3ouxj
false
null
t3_1r3ouxj
/r/LocalLLaMA/comments/1r3ouxj/running_a_local_model/
false
false
self
1
null
Hidden use-case confessions, I'll start: Local model generates a believable subreddit community dedicated to my ludicrously specific (harmless but shameful) kink, browse by text, interact with community, etc. C'mon what creative uses of the ultimate technology have you been too shy to share?!
1
[removed]
2026-02-13T13:08:54
https://www.reddit.com/r/LocalLLaMA/comments/1r3oprn/hidden_usecase_confessions_ill_start_local_model/
Working-week-notmuch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3oprn
false
null
t3_1r3oprn
/r/LocalLLaMA/comments/1r3oprn/hidden_usecase_confessions_ill_start_local_model/
false
false
nsfw
1
null
GLM 5 has a regression in international language writing according to NCBench
36
This surprised me but also confirmed my poor first impression of it, since I happened to use it for text generation in a less common language and it performed very poorly, barely even like the aging Gemini 2.5 Flash and more like a good 70B Llama 3.x model. At NCBench - Language Writing, it trails GLM 4.5-4.7 by quite a distance when tested for European languages and Hindi. GLM 4.5 is the clear, superior release in this regard according to NCBench. Interestingly, Language _Comprehension_ didn't seem to regress much at all! GLM 5 may be great and all, but just a heads up if you use it for this particular scenario since I think it's been flying below the radar.
2026-02-13T12:49:51
https://www.nc-bench.com/tests/language-writing
jugalator
nc-bench.com
1970-01-01T00:00:00
0
{}
1r3ob0r
false
null
t3_1r3ob0r
/r/LocalLLaMA/comments/1r3ob0r/glm_5_has_a_regression_in_international_language/
false
false
default
36
null
New DeepSeek update: "DeepSeek Web / APP is currently testing a new long-context model architecture, supporting a 1M context window."
123
From AiBattle on 𝕏: [https://x.com/AiBattle\_/status/2022280288643039235](https://x.com/AiBattle_/status/2022280288643039235)
2026-02-13T12:43:50
https://i.redd.it/dg94ujw1c9jg1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1r3o6je
false
null
t3_1r3o6je
/r/LocalLLaMA/comments/1r3o6je/new_deepseek_update_deepseek_web_app_is_currently/
false
false
default
123
{'enabled': True, 'images': [{'id': 'dg94ujw1c9jg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/dg94ujw1c9jg1.jpeg?width=108&crop=smart&auto=webp&s=0d518b5139a43db3409ccf6a618445c6154e3b42', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/dg94ujw1c9jg1.jpeg?width=216&crop=smart&auto=webp&s=8b299beecbe38e89e2a2c2e4cd25778fce57a2a2', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/dg94ujw1c9jg1.jpeg?width=320&crop=smart&auto=webp&s=0f95bff3fcf3d95b2960c7edbf39c04eab6cef55', 'width': 320}, {'height': 641, 'url': 'https://preview.redd.it/dg94ujw1c9jg1.jpeg?width=640&crop=smart&auto=webp&s=55de58cf8a3e4a397d81184a2473b94f7a31aa33', 'width': 640}, {'height': 962, 'url': 'https://preview.redd.it/dg94ujw1c9jg1.jpeg?width=960&crop=smart&auto=webp&s=9d41131c782dabebcf349c92730eac5bc52a07fb', 'width': 960}], 'source': {'height': 1059, 'url': 'https://preview.redd.it/dg94ujw1c9jg1.jpeg?auto=webp&s=45b5a118de13642bed1c52ead484353cc7e8ff1a', 'width': 1056}, 'variants': {}}]}
A Call to Engineers
0
Ex-OpenAI Cognitive Mind behind 4o is looking for an engineer to set up a new API... [https://x.com/Esmeral64216230/status/2022284518971826616?s=20](https://x.com/Esmeral64216230/status/2022284518971826616?s=20)
2026-02-13T12:43:09
https://www.reddit.com/r/LocalLLaMA/comments/1r3o631/a_call_to_engineers/
ENTERMOTHERCODE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3o631
false
null
t3_1r3o631
/r/LocalLLaMA/comments/1r3o631/a_call_to_engineers/
false
false
self
0
null
Small, fast Moderation model for German text
1
[removed]
2026-02-13T12:29:29
https://www.reddit.com/r/LocalLLaMA/comments/1r3nw7c/small_fast_moderation_model_for_german_text/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3nw7c
false
null
t3_1r3nw7c
/r/LocalLLaMA/comments/1r3nw7c/small_fast_moderation_model_for_german_text/
false
false
self
1
null
Deepseek announced they are testing a new model.
60
https://preview.redd.it/…m Chinese group
2026-02-13T12:25:39
https://www.reddit.com/r/LocalLLaMA/comments/1r3ntgi/deepseek_announced_they_are_testing_a_new_model/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3ntgi
false
null
t3_1r3ntgi
/r/LocalLLaMA/comments/1r3ntgi/deepseek_announced_they_are_testing_a_new_model/
false
false
https://preview.redd.it/…c08f30ee495de504
60
null
Small, fast Moderation and Toxicity Detection model for German text
1
[removed]
2026-02-13T12:25:34
https://www.reddit.com/r/LocalLLaMA/comments/1r3ntdy/small_fast_moderation_and_toxicity_detection/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3ntdy
false
null
t3_1r3ntdy
/r/LocalLLaMA/comments/1r3ntdy/small_fast_moderation_and_toxicity_detection/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zZ4EAf9ECaK9Csw3GEJCLkb-qDDV9Dcy_De6nJpN9SM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zZ4EAf9ECaK9Csw3GEJCLkb-qDDV9Dcy_De6nJpN9SM.png?width=108&crop=smart&auto=webp&s=dd4dd64d44df56c4458e98207b9dfae0fb159b8c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zZ4EAf9ECaK9Csw3GEJCLkb-qDDV9Dcy_De6nJpN9SM.png?width=216&crop=smart&auto=webp&s=cafc10ef4f6a4c1bcd2f54bdb0bf1ebc7bc2cedc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zZ4EAf9ECaK9Csw3GEJCLkb-qDDV9Dcy_De6nJpN9SM.png?width=320&crop=smart&auto=webp&s=9f3159764402110dc9a12c7b43fafbd56a2e5708', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zZ4EAf9ECaK9Csw3GEJCLkb-qDDV9Dcy_De6nJpN9SM.png?width=640&crop=smart&auto=webp&s=6a9f8c6e4591dc357db5c54200c299d65f281993', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zZ4EAf9ECaK9Csw3GEJCLkb-qDDV9Dcy_De6nJpN9SM.png?width=960&crop=smart&auto=webp&s=0acd2dfcef2d7072cb35e20602f7c9e6609578e0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zZ4EAf9ECaK9Csw3GEJCLkb-qDDV9Dcy_De6nJpN9SM.png?width=1080&crop=smart&auto=webp&s=1a6ff9b63ec73304cd3768f260e0770887931cea', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zZ4EAf9ECaK9Csw3GEJCLkb-qDDV9Dcy_De6nJpN9SM.png?auto=webp&s=346406501afb6e8f4566edf1423e6be1575ba71d', 'width': 1200}, 'variants': {}}]}
Small, fast Moderation and Toxicity Detection model for German text
1
[removed]
2026-02-13T12:22:15
https://www.reddit.com/r/LocalLLaMA/comments/1r3nr2a/small_fast_moderation_and_toxicity_detection/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3nr2a
false
null
t3_1r3nr2a
/r/LocalLLaMA/comments/1r3nr2a/small_fast_moderation_and_toxicity_detection/
false
false
self
1
null