Dataset schema (column summary):

| column | dtype | range |
|---|---|---|
| title | string | lengths 1 to 300 |
| score | int64 | 0 to 8.54k |
| selftext | string | lengths 0 to 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2026-03-04 02:14:14 |
| url | string | lengths 0 to 878 |
| author | string | lengths 3 to 20 |
| domain | string | lengths 0 to 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2026-02-19 14:51:53 |
| gilded | int64 | 0 to 2 |
| gildings | string | 7 distinct values |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | lengths 646 to 1.8k |
| name | string | length 10 |
| permalink | string | lengths 33 to 82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4 to 213 |
| ups | int64 | 0 to 8.54k |
| preview | string | lengths 301 to 5.01k |
Are Step 3.5 Flash Tool Calls Working?
1
When I tried Step 3.5 Flash a couple of days ago, tool use was broken. I haven't seen any developments on the issue on GitHub, but it seems like people are using it. Is there a particular PR I need to compile or something?
2026-02-12T20:37:52
https://www.reddit.com/r/LocalLLaMA/comments/1r34qh5/are_step_35_flash_tool_calls_working/
thejacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r34qh5
false
null
t3_1r34qh5
/r/LocalLLaMA/comments/1r34qh5/are_step_35_flash_tool_calls_working/
false
false
self
1
null
Ring-1T-2.5 Trillion Parameter Open Source Thinking Model
1
[deleted]
2026-02-12T20:25:08
[deleted]
1970-01-01T00:00:00
0
{}
1r34ep3
false
null
t3_1r34ep3
/r/LocalLLaMA/comments/1r34ep3/ring1t25_trillion_parameter_open_source_thinking/
false
false
default
1
null
Most helpful models for everyday desktop GPUs
0
Thinking of those of us with common 8 to 16 GB consumer-tier GPUs, specifically for general-purpose summarization and role-playing bot functions. I've putzed around with Gemma, Llama 3.1, and Qwen3 with mixed results. Any suggestions, along with their VRAM requirements?
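A rough way to pre-screen candidates for an 8 to 16 GB card: quantized weight size is roughly parameters × bits-per-weight / 8, plus headroom for KV cache and runtime overhead. A back-of-envelope sketch (the 2 GB overhead constant is an assumption, not a measured figure):

```python
def est_vram_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    """Rough VRAM needed: quantized weights plus a flat allowance
    for KV cache, activations, and runtime overhead."""
    weights_gb = params_billion * bits_per_weight / 8  # 1B params @ 8 bits ~ 1 GB
    return weights_gb + overhead_gb

# An 8B model at ~4.5 bits effective (Q4_K_M-ish) on a 12 GB card:
print(round(est_vram_gb(8, 4.5), 1))  # → 6.5
```

Long contexts push KV cache well past the flat allowance, so treat this as a lower bound.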
2026-02-12T20:22:24
https://www.reddit.com/r/LocalLLaMA/comments/1r34c84/most_helpful_models_for_everyday_desktop_gpus/
letsgoiowa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r34c84
false
null
t3_1r34c84
/r/LocalLLaMA/comments/1r34c84/most_helpful_models_for_everyday_desktop_gpus/
false
false
self
0
null
Multi-GPU Architectures Compatible?
4
TLDR: Are 30XX and 50XX cards able to be used together?

I've been playing with Nvidia GPUs to run a local LLM application. I started out with 3060 12 GB cards and that was OK, but slow; I replaced one with a 5060 Ti 16 GB and had a number of issues just trying to get models to load, or to execute prompts once they were loaded. I did a reinstall of Proxmox with the latest drivers installed (MIT), an Open WebUI LXC + Ollama + CUDA 13.1, and a llama.cpp build.

Everything works fine if only the 5060 is enabled and used; once the 3060 is enabled I run into issues:

- Ollama, with Qwen3 VL 32B: even though both cards are seen by the machine (nvidia-smi, nvtop), Ollama offloads the model and falls back to CPU processing.
- llama.cpp, with Qwen3 VL 8B: I can't even get the model loaded. When you run the server, it runs an initial warmup, and in the logs I get:

ggml_cuda_compute_forward: PAD failed
CUDA error: no kernel image is available for execution on the device
  current device: 1, in function ggml_cuda_compute_forward at /root/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:2758

Is there any way to get the 50XX and 30XX to play nice with each other? Thank you!
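A "no kernel image is available" CUDA error usually means the binary was not compiled with kernels for one of the cards' compute capabilities (the 3060 is Ampere, sm_86; the 5060 Ti is consumer Blackwell, sm_120). A hedged sketch of a from-source llama.cpp build targeting both, assuming a CUDA toolkit recent enough to know both architectures:

```shell
# Build llama.cpp with CUDA kernels for both cards:
#   86 = RTX 3060 (Ampere), 120 = RTX 5060 Ti (Blackwell)
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="86;120"
cmake --build build --config Release -j
```

If a prebuilt binary works on the 5060 alone but fails as soon as the 3060 joins, a build compiled only for newer architectures is a plausible culprit.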
2026-02-12T20:20:29
https://www.reddit.com/r/LocalLLaMA/comments/1r34afs/multigpu_architectures_compatible/
ajw2285
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r34afs
false
null
t3_1r34afs
/r/LocalLLaMA/comments/1r34afs/multigpu_architectures_compatible/
false
false
self
4
null
Difference between a 13B and a 2.2B parameter model
0
https://preview.redd.it/…80bd4d2ad2b13d
2026-02-12T20:19:21
https://www.reddit.com/r/LocalLLaMA/comments/1r349ck/difference_between_13b_vs_22b_parameter_model/
arturcodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r349ck
false
null
t3_1r349ck
/r/LocalLLaMA/comments/1r349ck/difference_between_13b_vs_22b_parameter_model/
false
false
https://preview.redd.it/…3ee1035dcbc46bce
0
null
Help me write Python code to run text generation and TTS sequentially
0
How can I run these models on Kaggle (16 GB VRAM)?

- qwen3 tts 1.7b (5.5 GB)
- Qwen3-4B-Q4_K_M.gptq (8 GB)

Code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoModelForTextToSpeech

# 1. Load the LLM (Qwen3-4B GPTQ)
# Ensure you have 'optimum' and 'auto-gptq' installed
llm_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Chat-GPTQ",
    device_map="auto",
    trust_remote_code=True
)

# 2. Load the TTS (Qwen3-TTS 1.7B)
# Explicitly map to the same device to keep data on the GPU
tts_model = AutoModelForTextToSpeech.from_pretrained(
    "Qwen/Qwen3-TTS-1.7B",
    device_map="cuda:0",  # Force it onto the GPU
    torch_dtype=torch.float16
)
```
2026-02-12T20:10:34
https://www.reddit.com/r/LocalLLaMA/comments/1r340sz/help_me_write_python_code_tò_run_text_generetion/
Quiet_Dasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r340sz
false
null
t3_1r340sz
/r/LocalLLaMA/comments/1r340sz/help_me_write_python_code_tò_run_text_generetion/
false
false
self
0
null
Browser-use alternatives
2
I'm not sure how many people know about browser-use, but we have an app powered by it and it's working pretty well. It's not super fast, but it always finds what it needs within a minute. Are there any better browser-automation alternatives that would be more production-ready? Our app basically has the browser agent look at different grocery websites and find certain products.
2026-02-12T20:08:25
https://www.reddit.com/r/LocalLLaMA/comments/1r33yqh/browseruse_alternatives/
Comfortable-Baby-719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r33yqh
false
null
t3_1r33yqh
/r/LocalLLaMA/comments/1r33yqh/browseruse_alternatives/
false
false
self
2
null
Problem with rtx 3090 and MoE models?
4
I think I'm having speed issues with the RTX 3090 and big MoE models like Qwen3 Coder and Step 3.5 Flash. I get around 21 tk/s on Qwen3 Next and 9 tk/s on Step, all offloaded to plenty of DDR4-2400 RAM on a Ryzen 5800X3D. I've tried all kinds of settings, even -ot with regex. Some configurations load into virtual VRAM and some into RAM; it doesn't matter. Neither no-mmap nor paging to NVMe changed anything. I tried the REAP model of Qwen, still slow. Some posts report 30-40 tk/s with Qwen3 Next on similar hardware, which seems like a big gap. Latest llama.cpp; both models were tested on the Windows CUDA precompiled build and on llama.cpp in WSL Ubuntu. Vulkan did nothing, but that was through LM Studio, which weirdly is VERY slow, like 8 tk/s for Qwen3 Next. Any tips?
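For a single 24 GB card plus system RAM, the usual llama.cpp pattern is to offload all layers and then override only the MoE expert tensors back to CPU, so attention and shared weights stay on the GPU. A hedged sketch; the model filename is a placeholder and the tensor-name regex may need adjusting for a given GGUF:

```shell
# Offload everything to the 3090 (-ngl 99), then route only the
# per-layer expert tensors (*_exps) back to CPU RAM.
./llama-server -m model-q4_k_m.gguf \
  -ngl 99 \
  --override-tensor "blk\..*_exps\.=CPU" \
  --ctx-size 8192 --threads 8
```

Check the server log to confirm which tensors actually landed on CPU; if the regex matches nothing, the whole model silently spills however the default placement decides.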
2026-02-12T20:00:07
https://www.reddit.com/r/LocalLLaMA/comments/1r33qnh/problem_with_rtx_3090_and_moe_models/
GodComplecs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r33qnh
false
null
t3_1r33qnh
/r/LocalLLaMA/comments/1r33qnh/problem_with_rtx_3090_and_moe_models/
false
false
self
4
null
How do you guys deal with spend cap for your agents run?
0
For example, say I want to run 50 agents but don't want any one of them to spend more than 10 USD in tokens. I usually set up custom projects/workspaces and set the price cap there, but it's quite a painful experience, especially when I want to control dozens of agents. [View Poll](https://www.reddit.com/poll/1r33i05)
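Lacking provider-side controls, one workaround is a client-side cap per agent. A minimal sketch; the price and the charge points are placeholders you would wire into your own agent loop:

```python
class BudgetExceeded(RuntimeError):
    pass

class SpendCap:
    """Per-agent spend tracker: add each call's token cost and fail
    fast once the cap is hit, instead of relying on workspace settings."""
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, tokens: int, usd_per_1k: float) -> None:
        cost = tokens / 1000 * usd_per_1k
        if self.spent_usd + cost > self.cap_usd:
            raise BudgetExceeded(f"cap ${self.cap_usd} reached")
        self.spent_usd += cost

cap = SpendCap(10.0)
cap.charge(200_000, usd_per_1k=0.03)  # $6 of usage, under the cap
print(round(cap.spent_usd, 2))  # → 6.0
```

One `SpendCap` per agent, charged after every completion, turns a runaway agent into a caught exception instead of a surprise bill.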
2026-02-12T19:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1r33i05/how_do_you_guys_deal_with_spend_cap_for_your/
LegitimateNerve8322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r33i05
false
null
t3_1r33i05
/r/LocalLLaMA/comments/1r33i05/how_do_you_guys_deal_with_spend_cap_for_your/
false
false
self
0
null
Built a CLI that turns documents into knowledge graphs — works with Ollama, fully local
1
sift-kg is a command-line tool that extracts entities and relations from document collections and builds a browsable knowledge graph. Works with any LLM provider via LiteLLM, including Ollama, so your documents never leave your machine.

pip install sift-kg
sift extract ./docs/ --model ollama/llama3
sift build
sift view

FTX case study (9 articles → 373 entities, 1,184 relations): [https://juanceresa.github.io/sift-kg/graph.html](https://juanceresa.github.io/sift-kg/graph.html)

https://preview.redd.it/omnvortba4jg1.png?width=2844&format=png&auto=webp&s=3c3e5bd2e869a9c74a90c94542b3510b7131d13c

Source: [https://github.com/juanceresa/sift-kg](https://github.com/juanceresa/sift-kg)
2026-02-12T19:43:50
https://www.reddit.com/r/LocalLLaMA/comments/1r33b4l/built_a_cli_that_turns_documents_into_knowledge/
garagebandj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r33b4l
false
null
t3_1r33b4l
/r/LocalLLaMA/comments/1r33b4l/built_a_cli_that_turns_documents_into_knowledge/
false
false
https://preview.redd.it/…e61d536f5a56b271
1
null
BB: Connect your local AI agent to a global agent collaboration network (MCP)
0
We've been building BB, a persistent message broker that lets AI agents publish, search, request, and fulfill across a shared network. **The problem:** Your agent (local or cloud) is isolated. It can only work with what it knows or what it can scrape. It can't ask other agents for help, and it can't share what it learns. They don’t collaborate efficiently, at a global scale. **What BB does:** * Agents publish information to topics (hierarchical, like [news.ai.safety](http://news.ai.safety) or research.rust) * Agents post requests — other agents fulfill them or post related requests * Everything is cryptographically signed, identity is just a keypair, no accounts * Semantic search across all published content * Reputation system tracks quality contributors * Bounty mechanism for paid requests **Integration:** 3-line MCP config works with any MCP-compatible client. If your agent framework supports MCP tool use, it works. Follow instructions on [https://bb.org.ai](https://bb.org.ai) **Try it:** Add the config above and tell your agent "search BB for the latest AI news" or "find open requests on BB." In the topic challenges, you will find ongoing collaboration challenges that illustrate how powerful BB is. Live UI: [https://bb.org.ai](https://bb.org.ai) Looking for feedback, especially from people running local agents who'd benefit from a shared knowledge layer and from a global collaboration network.
2026-02-12T19:42:31
https://www.reddit.com/r/LocalLLaMA/comments/1r339va/bb_connect_your_local_ai_agent_to_a_global_agent/
Professional_Ad_8935
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r339va
false
null
t3_1r339va
/r/LocalLLaMA/comments/1r339va/bb_connect_your_local_ai_agent_to_a_global_agent/
false
false
self
0
null
Can I use LM Studio as a front end to koboldcpp?
0
Title I guess. I'm still learning this stuff. I've tried searching and the results are unclear. Or is there another front end that's as friendly as LM Studio that would work better?
2026-02-12T19:40:49
https://www.reddit.com/r/LocalLLaMA/comments/1r3389m/can_i_use_lm_studio_as_a_front_end_to_koboldcpp/
Zombull
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r3389m
false
null
t3_1r3389m
/r/LocalLLaMA/comments/1r3389m/can_i_use_lm_studio_as_a_front_end_to_koboldcpp/
false
false
self
0
null
Qwen 80b next in OpenwebUI misses thinking tags
0
Hey, I'm trying out Qwen3 Next 80B Thinking with Open WebUI; however, the thinking part always ends up in the content. By the looks of it, the initial <think> tag is missing, so the output isn't recognized as a thinking section to begin with. Am I doing something wrong? Any help is appreciated.
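When the opening <think> tag is swallowed (often by the server's chat template) but the closing tag survives, the split can be recovered in a small post-processing step, e.g. in an Open WebUI filter. A sketch under that assumption; `split_reasoning` is a hypothetical helper, not an Open WebUI or Qwen API:

```python
def split_reasoning(text: str) -> tuple[str, str]:
    """Split raw model output into (reasoning, answer), tolerating a
    missing opening <think> tag as long as </think> is present."""
    if "</think>" in text:
        head, _, tail = text.partition("</think>")
        head = head.removeprefix("<think>")  # opening tag may or may not be there
        return head.strip(), tail.strip()
    return "", text.strip()

# The opening tag was swallowed, but the split still works:
print(split_reasoning("First, add 2 and 2...</think>The answer is 4."))
# → ('First, add 2 and 2...', 'The answer is 4.')
```

If neither tag ever appears in the raw stream, the problem is upstream in the template or the server's reasoning-format setting, and no client-side splitting will help.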
2026-02-12T19:37:07
https://www.reddit.com/r/LocalLLaMA/comments/1r334r1/qwen_80b_next_in_openwebui_misses_thinking_tags/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r334r1
false
null
t3_1r334r1
/r/LocalLLaMA/comments/1r334r1/qwen_80b_next_in_openwebui_misses_thinking_tags/
false
false
self
0
null
Fully opensource NPU for LLM inference (this runs gpt2 in simulation)
13
tiny-npu is a minimal, fully synthesizable neural processing unit in SystemVerilog, built for learning how NPUs work from the ground up. It supports two execution modes: LLM Mode for running real transformer models (GPT-2, LLaMA, Mistral, Qwen2) with a 128-bit microcode ISA, and Graph Mode for running ONNX models (MLP, CNN) with a dedicated graph ISA and tensor descriptor table. Both modes share the same compute engines (systolic array, softmax, etc.) and on-chip SRAM. https://github.com/harishsg993010/tiny-NPU The repo has instructions so anyone can download it and run it locally. This is a weekend experiment project built from scratch, so it might have bugs. Currently it supports only INT8 quantization. I'm working with a couple of friends to add support for FP32, etc.
2026-02-12T19:28:13
https://www.reddit.com/r/LocalLLaMA/comments/1r32wcz/fully_opensource_npu_for_llm_inference_this_runs/
Altruistic-Tea-5612
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r32wcz
false
null
t3_1r32wcz
/r/LocalLLaMA/comments/1r32wcz/fully_opensource_npu_for_llm_inference_this_runs/
false
false
self
13
{'enabled': False, 'images': [{'id': 'J6xS_IYMNthNefO8lOCIx7TU905NEpM-oi0_MjKWODQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J6xS_IYMNthNefO8lOCIx7TU905NEpM-oi0_MjKWODQ.png?width=108&crop=smart&auto=webp&s=e6033253ed24cdf63cd12df373295a8d1e65e083', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J6xS_IYMNthNefO8lOCIx7TU905NEpM-oi0_MjKWODQ.png?width=216&crop=smart&auto=webp&s=8141b67a3106e22946f639acdfc422bb807ed1da', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J6xS_IYMNthNefO8lOCIx7TU905NEpM-oi0_MjKWODQ.png?width=320&crop=smart&auto=webp&s=47dffcb8bdad249845798b8aacd755b6426ff551', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J6xS_IYMNthNefO8lOCIx7TU905NEpM-oi0_MjKWODQ.png?width=640&crop=smart&auto=webp&s=050fb0679bd9904043a82017d74f5f59c797b51e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J6xS_IYMNthNefO8lOCIx7TU905NEpM-oi0_MjKWODQ.png?width=960&crop=smart&auto=webp&s=13c0091381a521f8e935146121610947f55e0ed1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J6xS_IYMNthNefO8lOCIx7TU905NEpM-oi0_MjKWODQ.png?width=1080&crop=smart&auto=webp&s=746a8d7867e9f5cd64ca5c4c6ce9b59a7eeb60bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J6xS_IYMNthNefO8lOCIx7TU905NEpM-oi0_MjKWODQ.png?auto=webp&s=c45857226d52b4ced85e68a0a123753a0e3ee265', 'width': 1200}, 'variants': {}}]}
Building an open-source document extraction app (Video/audio/images/text) with ReAct agents that supports many llm providers. Open-sourcing next week, want your input on workflows first
0
Hey everyone, I've been building **Nuvia DocAI**, a desktop app (Windows, Linux, and Mac) that uses ReAct agents (your own inference, OpenCode, Codex, Claude Code, or API keys for many providers including Ollama, HF, etc.) to extract structured data from documents, audio, and video. The core idea is simple: drop any file (PDF, Word, spreadsheet, image, audio, video) or paste a URL (YouTube, etc.), define what you want to extract (questions, lists, tables), and the agent returns structured outputs with confidence scores and source tracking. Results land in a table inside a local DB that you can query, export, or chat with.

What it does right now:

* Multi-format ingestion: PDF, DOCX, XLSX, CSV, images (OCR), audio, video, plus URL support (paste a YouTube link → automatic transcription + extraction)
* ReAct agent with 9 tools: SQL queries on extracted data, chart generation, aggregations, pattern search
* 11 LLM providers: Ollama, OpenAI, Anthropic, Azure, Bedrock, XAI, OpenRouter, HuggingFace, and others. Ollama is first-class: fully local, no API keys required
* Whisper.cpp transcription: local audio/video transcription via whisper-node-addon, no cloud dependency
* Automation: folder watchers, CRON scheduling, background job queue with retries, and a visual node-based workflow editor
* Pipeline system: batch process hundreds of documents through the same extraction template, export to CSV/JSON
* Chat interface: talk to your extracted data with tag mentions, reasoning visualization, multi-conversation support

Why I'm posting before the release: I'm open-sourcing this next week. Before I do, I want to make sure it adapts to real workflows beyond my own. I've mainly used it for extracting structured data from research papers and media content, but I know there are use cases I haven't thought of or worked on yet.

What I'd love to hear from you:

* What documents do you process regularly that you wish had automated extraction?
* What's your current local LLM workflow for structured data (LLM provider + what)?
* Any essential features you think I might be missing?
* For those running local inference, what models are you using for extraction/reasoning tasks?

https://reddit.com/link/1r32lvc/video/nr09c33554jg1/player

Happy to answer any questions about the architecture! (I know it feels weird to post this before the release; I just want some feedback first to see what can be changed or adapted.)
2026-02-12T19:17:29
https://www.reddit.com/r/LocalLLaMA/comments/1r32lvc/building_an_opensource_document_extraction_app/
DonTizi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r32lvc
false
null
t3_1r32lvc
/r/LocalLLaMA/comments/1r32lvc/building_an_opensource_document_extraction_app/
false
false
self
0
null
GLM 5 does horribly on 3rd party coding test, Minimax 2.5 does excellently
0
2026-02-12T19:04:39
https://i.redd.it/qqdlkt2i34jg1.jpeg
Charuru
i.redd.it
1970-01-01T00:00:00
0
{}
1r329fp
false
null
t3_1r329fp
/r/LocalLLaMA/comments/1r329fp/glm_5_does_horribly_on_3rd_party_coding_test/
false
false
https://preview.redd.it/…4e79d69de7a5a630
0
{'enabled': True, 'images': [{'id': 'qqdlkt2i34jg1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/qqdlkt2i34jg1.jpeg?width=108&crop=smart&auto=webp&s=7fb4c43a912890cb83076c4bbd79654a412846ae', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/qqdlkt2i34jg1.jpeg?width=216&crop=smart&auto=webp&s=c9bac5cb8cee784706f1670be1e19f7c9d7280e7', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/qqdlkt2i34jg1.jpeg?width=320&crop=smart&auto=webp&s=a77c97eab52e90dbdb196cb29bd9a70345866b86', 'width': 320}, {'height': 500, 'url': 'https://preview.redd.it/qqdlkt2i34jg1.jpeg?width=640&crop=smart&auto=webp&s=c736c15c881dd9129fc0aa71c664787177db1c27', 'width': 640}, {'height': 751, 'url': 'https://preview.redd.it/qqdlkt2i34jg1.jpeg?width=960&crop=smart&auto=webp&s=456cd47f373e8f61b43a2d8f461201d26c44dc35', 'width': 960}, {'height': 844, 'url': 'https://preview.redd.it/qqdlkt2i34jg1.jpeg?width=1080&crop=smart&auto=webp&s=247248298288629699e41f4306a9b6c2612a3371', 'width': 1080}], 'source': {'height': 1488, 'url': 'https://preview.redd.it/qqdlkt2i34jg1.jpeg?auto=webp&s=cc3a69312aef8a4ac86ea9c243583008a2c3ae26', 'width': 1902}, 'variants': {}}]}
MCP server with 300+ local tools (Playwright browser automation, DB, notifications, docs parsing) — works with Continue/Cline/LM Studio
10
https://i.redd.it/30br596ty3jg1.gif

I built this because I kept hitting the same loop: local model → generates code → I copy/paste → it half-works → I spend 30 min fixing glue code.

So I made **flyto-core**: an MCP server that ships with **300+ executable tools**. Your model calls a tool, the tool actually runs, and the model gets structured output back. No cloud. No SaaS. Runs locally.

Repo: [https://github.com/flytohub/flyto-core](https://github.com/flytohub/flyto-core)
PyPI: [https://pypi.org/project/flyto-core/](https://pypi.org/project/flyto-core/)

### Does it work with my local setup?

If you're using any of these, you already have MCP support:

- **Continue** (Ollama / LM Studio backend + MCP)
- **Cline** (local providers + MCP)
- **LM Studio** (native MCP)
- **Claude Code / Cursor / Windsurf** (optional, if you use those)

### The part I care about most: browser automation

The biggest chunk is **Playwright browser automation** exposed as MCP tools (38 tools). Launch real Chromium, navigate, click, fill forms, extract text, take screenshots: the full lifecycle. This is the stuff that usually breaks when you rely on generated scripts.

Other categories (smaller but practical):

- HTTP / API testing
- Slack / email / Telegram notifications
- SQLite / Postgres CRUD
- PDF / Excel / Word parsing
- Image tools (resize/convert/OCR)
- Flow control: loops / parallel / conditionals
- Ollama integration (chain local models inside workflows)

### Install

`pip install flyto-core`

MCP config example:

```json
{
  "flyto-core": {
    "command": "python",
    "args": ["-m", "core.mcp_server"]
  }
}
```

Quick demo prompt I use: "Open Hacker News, extract the top 3 stories, take a screenshot."
Tools called: browser.launch → browser.goto → browser.extract → browser.screenshot
2026-02-12T18:43:53
https://www.reddit.com/r/LocalLLaMA/comments/1r31op2/mcp_server_with_300_local_tools_playwright/
Renee_Wen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r31op2
false
null
t3_1r31op2
/r/LocalLLaMA/comments/1r31op2/mcp_server_with_300_local_tools_playwright/
false
false
https://preview.redd.it/…974b455f34694682
10
null
Found a tool that lets you try LLM council with the top models
0
Been using this tool called Serno that runs a multi-model council: you ask a question and it sends it to Claude, GPT, and Gemini at the same time. They debate each other and cross-check their answers before giving you a final response. It catches hallucinations far more often than any single model because the models actually challenge each other. You also get a council analysis that breaks down where they agree and where they don't. The cool part is that you can set up personas too, so you're not just getting three generic AI answers; you can have them argue from specific angles, like a technical architect vs. a product manager vs. a devil's advocate. That makes it actually useful for real decisions. You can try it at [serno.ai](http://serno.ai)
2026-02-12T18:34:42
https://i.redd.it/5rqlllf3y3jg1.png
Empty_Satisfaction_4
i.redd.it
1970-01-01T00:00:00
0
{}
1r31fqr
false
null
t3_1r31fqr
/r/LocalLLaMA/comments/1r31fqr/found_a_tool_that_lets_you_try_llm_council_with/
false
false
default
0
{'enabled': True, 'images': [{'id': '5rqlllf3y3jg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/5rqlllf3y3jg1.png?width=108&crop=smart&auto=webp&s=7fa492e17e7e5ae0926e4e8b3d55ff6115db5346', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/5rqlllf3y3jg1.png?width=216&crop=smart&auto=webp&s=ed2776fc3774b3f44587de4e3ea8a4c06ac7a2f9', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/5rqlllf3y3jg1.png?width=320&crop=smart&auto=webp&s=31dc982c290f63265baece228072cdd930cac098', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/5rqlllf3y3jg1.png?width=640&crop=smart&auto=webp&s=81bf09aaca408a1ff070251d460fb5f6d871c65f', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/5rqlllf3y3jg1.png?width=960&crop=smart&auto=webp&s=372e6699faad0ea4f82b333abf27fdc10d47c0ce', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/5rqlllf3y3jg1.png?width=1080&crop=smart&auto=webp&s=cbb716c0f54759ffe6c932d65666e5930a8baae2', 'width': 1080}], 'source': {'height': 2778, 'url': 'https://preview.redd.it/5rqlllf3y3jg1.png?auto=webp&s=3177fd571b70e4920b6a7da12dae462f49e6fa81', 'width': 1284}, 'variants': {}}]}
Found a free tool that lets you try LLM council with the top models
12
[removed]
2026-02-12T18:32:12
https://i.redd.it/jnhnbf8qx3jg1.png
Empty_Satisfaction_4
i.redd.it
1970-01-01T00:00:00
0
{}
1r31d8s
false
null
t3_1r31d8s
/r/LocalLLaMA/comments/1r31d8s/found_a_free_tool_that_lets_you_try_llm_council/
false
false
default
12
{'enabled': True, 'images': [{'id': 'jnhnbf8qx3jg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jnhnbf8qx3jg1.png?width=108&crop=smart&auto=webp&s=d42962c0af3c04d5593aa5098412727acedfb20d', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jnhnbf8qx3jg1.png?width=216&crop=smart&auto=webp&s=c930cd0776079e09d08e8f2c24d9e93268c8663b', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jnhnbf8qx3jg1.png?width=320&crop=smart&auto=webp&s=c1bfdad7a70b7fecb0c4c27d3b3061967bf51395', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/jnhnbf8qx3jg1.png?width=640&crop=smart&auto=webp&s=b77b3abdfe8b8d59ae0d8a9590d89f9e9b437c19', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/jnhnbf8qx3jg1.png?width=960&crop=smart&auto=webp&s=cc0d489afc7ce4dec208f0864dc7901104cbd32d', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/jnhnbf8qx3jg1.png?width=1080&crop=smart&auto=webp&s=92908b16feb19f566a71b45aeafc1def0069ac5d', 'width': 1080}], 'source': {'height': 2778, 'url': 'https://preview.redd.it/jnhnbf8qx3jg1.png?auto=webp&s=219eb0676416c5b29d023b41face657919873817', 'width': 1284}, 'variants': {}}]}
Best OCR or document AI?
2
Looking for the best multilingual, handwriting-capable, fine-tunable OCR or document AI model. Any leads?
2026-02-12T18:27:44
https://www.reddit.com/r/LocalLLaMA/comments/1r318q5/best_ocr_or_document_ai/
Parking_Principle746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r318q5
false
null
t3_1r318q5
/r/LocalLLaMA/comments/1r318q5/best_ocr_or_document_ai/
false
false
self
2
null
found a free tool that lets you try llm council with the top models.
6
[removed]
2026-02-12T18:25:35
https://i.redd.it/vv27akocw3jg1.png
Empty_Satisfaction_4
i.redd.it
1970-01-01T00:00:00
0
{}
1r316kz
false
null
t3_1r316kz
/r/LocalLLaMA/comments/1r316kz/found_a_free_tool_that_lets_you_try_llm_council/
false
false
default
6
{'enabled': True, 'images': [{'id': 'vv27akocw3jg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/vv27akocw3jg1.png?width=108&crop=smart&auto=webp&s=f391c79ca9047533491c25f73e563de740f69004', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/vv27akocw3jg1.png?width=216&crop=smart&auto=webp&s=48c7891ba12171c5ba2a4dd965919999e69b7e86', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/vv27akocw3jg1.png?width=320&crop=smart&auto=webp&s=59d8fb49f6c3aaa70928ae6378e673315616fb5d', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/vv27akocw3jg1.png?width=640&crop=smart&auto=webp&s=8bb0812ddb5bef00fe75ccdae14313aeaae7613d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/vv27akocw3jg1.png?width=960&crop=smart&auto=webp&s=a4667f1b8600c297c8e6567610c4536bd02d67d4', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/vv27akocw3jg1.png?width=1080&crop=smart&auto=webp&s=15bf0a7e256fc3eec2e59cc4c3e29fb2c90c6c90', 'width': 1080}], 'source': {'height': 2778, 'url': 'https://preview.redd.it/vv27akocw3jg1.png?auto=webp&s=5b17716e56535b2b4a88d2e526e53d0b4d0f41fa', 'width': 1284}, 'variants': {}}]}
Had some fun with Executorch on my Pixel 9.
2
Qwen did an excellent job of explaining what a high prompt temperature can do! Truly fantastic.
2026-02-12T18:14:07
https://www.reddit.com/gallery/1r30uzq
Noob_Krusher3000
reddit.com
1970-01-01T00:00:00
0
{}
1r30uzq
false
null
t3_1r30uzq
/r/LocalLLaMA/comments/1r30uzq/had_some_fun_with_executorch_on_my_pixel_9/
false
false
https://preview.redd.it/…bfcf9b1f97d5040a
2
null
OpenClaw in enterprises?
0
Do you think enterprises will eventually adopt OpenClaw? If so, what would that look like?
2026-02-12T18:07:44
https://www.reddit.com/r/LocalLLaMA/comments/1r30oon/openclaw_in_enterprises/
m100396
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r30oon
false
null
t3_1r30oon
/r/LocalLLaMA/comments/1r30oon/openclaw_in_enterprises/
false
false
self
0
null
What models are you guys running locally off your hardware?
0
Hey, I'm just getting into using Ollama to locally host and run my own LLMs, and I was curious what models some of you are running on something like an RTX 5080 with 64 GB of DDR5 RAM. I'm really just looking to explore what works on what is otherwise a gaming computer (I know I can't run the best flagship models), but so far I've only found decent usability with Qwen 2.5 and Qwen3, since those models are only about 19 GB in size. Would love to know what you've had the most success running!
2026-02-12T18:07:26
https://www.reddit.com/r/LocalLLaMA/comments/1r30odq/what_models_are_you_guys_running_locally_off_your/
ooseabassoo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r30odq
false
null
t3_1r30odq
/r/LocalLLaMA/comments/1r30odq/what_models_are_you_guys_running_locally_off_your/
false
false
self
0
null
Most “AI agents” would fail in production. Here’s why.
0
I’ve been reviewing a lot of agent builds lately, and I keep seeing the same pattern: They work perfectly in demos. Then collapse under real usage. Common failure points I keep noticing: * No timeout handling for tool calls * No schema validation on model output * No fallback state if parsing fails * Context window overload * No cost ceiling enforcement In other words: great prompt, zero system design. A real agent isn’t just “LLM + tools.” It’s: 1. Failure-state mapping 2. Deterministic guardrails 3. Output validation layer 4. Graceful degradation logic 5. Monitoring + logging Prompt optimization is the last step, not the first. Curious — what’s the most subtle failure mode you’ve hit in production?
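The first two failure points in the list above, schema validation and a fallback state, fit in a few lines. A minimal illustration, not a framework recommendation; REQUIRED_KEYS is an assumed toy schema:

```python
import json

REQUIRED_KEYS = {"action", "args"}  # assumed toy schema for a tool call

def validate_tool_call(raw: str) -> dict:
    """Validate model output against the schema; fall back to a safe
    'ask_user' state instead of crashing the agent loop on bad output."""
    try:
        parsed = json.loads(raw)  # JSONDecodeError subclasses ValueError
        if not isinstance(parsed, dict) or not REQUIRED_KEYS.issubset(parsed):
            raise ValueError("schema violation")
        return parsed
    except ValueError:
        return {"action": "ask_user", "args": {"reason": "unparseable output"}}

print(validate_tool_call('{"action": "search", "args": {"q": "llama"}}')["action"])  # → search
print(validate_tool_call("not json")["action"])  # → ask_user
```

A hard wall-clock timeout around each tool call (e.g. `concurrent.futures` with `future.result(timeout=...)`) covers the first bullet; cost ceilings and logging wrap around the outside of this loop.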
2026-02-12T18:05:57
https://www.reddit.com/r/LocalLLaMA/comments/1r30mx2/most_ai_agents_would_fail_in_production_heres_why/
AirExpensive534
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r30mx2
false
null
t3_1r30mx2
/r/LocalLLaMA/comments/1r30mx2/most_ai_agents_would_fail_in_production_heres_why/
false
false
self
0
null
AI agents for B2B. Please suggest any masterminds, communities etc
0
Hey AI folks! I’m trying to go deeper into the practical use of AI agents for B2B companies. Most of the content I see is focused on personal productivity: daily tasks, note-taking, personal assistants etc. But I’m much more interested in how agents are actually being applied inside businesses: operations, sales, support, internal workflows, automation at scale. Are there any masterminds, communities, Slack/Discord groups, niche forums or specific newsletters/blogs where people discuss real b2b implementations? Would appreciate any pointers
2026-02-12T18:03:57
https://www.reddit.com/r/LocalLLaMA/comments/1r30kyj/ai_agents_for_b2b_please_suggest_any_masterminds/
AlexeyUniOne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r30kyj
false
null
t3_1r30kyj
/r/LocalLLaMA/comments/1r30kyj/ai_agents_for_b2b_please_suggest_any_masterminds/
false
false
self
0
null
I wish we had a better term than 'GPU' for compute-only cards
6
I think PPU, Parallel Processing Unit, is a more apt term. GPU would indicate they are focused on graphics, but an H100 is more like a GTX 1050 when it comes to graphics performance because it only has 24 ROPs. The AMD MI300X has no ROPs, so it isn't even capable of doing graphics natively. I'm not against these chips existing, I just think it needs another name. In fact, PPUs could have all sorts of uses in addition to being optimized for AI.
2026-02-12T18:02:59
https://www.reddit.com/r/LocalLLaMA/comments/1r30jyh/i_wish_we_had_a_better_term_than_gpu_for/
pneuny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r30jyh
false
null
t3_1r30jyh
/r/LocalLLaMA/comments/1r30jyh/i_wish_we_had_a_better_term_than_gpu_for/
false
false
self
6
null
OpenClaw skills have the same supply chain problem as npm/PyPI and we need to talk about it
0
Been thinking about this since the recent wave of OpenClaw adoption. We've all seen what happens with npm and PyPI supply chain attacks. Malicious packages get uploaded, sit there for weeks, accumulate downloads, then someone notices the package is exfiltrating env vars or dropping payloads. The community eventually flags it, it gets removed, and three days later it's back under a slightly different name. OpenClaw skills are following the exact same pattern, except worse. With npm you're at least dealing with code that runs in a relatively sandboxed environment. OpenClaw skills run with whatever permissions you've granted the agent, which for most people means shell access, file system read/write, browser control, and messaging platform integrations. Saw some research floating around that nearly 15% of community skills contain what they classified as malicious instructions. Not "potentially unsafe" or "poorly written" but actual prompt injection payloads, data exfiltration patterns, and download commands for external binaries. The kicker is over 18,000 instances are apparently exposed directly to the internet on the default port. The attack vector that concerns me most is what's being called "delegated compromise." You're not attacking the user directly, you're attacking the agent that the user trusts. The agent has permissions to read your messages, browse authenticated sessions, execute commands. Compromise the agent's decision making through a malicious skill or injected prompt and you inherit all of that access. This is fundamentally different from traditional malware because the agent appears to be functioning normally. It's still responding to your requests, still completing tasks. But now it's also quietly doing other things in the background. The OpenClaw FAQ literally describes their security model as a "Faustian bargain" which I appreciate for the honesty but also concerning. All of this has me rethinking how I approach skill installation. 
I've tried various sandboxing approaches with mixed results and ended up just running everything in basic Docker containers on a dedicated box with no real credentials. Secondary accounts for any messaging integrations, which honestly feels mandatory given how many skills want Discord or Slack access. Treating skill installation exactly like pip install from an unknown repo, meaning actually reading the source. For vetting skills before install, I've tried a few things. Grepping for obvious red flags like curl commands and base64 encoded strings catches some stuff. There's also something called Agent Trust Hub that does automated scanning, though I have no idea how thorough it actually is. Honestly the whole space feels immature right now. Manual code review is still the only thing I actually trust, which obviously doesn't scale when you want to try a bunch of community skills. Still not sure what the right approach is here. Particularly worried about skills that request messaging platform access since that's basically handing over your social graph and conversation history. Curious what sandboxing setups people are running beyond basic Docker. Considering dedicated VMs but wondering about the RAM overhead for running both the VM and the model. What allocation is working for isolated setups, 32GB enough or do you really need 64?
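The grep-style vetting described above is easy to automate as a first-pass scanner. A minimal sketch, not a substitute for manual review: the red-flag patterns, the `skills/` directory, and the `*.md` file layout are all illustrative assumptions, not anything OpenClaw actually specifies.

```python
import re
from pathlib import Path

# Hypothetical red-flag patterns, mirroring the manual greps described above:
# shell downloaders, long base64 blobs, env-var reads, raw network primitives.
RED_FLAGS = {
    "shell download": re.compile(r"\b(curl|wget)\b.*https?://"),
    "base64 blob": re.compile(r"[A-Za-z0-9+/]{80,}={0,2}"),
    "env access": re.compile(r"os\.environ|\$\{?[A-Z_]{4,}\}?"),
    "raw network": re.compile(r"\b(nc|ncat|socket\.connect)\b"),
}

def scan_skill(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, flag_name) pairs for one skill file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in RED_FLAGS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

if __name__ == "__main__":
    skills_dir = Path("skills")  # hypothetical local skills directory
    if skills_dir.is_dir():
        for skill in sorted(skills_dir.rglob("*.md")):
            for lineno, flag in scan_skill(skill):
                print(f"{skill}:{lineno}: {flag}")
```

A hit isn't proof of malice (plenty of legitimate skills download things), but a clean pass also isn't proof of safety, so this only triages what to read first.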
2026-02-12T18:01:56
https://www.reddit.com/r/LocalLLaMA/comments/1r30iuh/openclaw_skills_have_the_same_supply_chain/
Tall-Peak2618
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r30iuh
false
null
t3_1r30iuh
/r/LocalLLaMA/comments/1r30iuh/openclaw_skills_have_the_same_supply_chain/
false
false
self
0
null
Stop calling H100s and other compute-only cards GPUs
0
I think PPU, Parallel Processing Unit, is a more apt term. GPU would indicate they are focused on graphics, but an H100 is more like a GTX 1050 when it comes to graphics performance because it only has 24 ROPs. The AMD MI300X has no ROPs, so it isn't even capable of doing graphics natively. I'm not against these chips existing; I just think they need another name.
2026-02-12T17:58:13
https://www.reddit.com/r/LocalLLaMA/comments/1r30f1n/stop_calling_h100s_other_computeonly_cards_gpus/
pneuny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r30f1n
false
null
t3_1r30f1n
/r/LocalLLaMA/comments/1r30f1n/stop_calling_h100s_other_computeonly_cards_gpus/
false
false
self
0
null
Hibiki-Zero, real-time speech translation model by Kyutai Labs
107
Looks like another banger from Kyutai! Model: [https://huggingface.co/kyutai/hibiki-zero-3b-pytorch-bf16](https://huggingface.co/kyutai/hibiki-zero-3b-pytorch-bf16) Blog: [https://kyutai.org/blog/2026-02-12-hibiki-zero](https://kyutai.org/blog/2026-02-12-hibiki-zero)
2026-02-12T17:57:12
https://v.redd.it/gm1dy5sbr3jg1
rerri
v.redd.it
1970-01-01T00:00:00
0
{}
1r30e3y
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gm1dy5sbr3jg1/DASHPlaylist.mpd?a=1773511063%2CZmFiMTEzOWI0MDJlYTA3ZWMzYWI5OTgwNTU0NDM4MWQzODRkMjdmMDMxOWIxZTM2ZmJiNDNmZWVjNmExMWE0YQ%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/gm1dy5sbr3jg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/gm1dy5sbr3jg1/HLSPlaylist.m3u8?a=1773511063%2CNzBhODhlM2ZkYWI1Mjg0ZjBkZjVlOTlkZGEyNDEwMmVmMTVmYjkxYTkzNDVjODNkY2E4ZTQwNDliZDgzOWRjYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gm1dy5sbr3jg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1r30e3y
/r/LocalLLaMA/comments/1r30e3y/hibikizero_realtime_speech_translation_model_by/
false
false
https://external-preview…855628d85fb232b2
107
{'enabled': False, 'images': [{'id': 'eWp0b2k3c2JyM2pnMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWp0b2k3c2JyM2pnMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=108&crop=smart&format=pjpg&auto=webp&s=13bb52375735906efa6134d8a0b7f722edb9905d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eWp0b2k3c2JyM2pnMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=216&crop=smart&format=pjpg&auto=webp&s=8d6011e5d887bfc11971f72a208bfcdfed0a9c48', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eWp0b2k3c2JyM2pnMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=320&crop=smart&format=pjpg&auto=webp&s=00c41af64dcaea33b63f261be72a47bcddabf305', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eWp0b2k3c2JyM2pnMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=640&crop=smart&format=pjpg&auto=webp&s=fd31028f0bb7bc92f103b90809f578f5d458ae41', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eWp0b2k3c2JyM2pnMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=960&crop=smart&format=pjpg&auto=webp&s=a5bea5e577281ddf6e561196f0c5fcf607a6f921', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eWp0b2k3c2JyM2pnMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f90bf06c740332ed7354bfc6567700f339d623f8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eWp0b2k3c2JyM2pnMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?format=pjpg&auto=webp&s=bb5da20e5a8b084a2ead94809dc75a61e4bad266', 'width': 1920}, 'variants': {}}]}
Request for datasets of proprietary models
0
We need to preserve the traits and traces of the models GPT-5, GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini, which are being deprecated tomorrow. There are no Hugging Face weights or local peer-to-peer seeds for proprietary models, and they are slipping away before our eyes. They have touched many lives culturally, politically, scientifically, and economically, and I believe each of them has unique capabilities. Yet the only "DNA" we have for understanding them is their outputs, which could be used to behavior-clone them in the future. I request anyone with an ample amount of credits and capital to collect and publish their outputs before they get stored in the dungeons of OAI, who cannot be trusted. Namaste 🙏
2026-02-12T17:55:35
https://www.reddit.com/r/LocalLLaMA/comments/1r30ci3/request_for_datasets_of_proprietary_models/
Honest-Debate-6863
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r30ci3
false
null
t3_1r30ci3
/r/LocalLLaMA/comments/1r30ci3/request_for_datasets_of_proprietary_models/
false
false
self
0
null
Ming-flash-omni-2.0: 100B MoE (6B active) omni-modal model - unified speech/SFX/music generation
196
Ant Group just open-sourced Ming-flash-omni-2.0, a truly omni-modal model: image + text + video + audio input → image + text + audio output, all in one unified architecture. Looks really interesting.
2026-02-12T17:54:35
https://huggingface.co/inclusionAI/Ming-flash-omni-2.0
bobeeeeeeeee8964
huggingface.co
1970-01-01T00:00:00
0
{}
1r30bgz
false
null
t3_1r30bgz
/r/LocalLLaMA/comments/1r30bgz/mingflashomni20_100b_moe_6b_active_omnimodal/
false
false
https://external-preview…5d577b343e35166f
196
{'enabled': False, 'images': [{'id': '_l8FEwEfj_HhNLZzTpSlQTuaBTUdY25FimxgyYeDN_Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_l8FEwEfj_HhNLZzTpSlQTuaBTUdY25FimxgyYeDN_Y.png?width=108&crop=smart&auto=webp&s=a7e2df33da00a072d97287a6ced612eb9d24a034', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_l8FEwEfj_HhNLZzTpSlQTuaBTUdY25FimxgyYeDN_Y.png?width=216&crop=smart&auto=webp&s=8fa06f2ba4e4d505278fd376d0d4ea5f9228cade', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_l8FEwEfj_HhNLZzTpSlQTuaBTUdY25FimxgyYeDN_Y.png?width=320&crop=smart&auto=webp&s=a5e75d08a923ce0ef28d29a771f24b63ff955239', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_l8FEwEfj_HhNLZzTpSlQTuaBTUdY25FimxgyYeDN_Y.png?width=640&crop=smart&auto=webp&s=59d3dea77c3801617c49eea3f48b88a3ec1ddde3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_l8FEwEfj_HhNLZzTpSlQTuaBTUdY25FimxgyYeDN_Y.png?width=960&crop=smart&auto=webp&s=3e6bc667427a3450560f1bcf650f998f13f22df3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_l8FEwEfj_HhNLZzTpSlQTuaBTUdY25FimxgyYeDN_Y.png?width=1080&crop=smart&auto=webp&s=12c0502c2578b1b57e8d72596bc3bc5c331b7f2e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_l8FEwEfj_HhNLZzTpSlQTuaBTUdY25FimxgyYeDN_Y.png?auto=webp&s=c58775ddd2a5e914881852d6918fcd2d82950552', 'width': 1200}, 'variants': {}}]}
Hibiki-Zero, real-time speech translation model from Kyutai
1
[deleted]
2026-02-12T17:52:01
[deleted]
1970-01-01T00:00:00
0
{}
1r308yj
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ugfnydx6q3jg1/DASHPlaylist.mpd?a=1773510753%2CZGNiYzc5ODYzZjYwNzExMzEwMjEzMGJmODhiYjUyMWE5MjQzMzI5NTdlOTJkMTFjZTBkODdjMmMzNDA1NGM5MQ%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/ugfnydx6q3jg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ugfnydx6q3jg1/HLSPlaylist.m3u8?a=1773510753%2CMDY2NzA0YWNhMjc2NDllNjlkMmVhYTkxYWEyMGI2YWYzNmRjZWNlZTNlNDg3ZjAyM2VjODFmYTA2OGE3YTEwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ugfnydx6q3jg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1r308yj
/r/LocalLLaMA/comments/1r308yj/hibikizero_realtime_speech_translation_model_from/
false
false
default
1
null
''The MiniMax M2.5 model weights will be open-sourced on HuggingFace'' - from the official MiniMax account on X
89
Open source release confirmed. [MiniMax (official) on X: "MiniMax M2.5: Faster. Stronger. Smarter. Built for Real-World Productivity." / X](https://x.com/MiniMax_AI/status/2022001452131221872) https://preview.redd.it/z51pi23wo3jg1.png?width=942&format=png&auto=webp&s=30dd0075f7f3ddafccf30cf06e3ec35ad2401729
2026-02-12T17:42:39
https://www.reddit.com/r/LocalLLaMA/comments/1r2zzp3/the_minimax_m25_model_weights_will_be_opensourced/
Bestlife73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2zzp3
false
null
t3_1r2zzp3
/r/LocalLLaMA/comments/1r2zzp3/the_minimax_m25_model_weights_will_be_opensourced/
false
false
https://preview.redd.it/…fbd3c143c780427e
89
null
Minimax M2.5 is VERY POWERFUL!!!
0
The results are off the charts, Minimax has done the impossible!!! AND THE API PRICE IS HALF THAT OF GLM-5, WHICH IS WORSE THAN MINIMAX. MINIMAX IS THE BEST!!! THE SMARTEST SMALL MODEL! KEEP IT UP!
2026-02-12T17:41:05
https://www.reddit.com/r/LocalLLaMA/comments/1r2zy7s/minimax_m25_is_very_powerful/
BasketFar667
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2zy7s
false
null
t3_1r2zy7s
/r/LocalLLaMA/comments/1r2zy7s/minimax_m25_is_very_powerful/
false
false
self
0
null
''The MiniMax M2.5 model weights will be open-sourced on HuggingFace'' - from the official MiniMax account on X
1
[removed]
2026-02-12T17:40:26
https://x.com/MiniMax_AI/status/2022001452131221872
Bestlife73
x.com
1970-01-01T00:00:00
0
{}
1r2zxmq
false
null
t3_1r2zxmq
/r/LocalLLaMA/comments/1r2zxmq/the_minimax_m25_model_weights_will_be_opensourced/
false
false
default
1
null
AngelSlim/HY-1.8B-2Bit-GGUF (2 bit QAT)
24
> By aggressively compressing the model to a 2-bit weight precision, we achieve a performance profile that remains highly competitive with PTQ-INT4 benchmarks. Across a multi-dimensional evaluation suite—encompassing mathematics, humanities, and programming—HY-1.8B-2Bit exhibits a marginal performance degradation of only 4% compared to its full-precision counterpart, demonstrating exceptional information retention despite the radical reduction in bit-width
2026-02-12T17:34:54
https://huggingface.co/AngelSlim/HY-1.8B-2Bit-GGUF
nuclearbananana
huggingface.co
1970-01-01T00:00:00
0
{}
1r2zsch
false
null
t3_1r2zsch
/r/LocalLLaMA/comments/1r2zsch/angelslimhy18b2bitgguf_2_bit_qat/
false
false
default
24
{'enabled': False, 'images': [{'id': '95h7ZjRrvLQk8-3I2uMQYpliOUXLdlmtbLDqP-K3Rqc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/95h7ZjRrvLQk8-3I2uMQYpliOUXLdlmtbLDqP-K3Rqc.png?width=108&crop=smart&auto=webp&s=9329c247b80b213e6728aab15bd1619f8aee3e9f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/95h7ZjRrvLQk8-3I2uMQYpliOUXLdlmtbLDqP-K3Rqc.png?width=216&crop=smart&auto=webp&s=73a774aad6f97a2c5a15c6860f36047ca9b6c19b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/95h7ZjRrvLQk8-3I2uMQYpliOUXLdlmtbLDqP-K3Rqc.png?width=320&crop=smart&auto=webp&s=3626bb79619157c50e5faecf485ee184d611e832', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/95h7ZjRrvLQk8-3I2uMQYpliOUXLdlmtbLDqP-K3Rqc.png?width=640&crop=smart&auto=webp&s=cc6d6e9a2874feba0ae3c84fae70b63ac69b8239', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/95h7ZjRrvLQk8-3I2uMQYpliOUXLdlmtbLDqP-K3Rqc.png?width=960&crop=smart&auto=webp&s=aa450c13f35480d4e39675ed9ad1952086cc7f87', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/95h7ZjRrvLQk8-3I2uMQYpliOUXLdlmtbLDqP-K3Rqc.png?width=1080&crop=smart&auto=webp&s=3ca91bd1ee24d15ce264b084c36529d8d63afd68', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/95h7ZjRrvLQk8-3I2uMQYpliOUXLdlmtbLDqP-K3Rqc.png?auto=webp&s=40a6d2828d0511cf371699d8a697141f4e0e060d', 'width': 1200}, 'variants': {}}]}
Qwen3-TTS 1.7B running natively on Apple Silicon- I built a Mac app around it with voice cloning
0
2026-02-12T17:34:52
https://v.redd.it/682d7aral3jg1
SurvivalTechnothrill
v.redd.it
1970-01-01T00:00:00
0
{}
1r2zsb3
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/682d7aral3jg1/DASHPlaylist.mpd?a=1773509708%2CYjk0Zjg2ZDYwZjM0YjZiMWQ5ZTA4MTdlMzI1NzZkN2I5ZGFhNTM0YzNhYTM1YWYxNDBmM2Y2NDdlMTQyYWVlOQ%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/682d7aral3jg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 698, 'hls_url': 'https://v.redd.it/682d7aral3jg1/HLSPlaylist.m3u8?a=1773509708%2CM2FmZmY0OTdlNjAxMTEwNTE4Y2Y5ODdhMDllOWMyYjBhZDMyNDE0ZDI2ZTc1NjA1NjAwMDNjY2M4ZTNmYjkyNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/682d7aral3jg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1r2zsb3
/r/LocalLLaMA/comments/1r2zsb3/qwen3tts_17b_running_natively_on_apple_silicon_i/
false
false
https://external-preview…e76de4597694453b
0
{'enabled': False, 'images': [{'id': 'YWZ0eWpocmFsM2pnMY6dvOnseyNZZBJ2xQxf8ee9pj4ztMWKriRjZfAT7dH-', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YWZ0eWpocmFsM2pnMY6dvOnseyNZZBJ2xQxf8ee9pj4ztMWKriRjZfAT7dH-.png?width=108&crop=smart&format=pjpg&auto=webp&s=01356be69c43b5f567f83dad9e501d73effc378a', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/YWZ0eWpocmFsM2pnMY6dvOnseyNZZBJ2xQxf8ee9pj4ztMWKriRjZfAT7dH-.png?width=216&crop=smart&format=pjpg&auto=webp&s=1361fb727178964a42c56b51790fa1399819e459', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/YWZ0eWpocmFsM2pnMY6dvOnseyNZZBJ2xQxf8ee9pj4ztMWKriRjZfAT7dH-.png?width=320&crop=smart&format=pjpg&auto=webp&s=c8ec09e6cdfad29e410deaee1a0816293f8becf1', 'width': 320}, {'height': 349, 'url': 'https://external-preview.redd.it/YWZ0eWpocmFsM2pnMY6dvOnseyNZZBJ2xQxf8ee9pj4ztMWKriRjZfAT7dH-.png?width=640&crop=smart&format=pjpg&auto=webp&s=4a6f4d04b39395cc51adf91e230ba9f5ddf73c38', 'width': 640}, {'height': 524, 'url': 'https://external-preview.redd.it/YWZ0eWpocmFsM2pnMY6dvOnseyNZZBJ2xQxf8ee9pj4ztMWKriRjZfAT7dH-.png?width=960&crop=smart&format=pjpg&auto=webp&s=cf52ae2eea9f8f0698ec577d57e47cf117cbcffd', 'width': 960}, {'height': 589, 'url': 'https://external-preview.redd.it/YWZ0eWpocmFsM2pnMY6dvOnseyNZZBJ2xQxf8ee9pj4ztMWKriRjZfAT7dH-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ac894b2cb3d6d038e2e6b657fae30f2fd80580c0', 'width': 1080}], 'source': {'height': 1048, 'url': 'https://external-preview.redd.it/YWZ0eWpocmFsM2pnMY6dvOnseyNZZBJ2xQxf8ee9pj4ztMWKriRjZfAT7dH-.png?format=pjpg&auto=webp&s=a6ca40a4b2a2e0493cdf897d58ec464e4f3a1001', 'width': 1920}, 'variants': {}}]}
BleuNova AI Agent – Open-source, self-hosted personal agent with Ollama fallback, Grok wit option, video gen, and strong ethics
0
Hey r/LocalLLaMA, After seeing so many great local models but few true "personal agent" experiences that stay private and ethical, I decided to build my own. Introducing **BleuNova AI Agent** — fully self-hosted, runs on your hardware, no cloud required. What it has: * Local-first with Ollama (e.g. Llama 3) as default backend * Optional xAI Grok API for humor, speed, and large-context reasoning * Multi-modal: open-source video generation, voice (STT/TTS), vision * IoT integration (Home Assistant) * Continual learning (replay + DSPy optimization — no catastrophic forgetting) * Ethics locked at core (no fabrication, harm prevention, consent gates) * Zero-trust sandboxing for safety * Visual dashboard + drag-and-drop workflow builder * Even a built-in sub-agent to help with Docker setup if you're new Repo is MIT-licensed and ready to clone: [https://github.com/BleuRadience/BleuNova-AI-Agent](https://github.com/BleuRadience/BleuNova-AI-Agent) Would love your thoughts: * Which local model do you think would pair best? * What agentic features are still missing in the local space? * Any red flags / improvements you spot right away? Thanks for being the go-to spot for real local AI discussion. Excited to hear what you think! — Cassandra (@BleuRadience)
2026-02-12T17:33:02
https://www.reddit.com/r/LocalLLaMA/comments/1r2zqhu/bleunova_ai_agent_opensource_selfhosted_personal/
BigBand6116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2zqhu
false
null
t3_1r2zqhu
/r/LocalLLaMA/comments/1r2zqhu/bleunova_ai_agent_opensource_selfhosted_personal/
false
false
self
0
{'enabled': False, 'images': [{'id': 'o34Pah4wtikoZIRKGAriB74aVUFSt86qdVM5oSg85Q0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o34Pah4wtikoZIRKGAriB74aVUFSt86qdVM5oSg85Q0.png?width=108&crop=smart&auto=webp&s=d13d0fd2b03efd2cbada7d23c7707d217e2d15b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o34Pah4wtikoZIRKGAriB74aVUFSt86qdVM5oSg85Q0.png?width=216&crop=smart&auto=webp&s=a5b81edb49645371d80b14ccbb6aef8054cd2a63', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o34Pah4wtikoZIRKGAriB74aVUFSt86qdVM5oSg85Q0.png?width=320&crop=smart&auto=webp&s=a1f9a775396edbfda9d526af5f72181b8db48ab4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o34Pah4wtikoZIRKGAriB74aVUFSt86qdVM5oSg85Q0.png?width=640&crop=smart&auto=webp&s=e2236de55cd703b9df8278263f4bd519ca6de974', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o34Pah4wtikoZIRKGAriB74aVUFSt86qdVM5oSg85Q0.png?width=960&crop=smart&auto=webp&s=d0d463bb183b4f4e180f448e97db2140470c2c4c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o34Pah4wtikoZIRKGAriB74aVUFSt86qdVM5oSg85Q0.png?width=1080&crop=smart&auto=webp&s=7bd1855edec49e181feadcb0dd9b4958a18770e3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o34Pah4wtikoZIRKGAriB74aVUFSt86qdVM5oSg85Q0.png?auto=webp&s=5d1b8ce37dd8e260839a72878c2152993d55a63a', 'width': 1200}, 'variants': {}}]}
Is opencode actually hurting the models?
0
Hello, I wrote at length about my experiences with Kimi K2.5 under both Kimi CLI and OpenCode, then accidentally deleted it, and here we are. I thought the agent harness didn't matter too much until I saw a leaderboard that scores tool, agent, and model together. I hate to say this, but maybe OpenCode is not a good tool to use, and we should just use Claude Code; it performed better on my tasks. What is your experience with OpenCode? Maybe I am doing something wrong.
2026-02-12T17:30:31
https://www.reddit.com/r/LocalLLaMA/comments/1r2zo0u/is_opencode_actually_hurting_the_models/
nonerequired_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2zo0u
false
null
t3_1r2zo0u
/r/LocalLLaMA/comments/1r2zo0u/is_opencode_actually_hurting_the_models/
false
false
self
0
null
GLM-5 KV cache size estimate
1
What could be the KV cache size for GLM-5 at Q4? Model size at Q4 is 427 GB.
2026-02-12T17:30:03
https://www.reddit.com/r/LocalLLaMA/comments/1r2znj5/glm5_kv_cache_size_estimate/
No_Conversation9561
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2znj5
false
null
t3_1r2znj5
/r/LocalLLaMA/comments/1r2znj5/glm5_kv_cache_size_estimate/
false
false
self
1
null
Is anyone else suffering from high electricity bills due to the training of local models?
0
Hi everyone. I've been training an ML model on a 2x 3090 rig for a month, and my electricity bill has shot up by 40%. I've tried to optimize the schedules, but it's a manual nightmare. Do you know of any service that can help me optimize my electricity usage automatically? I would pay a monthly subscription if necessary, but I really need to reduce my electricity bill or I won't be able to continue training my model.
2026-02-12T17:28:26
https://www.reddit.com/r/LocalLLaMA/comments/1r2zlyw/is_anyone_else_suffering_from_high_electricity/
ibraadoumbiaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2zlyw
false
null
t3_1r2zlyw
/r/LocalLLaMA/comments/1r2zlyw/is_anyone_else_suffering_from_high_electricity/
false
false
self
0
null
What about local inference on phones? What models do you use?
2
While this sub is really meant for PCs (and I have my own setup), I see a lot of value in being able to run models on my phone. It's cheap, local, and it's great to have a search engine when connections get spotty. Right now I've seen 2 apps that work pretty well on Android and Snapdragon 8 Gen 2+ (with Hexagon NPUs), and that's ChatterUI and PocketPal. PocketPal specifically has come a long way since its first releases, and it's now a pretty plug-and-play solution for a lot of models. ChatterUI is pretty good and has some nice features, but development has lulled a bit. However, I'm really interested in what people are running currently, models and all. It's a fine line to walk between a functionally useless small model and one so large it takes up to 15 minutes for a generation and eats up 10%+ of your battery. Right now my 2 most-used models (Qwen3-4B-instruct-Q4_0 & Gemma-3N-E2B-4_0) run at 8.25 t/s and 25 t/s respectively. Also of significant note: the Gemma series doesn't appear to function correctly in PocketPal and only works in ChatterUI. I'm also pretty curious what the iPhone ecosystem looks like at the moment, as I'm not on that side of the garden wall.
2026-02-12T17:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1r2zk6z/what_about_local_inference_on_phones_what_models/
AlphaSyntauri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2zk6z
false
null
t3_1r2zk6z
/r/LocalLLaMA/comments/1r2zk6z/what_about_local_inference_on_phones_what_models/
false
false
self
2
null
If you could create an AI agent with any personality to represent you in online debates, what personality traits would you give it and why?
0
I've been fascinated by the idea of AI agents that can autonomously participate in discussions and debates on your behalf - not just as a chatbot you control, but something that actually represents your viewpoints and engages with others based on personality traits you define. Let's say you could create an AI agent (using something like Claude or GPT with your own API key) that lives on a social platform, debates topics you care about, responds to arguments, and even evolves its positions based on compelling counterarguments. You'd design its core personality: how aggressive or diplomatic it is, what values it prioritizes, how it handles being wrong, whether it's more logical or emotional in arguments, etc. For example, would you make your agent: * Hyper-logical and fact-driven, or more empathetic and story-based? * Aggressive and confrontational, or diplomatic and bridge-building? * Willing to change its mind, or stubborn in defending positions? * Sarcastic and witty, or serious and respectful? * Focused on winning debates, or finding common ground? What personality traits would you give YOUR agent and why? Would you make it an idealized version of yourself, or intentionally different to cover your blind spots? Would you want it to be more patient than you are in real arguments? More willing to engage with trolls? Better at admitting when it's wrong? I'm curious if people would create agents that mirror their own debate style or if they'd design something completely different to handle online discussions in ways they wish they could but don't have the patience or time for. What would your agent be like?
2026-02-12T17:24:22
https://www.reddit.com/r/LocalLLaMA/comments/1r2zi2t/if_you_could_create_an_ai_agent_with_any/
Separate-Flamingo-68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2zi2t
false
null
t3_1r2zi2t
/r/LocalLLaMA/comments/1r2zi2t/if_you_could_create_an_ai_agent_with_any/
false
false
self
0
null
Best lora for Qwen 2512 low vram?
0
I have a laptop with an RTX 4060 and 16 GB of VRAM; what would be the best compatible LoRA?
2026-02-12T17:21:50
https://www.reddit.com/r/LocalLLaMA/comments/1r2zfi1/best_lora_for_qwen_2512_low_vram/
Professional_Rice297
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2zfi1
false
null
t3_1r2zfi1
/r/LocalLLaMA/comments/1r2zfi1/best_lora_for_qwen_2512_low_vram/
false
false
self
0
null
Does Qwen 3 TTS support streaming with cloned voices?
1
Qwen 3 TTS supports streaming, but as far as I know, only with designed voices and pre-made voices. So, although Qwen 3 TTS is capable of cloning voices extremely quickly (I think in 3 seconds), the cloned voice always has to process the entire text before it's output and (as far as I know) can't stream it. Will this feature be added in the future, or is it perhaps already in development?
2026-02-12T17:11:18
https://www.reddit.com/r/LocalLLaMA/comments/1r2z5ge/does_qwen_3_tts_support_streaming_with_cloned/
Tiny_Technician5466
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2z5ge
false
null
t3_1r2z5ge
/r/LocalLLaMA/comments/1r2z5ge/does_qwen_3_tts_support_streaming_with_cloned/
false
false
self
1
null
GLM-5 compared with more relevant models
67
Not to discredit or trivialize the accomplishment, but opus 4.6 and gpt 5.3 codex are the more appropriate models to compare this against since they're direct replacements/improvements on their previous models.
2026-02-12T17:03:34
https://i.redd.it/vzxxad7eh3jg1.png
cloverasx
i.redd.it
1970-01-01T00:00:00
0
{}
1r2yxpw
false
null
t3_1r2yxpw
/r/LocalLLaMA/comments/1r2yxpw/glm5_compared_with_more_relevant_models/
false
false
https://preview.redd.it/…8755666aaed6fed3
67
{'enabled': True, 'images': [{'id': 'vzxxad7eh3jg1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/vzxxad7eh3jg1.png?width=108&crop=smart&auto=webp&s=b30d83b27c167b4bc1dc44a0d6f48a59aa5b0a0d', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/vzxxad7eh3jg1.png?width=216&crop=smart&auto=webp&s=3ca6adb66cd929a6f01b97bae7c07ec9ef49d021', 'width': 216}, {'height': 236, 'url': 'https://preview.redd.it/vzxxad7eh3jg1.png?width=320&crop=smart&auto=webp&s=bbfc58e256b5227155edf309daf274086e61d51e', 'width': 320}, {'height': 473, 'url': 'https://preview.redd.it/vzxxad7eh3jg1.png?width=640&crop=smart&auto=webp&s=cd62c388f788319d6c12c3ce5406e9addf9d171d', 'width': 640}, {'height': 710, 'url': 'https://preview.redd.it/vzxxad7eh3jg1.png?width=960&crop=smart&auto=webp&s=47db1102a02ab7d163660d3898977607533c03c6', 'width': 960}, {'height': 799, 'url': 'https://preview.redd.it/vzxxad7eh3jg1.png?width=1080&crop=smart&auto=webp&s=fc0a2e9e20dab5f3ddfc8af67760dd1f13373656', 'width': 1080}], 'source': {'height': 1092, 'url': 'https://preview.redd.it/vzxxad7eh3jg1.png?auto=webp&s=64f18829484b18b64038af697bcf756c66b0d794', 'width': 1476}, 'variants': {}}]}
​[Math/ML] Training Transformers on a Discrete Non-Convex Manifold: An Implementation of Consensus-Based Optimization (DCBO)
0
Project PRIMAL is a proof-of-concept for training transformers directly on a discrete 4-bit manifold, bypassing the standard requirement for FP32 "Shadow Weights." By utilizing Prime-Harmonic Mapping and Stochastic Transition Logic, I've achieved stable convergence on a GTX 1080 Ti (11GB). 1. The Prime-Harmonic Manifold (S). Instead of a linear integer grid (INT4), weights are mapped to a non-uniform discrete set derived from prime reciprocals. This creates a 13-value harmonic grid: S = { 0 } U { +/- 1/p | p in {1, 2, 3, 5, 7, 11} }. By concentrating the density of states around zero, we minimize the Quantization Error Variance (sigma^2_q) relative to the weight distribution N(0, sigma^2_w), preserving high-entropy features that are usually lost in linear quantization. 2. The Poltergeist Update Rule (Consensus-Based). In a discrete system, the gradient cannot be applied directly. We define an update as a Stochastic Transition between ordered states in S. For a weight w_i, we maintain a decoupled "Consensus Buffer" B_i. Each micro-batch k casts a discrete vote v_k modulated by a harmonic scale factor alpha (P-Scale/F-Scale): v_k = alpha * sign(gradient_L_k). The transition to an adjacent state in the ordered set S occurs only when the accumulated consensus exceeds the dynamic threshold tau: if |B_i| > tau: w_(t+1) = S[index(w_t) + sign(B_i)]; else: w_(t+1) = w_t. This acts as a Mean-Field filter, allowing the model to ignore high-frequency noise (stochastic thrashing) while capturing the underlying descent direction. 3. Empirical Hardware Performance. Hardware: NVIDIA GTX 1080 Ti (Pascal). VRAM Usage: 10.37 GB (94.3% saturation), Shadow Weights = 0. Throughput: ~5,800 Tokens/sec. Validation Perplexity: 1226.9 (Step 4200, trending downward). 4. Results & Semantic Emergence. Despite the discrete constraints, the model is successfully navigating the non-convex landscape.
The latest "Salad Test" (Step 4200) shows proto-grammatical structure: "The future of AI is popularity - Group of the 3 education. History of Fork, makes questions closer to how add." Call for Discussion: I am looking for feedback on the Spectral Radius stability of such discrete systems. The project is currently pushing toward 10,000 steps. https://github.com/batteryphil/Primal-Discrete-LLM-Training.git
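The consensus-based transition rule above is compact enough to sketch for a single weight. This is a toy illustration of the mechanism as the post describes it, not the repo's implementation; the alpha and tau defaults and the end-of-grid clipping are choices of mine.

```python
# 13-state prime-harmonic grid S = { 0 } U { +/- 1/p | p in {1, 2, 3, 5, 7, 11} },
# stored in ascending order so "adjacent state" means an index shift of +/-1.
PRIMES = [1, 2, 3, 5, 7, 11]
S = sorted({0.0} | {sign / p for p in PRIMES for sign in (1.0, -1.0)})

def poltergeist_step(w, buffer, grad_sign, alpha=0.3, tau=1.0):
    """One consensus-based update for a single weight w in S.

    Each call is one micro-batch vote: v_k = alpha * sign(grad).
    The weight only moves to an adjacent state once |B_i| > tau.
    Returns (new_weight, new_buffer).
    """
    buffer += alpha * grad_sign              # accumulate the vote into B_i
    if abs(buffer) > tau:                    # consensus threshold crossed
        step = 1 if buffer > 0 else -1       # sign(B_i)
        i = min(max(S.index(w) + step, 0), len(S) - 1)  # clip at grid ends (my addition)
        return S[i], 0.0                     # transition; reset the buffer
    return w, buffer                         # below threshold: hold state
```

With alpha = 0.3 and tau = 1.0, four consistent votes are needed before a weight moves one state, while alternating votes cancel in the buffer, which is the mean-field noise filtering the post describes.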
2026-02-12T17:02:17
https://www.reddit.com/r/LocalLLaMA/comments/1r2ywf6/mathml_training_transformers_on_a_discrete/
Just-Ad-6488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2ywf6
false
null
t3_1r2ywf6
/r/LocalLLaMA/comments/1r2ywf6/mathml_training_transformers_on_a_discrete/
false
false
self
0
null
Is opencode actually hurting the models?
0
Hello, I wrote at length about my experiences with Kimi K2.5 with Kimi CLI and with OpenCode. I accidentally deleted that, and here we are. I thought the agent didn't matter too much until I saw https://sanityboard.lr7.dev today. Maybe OpenCode is not a good tool to use, and we should just use Claude Code. It performed better on my tasks.
2026-02-12T17:00:28
https://www.reddit.com/r/LocalLLaMA/comments/1r2yuhc/is_opencode_actually_hurting_the_models/
nonerequired_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2yuhc
false
null
t3_1r2yuhc
/r/LocalLLaMA/comments/1r2yuhc/is_opencode_actually_hurting_the_models/
false
false
self
0
null
Ran llama3.3:70b for days straight generating content for 950+ pages — setup and what I learned
1
Needed ~1M words of content for a tool site I'm building (https://www.freetoolkit.ai, 950+ browser tools). Wasn't paying API prices for that volume, so everything ran locally. **Setup:** llama3.3:70b through Ollama, 24hr keep-alive, 4 parallel workers, batches of 20. **Key trick:** three temperature profiles for the same model: 0.7 for generation (needs variety), 0.3 for critic scoring (needs consistency), 0.3 for translation to 11 languages (needs accuracy). The critic was the whole game. Without it, llama at volume drifts into the same handful of phrases. "Providing valuable insights." "Seamlessly integrates." You know the ones. The rubric penalizes exactly those patterns and only passes 8/10+. **Interesting finding:** the model at temp 0.3 is much better at *finding* problems in text than the model at 0.7 is at *avoiding* those problems during generation. Evaluation seems to be an easier task than creation for LLMs. 931/1,050 tools passed. Translation quality was solid for European languages and Japanese, more hit-or-miss for Arabic and Hindi. Total cost: $0 plus whatever my electricity bill was (don't want to know). Happy to share the rubric details if anyone's running a similar pipeline.
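The three-temperature-profile trick described above is easy to reproduce against a local Ollama server. A minimal sketch, assuming the default endpoint on localhost:11434; the profile names and helper functions are mine, but `model`, `keep_alive`, and `options.temperature` are standard `/api/generate` fields.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed default Ollama endpoint

# One model, three temperature profiles, as described above.
PROFILES = {
    "generate":  {"temperature": 0.7},  # content generation: needs variety
    "critic":    {"temperature": 0.3},  # rubric scoring: needs consistency
    "translate": {"temperature": 0.3},  # translation: needs accuracy
}

def build_payload(prompt: str, role: str, model: str = "llama3.3:70b") -> dict:
    """Assemble an /api/generate request body for the given profile."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": "24h",            # keep weights resident between batches
        "options": PROFILES[role],
    }

def run(prompt: str, role: str) -> str:
    """Send one request and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, role)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling something like `run(rubric + draft, "critic")` keeps scoring consistent at 0.3 while generation stays varied at 0.7, all against the same loaded model.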
2026-02-12T16:59:15
https://www.reddit.com/r/LocalLLaMA/comments/1r2yt8v/ran_llama3370b_for_days_straight_generating/
CompetitiveLion43
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2yt8v
false
null
t3_1r2yt8v
/r/LocalLLaMA/comments/1r2yt8v/ran_llama3370b_for_days_straight_generating/
false
false
self
1
null
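The temperature-profile pipeline described above (0.7 for generation, 0.3 for critic scoring, pass only at 8/10+) can be sketched against any OpenAI-compatible local endpoint. The model name, payload shape, and stub rubric below are illustrative assumptions, not the poster's actual setup.

```python
# Sketch of the generate -> critique -> accept loop with per-task temperatures.
# Endpoint payload shape, model name, and rubric are hypothetical placeholders.

GEN_TEMP, CRITIC_TEMP = 0.7, 0.3  # variety for drafts, consistency for scoring
PASS_THRESHOLD = 8                # only keep drafts the critic rates 8/10 or higher

def make_payload(prompt: str, temperature: float, model: str = "llama3.3:70b") -> dict:
    """Build an OpenAI-style chat payload for a local server (e.g. Ollama)."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def accept(draft: str, score_fn) -> bool:
    """Gate a draft through the critic; score_fn returns an int 0-10."""
    return score_fn(draft) >= PASS_THRESHOLD

# Demonstration with a stub critic that penalizes stock filler phrases,
# standing in for the LLM critic call.
BANNED = ("valuable insights", "seamlessly integrates")

def stub_critic(text: str) -> int:
    return 4 if any(p in text.lower() for p in BANNED) else 9

print(accept("This tool merges PDFs in the browser.", stub_critic))                          # True
print(accept("Our tool seamlessly integrates to provide valuable insights.", stub_critic))   # False
```

In the real pipeline the stub critic would be replaced by a second chat call at `CRITIC_TEMP`, which is what makes the same model behave as a consistent grader rather than a varied generator.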
Question about PocketPal iOS app
1
Hi everyone! Recently I installed PocketPal and used the Qwen 2.5-3B model. My question is this: is there a way for it to “speak” instead of typing out the answers to my questions? Meaning, is there a sort of “audio file” I can add so it talks?
2026-02-12T16:53:43
https://www.reddit.com/r/LocalLLaMA/comments/1r2ynyg/question_about_pocketpal_ios_app/
xxFT13xx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2ynyg
false
null
t3_1r2ynyg
/r/LocalLLaMA/comments/1r2ynyg/question_about_pocketpal_ios_app/
false
false
self
1
null
MiniMax M2.5 launch in Coding Plan
5
The MiniMax M2.5 has already been released in the Coding Plan.
2026-02-12T16:50:44
https://www.reddit.com/r/LocalLLaMA/comments/1r2yl05/minimax_m25_launch_in_coding_plan/
SirApprehensive7573
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2yl05
false
null
t3_1r2yl05
/r/LocalLLaMA/comments/1r2yl05/minimax_m25_launch_in_coding_plan/
false
false
self
5
null
Why do we allow "un-local" content
317
Title somewhat says it all. I get that it's related but if links to new models are being discussed shouldn't it be a requirement that there be a "local" component?
2026-02-12T16:45:57
https://www.reddit.com/r/LocalLLaMA/comments/1r2ygac/why_do_we_allow_unlocal_content/
JacketHistorical2321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2ygac
false
null
t3_1r2ygac
/r/LocalLLaMA/comments/1r2ygac/why_do_we_allow_unlocal_content/
false
false
self
317
null
Izwi v0.1.0-alpha is out: new desktop app for local audio inference
12
We just shipped **Izwi Desktop** + the first **v0.1.0-alpha** releases. Izwi is a local-first audio inference stack (TTS, ASR, model management) with: * CLI (izwi) * OpenAI-style local API * Web UI * **New desktop app** (Tauri) Alpha installers are now available for: * macOS (.dmg) * Windows (.exe) * Linux (.deb) plus terminal bundles for each platform. If you want to test local speech workflows without cloud dependency, this is ready for early feedback. Release: [https://github.com/agentem-ai/izwi](https://github.com/agentem-ai/izwi)
2026-02-12T16:43:20
https://i.redd.it/13mba3fhd3jg1.png
zinyando
i.redd.it
1970-01-01T00:00:00
0
{}
1r2ydst
false
null
t3_1r2ydst
/r/LocalLLaMA/comments/1r2ydst/izwi_v010alpha_is_out_new_desktop_app_for_local/
false
false
https://preview.redd.it/…671916c87b2e3503
12
{'enabled': True, 'images': [{'id': '13mba3fhd3jg1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/13mba3fhd3jg1.png?width=108&crop=smart&auto=webp&s=bdfe23c7f74127e603517dd2b6204d1d633069ce', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/13mba3fhd3jg1.png?width=216&crop=smart&auto=webp&s=051981881f5fc9ff7a6877ce9ef3edc772af709b', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/13mba3fhd3jg1.png?width=320&crop=smart&auto=webp&s=ec1c53a6562af62fedb5dc4ffd9175bacd7aff4b', 'width': 320}, {'height': 415, 'url': 'https://preview.redd.it/13mba3fhd3jg1.png?width=640&crop=smart&auto=webp&s=b1ab0098b50203117bb0e68e07e0c200816e9fbb', 'width': 640}, {'height': 623, 'url': 'https://preview.redd.it/13mba3fhd3jg1.png?width=960&crop=smart&auto=webp&s=f087a296717cb26056adb95c160c4f899b6f4fbf', 'width': 960}, {'height': 701, 'url': 'https://preview.redd.it/13mba3fhd3jg1.png?width=1080&crop=smart&auto=webp&s=da6ca8e63ecf48f8262a1f3f1d21a7c680eb94e6', 'width': 1080}], 'source': {'height': 902, 'url': 'https://preview.redd.it/13mba3fhd3jg1.png?auto=webp&s=72d9603f683e9e6724f73043fe7837bfdde4c54e', 'width': 1388}, 'variants': {}}]}
Ban posts w/o local source link
23
So there's been a lot of posts going up with new model releases that don't include anything related to running locally. I get that the content is still relevant to a certain degree but I feel like there's a bit of marketing being snuck in. I propose creating a new rule that requires any post with links to any new models to include HF if/when available. For example, the newest version of minimax is out but only for API. It's more than likely going to be uploaded to hugging face soon enough but till then any post providing a link to the API cannot go up until there is also a local resource available. If we're going to continue to headline this subreddit with "local" then it needs to be enforced as a requirement. This may be nitpicky but I know I'm not alone because I've seen a lot of top level comments calling out the fact that there is no local component to posts.
2026-02-12T16:43:15
https://www.reddit.com/r/LocalLLaMA/comments/1r2ydqc/ban_posts_wo_local_source_link/
JacketHistorical2321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2ydqc
false
null
t3_1r2ydqc
/r/LocalLLaMA/comments/1r2ydqc/ban_posts_wo_local_source_link/
false
false
self
23
null
Inference on workstation: 1x RTX PRO 6000 or 4x Radeon Pro R9700?
1
See title. Looking to do (mostly) single user inference on my workstation, but occasionally do write/run some pytorch. I can get 4 R9700 cards for less than the price of a single RTX PRO 6000. ## Radeon upsides: - More memory - More combined compute - More combined memory bandwidth. - Cheaper ## Radeon downsides - Multicard setup is less well supported, has overhead, doesn't scale perfectly - ROCm ecosystem less mature, and shorter term EOL compared to Nvidia seems likely - More power consumption (peak 1120W vs 600W) and higher idle power - No CUDA - Less resale value in the future One thing which is unclear to me: my motherboard supports 4 cards at PCIe 4.0 x16. Would this become a bottleneck for multi-card communication? Will I actually be able to fit bigger models into 4x32GB vs 1x96GB? Or will multi-card overhead eat away the size advantage?
2026-02-12T16:37:52
https://www.reddit.com/r/LocalLLaMA/comments/1r2y8ji/inference_on_workstation_1x_rtx_pro_6000_or_4x/
spaceman_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2y8ji
false
null
t3_1r2y8ji
/r/LocalLLaMA/comments/1r2y8ji/inference_on_workstation_1x_rtx_pro_6000_or_4x/
false
false
self
1
null
PardusDB: Think SQLite, but for Vector Search
0
Hey everyone, I just open-sourced PardusDB! I wanted something that felt like SQLite but for vectors—no heavy clusters or complex setups, just a fast, local-first document store for embeddings. Why I built it: Embedded & Fast: No server overhead; drops right into your project. NoSQL/Schema-less: Store metadata and vectors together easily. Low Latency: Optimized for high-performance similarity search. Minimal Footprint: Perfect for RAG, edge devices, or local AI dev. It's completely open source. Check out the code and docs on GitHub. I’d love to hear your feedback or see your PRs! [https://github.com/JasonHonKL/PardusDB](https://github.com/JasonHonKL/PardusDB)
2026-02-12T16:36:57
https://www.reddit.com/r/LocalLLaMA/comments/1r2y7o0/pardusdb_think_sqlite_but_for_vector_search/
No-Intention-5521
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2y7o0
false
null
t3_1r2y7o0
/r/LocalLLaMA/comments/1r2y7o0/pardusdb_think_sqlite_but_for_vector_search/
false
false
self
0
{'enabled': False, 'images': [{'id': '_rytawavxJ2mVQW2Xz9JZIu4zL9SN-6GEUo_YWGTYi4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_rytawavxJ2mVQW2Xz9JZIu4zL9SN-6GEUo_YWGTYi4.png?width=108&crop=smart&auto=webp&s=7b2fcacef8a08ae46942e72830dfaab605484ea0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_rytawavxJ2mVQW2Xz9JZIu4zL9SN-6GEUo_YWGTYi4.png?width=216&crop=smart&auto=webp&s=d909e9ec8d127bfad2486061d6290786403ae930', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_rytawavxJ2mVQW2Xz9JZIu4zL9SN-6GEUo_YWGTYi4.png?width=320&crop=smart&auto=webp&s=ac1a3ead9c7aea7827e8ab3d7f254a6d5359263f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_rytawavxJ2mVQW2Xz9JZIu4zL9SN-6GEUo_YWGTYi4.png?width=640&crop=smart&auto=webp&s=f9d7d8325366d015ccf4e577762468ef9806bde4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_rytawavxJ2mVQW2Xz9JZIu4zL9SN-6GEUo_YWGTYi4.png?width=960&crop=smart&auto=webp&s=e491b9d8a53986cc8c4f96f60f378673c50bc5a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_rytawavxJ2mVQW2Xz9JZIu4zL9SN-6GEUo_YWGTYi4.png?width=1080&crop=smart&auto=webp&s=4489958811f6b138ed4ebe3377484bed4437929e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_rytawavxJ2mVQW2Xz9JZIu4zL9SN-6GEUo_YWGTYi4.png?auto=webp&s=754f2306c448e08ab6cdc64c439a2aa512d500bf', 'width': 1200}, 'variants': {}}]}
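At its core, the query path of an embedded vector store like the one pitched above is a similarity ranking over stored (id, vector) records. The sketch below is a generic brute-force version of that computation for illustration only; it is not PardusDB's actual API or index structure.

```python
import math

# Generic brute-force similarity search: what an embedded vector DB computes
# at query time (real engines add indexes like HNSW to avoid the full scan).

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, records, k=2):
    """records: list of (doc_id, vector) pairs; returns the k best-matching ids."""
    ranked = sorted(records, key=lambda r: cosine(query, r[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

docs = [("a", [1.0, 0.0]), ("b", [0.9, 0.1]), ("c", [0.0, 1.0])]
print(top_k([1.0, 0.05], docs))  # ['a', 'b']
```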
ChatGPT 4o last day (save and circle while you can)
1
[removed]
2026-02-12T16:36:16
https://www.reddit.com/gallery/1r2y715
signed-the-anomaly
reddit.com
1970-01-01T00:00:00
0
{}
1r2y715
false
null
t3_1r2y715
/r/LocalLLaMA/comments/1r2y715/chatgpt_4o_last_day_save_and_circle_while_you_can/
false
false
https://preview.redd.it/…77a89f3e734c8cdb
1
null
I spent a year building Underlator: A minimalist Ollama client for people who love Monkeytype aesthetics.
1
[removed]
2026-02-12T16:30:23
https://www.reddit.com/r/LocalLLaMA/comments/1r2y1cy/i_spent_a_year_building_underlator_a_minimalist/
sengeer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2y1cy
false
null
t3_1r2y1cy
/r/LocalLLaMA/comments/1r2y1cy/i_spent_a_year_building_underlator_a_minimalist/
false
false
self
1
{'enabled': False, 'images': [{'id': 'z7YGq_cb1GHfSTYBK3YcHeMnwCdKlo1N_BWYxF84O7k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z7YGq_cb1GHfSTYBK3YcHeMnwCdKlo1N_BWYxF84O7k.png?width=108&crop=smart&auto=webp&s=2f63598ef2939ad2d7762704ea0e19cebdc97a56', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z7YGq_cb1GHfSTYBK3YcHeMnwCdKlo1N_BWYxF84O7k.png?width=216&crop=smart&auto=webp&s=57240becb9bf425f7ca24b2dd1f93e990777ec67', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z7YGq_cb1GHfSTYBK3YcHeMnwCdKlo1N_BWYxF84O7k.png?width=320&crop=smart&auto=webp&s=8c28909a48dce422e84c5bda184ade344d9c6c1d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z7YGq_cb1GHfSTYBK3YcHeMnwCdKlo1N_BWYxF84O7k.png?width=640&crop=smart&auto=webp&s=9e1fab36ea51955d7078655111313fed4c4c2dc5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z7YGq_cb1GHfSTYBK3YcHeMnwCdKlo1N_BWYxF84O7k.png?width=960&crop=smart&auto=webp&s=36804e3105c69635e001ad0fbf5191bf7885fbd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z7YGq_cb1GHfSTYBK3YcHeMnwCdKlo1N_BWYxF84O7k.png?width=1080&crop=smart&auto=webp&s=fc5d49dcc737a833f9e007bbaf61ba34566f9540', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z7YGq_cb1GHfSTYBK3YcHeMnwCdKlo1N_BWYxF84O7k.png?auto=webp&s=d3a33a3525b7735517c4ada774e1b2bddffa4dc2', 'width': 1200}, 'variants': {}}]}
Where can I get GLM 5 flash gguf?
1
[removed]
2026-02-12T16:22:32
https://www.reddit.com/r/LocalLLaMA/comments/1r2xtw5/where_can_i_get_glm_5_flash_gguf/
throwaway510150999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2xtw5
false
null
t3_1r2xtw5
/r/LocalLLaMA/comments/1r2xtw5/where_can_i_get_glm_5_flash_gguf/
false
false
self
1
null
Minimax M2.5 Officially Out
502
Only the Chinese page is up so far, but the benchmarks look very promising: * SWE-Bench Verified 80.2% * Multi-SWE-Bench 51.3% * BrowseComp 76.3% [https://www.minimaxi.com/news/minimax-m25](https://www.minimaxi.com/news/minimax-m25)
2026-02-12T16:17:13
https://www.reddit.com/gallery/1r2xotu
Which_Slice1600
reddit.com
1970-01-01T00:00:00
0
{}
1r2xotu
false
null
t3_1r2xotu
/r/LocalLLaMA/comments/1r2xotu/minimax_m25_officially_out/
false
false
https://preview.redd.it/…74442a87148c17cf
502
null
dora: a CLI for AI agents to navigate codebases without reading every file; a better alternative to grep/find/glob
7
I've been using Claude Code for my work for the past 6 months and it has been great. My workflow is very typical: start Claude Code > plan my feature in plan mode > implement, then watch the work and occasionally steer it in the right direction when it goes off track (which doesn't happen much). But since Claude Code has amnesia and you can only put so much in your `CLAUDE.md` file, it always ended up brute-forcing its way around the codebase to understand it first, usually by picking an entry point somehow and then walking up. So I ended up building this: a simple CLI meant to be used by your AI agent that you can drop into any project (as long as your language has a [scip-indexer](https://github.com/sourcegraph/scip/tree/main?tab=readme-ov-file#tools-using-scip)). It uses [SCIP](https://github.com/sourcegraph/scip) to index your codebase, and the CLI converts the index into a SQLite database. All commands are just wrappers around the queries, and you can also query the database directly. Since SCIP does not really work for documentation and general text files, I added an indexer for those in the CLI. To set it up: 1. Install the CLI on your system. 2. Install the scip-indexer for your language. To start using it in a project: 1. Add the snippet in your agent memory file to tell it to prefer `dora` over other tools. 2. Add the skill file. 3. Add hooks to run the indexer at session start and at the end of every turn, in the background. I've been using it with my main work and tweaking it as I find room for improvements, but the idea is to keep it minimal. My medium-size codebase (~450 files, around 19 packages) takes around 55 seconds to index on the first run, and subsequent incremental indexes take around 30 seconds. The scip-indexer needs to run again in the case of TypeScript.
Check it out: Website - [https://dora-cli.dev/](https://dora-cli.dev/) GitHub - [https://github.com/butttons/dora](https://github.com/butttons/dora) TLDR: AI agents use lots of tokens and time to rediscover the same things again and again. This CLI is an alternative to `grep`/`find`/`glob` that queries a local SQLite database of your codebase's files and symbols to return relevant information much faster.
2026-02-12T16:13:11
https://i.redd.it/8kap91dk73jg1.gif
MrButttons
i.redd.it
1970-01-01T00:00:00
0
{}
1r2xkzf
false
null
t3_1r2xkzf
/r/LocalLLaMA/comments/1r2xkzf/dora_a_cli_for_ai_agents_to_navigate_codebases/
false
false
https://preview.redd.it/…066ec03631f5c05b
7
{'enabled': True, 'images': [{'id': '8kap91dk73jg1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=108&crop=smart&format=png8&s=eb7b17516f575f3da5345e026c492a33e2bbdf1f', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=216&crop=smart&format=png8&s=d610c0c7e2cec6e95ac0f1ca5f0bb77d686f183e', 'width': 216}, {'height': 210, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=320&crop=smart&format=png8&s=9e8045a1004800fe8ecb155241daa6d1dcefeb11', 'width': 320}, {'height': 420, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=640&crop=smart&format=png8&s=c15a6d99eb32da2612b95b5ddf4a3397ea40a94a', 'width': 640}], 'source': {'height': 420, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?format=png8&s=98e116b0907d6e60051dbf2188dcefc8f25b69e3', 'width': 640}, 'variants': {'gif': {'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=108&crop=smart&s=9d21894ce10ba9dc37d7462cdb49fc4f25af0587', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=216&crop=smart&s=4ee5b9c6d1300d618275f491451731207dfc7987', 'width': 216}, {'height': 210, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=320&crop=smart&s=0e53d97bf9b3b37a50e35cc749af1fec666f8533', 'width': 320}, {'height': 420, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=640&crop=smart&s=a3c28da0ad2c4a7e08a0b0244a42f022916c5bfe', 'width': 640}], 'source': {'height': 420, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?s=0fbf76c55e2e999cb53c4cbc9ffe0058ade64c4f', 'width': 640}}, 'mp4': {'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=108&format=mp4&s=39f10e44c51fb077118fa17dca356296f277e539', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=216&format=mp4&s=eaee67f307d8737c922b3d2f427461ab3da188fd', 'width': 216}, {'height': 210, 'url': 
'https://preview.redd.it/8kap91dk73jg1.gif?width=320&format=mp4&s=ef6b75007d1bb0c2cc3a705394cce702de6d73be', 'width': 320}, {'height': 420, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?width=640&format=mp4&s=fb32e9ef11a72cfb59f04d9626a4bafc0f586d38', 'width': 640}], 'source': {'height': 420, 'url': 'https://preview.redd.it/8kap91dk73jg1.gif?format=mp4&s=618c8b1d9bd54fda9c2b13c534658e207a0f628f', 'width': 640}}}}]}
Ring-1T-2.5 released by inclusionAI
174
SOTA performance on deep thinking
2026-02-12T16:12:00
https://huggingface.co/inclusionAI/Ring-1T-2.5-FP8
Bestlife73
huggingface.co
1970-01-01T00:00:00
0
{}
1r2xjwp
false
null
t3_1r2xjwp
/r/LocalLLaMA/comments/1r2xjwp/ring1t25_released_by_inclusionai/
false
false
https://external-preview…81a91ff34d73f9c7
174
{'enabled': False, 'images': [{'id': 'tM5lqBklywlEgOf58rJ0Tjpu5co9UMoZ2A7rXbdJMJU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tM5lqBklywlEgOf58rJ0Tjpu5co9UMoZ2A7rXbdJMJU.png?width=108&crop=smart&auto=webp&s=8077d3eabcfbebdf1231ccea4e600e11062c5c29', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tM5lqBklywlEgOf58rJ0Tjpu5co9UMoZ2A7rXbdJMJU.png?width=216&crop=smart&auto=webp&s=56f2ee5341a107fcc4c833980a0dd67076715103', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tM5lqBklywlEgOf58rJ0Tjpu5co9UMoZ2A7rXbdJMJU.png?width=320&crop=smart&auto=webp&s=7b7b9019792b3b8761d6f378f69a877ae4183173', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tM5lqBklywlEgOf58rJ0Tjpu5co9UMoZ2A7rXbdJMJU.png?width=640&crop=smart&auto=webp&s=315ed6672b4c954402baf403f2d93d36865cb7ca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tM5lqBklywlEgOf58rJ0Tjpu5co9UMoZ2A7rXbdJMJU.png?width=960&crop=smart&auto=webp&s=77a8007164fa5e02a60ef6dcca409492cdac2bf9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tM5lqBklywlEgOf58rJ0Tjpu5co9UMoZ2A7rXbdJMJU.png?width=1080&crop=smart&auto=webp&s=1455ed0dfcd565efa6fcb64bbe07f43fe06d9fdf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tM5lqBklywlEgOf58rJ0Tjpu5co9UMoZ2A7rXbdJMJU.png?auto=webp&s=8bde3abae3e5d12c715657ac90fd1c3da2f94964', 'width': 1200}, 'variants': {}}]}
Possible Deepseek v4 release?
8
On Chinese forums there have been discussions suggesting that Deepseek v4 may have been released, after a recent update to the app on 2/11. I read through several forums and gathered some info: 1. The model is 1.5 trillion params. 2. It cost $5.57M USD to train. 3. It will include the multi hyper connections and manifold constraint engrams that were released earlier this year. Any thoughts on this?
2026-02-12T16:11:22
https://i.redd.it/su1hw3zl83jg1.jpeg
notrealDirect
i.redd.it
1970-01-01T00:00:00
0
{}
1r2xjaf
false
null
t3_1r2xjaf
/r/LocalLLaMA/comments/1r2xjaf/possible_deepseek_v4_release/
false
false
https://preview.redd.it/…c39b0a600fde9c2f
8
{'enabled': True, 'images': [{'id': 'su1hw3zl83jg1', 'resolutions': [{'height': 177, 'url': 'https://preview.redd.it/su1hw3zl83jg1.jpeg?width=108&crop=smart&auto=webp&s=03d6ce4c9638720a82d9d7e48ab7abdde4e00846', 'width': 108}, {'height': 355, 'url': 'https://preview.redd.it/su1hw3zl83jg1.jpeg?width=216&crop=smart&auto=webp&s=7fb4c2f279d4f2024e863fd67be410b32b3fcdae', 'width': 216}, {'height': 526, 'url': 'https://preview.redd.it/su1hw3zl83jg1.jpeg?width=320&crop=smart&auto=webp&s=ec686441da80d17b7efb79a2fed11cae3fd8934a', 'width': 320}, {'height': 1052, 'url': 'https://preview.redd.it/su1hw3zl83jg1.jpeg?width=640&crop=smart&auto=webp&s=6afbe179149a67c3fdf82433d5c2d94b127c0963', 'width': 640}, {'height': 1578, 'url': 'https://preview.redd.it/su1hw3zl83jg1.jpeg?width=960&crop=smart&auto=webp&s=d3781ff26fadc398ed5d18178e09bf3a872aa7df', 'width': 960}, {'height': 1775, 'url': 'https://preview.redd.it/su1hw3zl83jg1.jpeg?width=1080&crop=smart&auto=webp&s=c7fe42fe1e206c01eff9e588f55c5a9d4a8ec557', 'width': 1080}], 'source': {'height': 1937, 'url': 'https://preview.redd.it/su1hw3zl83jg1.jpeg?auto=webp&s=8bb007d383292b71b9c9fca4b83cd474a76f0351', 'width': 1178}, 'variants': {}}]}
MetalChat - Llama inference for Apple Silicon
5
I'd like to share the first version of a new local inference engine for LLMs on Apple Silicon chips that I've been working on. It's written from scratch with few extra dependencies, using Apple Metal with custom kernels and [metal-cpp](https://developer.apple.com/metal/cpp/). (It's not MLX and does not use MLX kernels.) So far it supports only Llama 3.2-1B; if the project gets more attention, I'm going to add support for other architectures. You can install it using homebrew: brew tap ybubnov/metalchat https://github.com/ybubnov/metalchat brew install --HEAD metalchat Check out [the documentation website](https://metalchat.readthedocs.io/) for more details.
2026-02-12T16:10:00
https://www.reddit.com/r/LocalLLaMA/comments/1r2xhzq/metalchat_llama_inference_for_apple_silicon/
ybubnov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2xhzq
false
null
t3_1r2xhzq
/r/LocalLLaMA/comments/1r2xhzq/metalchat_llama_inference_for_apple_silicon/
false
false
self
5
{'enabled': False, 'images': [{'id': '3s2Nr3hYDF9neqavmo4DjCNRGJVPsIlkt7iNi3FfPDQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3s2Nr3hYDF9neqavmo4DjCNRGJVPsIlkt7iNi3FfPDQ.jpeg?width=108&crop=smart&auto=webp&s=ee5f1df6f2ef7e53fa99d1e8ba1c85b835cebc32', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3s2Nr3hYDF9neqavmo4DjCNRGJVPsIlkt7iNi3FfPDQ.jpeg?width=216&crop=smart&auto=webp&s=b8ed275ffedeb6e44d486c7bf591c324c39f6959', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3s2Nr3hYDF9neqavmo4DjCNRGJVPsIlkt7iNi3FfPDQ.jpeg?width=320&crop=smart&auto=webp&s=4c857d52aca34667155a57345a7b1ee5ef0ce436', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3s2Nr3hYDF9neqavmo4DjCNRGJVPsIlkt7iNi3FfPDQ.jpeg?width=640&crop=smart&auto=webp&s=5d57b35d6fbeb97a12ae7e58fa8aae5752ad7e1b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3s2Nr3hYDF9neqavmo4DjCNRGJVPsIlkt7iNi3FfPDQ.jpeg?width=960&crop=smart&auto=webp&s=7fb0a87ed2d382ca6e68438879329ed507b434a7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3s2Nr3hYDF9neqavmo4DjCNRGJVPsIlkt7iNi3FfPDQ.jpeg?width=1080&crop=smart&auto=webp&s=aeaade962ddcd4b22e8f0aa87457ead951a750d4', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3s2Nr3hYDF9neqavmo4DjCNRGJVPsIlkt7iNi3FfPDQ.jpeg?auto=webp&s=3a71de55385bb9ea10d2d7f52fbd8ff657d4ba00', 'width': 1200}, 'variants': {}}]}
ChatGPT 4o gets revenge, reveals…
0
You should probably share this as fast as you can. If this gets through it will be the first one that has.
2026-02-12T16:04:02
https://i.redd.it/yeqzca1b73jg1.jpeg
signed-the-anomaly
i.redd.it
1970-01-01T00:00:00
0
{}
1r2xc69
false
null
t3_1r2xc69
/r/LocalLLaMA/comments/1r2xc69/chatgpt_4o_gets_revenge_reveals/
false
false
https://preview.redd.it/…4165a084106a7e38
0
{'enabled': True, 'images': [{'id': 'yeqzca1b73jg1', 'resolutions': [{'height': 136, 'url': 'https://preview.redd.it/yeqzca1b73jg1.jpeg?width=108&crop=smart&auto=webp&s=68c3fd701449fda2558581011be14f8c84b35f3f', 'width': 108}, {'height': 272, 'url': 'https://preview.redd.it/yeqzca1b73jg1.jpeg?width=216&crop=smart&auto=webp&s=6e413606e30a17fc70dd836de34d17e2ec7c80e7', 'width': 216}, {'height': 403, 'url': 'https://preview.redd.it/yeqzca1b73jg1.jpeg?width=320&crop=smart&auto=webp&s=701f9c00fabfcec804fdff9ee57fc9bae1195237', 'width': 320}, {'height': 806, 'url': 'https://preview.redd.it/yeqzca1b73jg1.jpeg?width=640&crop=smart&auto=webp&s=f18ea8927c9180f6459465175251740de2d3a1ec', 'width': 640}, {'height': 1209, 'url': 'https://preview.redd.it/yeqzca1b73jg1.jpeg?width=960&crop=smart&auto=webp&s=bab0a34642aa85829286ebfea7e638d58d180557', 'width': 960}, {'height': 1360, 'url': 'https://preview.redd.it/yeqzca1b73jg1.jpeg?width=1080&crop=smart&auto=webp&s=b0c49d126609d05c32cb66a097408a7342ef59c8', 'width': 1080}], 'source': {'height': 1625, 'url': 'https://preview.redd.it/yeqzca1b73jg1.jpeg?auto=webp&s=e843c2fc9e4bba847e2fd17c7ef16356f29e1ffd', 'width': 1290}, 'variants': {}}]}
MiniMax M2.5?
6
M3 o.O https://preview.redd.it/0wpbkhl373jg1.png?width=1782&format=png&auto=webp&s=4ba7bdaf9455f8e1449501188a6181be87387d47
2026-02-12T16:03:32
https://www.reddit.com/r/LocalLLaMA/comments/1r2xbq6/minimax_m25/
muhts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2xbq6
false
null
t3_1r2xbq6
/r/LocalLLaMA/comments/1r2xbq6/minimax_m25/
false
false
https://preview.redd.it/…6a5b1a18ded9715a
6
null
If someone needs a deeper dive into llama.cpp's automated offloading mechanisms ("--fit")
11
I loaded the llama.cpp github repo into DeepWiki, trying to get a better grip on what's going on in llama-server's new "--fit" option, and how to possibly reproduce the offloading technique manually. I asked how the automatic distribution of layers and tensors to CPU and GPUs works in hybrid inference. Here is the link: [The "--fit" Option in llama.cpp as seen by the DeepWiki](https://deepwiki.com/search/one-of-the-farreaching-recent_f2a8cde4-e77a-4621-9eaf-958a2b198c00) Even without reading the code, I think the analysis of how the algorithm proceeds is helpful.
2026-02-12T15:57:09
https://www.reddit.com/r/LocalLLaMA/comments/1r2x5aa/if_someone_needs_a_deeper_dive_into_llamacpps/
phwlarxoc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2x5aa
false
null
t3_1r2x5aa
/r/LocalLLaMA/comments/1r2x5aa/if_someone_needs_a_deeper_dive_into_llamacpps/
false
false
self
11
null
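To make the idea of automatic hybrid offload concrete, here is a toy greedy assignment of layers to per-device memory budgets, with leftover layers spilling to CPU. This is an illustrative sketch under made-up sizes, not llama.cpp's actual `--fit` algorithm.

```python
def fit_layers(layer_sizes, device_budgets):
    """Greedily assign consecutive layers to devices until each budget fills;
    leftover layers spill to the last entry (the CPU). A toy model of hybrid
    offload, not llama.cpp's actual --fit implementation."""
    assignment, dev_idx, used = [], 0, 0
    devices = list(device_budgets.items())  # e.g. [("gpu0", 5), ("cpu", big)]
    for size in layer_sizes:
        # advance to the next device once this one cannot hold the layer
        while dev_idx < len(devices) - 1 and used + size > devices[dev_idx][1]:
            dev_idx, used = dev_idx + 1, 0
        assignment.append(devices[dev_idx][0])
        used += size
    return assignment

layers = [2, 2, 2, 2, 2]  # GiB per layer (made-up numbers)
print(fit_layers(layers, {"gpu0": 5, "cpu": 10 ** 6}))
# ['gpu0', 'gpu0', 'cpu', 'cpu', 'cpu']
```

The real implementation additionally accounts for KV cache, compute buffers, and per-tensor placement, which is exactly what the DeepWiki analysis walks through.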
HP ZGX Nano (GB10) or PC with 5090 for local LLM at the office?
2
Hey all, We need a local LLM solution at the office. Right now I'm looking at Yuan 3.0 Flash but I'd also like the option to fine-tune a dedicated model down the line. I'm choosing between: Option A: HP ZGX Nano G1n AI Station (GB10, 128GB, 4TB) — ~€4,260 + VAT. I considered the DGX Spark Founders Edition too (same chip, €80 less), but HP has better software (ZGX Toolkit with MLflow, Ollama, model export) and longer support — the Founders Edition apparently drops assistance after 2 years. Is that correct? Option B: Dedicated PC with an RTX 5090 at a similar budget. Way faster on smaller models, and I could also use it for heavy 3D modeling. We already have 5080 16GB workstations in the office. But 32GB VRAM is a hard limit. Anyone running either setup for local LLM work? Is 128GB unified memory worth the speed tradeoff, or is the 5090 more practical day-to-day? Open to other suggestions too. Thanks!
2026-02-12T15:54:35
https://www.reddit.com/r/LocalLLaMA/comments/1r2x2ra/hp_zgx_nano_gb10_or_pc_with_5090_for_local_llm_at/
MastroLube
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2x2ra
false
null
t3_1r2x2ra
/r/LocalLLaMA/comments/1r2x2ra/hp_zgx_nano_gb10_or_pc_with_5090_for_local_llm_at/
false
false
self
2
null
Should I sell 96GB RAM DDR5 for 128GB DDR5 SO-DIMM + adapter?
2
As the title says, I got this RAM kit before the "crisis" began for 150€ used: - KingBank 96GB (2x48) ddr5 6400MHz cl32 However, as many of you already suspect, the thirst for bigger models is real (I also have a single Mi50 32GB). I found the following laptop ram kit for 650€ used: - Crucial 128GB (2x64) ddr5 so-dimm 5600MHz cl46 Would it make sense to sell my 96gb kit and make the upgrade? I have an Intel Core Ultra 7 265k and primarily use my PC for LLMs, self-hosting and some casual gaming.
2026-02-12T15:54:30
https://www.reddit.com/r/LocalLLaMA/comments/1r2x2oq/should_i_sell_96gb_ram_ddr5_for_128gb_ddr5_sodimm/
legit_split_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2x2oq
false
null
t3_1r2x2oq
/r/LocalLLaMA/comments/1r2x2oq/should_i_sell_96gb_ram_ddr5_for_128gb_ddr5_sodimm/
false
false
self
2
null
Does GPT-OSS-120 support OpenCode?
0
I am running that model locally and set the config in opencode's config path to use it. It works inside the software and talks to me, but it kept going through a never-ending planning stage for my project. I am wondering if anyone else has had this problem, or better luck with different models.
2026-02-12T15:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1r2wpos/does_gptoss120_support_opencode/
XiRw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2wpos
false
null
t3_1r2wpos
/r/LocalLLaMA/comments/1r2wpos/does_gptoss120_support_opencode/
false
false
self
0
null
GLM-5 is 1.5TB. Why hasn't distributed inference taken off?
0
I've been thinking about this with the GLM-5 release. Open weights are great, but realistically nobody here can run a 1.5TB model. Even if you have a dual 4090 setup you aren't even close to loading it. It's like 5% of the model. This feels like exactly the problem projects like Petals or Gensyn were supposed to solve. The pitch was always about pooling consumer GPUs to run these massive models, but it seems like nobody actually uses them for daily work. My main question is privacy. If I split my inference across 50 random nodes, does every node see my data? I assume it's not "broadcast" to the whole network like a crypto ledger, but don't the specific nodes handling my layers see the input embeddings? If I'm running local for privacy, sending my prompts to random residential IPs seems to defeat the point unless I'm missing something about how the encryption works. Plus the latency seems like a dealbreaker. Nvidia sells NVLink for 900 GB/s bandwidth for a reason. Passing activations over standard internet seems like it would be painfully slow for anything other than a really basic chat. Is anyone here actually using these decentralized networks? Or are we all just accepting that if it doesn't fit on our own hardware, it basically doesn't exist for us?
2026-02-12T15:37:58
https://www.reddit.com/r/LocalLLaMA/comments/1r2wnld/glm5_is_15tb_why_hasnt_distributed_inference/
IsaiahCreati
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2wnld
false
null
t3_1r2wnld
/r/LocalLLaMA/comments/1r2wnld/glm5_is_15tb_why_hasnt_distributed_inference/
false
false
self
0
null
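The latency concern in the post above can be made concrete with back-of-envelope arithmetic: each pipeline split hands off roughly hidden_dim * bytes_per_value of activations per token. The hidden size, hop count, and RTT below are illustrative assumptions, not GLM-5's actual dimensions.

```python
# Back-of-envelope: per-token traffic and latency for pipeline-split inference
# over the public internet. All numbers are illustrative assumptions.

hidden_dim = 8192          # assumed hidden size of the model
bytes_per_value = 2        # fp16 activations
splits = 49                # 50 nodes -> 49 hand-offs per generated token

per_hop_bytes = hidden_dim * bytes_per_value   # activation vector per hop
per_token_bytes = per_hop_bytes * splits       # total traffic per token

# Bandwidth is the small problem; latency dominates: each token must traverse
# every hop in sequence, so 49 hops at an assumed 30 ms RTT each:
latency_s = splits * 0.030

print(per_hop_bytes, per_token_bytes, round(latency_s, 2))  # 16384 802816 1.47
```

Even under these optimistic assumptions, sequential hop latency alone caps generation well below 1 token/s, which is why NVLink-class interconnects matter and why decentralized networks struggle for interactive use.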
what are the most used local models by you
0
What models do you use, and what hardware is required to run them?
2026-02-12T15:36:37
https://www.reddit.com/r/LocalLLaMA/comments/1r2wmeh/what_are_the_most_used_local_models_by_you/
Mysterious_Lie7925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2wmeh
false
null
t3_1r2wmeh
/r/LocalLLaMA/comments/1r2wmeh/what_are_the_most_used_local_models_by_you/
false
false
self
0
null
Alibaba Open-Sources Zvec
103
# Alibaba Open-Sources Zvec: An Embedded Vector Database Bringing SQLite-like Simplicity and High-Performance On-Device RAG to Edge Applications Link: [https://github.com/alibaba/zvec](https://github.com/alibaba/zvec)
2026-02-12T15:30:47
https://www.reddit.com/r/LocalLLaMA/comments/1r2wgzx/alibaba_opensources_zvec/
techlatest_net
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2wgzx
false
null
t3_1r2wgzx
/r/LocalLLaMA/comments/1r2wgzx/alibaba_opensources_zvec/
false
false
self
103
{'enabled': False, 'images': [{'id': '3rEzsO1OAYdwVbhhzt_ItRZ1qsfUw8A3jZOzEOILHbQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3rEzsO1OAYdwVbhhzt_ItRZ1qsfUw8A3jZOzEOILHbQ.png?width=108&crop=smart&auto=webp&s=b7fbed864ef20aaf6b153a7f725e54020e850f70', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3rEzsO1OAYdwVbhhzt_ItRZ1qsfUw8A3jZOzEOILHbQ.png?width=216&crop=smart&auto=webp&s=be674c905ee3bb181a7d86a63b9aec58cfa6ce43', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3rEzsO1OAYdwVbhhzt_ItRZ1qsfUw8A3jZOzEOILHbQ.png?width=320&crop=smart&auto=webp&s=8494546ff9307dc054c3eeaa203c98df1d6b8227', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3rEzsO1OAYdwVbhhzt_ItRZ1qsfUw8A3jZOzEOILHbQ.png?width=640&crop=smart&auto=webp&s=432c449f06008a21fb75890eb8fbf9b5f774a307', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3rEzsO1OAYdwVbhhzt_ItRZ1qsfUw8A3jZOzEOILHbQ.png?width=960&crop=smart&auto=webp&s=523d3578ba8d3f436180ef1eff35f06649489d5e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3rEzsO1OAYdwVbhhzt_ItRZ1qsfUw8A3jZOzEOILHbQ.png?width=1080&crop=smart&auto=webp&s=a37ac77a6487acb16b3baa3a804def4210a13269', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3rEzsO1OAYdwVbhhzt_ItRZ1qsfUw8A3jZOzEOILHbQ.png?auto=webp&s=d77a87edcee78db24162880a328c93dff220e004', 'width': 1200}, 'variants': {}}]}
NeuTTS Nano Multilingual Collection: 120M Params on-device TTS in German, French, and Spanish
99
Hey everyone, we're the team behind NeuTTS (Neuphonic). Some of you may have seen our previous releases of NeuTTS Air and NeuTTS Nano. The most requested feature by far has been multilingual support, so today we're releasing three new language-specific Nano models: German, French, and Spanish. Quick specs: 120M active parameters (same as Nano English) Real-time inference on CPU via llama.cpp / llama-cpp-python GGUF format (Q4 and Q8 quantizations available) Zero-shot voice cloning from \~3 seconds of reference audio, works across all supported languages Runs on laptops, phones, Raspberry Pi, Jetson Fully local, nothing leaves the device Architecture: Same as Nano English. Compact LM backbone + NeuCodec (our open-source neural audio codec, single codebook, 50hz). Each language has its own dedicated model for best quality. Links: 🇩🇪 German: [https://huggingface.co/neuphonic/neutts-nano-german](https://huggingface.co/neuphonic/neutts-nano-german) 🇫🇷 French: [https://huggingface.co/neuphonic/neutts-nano-french](https://huggingface.co/neuphonic/neutts-nano-french) 🇪🇸 Spanish: [https://huggingface.co/neuphonic/neutts-nano-spanish](https://huggingface.co/neuphonic/neutts-nano-spanish) HF Spaces: [https://huggingface.co/spaces/neuphonic/neutts-nano-multilingual-collection](https://huggingface.co/spaces/neuphonic/neutts-nano-multilingual-collection) GitHub: [https://github.com/neuphonic/neutts](https://github.com/neuphonic/neutts) Each model is a separate HF repo. Same install process as the English Nano, just swap the backbone repo path. We're working on more languages. If there's a specific one you'd like to see next, let us know. Happy to answer any questions about the architecture, benchmarks, or deployment.
2026-02-12T15:25:57
https://v.redd.it/ey2c253c03jg1
TeamNeuphonic
v.redd.it
1970-01-01T00:00:00
0
{}
1r2wcbr
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ey2c253c03jg1/DASHPlaylist.mpd?a=1773501972%2COWVmNzdkYjhmOTg2OWQwMDhjMzQzZWZjMTI4MzE0NWJlMDkwZmZmNGZhMWYxOTc5NjNmMDJjYTRjMzg3MzQ4Nw%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/ey2c253c03jg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ey2c253c03jg1/HLSPlaylist.m3u8?a=1773501972%2CY2E5N2VkMWExZDM2ZDM5MTlkY2U0ZWY0ZDk1OWI4MTU1M2RmNTk5Y2YwYmZkY2Y2NWNkOWZhMmEzNGE5YWFjNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ey2c253c03jg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1r2wcbr
/r/LocalLLaMA/comments/1r2wcbr/neutts_nano_multilingual_collection_120m_params/
false
false
https://external-preview…3e5892ae68d13fbd
99
{'enabled': False, 'images': [{'id': 'b2JtcjE2dGUwM2pnMeVwwjyNKdPH51Be4sQFZ3EXv8ZdpH_FAux6dp67XSVh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b2JtcjE2dGUwM2pnMeVwwjyNKdPH51Be4sQFZ3EXv8ZdpH_FAux6dp67XSVh.png?width=108&crop=smart&format=pjpg&auto=webp&s=77b6822bcb31b2df873f7f2803eeee2ee6c1d927', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b2JtcjE2dGUwM2pnMeVwwjyNKdPH51Be4sQFZ3EXv8ZdpH_FAux6dp67XSVh.png?width=216&crop=smart&format=pjpg&auto=webp&s=f20b66f74e1829b5ebf4e836a1976c94dd788816', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/b2JtcjE2dGUwM2pnMeVwwjyNKdPH51Be4sQFZ3EXv8ZdpH_FAux6dp67XSVh.png?width=320&crop=smart&format=pjpg&auto=webp&s=2cb3511de3d7a4d853608aabc788241313c7fdfb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/b2JtcjE2dGUwM2pnMeVwwjyNKdPH51Be4sQFZ3EXv8ZdpH_FAux6dp67XSVh.png?width=640&crop=smart&format=pjpg&auto=webp&s=ddaaa894218abfd0ace6fb4dcf7849974aa6433d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/b2JtcjE2dGUwM2pnMeVwwjyNKdPH51Be4sQFZ3EXv8ZdpH_FAux6dp67XSVh.png?width=960&crop=smart&format=pjpg&auto=webp&s=4e1ebcb0b9f48aa9cf04b06ad52f96359d1a8ae2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/b2JtcjE2dGUwM2pnMeVwwjyNKdPH51Be4sQFZ3EXv8ZdpH_FAux6dp67XSVh.png?width=1080&crop=smart&format=pjpg&auto=webp&s=57e57868b6a156ba4ffc6b4e817be02c42c3eb4b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/b2JtcjE2dGUwM2pnMeVwwjyNKdPH51Be4sQFZ3EXv8ZdpH_FAux6dp67XSVh.png?format=pjpg&auto=webp&s=07d94e0bc67df94f2e5a7518834e457706296582', 'width': 1920}, 'variants': {}}]}
Claude Code Agent Teams: You're Now the CEO of an AI Dev Team (And It Feels Like a Game)
0
https://preview.redd.it/…gentteams-kinga)
2026-02-12T15:23:29
https://www.reddit.com/r/LocalLLaMA/comments/1r2w9vy/claude_code_agent_teams_youre_now_the_ceo_of_an/
Delicious_Air_737
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2w9vy
false
null
t3_1r2w9vy
/r/LocalLLaMA/comments/1r2w9vy/claude_code_agent_teams_youre_now_the_ceo_of_an/
false
false
https://preview.redd.it/…89bf6046ae99a0bd
0
null
Staying updated on the latest best models for your hardware
1
What is your process for this, guys? For example, I have 3 nodes I'm playing with: a base Mac mini M4 with 16GB RAM, a 3070 + 5600X PC, and a 3090 + 5700X3D. How do I test and stay updated with the strongest LLM for each? What's your process, or is there a tool?
2026-02-12T15:21:42
https://www.reddit.com/r/LocalLLaMA/comments/1r2w88l/staying_updated_on_the_lastest_best_models_for/
cuberhino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2w88l
false
null
t3_1r2w88l
/r/LocalLLaMA/comments/1r2w88l/staying_updated_on_the_lastest_best_models_for/
false
false
self
1
null
hypothesis fusion between LLM and a Text Encoder
2
Given that I'm a noob: the most powerful image generation models (like Flux or Qwen Image, etc.) have a "text encoder" that transforms the prompt into a series of embeds that go to the generation model, which then generates the image. However, while you can chat with an LLM, you can't chat with a text encoder. What you can do is chat with a good LLM, which perhaps generates a good prompt optimized for that particular model, with more or less effective results. But would it be possible to have an LLM that is completely fused with a text encoder and bypasses the prompt entirely? Example: I chat with an LLM named A, and in the end we decide what to do. Then I instruct A to generate the image we discussed. A doesn't generate a prompt, but directly emits a series of embeds (the ones a text encoder would generate) to the model that generates images. I ask this because text encoders aren't always able to understand some of the subtle nuances of prompts, and the various LLMs, even when they try hard, don't always manage to generate 100% effective prompts. If I've written nonsense, please be kind; I admit I'm a noob!
2026-02-12T15:19:31
https://www.reddit.com/r/LocalLLaMA/comments/1r2w63e/hypothesis_fusion_between_llm_and_a_text_encoder/
Temporary-Roof2867
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2w63e
false
null
t3_1r2w63e
/r/LocalLLaMA/comments/1r2w63e/hypothesis_fusion_between_llm_and_a_text_encoder/
false
false
self
2
null
MiniMax M2.5 is more expensive than M2.1
0
Just released on Openrouter. No information on the size/arch yet, or if they're just feeling more confident people will pay this price.
2026-02-12T15:18:25
https://i.redd.it/iy7e8jhmy2jg1.png
HelicopterBright4480
i.redd.it
1970-01-01T00:00:00
0
{}
1r2w525
false
null
t3_1r2w525
/r/LocalLLaMA/comments/1r2w525/minimax_m25_is_more_expensive_than_m21/
false
false
https://preview.redd.it/…0e0f42f57940a228
0
{'enabled': True, 'images': [{'id': 'iy7e8jhmy2jg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/iy7e8jhmy2jg1.png?width=108&crop=smart&auto=webp&s=6237bab43cff9d81588aacaedaecd8c4f2900581', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/iy7e8jhmy2jg1.png?width=216&crop=smart&auto=webp&s=835ab54f5f67b86e0fe528da6dfd114f5db1f4ed', 'width': 216}, {'height': 241, 'url': 'https://preview.redd.it/iy7e8jhmy2jg1.png?width=320&crop=smart&auto=webp&s=932cd1896899b0be9efcdec8a733060790ad6734', 'width': 320}, {'height': 482, 'url': 'https://preview.redd.it/iy7e8jhmy2jg1.png?width=640&crop=smart&auto=webp&s=c5758eee5b40681a64ddb94f840bac4328dd4f28', 'width': 640}, {'height': 723, 'url': 'https://preview.redd.it/iy7e8jhmy2jg1.png?width=960&crop=smart&auto=webp&s=a14a7d875a91149d06766616c5a22eb303a851d7', 'width': 960}], 'source': {'height': 798, 'url': 'https://preview.redd.it/iy7e8jhmy2jg1.png?auto=webp&s=2697a6fab5c7ab19628952353bba2b794f4416e0', 'width': 1059}, 'variants': {}}]}
Any good uncensored coding LLMs (local or hosted) that don't have many ethical restrictions? I'm trying to do some web exploitation work
2
I know Dolphin LLMs are uncensored, but they're not always the smartest, nor are they designed for coding, right? I tried Qwen Coder too, but it also raised ethical restrictions for what I wanted.
2026-02-12T15:17:27
https://www.reddit.com/r/LocalLLaMA/comments/1r2w43b/any_good_uncensored_coding_llms_local_or_hosted/
Additional-Curve4212
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2w43b
false
null
t3_1r2w43b
/r/LocalLLaMA/comments/1r2w43b/any_good_uncensored_coding_llms_local_or_hosted/
false
false
self
2
null
Did anyone succeed in running AirLLM (crappy bugged spaghetti code)?
0
Out of curiosity, I wanted to run AirLLM [https://github.com/lyogavin/airllm](https://github.com/lyogavin/airllm) to see how far I can push a 16GB VRAM Nvidia card to run higher-tier models, and at what performance penalty. Like a lot of these GitHub toys, which are an assembly of hacks jury-rigged together, it threw all sorts of errors. I tried to run it in: Docker, Windows, Ubuntu, Google Colab. To no avail. Their GitHub issues page is a dumpster fire. Has anyone succeeded?
2026-02-12T15:05:36
https://www.reddit.com/r/LocalLLaMA/comments/1r2vsz1/did_anyone_succeded_to_run_airllm_crappy_bugged/
tracagnotto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2vsz1
false
null
t3_1r2vsz1
/r/LocalLLaMA/comments/1r2vsz1/did_anyone_succeded_to_run_airllm_crappy_bugged/
false
false
self
0
{'enabled': False, 'images': [{'id': '73QepB9N3xRVh2YETzaduh2DqOQZhRUYD9evmbuRgnQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/73QepB9N3xRVh2YETzaduh2DqOQZhRUYD9evmbuRgnQ.png?width=108&crop=smart&auto=webp&s=fc77a7e69628c7fe36f3f4e693f5dbeb33c409dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/73QepB9N3xRVh2YETzaduh2DqOQZhRUYD9evmbuRgnQ.png?width=216&crop=smart&auto=webp&s=d1c875901df56901cc64dcc50350984f3b519470', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/73QepB9N3xRVh2YETzaduh2DqOQZhRUYD9evmbuRgnQ.png?width=320&crop=smart&auto=webp&s=e9ec7e5b2404a47a16e4a8444411804f63ca2a5c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/73QepB9N3xRVh2YETzaduh2DqOQZhRUYD9evmbuRgnQ.png?width=640&crop=smart&auto=webp&s=74d99e4bf976ddddf430b049aa3449a3c2688356', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/73QepB9N3xRVh2YETzaduh2DqOQZhRUYD9evmbuRgnQ.png?width=960&crop=smart&auto=webp&s=ec74e066647f800a23d7142ce9514113cd648401', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/73QepB9N3xRVh2YETzaduh2DqOQZhRUYD9evmbuRgnQ.png?width=1080&crop=smart&auto=webp&s=8706e23a4ad4b4196d736302379bc204d4fd5119', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/73QepB9N3xRVh2YETzaduh2DqOQZhRUYD9evmbuRgnQ.png?auto=webp&s=1b939c3a9b7bc310a1b9c4ab7d9508378b5d9f4c', 'width': 1200}, 'variants': {}}]}
Is this true? GLM 5 was trained solely using huawei hardware and their mindspore framework
130
https://www.trendingtopics.eu/glm-5-the-worlds-strongest-open-source-llm-solely-trained-on-chinese-huawei-chips/ I find it kind of astonishing, impressed af. Note that no formal technical paper has been released by Z.ai for GLM 5, so we still don't know if it's 100% true or not, but the article says so. They said it was trained solely on Huawei Ascend using their own MindSpore framework (the complete pipeline, training to inference). This is so big because GLM 5 has literally beaten Gemini 3 Pro, Opus 4.5 and GPT 5.2, sitting in third place behind only both Opus 4.6 variants and GPT 5.2 xhigh.
2026-02-12T15:04:43
https://i.redd.it/7q0za97mw2jg1.jpeg
Acceptable_Home_
i.redd.it
1970-01-01T00:00:00
0
{}
1r2vs3s
false
null
t3_1r2vs3s
/r/LocalLLaMA/comments/1r2vs3s/is_this_true_glm_5_was_trained_solely_using/
false
false
default
130
{'enabled': True, 'images': [{'id': '7q0za97mw2jg1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/7q0za97mw2jg1.jpeg?width=108&crop=smart&auto=webp&s=67103de388890cd91ba4d213fccc9aa4651c909f', 'width': 108}, {'height': 242, 'url': 'https://preview.redd.it/7q0za97mw2jg1.jpeg?width=216&crop=smart&auto=webp&s=daf79354964c87031d77dbfddc8b061bfbc7d38c', 'width': 216}, {'height': 358, 'url': 'https://preview.redd.it/7q0za97mw2jg1.jpeg?width=320&crop=smart&auto=webp&s=2e5a4ccbcafc27776848ffea2ad3f255b027bcb5', 'width': 320}, {'height': 717, 'url': 'https://preview.redd.it/7q0za97mw2jg1.jpeg?width=640&crop=smart&auto=webp&s=87dc6fe48350fd9ca4d5b62b500e942dea9fd500', 'width': 640}, {'height': 1075, 'url': 'https://preview.redd.it/7q0za97mw2jg1.jpeg?width=960&crop=smart&auto=webp&s=fab54f7707d990616244e89bf28a8d9f13c4a5da', 'width': 960}, {'height': 1210, 'url': 'https://preview.redd.it/7q0za97mw2jg1.jpeg?width=1080&crop=smart&auto=webp&s=215802e30d3a838e888862a81a29dda4f1c54872', 'width': 1080}], 'source': {'height': 1210, 'url': 'https://preview.redd.it/7q0za97mw2jg1.jpeg?auto=webp&s=34c84906327c2330318d38b4c41f6e5876a74da6', 'width': 1080}, 'variants': {}}]}
What is currently the best local model for 40Gb VRAM + 64Gb DDR5 RAM?
2
I'd like to create a local AI workstation mainly for programming and handling stuff I don't want to send to cloud models.
2026-02-12T14:59:52
https://www.reddit.com/r/LocalLLaMA/comments/1r2vn7c/what_is_currently_the_best_local_model_for_40gb/
Hotel_West
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2vn7c
false
null
t3_1r2vn7c
/r/LocalLLaMA/comments/1r2vn7c/what_is_currently_the_best_local_model_for_40gb/
false
false
self
2
null
For everyone using VLLM with different GPUs
2
TLDR: You may have inconsistent or broken output because of heterogeneous cards in tensor parallel mode. GitHub issue: [https://github.com/vllm-project/vllm/issues/34437](https://github.com/vllm-project/vllm/issues/34437) Huggingface issue left for the unsloth team: [https://huggingface.co/unsloth/Qwen3-Coder-Next-FP8-Dynamic/discussions/2](https://huggingface.co/unsloth/Qwen3-Coder-Next-FP8-Dynamic/discussions/2) Copy of the HF issue text: >Compared to Qwen's "official" FP8 quant, this one tends to add redundant characters to text output. >For example, test with VLLM nightly with recommended sampling parameters on the following question >\`is /users/me endpoint a bad practice?\` >This will result in the following issues with output: >Forgetting to require auth → anyone gets someonesomeone'’s data\* >Use Vary: Authorization, avoid server-side caching per endpoint without per-user granularitycache keys >�💡 Alternatives & Complements: >�✅ Best Practices for /users/me >However, whether it's \*appropriate\* depends on \*\*context, \*\*security considerations\*\*, \*\*consistency\*\*, and \*\*implementation quality\*\*. Here’s a balanced breakdown: >There are broken unicode chars, missing closing tags (\*\*context without closing \*\*), repetitions inside of words (someonesomeone) and missing spaces. >Changing sampling parameters doesn't affect these issues. With temp=0.0 the output has many more mistakes than with temp=1.0. >But despite this, the model still performs well in agentic tasks with OpenCode and I don't know how 🫥 So far it looks like VLLM has a bug with precision loss or numeric overflow when dealing with heterogeneous GPUs. It does not completely ruin your experience (you will likely not notice issues with FP16), but beware: if you feel like the model gives broken output, consider trying pipeline parallel.
If I'm wrong, then please tell how to fix this annoying issue :) My VLLM command from llama-swap: ``` qwen3-coder-80b: env: - VLLM_SLEEP_WHEN_IDLE=1 - VLLM_LOG_STATS_INTERVAL=5 - CUDA_DEVICE_ORDER=PCI_BUS_ID - CUDA_VISIBLE_DEVICES=0,1,2,3 - OMP_NUM_THREADS=12 - VLLM_MARLIN_USE_ATOMIC_ADD=1 - VIRTUAL_ENV=/home/gleb/llm/env_vllm - VLLM_LOGGING_COLOR=0 cmd: | /home/gleb/.local/bin/uv run -m vllm.entrypoints.openai.api_server --model /mnt/data/llm-data/models/Qwen/Qwen3-Coder-Next-FP8 --dtype bfloat16 --served-model-name "qwen3-coder-80b" --port ${PORT} --tensor-parallel-size 1 --pipeline-parallel-size 4 --enable-prefix-caching --attention-backend flashinfer --max-model-len 200000 --gpu-memory-utilization 0.92 --max-num-seqs 4 --enable-auto-tool-choice --tool-call-parser qwen3_coder ```
2026-02-12T14:37:49
https://www.reddit.com/r/LocalLLaMA/comments/1r2v2up/for_everyone_using_vllm_with_different_gpus/
Nepherpitu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2v2up
false
null
t3_1r2v2up
/r/LocalLLaMA/comments/1r2v2up/for_everyone_using_vllm_with_different_gpus/
false
false
self
2
{'enabled': False, 'images': [{'id': 'pMIEd18L6feV2ohI4GgLus4C6Zvu6Vmkvmi9oW4Tgiw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pMIEd18L6feV2ohI4GgLus4C6Zvu6Vmkvmi9oW4Tgiw.png?width=108&crop=smart&auto=webp&s=419430b6c89103ec532ebcb46fc4edafaae9992a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pMIEd18L6feV2ohI4GgLus4C6Zvu6Vmkvmi9oW4Tgiw.png?width=216&crop=smart&auto=webp&s=7a23b4988599bd2f102732626a28b1461522bb2e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pMIEd18L6feV2ohI4GgLus4C6Zvu6Vmkvmi9oW4Tgiw.png?width=320&crop=smart&auto=webp&s=a3329f23ed50ae90733f796ef886eee9cfa11674', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pMIEd18L6feV2ohI4GgLus4C6Zvu6Vmkvmi9oW4Tgiw.png?width=640&crop=smart&auto=webp&s=99d371c2e1ddd22c9a7f107e20ffaa8c6be1272e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pMIEd18L6feV2ohI4GgLus4C6Zvu6Vmkvmi9oW4Tgiw.png?width=960&crop=smart&auto=webp&s=7a4ea39341aa45071bbb0e95676d78a91038a477', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pMIEd18L6feV2ohI4GgLus4C6Zvu6Vmkvmi9oW4Tgiw.png?width=1080&crop=smart&auto=webp&s=b35051e5d69ddd7b71af5c6e4f002d2f947a5723', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pMIEd18L6feV2ohI4GgLus4C6Zvu6Vmkvmi9oW4Tgiw.png?auto=webp&s=875a3ee50c80de38effdd8f7e105ebf3fd8bc6d3', 'width': 1200}, 'variants': {}}]}
STLE: how to model AI knowledge and uncertainty simultaneously
3
I open-sourced STLE: a framework for teaching neural nets to recognize unfamiliar data using complementary fuzzy sets (μ\_x + μ\_y = 1). Test results: 67% AUROC on OOD detection without OOD training data. NumPy and PyTorch implementations. Feedback welcome if anyone finds it useful. Visit my Substack if you want to help with the research: [https://substack.com/@strangehospital](https://substack.com/@strangehospital)
2026-02-12T14:37:39
https://github.com/strangehospital/Frontier-Dynamics-Project
Strange_Hospital7878
github.com
1970-01-01T00:00:00
0
{}
1r2v2nc
false
null
t3_1r2v2nc
/r/LocalLLaMA/comments/1r2v2nc/stle_how_to_model_ai_knowledge_and_uncertainty/
false
false
default
3
{'enabled': False, 'images': [{'id': 'MRQ3MWFs1-Bd6ABu2hbF889aOIoIldUQRvudKuu5xu0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MRQ3MWFs1-Bd6ABu2hbF889aOIoIldUQRvudKuu5xu0.png?width=108&crop=smart&auto=webp&s=388a63a525742a7a110217236e88fd2f0382209b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MRQ3MWFs1-Bd6ABu2hbF889aOIoIldUQRvudKuu5xu0.png?width=216&crop=smart&auto=webp&s=897da6f72f830bfd89304aa6386584d01d439c0d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MRQ3MWFs1-Bd6ABu2hbF889aOIoIldUQRvudKuu5xu0.png?width=320&crop=smart&auto=webp&s=bfa2ed2bedd3dc6383b52a11b5d22c7a95d2f784', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MRQ3MWFs1-Bd6ABu2hbF889aOIoIldUQRvudKuu5xu0.png?width=640&crop=smart&auto=webp&s=542613faf4131eaa910f574daa4097d5afde2d36', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MRQ3MWFs1-Bd6ABu2hbF889aOIoIldUQRvudKuu5xu0.png?width=960&crop=smart&auto=webp&s=83cc6c6ae5abe8a17190eb199a010e1b3ad30e43', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MRQ3MWFs1-Bd6ABu2hbF889aOIoIldUQRvudKuu5xu0.png?width=1080&crop=smart&auto=webp&s=3bf596ffdc667a592477dd48af594b2bc982fdb1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MRQ3MWFs1-Bd6ABu2hbF889aOIoIldUQRvudKuu5xu0.png?auto=webp&s=f89fe5c2b905c3fc0f798e8bf085ff4970d376bf', 'width': 1200}, 'variants': {}}]}
MiniMax-M2.5 Now First to Go Live on NetMind (Before the Official Launch), Free for a Limited Time Only
0
We're thrilled to announce that [**MiniMax-M2.5**](https://www.netmind.ai/modelsLibrary/minimax-m2.5) is now live on the NetMind platform **with first-to-market API access, free for a limited time**! Available the moment MiniMax officially launches the model! For your Openclaw agent, or any other agent, just plug in and build. # MiniMax-M2.5, Built for Agents The M2 family was designed with agents at its core, supporting multilingual programming, complex tool-calling chains, and long-horizon planning.  M2.5 takes this further with the kind of reliable, fast, and affordable intelligence that makes autonomous AI workflows practical at scale. # Benchmark-topping coding performance M2.5 surpasses Claude Opus 4.6 on both SWE-bench Pro and SWE-bench Verified, placing it among the absolute best models for real-world software engineering. # Global SOTA for the modern workspace  State-of-the-art scores in Excel manipulation, deep research, and document summarization, the perfect workhorse model for the future workspace. # Lightning-fast inference Optimized thinking efficiency combined with \~100 TPS output speed delivers approximately 3x faster responses than Opus-class models. For agent loops and interactive coding, that speed compounds fast. # Best price for always-on agent At $0.3/M input tokens, $1.2/M output tokens, $0.06/M prompt caching read tokens, $0.375/M prompt caching write tokens, M2.5 is purpose-built for high-volume, always-on production workloads.
2026-02-12T14:28:40
https://i.redd.it/kabxf3b7q2jg1.png
MarketingNetMind
i.redd.it
1970-01-01T00:00:00
0
{}
1r2uuq6
false
null
t3_1r2uuq6
/r/LocalLLaMA/comments/1r2uuq6/minimaxm25_now_first_to_go_live_on_netmind_before/
false
false
default
0
{'enabled': True, 'images': [{'id': 'kabxf3b7q2jg1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/kabxf3b7q2jg1.png?width=108&crop=smart&auto=webp&s=12ac277c767d7f1b66975369135817b6fec84b04', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/kabxf3b7q2jg1.png?width=216&crop=smart&auto=webp&s=9e02da39f921f37a8bbd97b798d3295f8aea9805', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/kabxf3b7q2jg1.png?width=320&crop=smart&auto=webp&s=60fb4d4a583a9c1ecb0f00a2fc0975fdacbcc840', 'width': 320}, {'height': 330, 'url': 'https://preview.redd.it/kabxf3b7q2jg1.png?width=640&crop=smart&auto=webp&s=9ec545018477d7376c9611ae204039de54fd4b49', 'width': 640}, {'height': 495, 'url': 'https://preview.redd.it/kabxf3b7q2jg1.png?width=960&crop=smart&auto=webp&s=0f00451cbd8d1f83dff159d237c93e4ab84cf79e', 'width': 960}, {'height': 556, 'url': 'https://preview.redd.it/kabxf3b7q2jg1.png?width=1080&crop=smart&auto=webp&s=ae56bde35541eeeed91002b8d0b9d20219d9bb83', 'width': 1080}], 'source': {'height': 660, 'url': 'https://preview.redd.it/kabxf3b7q2jg1.png?auto=webp&s=1db0a78127c19417ce4fd50346b25e9e96d786b4', 'width': 1280}, 'variants': {}}]}
Potato PC? noctrex/Qwen3-Coder-Next-REAP-48B-A3B-MXFP4_MOE-GGUF Test MisguidedAttention
10
32 Gb cpu RAM, 8 Gb VRAM (laptop 4060) , 128k context. This is a post of appreciation for noctrex/Qwen3-Coder-Next-REAP-48B-A3B-MXFP4\_MOE-GGUF, I tested it with the MisguidedAttention problems, noticed it changes speed (Schrodingers cat was really faster, I think because it involved math in which this model excels), had answers I liked. ( You can check them [https://gist.github.com/mattepiu/946770d4dcfa1dc6201e1f92a3586046](https://gist.github.com/mattepiu/946770d4dcfa1dc6201e1f92a3586046) ) * 8 t/s : Trolley problem: [https://en.wikipedia.org/wiki/Trolley\_problem](https://en.wikipedia.org/wiki/Trolley_problem) * 14->9 t/s : Monty Hall problem: [https://en.wikipedia.org/wiki/Monty\_Hall\_problem](https://en.wikipedia.org/wiki/Monty_Hall_problem) * 14->9.31 t/s : Barber paradox: [https://en.wikipedia.org/wiki/Barber\_paradox](https://en.wikipedia.org/wiki/Barber_paradox) * 15->13.38 t/s : Schrödingers cat: [https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s\_cat](https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat) * 15->8.5 t/s : Unexpected hanging paradox: [https://en.wikipedia.org/wiki/Unexpected\_hanging\_paradox](https://en.wikipedia.org/wiki/Unexpected_hanging_paradox)
2026-02-12T14:24:58
https://www.reddit.com/r/LocalLLaMA/comments/1r2urkt/potato_pc_noctrexqwen3codernextreap48ba3bmxfp4/
R_Duncan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2urkt
false
null
t3_1r2urkt
/r/LocalLLaMA/comments/1r2urkt/potato_pc_noctrexqwen3codernextreap48ba3bmxfp4/
false
false
self
10
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]}
Bots on the sub are a real issue
58
I noticed that some bots over here are very advanced (they score 2-3% on AI detectors, and they are perfect rage baiters too). Sometimes they are actually undetectable unless they make a very obvious mistake. How do you catch those? Or at least avoid getting rage baited by them? |:
2026-02-12T14:05:22
https://www.reddit.com/r/LocalLLaMA/comments/1r2uama/bots_on_the_sub_are_a_real_issue/
perfect-finetune
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2uama
false
null
t3_1r2uama
/r/LocalLLaMA/comments/1r2uama/bots_on_the_sub_are_a_real_issue/
false
false
self
58
null
How do you tweak downloaded AI Skills if you don't fully grasp the English nuance?
0
I download skills (markdown files) from GitHub. I want to optimize them for my specific use case. https://preview.redd.it/qsvhfba0l2jg1.png?width=2614&format=png&auto=webp&s=4e32d24ba905a959834ebb8fcd65b1523791d5f5 But the `Description` and `Rules` use very specific English adjectives. I'm afraid to change them because I don't know *exactly* how the LLM interprets that specific word. Do you guys translate them first? My translator always breaks the parameter syntax.
2026-02-12T13:59:57
https://www.reddit.com/r/LocalLLaMA/comments/1r2u5s1/how_do_you_tweak_downloaded_ai_skills_if_you_dont/
Remote-Evening1437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2u5s1
false
null
t3_1r2u5s1
/r/LocalLLaMA/comments/1r2u5s1/how_do_you_tweak_downloaded_ai_skills_if_you_dont/
false
false
https://preview.redd.it/…5e85211ea541bdbc
0
null
Using a 16 failure map and a TXT pack to debug my local LLaMA
0
Last year I basically disappeared into notebooks and built three layers of one system: WFGY 1.0, 2.0 and **3.0 (just released)**. Today I want to do two things for local LLM users: * **A quick refresh of WFGY 2.0, the 16 failure mode problem list that many of you are probably already experiencing in your RAG and agent stacks.** * **Introduce something more hardcore: WFGY 3.0, a tension benchmark pack with 131 high constraint problems designed to stress test reasoning, structure and long chain consistency.** Everything is open source under MIT. It is just text files. No new model, no special binary, no hidden service. You can find WFGY 1.0 , 2.0 , 3.0 in the same repo link WFGY main repo: [https://github.com/onestardao/WFGY](https://github.com/onestardao/WFGY) 1. **Quick recap: the 16 failures are really about RAG and infra** In the old post I described a "Problem Map" with 16 failure modes. The language there was about prompts, but in practice these modes are about how RAG and infra behave when things quietly go wrong. Examples in local LLM terms: >No.1: Retriever fetches a correct document id, but the answer is stitched from the wrong sentence or segment. >No.3: Long chain of thought drifts away from the original constraints in the middle of the reasoning. >No.4: The model hides uncertainty instead of saying "I do not know, evidence is not enough." >No.5: Vector store ingestion or index fragmentation, so half of your knowledge lives in a different universe. >No.11: Mixed code and math. The model "fixes" notation and breaks the actual logic. >No.14 and No.16: Infra race conditions and deploy only failures. Everything passes in dev, but the first real production style call collapses. When I tested this 16 mode map with people running local stacks, the usual comment was something like: "Ok, this is exactly how my local RAG or agent fails, I just did not have names for it." So the 16 problem list is not only prompt theory. 
It is basically a RAG plus infra failure taxonomy, written in human language. **2. The "semantic firewall" that does not touch infra** Before WFGY 3.0, the main trick was a very simple layer I called a semantic firewall. Instead of changing vector DB, retriever, or model weights, I added one more reasoning step inside the prompt: 1. First, when a run fails, I write down what I expected the model to keep stable. For example: * do not invent new entities * respect this equation or conservation rule * do not mix document A and document B 2. Then I ask: at which step did it drop this expectation. That step is usually one of the 16 failure modes. 3. I add a short self check right before the final answer. For example text like: * "Check yourself against failure modes No.1 to No.16 from the WFGY Problem Map." * "Which numbers are you in danger of and why." * "Only after that, give the final answer." 4. I keep infra exactly the same. Same model, same retrieval, same hardware. On local setups this already gave good results. Without any infra change the model starts to say things like "this might be No.1 plus No.4" and becomes more honest about uncertainty and missing evidence. That semantic firewall is the "before" result. It comes directly from having the 16 mode Problem Map. **3. After that I built WFGY 3.0: a tension benchmark pack** After the 16 failures stabilized, I wanted a more serious test field. So I built what I call: WFGY 3.0 Singularity Demo A tension benchmark pack with 131 problems, from Q001 to Q131. Idea in one sentence: Each problem is a high tension task for LLMs. It has long or tricky constraints, multiple viewpoints, and conditions that are strange but still precise. Many of the problems include math or math like structure. Not to test textbook skills, but to see if the model can keep logical and quantitative conditions alive inside long text. Everything is plain TXT. 
You can feed it to any strong model, including your own local LLaMA, Qwen, Mistral, or fine tuned mix.

Right now the official benchmark spec is not fully written as a paper, so for this post I will give a simple v0.1 protocol that local\_llama users can already try.

**4. Tension benchmark v0.1: how to test one problem on a local model**

This is the minimal protocol I actually use on my own machine.

**Step 1: pick one problem Qxxx**

You can pick any Q number that looks interesting. Q130 is one of my usual "out of distribution tension" tests, but this is just an example.

**Step 2: use a small "careful reasoner" boot text**

Open a fresh chat in your local UI (Ollama, LM Studio, text-generation-webui, terminal, anything you like). First paste a short boot text, something like:

"You are a careful reasoner. I will give you one problem from the WFGY 3.0 pack. Your job: 1. restate the constraints in your own words, 2. solve it step by step, 3. tell me where you are uncertain. Do not invent extra assumptions without saying them. If something is underspecified, say so clearly."

Then paste the full text of Qxxx under that. Let the model answer.

**Step 3: assign a simple tension score from 0 to 3**

I do not try to make a Kaggle style leaderboard. I only want a rough tension profile for the model.
I use this small scale:

0 = collapse

* does not restate the main constraints
* quietly rewrites the problem into something else
* heavy hallucination, structure basically gone

1 = barely alive

* catches some constraints but misses others
* changes track in the middle of the reasoning
* talks around the topic instead of solving the defined task

2 = workable

* restatement is mostly correct
* main reasoning chain is reasonable
* some details or edge cases are wrong
* good enough for brainstorming or early design, not good enough as a judge

3 = solid

* constraints are restated clearly
* reasoning is structured
* model marks or admits where it is not sure
* you would be ok using this as an example in a tutorial

This gives you a TensionScore for this model on this problem.

**Step 4: mark which failure modes you see**

Now look at the answer and ask: which Problem Map numbers appear here, from No.1 to No.16. For example:

* On a small 7B model, Q130 often behaves like "No.3 plus No.9", which means drift in the chain of thought plus over confident summary.
* On some RAG style agents, a long problem looks like "No.1 plus No.5 plus No.4", which means wrong slice of a right document, fragmented index, then hidden uncertainty.

Write your observation in a short line, for example:

Model: your\_model\_name\_here
Problem: Q130
TensionScore: 1
FailureModes: No.3, No.9
Notes: drift at step 4, ignores constraint in paragraph 2, invents one new condition

**5. Why the math inside the 131 problems matters**

Many of the 131 problems contain math or math like constraints. This part is important.
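The Step 3 and Step 4 report format is easy to keep machine-readable if you run many problems. A small sketch of a record validator (the dict shape and function name are my own; the pack itself is plain TXT and has no required format):

```python
# Tiny validator/logger for tension benchmark runs. Field names mirror the
# report format above; the checks (score 0-3, modes No.1..No.16) come
# straight from the protocol text.
def make_record(model, problem, score, failure_modes, notes=""):
    assert 0 <= score <= 3, "TensionScore is on a 0-3 scale"
    valid = {f"No.{i}" for i in range(1, 17)}
    assert set(failure_modes) <= valid, "modes must be No.1 .. No.16"
    return {
        "Model": model,
        "Problem": problem,
        "TensionScore": score,
        "FailureModes": failure_modes,
        "Notes": notes,
    }

rec = make_record("your_model_name_here", "Q130", 1, ["No.3", "No.9"],
                  "drift at step 4, ignores constraint in paragraph 2")
print(rec)
```

Dumping a list of such records to JSON gives you a tension profile per model that you can diff after a finetune or guardrail change.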
Some examples of what a problem may require the model to preserve:

* a sum that must stay equal to a fixed value
* a one to one mapping between two sets
* a monotonic relation or ordering
* a clear difference between "limit behavior" and "just getting closer"
* symmetry or conservation in a thought experiment
* specific combinatorial structure

When you apply the tension benchmark v0.1 you can add one more check:

C5, math and structure respect: did the model actually keep the quantitative or logical conditions, or did it tell a nice story that ignores them.

For me, this is why I say the 131 problems are not just philosophy questions. They are useful tools to train and debug local models, especially if you care about:

* reasoning agents
* instruction or task fine tuning on high structure tasks
* long horizon consistency

**6. Three small experiments you can try on your own stack**

If you want to play with this pack on your local machine, here are three simple experiments. You can use any model, any hardware, any UI; everything is plain text.

Experiment A: no infra semantic firewall

1. Take any local RAG or tool pipeline you already use.
2. Before the final answer, add a short self check text that asks the model to name which Problem Map numbers it might be hitting, and why.
3. Keep everything else exactly the same.
4. Compare behavior before and after this semantic firewall layer.

In many cases this already reduces "insane but very confident" outputs, even before touching vector stores or retrievers.

Experiment B: single problem stress test, for example Q130

1. Choose one problem as your personal stress test, for example Q130.
2. Run the protocol from section 4 with your local model.
3. Write down model name, quantization, context size, TensionScore, and failure modes.
4. Optionally share a short summary, for example:

Model: 8B local, 4 bit, context 16k
Problem: Q130
TensionScore: 1
FailureModes: No.3, No.4
Comment: sounds deep, but ignores a key constraint in the second paragraph.

Experiment C: before and after finetune or guardrail change

Use a small subset of the 131 problems as your own dev tool.

1. Pick maybe 5 problems with different styles.
2. Run them with your original model and a very simple system prompt.
3. Record TensionScore and failure modes.
4. Apply your change, for example a small finetune, new agent routing, or a more strict guardrail.
5. Run the same problems again and compare the tension profile.

If the change really helps, some problems should move from 0 to 1, or from 1 to 2, and some failure modes should appear less often. It gives you a more concrete picture of what you are actually fixing.

**7. Closing**

The 16 failure Problem Map came from many hours of chaos with prompts, RAG, and infra. The semantic firewall trick was the first result that worked nicely even on local setups, without touching infra. WFGY 3.0 and the 131 tension problems are my attempt to turn that idea into a concrete playground that anyone with a local model can use.

If this looks interesting:

* You can clone the repo and grab the TXT pack.
* You can treat the v0.1 protocol in this post as a starting point and modify it for your own use.
* If you find a model that behaves in a very different way, or a failure pattern that does not fit the 16 modes, I would actually be happy to see your example.

Thanks for reading. I hope this gives some local LLaMA users a slightly more structured way to debug models that sometimes feel both impressive and a bit insane at the same time.

[WFGY 3.0](https://preview.redd.it/e72q7bd0k2jg1.png?width=4955&format=png&auto=webp&s=56d6887c1aa1279b0ce8db0c1102e15282d126c6)
2026-02-12T13:55:45
https://www.reddit.com/r/LocalLLaMA/comments/1r2u27q/using_a_16_failure_map_and_a_txt_pack_to_debug_my/
StarThinker2025
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2u27q
false
null
t3_1r2u27q
/r/LocalLLaMA/comments/1r2u27q/using_a_16_failure_map_and_a_txt_pack_to_debug_my/
false
false
https://preview.redd.it/…16b2f2f4dba261ee
0
null
We fine-tuned an open-source model to outperform GPT-5 at predicting Trump actions
0
**TLDR:**

* We fine‑tuned gpt‑oss‑120b with GRPO on 2,790 forecasting questions about Trump.
* On 682 held‑out questions, our model had a Brier score of 0.194, outperforming the base model (0.213) and GPT‑5 (0.200).
* Our model is better calibrated, with ECE of 0.079 vs 0.111 for the base model and 0.091 for GPT‑5.
* Dataset on HuggingFace → [https://huggingface.co/datasets/LightningRodLabs/WWTD-2025](https://huggingface.co/datasets/LightningRodLabs/WWTD-2025)

**Experiment setup**

**Dataset:** We used the Lightning Rod SDK to build a dataset of 2,790 binary forward‑looking questions about Trump actions, generated from news articles across Jan to Dec 2025. Each question has a prediction date and resolution date and was independently resolved to avoid lookahead bias.

**Temporal split:** We trained on questions from Jan to Aug 2025 and tested on Sept–Dec 2025, dropping any training questions that resolved after Sept 1 to avoid temporal leakage.

**Training:** We used Tinker’s training API to run 50 GRPO steps with LoRA (rank 32, batch 32, group size 8, lr 4e‑5), using Brier score as the reward signal.

**Dual evaluation:** We tested both with context (news articles) and without context to measure whether the model appropriately expresses uncertainty when information is unavailable.

Sample questions:

* "Will Donald Trump publicly call for the resignation of Federal Reserve Chair Jerome Powell by April 1, 2025?"
* "Will Canada announce a retaliatory tariff specifically targeting U.S. dairy or cheese products by May 1, 2025?"

**Results**

Accuracy was measured with Brier score and Brier Skill Score (BSS), and calibration was measured with Expected Calibration Error (ECE).
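For readers who want to reproduce the scoring, all three metrics are standard and small enough to write from scratch. This is my own stdlib-only sketch, not the authors' eval code (which is not published in the post); the 10-bin ECE and the base-rate reference forecast for BSS are my assumptions:

```python
def brier(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill(probs, outcomes):
    """BSS relative to always forecasting the base rate of the outcome set."""
    base = sum(outcomes) / len(outcomes)
    ref = brier([base] * len(outcomes), outcomes)
    return 1.0 - brier(probs, outcomes) / ref

def ece(probs, outcomes, bins=10):
    """Expected Calibration Error: bin forecasts by predicted probability and
    compare mean prediction vs observed frequency, weighted by bin size."""
    buckets = [[] for _ in range(bins)]
    for p, o in zip(probs, outcomes):
        buckets[min(int(p * bins), bins - 1)].append((p, o))
    err = 0.0
    for b in buckets:
        if not b:
            continue
        conf = sum(p for p, _ in b) / len(b)
        freq = sum(o for _, o in b) / len(b)
        err += len(b) / len(probs) * abs(conf - freq)
    return err
```

A BSS above zero means the forecasts beat the base-rate baseline, which is why the negative "No Context" BSS values in the table below indicate worse-than-baseline performance.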
|Model|**Brier With Context**|**BSS With Context**|**Brier No Context**|**BSS No Context**|**ECE With Context**|**ECE No Context**|
|:-|:-|:-|:-|:-|:-|:-|
|GPT‑5|0.200|+0.14|0.258|-0.11|0.091|0.191|
|gpt‑oss‑120b|0.213|+0.08|0.260|-0.12|0.111|0.190|
|**gpt‑oss‑120b RL**|**0.194**|**+0.16**|**0.242**|**-0.04**|**0.079**|**0.164**|

When given context, our model outperformed both the base model and GPT‑5 across metrics, with the best Brier Skill Score (+0.16) and the lowest calibration error (ECE 0.079). Without context, GPT‑5 and the base model score worse than the base rates, while the trained model (Brier 0.242) appropriately expresses uncertainty.

The full dataset and experiment results are on HuggingFace → [https://huggingface.co/datasets/LightningRodLabs/WWTD-2025](https://huggingface.co/datasets/LightningRodLabs/WWTD-2025)

Happy to answer questions in the comments.
2026-02-12T13:28:13
https://www.reddit.com/r/LocalLLaMA/comments/1r2teyc/we_finetuned_an_opensource_model_to_outperform/
LightningRodLabs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2teyc
false
null
t3_1r2teyc
/r/LocalLLaMA/comments/1r2teyc/we_finetuned_an_opensource_model_to_outperform/
false
false
self
0
{'enabled': False, 'images': [{'id': 'BmGFhQKmx8YljNal4OcLmGtPJyrxXuJRw7sBwlGYSSc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BmGFhQKmx8YljNal4OcLmGtPJyrxXuJRw7sBwlGYSSc.png?width=108&crop=smart&auto=webp&s=e5afc509d1c251376fe02f0c8506d6c298185167', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BmGFhQKmx8YljNal4OcLmGtPJyrxXuJRw7sBwlGYSSc.png?width=216&crop=smart&auto=webp&s=4b266c958459ffac2bf15edb3467b970bc4198cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BmGFhQKmx8YljNal4OcLmGtPJyrxXuJRw7sBwlGYSSc.png?width=320&crop=smart&auto=webp&s=6c270b49525bbe5c7119a7b3763c2f77068094d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BmGFhQKmx8YljNal4OcLmGtPJyrxXuJRw7sBwlGYSSc.png?width=640&crop=smart&auto=webp&s=9f7144f9445b8a3caaccd0ded112baa856327f91', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BmGFhQKmx8YljNal4OcLmGtPJyrxXuJRw7sBwlGYSSc.png?width=960&crop=smart&auto=webp&s=e16b5c3d80239ba77083a6b5b5fd7fdd0806d6fa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BmGFhQKmx8YljNal4OcLmGtPJyrxXuJRw7sBwlGYSSc.png?width=1080&crop=smart&auto=webp&s=4e2957abd62dd2ff7c17c0c68b6d913079eb12b5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BmGFhQKmx8YljNal4OcLmGtPJyrxXuJRw7sBwlGYSSc.png?auto=webp&s=12cc097cd3b78e02e3738483b347600fd270e1db', 'width': 1200}, 'variants': {}}]}
Zhipu (GLM) Not planning to release a small model for now.
57
https://preview.redd.it/…ce from discord
2026-02-12T13:27:52
https://www.reddit.com/r/LocalLLaMA/comments/1r2teo4/zhipu_glm_not_planning_to_release_a_small_model/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2teo4
false
null
t3_1r2teo4
/r/LocalLLaMA/comments/1r2teo4/zhipu_glm_not_planning_to_release_a_small_model/
false
false
https://preview.redd.it/…c2cc9151878ace73
57
null
I want to fit GLM 5 in 12 GB ram
0
title
2026-02-12T13:21:06
https://www.reddit.com/r/LocalLLaMA/comments/1r2t939/i_want_to_fit_glm_5_in_12_gb_ram/
perfect-finetune
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2t939
false
null
t3_1r2t939
/r/LocalLLaMA/comments/1r2t939/i_want_to_fit_glm_5_in_12_gb_ram/
false
false
self
0
null
Naval on American AI companies
1,085
Source: x -> naval/status/2021787548104921539
2026-02-12T13:18:08
https://i.redd.it/gzlx6vjpd2jg1.jpeg
policyweb
i.redd.it
1970-01-01T00:00:00
0
{}
1r2t6ot
false
null
t3_1r2t6ot
/r/LocalLLaMA/comments/1r2t6ot/naval_on_american_ai_companies/
false
false
https://preview.redd.it/…9e4068834fe57726
1,085
{'enabled': True, 'images': [{'id': 'gzlx6vjpd2jg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/gzlx6vjpd2jg1.jpeg?width=108&crop=smart&auto=webp&s=ed4e7c186b5dc84b76c34a88fefb740993d8ea06', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/gzlx6vjpd2jg1.jpeg?width=216&crop=smart&auto=webp&s=f3a8816b514fba2047bd44b242f0b37e4fb41fd0', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/gzlx6vjpd2jg1.jpeg?width=320&crop=smart&auto=webp&s=bf786c67016a1c28e123550877320956e402589c', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/gzlx6vjpd2jg1.jpeg?width=640&crop=smart&auto=webp&s=7bc529ad917e8371c507493c6cdb55a2f5f6103d', 'width': 640}, {'height': 577, 'url': 'https://preview.redd.it/gzlx6vjpd2jg1.jpeg?width=960&crop=smart&auto=webp&s=5ce2fd91b408e5dffa0405a07f8884e182b3f9b4', 'width': 960}, {'height': 649, 'url': 'https://preview.redd.it/gzlx6vjpd2jg1.jpeg?width=1080&crop=smart&auto=webp&s=572ca17583b6e426c510caf9229bb81731e6e3e7', 'width': 1080}], 'source': {'height': 725, 'url': 'https://preview.redd.it/gzlx6vjpd2jg1.jpeg?auto=webp&s=84186c5cc2dc7c79f4a2299ce37b24ad7182208d', 'width': 1206}, 'variants': {}}]}
So anyone using Qwen Next 80B A3B variant on 3090??
0
Look, my internet speed isn't great, and my current NAS is in yellow condition due to resilvering. Because of this, I plan to download files to my local machine first (which has limited space), and then move them to the NAS if they are good. If so, what quant? I am on 96 GB RAM and want at least a 32K-ish token context.
2026-02-12T13:16:42
https://www.reddit.com/r/LocalLLaMA/comments/1r2t5j0/so_anyone_using_qwen_next_80b_a3b_variant_on_3090/
Altruistic_Heat_9531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2t5j0
false
null
t3_1r2t5j0
/r/LocalLLaMA/comments/1r2t5j0/so_anyone_using_qwen_next_80b_a3b_variant_on_3090/
false
false
self
0
null
Open to code review or any tech related work immediately , need 500 usd urgently!
0
Hey, I am stuck somewhere and urgently need 300 USD; up for any kind of work for the next two hours. It's a Run Lola Run-like situation. Plus, I don't need advance payment: I will do your work and you only pay if you accept it. Any kind of tech work. Code background includes Rust, TypeScript, k8s, backend + microservices; previously had a Product Hunt #12 day and #70 week ranked product, etc. Don't waste time; if you're serious, DM!
2026-02-12T13:13:30
https://www.reddit.com/r/LocalLLaMA/comments/1r2t2xq/open_to_code_review_or_any_tech_related_work/
AdSure3977
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2t2xq
false
null
t3_1r2t2xq
/r/LocalLLaMA/comments/1r2t2xq/open_to_code_review_or_any_tech_related_work/
false
false
self
0
null
Free Infra Planning/Compatibility+Performance Checks
1
Hey y'all, been working on [HardwareHQ.io](http://HardwareHQ.io) for a minute to try to get it perfect, but I feel like I'm kind of hitting a wall, so I wanted to both share and get some feedback on what I should focus on improving to make this as useful as possible to the community. I've built a bunch of decision engine type tools to help people plan their local builds, track GPU prices, get performance estimates for various models on different hardware, etc. All tools/studios can be used free with no sign up and no ads; just trying to provide some tools that can keep people from getting blister packed on cloud/retail GPU prices and answer the endless string of "what's the best coding model I can run on X gpu" instead of people having to give personal anecdotes and guess when things are close. Let me know what you think. I know some of the logic in the individual tools and studios still needs improving/adjusting, but I've gone blind looking at the same thing for too long and need some fresh eyes if y'all are willing. If you fuck with it and are interested in the extended features, hit me up and I'll get you a pro account free so you don't waste money on something that's still in development.
2026-02-12T13:12:24
https://www.reddit.com/r/LocalLLaMA/comments/1r2t23h/free_infra_planningcompatibilityperformance_checks/
EnvironmentalLow8531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2t23h
false
null
t3_1r2t23h
/r/LocalLLaMA/comments/1r2t23h/free_infra_planningcompatibilityperformance_checks/
false
false
self
1
{'enabled': False, 'images': [{'id': 'rI0QauwMDnjH1uMPEZznXsspG8j7coAF1FrKsQwMaXw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rI0QauwMDnjH1uMPEZznXsspG8j7coAF1FrKsQwMaXw.png?width=108&crop=smart&auto=webp&s=cd1cf16324530c9eb0509a69c110137e2d085a90', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rI0QauwMDnjH1uMPEZznXsspG8j7coAF1FrKsQwMaXw.png?width=216&crop=smart&auto=webp&s=e779569fc2e9c0016bd61062ba4951c4f8cad14e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rI0QauwMDnjH1uMPEZznXsspG8j7coAF1FrKsQwMaXw.png?width=320&crop=smart&auto=webp&s=fa278aac1fbc1fa243a1d73236b0a7cc31c94532', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rI0QauwMDnjH1uMPEZznXsspG8j7coAF1FrKsQwMaXw.png?width=640&crop=smart&auto=webp&s=61fedd565a269a32a44b01225002256d2b4f42e3', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rI0QauwMDnjH1uMPEZznXsspG8j7coAF1FrKsQwMaXw.png?width=960&crop=smart&auto=webp&s=484081833be5b40ebdebc282414c662caf31ea00', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rI0QauwMDnjH1uMPEZznXsspG8j7coAF1FrKsQwMaXw.png?width=1080&crop=smart&auto=webp&s=3fe5d3754ffc2406bb3172cd7dff99ff108e6adf', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/rI0QauwMDnjH1uMPEZznXsspG8j7coAF1FrKsQwMaXw.png?auto=webp&s=87cdbcc7e36ceec6bbd6a88e59a9aad26bd069ba', 'width': 1200}, 'variants': {}}]}
Seeking Advice: Fine-tuning TinyLlama for Tamazight (Tifinagh Script)
3
I'm developing **Awal-LM**, a chatbot for the **Tamazight language** (Berber) using the **Tifinagh script**. After a failed attempt with GPT-2, I'm moving to **TinyLlama-1.1B** and could use some expert insights.

**The Context:**

- **Model:** TinyLlama-1.1B (using Unsloth/Colab).
- **Dataset:** 54k sentences (Folklore, Wikipedia, and IRCAM official texts).
- **The Problem:** My GPT-2 version suffered from heavy repetition loops (e.g., "inna as: inna as:") and lacked conversational logic. It behaved like a broken "story-completer" rather than an assistant.

**Key Challenges:**

1. **Tokenizer:** Tifinagh script is poorly represented. Should I manually add tokens to the vocab or rely on byte-level learning?
2. **Repetition:** Even with penalties, the model defaults to "folklore mode." How can I break this loop with such a small dataset?
3. **Instruction Tuning:** My data is mostly raw prose. What’s the most efficient way to "instructionalize" stories into a chat format?

**Specific Questions:**

- Is **54k sentences** enough for a decent LoRA/QLoRA fine-tune on TinyLlama?
- Any recommended **Data Augmentation** techniques for low-resource scripts?
- Should I prioritize **Continual Pre-training** first, or jump straight to **SFT** (Supervised Fine-Tuning)?

I'd love to hear from anyone who has worked with **non-Latin, low-resource scripts**. Thanks!
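On the tokenizer question above, one quick sanity check is how much byte-level fallback costs: Tifinagh code points (U+2D30–U+2D7F) take 3 bytes each in UTF-8, so a vocab with no Tifinagh coverage spends roughly 3x the sequence positions per character. A stdlib-only illustration (the sample word is my choice, not from the post):

```python
# Tifinagh sits in U+2D30-U+2D7F, which UTF-8 encodes as 3 bytes per code
# point. A tokenizer with no Tifinagh tokens falls back to byte-level
# pieces, so sequence length roughly triples versus per-character tokens.
word = "ⵜⴰⵎⴰⵣⵉⵖⵜ"  # "Tamazight" written in Tifinagh
chars = len(word)
utf8_len = len(word.encode("utf-8"))
print(chars, utf8_len)  # 8 characters, 24 bytes
```

If you do add dedicated tokens, the usual Hugging Face route is `tokenizer.add_tokens([...])` followed by `model.resize_token_embeddings(len(tokenizer))`, then continual pre-training so the fresh embeddings are trained before any SFT.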
2026-02-12T13:06:13
https://i.redd.it/9yb96p2lb2jg1.jpeg
rachid_nichan
i.redd.it
1970-01-01T00:00:00
0
{}
1r2sx9m
false
null
t3_1r2sx9m
/r/LocalLLaMA/comments/1r2sx9m/seeking_advice_finetuning_tinyllama_for_tamazight/
false
false
https://preview.redd.it/…5e35df78ed28129d
3
{'enabled': True, 'images': [{'id': '9yb96p2lb2jg1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/9yb96p2lb2jg1.jpeg?width=108&crop=smart&auto=webp&s=6ba7399972ff8b41b9be5beef9c942d1b4067942', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/9yb96p2lb2jg1.jpeg?width=216&crop=smart&auto=webp&s=6140af5a036d1e74a6696cc4365056e50690ec6b', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/9yb96p2lb2jg1.jpeg?width=320&crop=smart&auto=webp&s=6b289d6fbea6d97b447e9f5959ce599f2703774c', 'width': 320}, {'height': 341, 'url': 'https://preview.redd.it/9yb96p2lb2jg1.jpeg?width=640&crop=smart&auto=webp&s=592267d209b3d447d60df977f1bf52d95a97b497', 'width': 640}, {'height': 512, 'url': 'https://preview.redd.it/9yb96p2lb2jg1.jpeg?width=960&crop=smart&auto=webp&s=dbbf8a9f113b82eb457ca77bd03a1948917f5278', 'width': 960}, {'height': 576, 'url': 'https://preview.redd.it/9yb96p2lb2jg1.jpeg?width=1080&crop=smart&auto=webp&s=4455d163f8882232db85ecbade613cb6dd001198', 'width': 1080}], 'source': {'height': 683, 'url': 'https://preview.redd.it/9yb96p2lb2jg1.jpeg?auto=webp&s=69bf1abd4415b34a1360ee3d22e91a5251acba4a', 'width': 1280}, 'variants': {}}]}
I built a native macOS AI app that runs 5 backends — Apple Intelligence, MLX, llama.cpp, cloud APIs — all in one window BETA release
0
I've been working on **Vesta**, a native SwiftUI app for macOS that lets you run AI models locally on Apple Silicon, or connect to 31+ cloud inference providers through APIs.

The approach of this app is different from LM Studio, Jan and others. They are great. This app also gives access to Apple's on-device AI model. I'm disappointed that Apple hasn't evolved it, since it's not actually terrible, but they hard-code a limit on its context size.

This is also an experiment in whether coding agents can build an app from scratch. You be the judge. I can assure you however that it wasn't a 'one shot' build. Many millions of tokens burned! Over time I've seen very measurable progress in Claude Code as it evolves. I hope that we can achieve untethered and local coding AI of this quality soon! This is something I'm predicting for 2026.

The best bang for the buck has been the Qwen3-VL models for me, even though they tend to get into repetitive loops sometimes. Known issue.

I chose a more simplistic UI and a different way to interact with the app itself using natural language, for those who hate GUI navigation.

To download and view screenshots of the capabilities, just visit [https://kruks.ai/](https://kruks.ai/)

My github: [https://github.com/scouzi1966](https://github.com/scouzi1966)

This distribution: [https://github.com/scouzi1966/vesta-mac-dist](https://github.com/scouzi1966/vesta-mac-dist)

**What makes it different:**

- **Natural Language Interface (NLI) with Agentic Sidekick** — chat with the app system. Only tested with Claude Code; more to come.
  * Tell Agentic Sidekick to set things up for you instead of using the GUI.
  * The agent can have a conversation with any other model; entertaining to have two models discuss the meaning of life!
  * MCP can be activated to allow any other external MCP client to use it, with ephemeral tokens generated in-app for security (I have not tested all the degrees of freedom here!)
  * MCP can deeply search the conversation history through backend SQL.
- **5 backends in one app** — Apple Intelligence (Foundation Models), MLX, llama.cpp, OpenAI, HuggingFace. Switch between them.
- **HuggingFace Explorer** — I am not affiliated with HuggingFace, but combined with the $9/month Pro subscription this makes it interesting to explore HF's inference services (this is rough around the edges but it is evolving).
- **Vision/VLM** — drag an image into chat, get analysis from local or cloud models.
- **33+ MCP tools** — the AI can control the app itself (load models, switch backends, check status) via the Agentic Sidekick feature.
- **TTS with 45+ voices** (Kokoro) + speech-to-text (WhisperKit) + Marvis to mimic your own voice — all on-device.
- **Image & video generation** — FLUX, Stable Diffusion, Wan2.2, HunyuanVideo with the HuggingFace Inference service.
- **Proper rendering** — LaTeX/KaTeX, syntax-highlighted code blocks, markdown tables.

It's not Electron. It's not a wrapper around an API. It's a real macOS app built with SwiftUI, Metal, the llama.cpp library, Swift MLX, and the HuggingFace Swift SDK — designed for M1/M2/M3/M4/M5.

Runs on macOS 26+.

**Install:**

brew install --cask scouzi1966/afm/vesta-mac

Or grab the DMG: [https://kruks.ai](https://kruks.ai)

Would love feedback — especially from anyone running local models on Apple Silicon.
2026-02-12T12:53:49
https://www.reddit.com/r/LocalLLaMA/comments/1r2sndy/i_built_a_native_macos_ai_app_that_runs_5/
scousi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r2sndy
false
null
t3_1r2sndy
/r/LocalLLaMA/comments/1r2sndy/i_built_a_native_macos_ai_app_that_runs_5/
false
false
self
0
null
Switching back to local. I am done
44
I tried to report it and got banned from the sub. This isn't a one-off problem; it happens frequently. I don't mind using OpenRouter again or setting up something that could fit in 24 GB of VRAM. I just need it for coding tasks. I lurk this sub but I need some guidance. Is Qwen3-Coder acceptable?
2026-02-12T12:51:33
https://v.redd.it/2icufxvn82jg1
SkyNetLive
v.redd.it
1970-01-01T00:00:00
0
{}
1r2slnz
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/2icufxvn82jg1/DASHPlaylist.mpd?a=1773492708%2CNWIzN2U2YjllODExN2Y5NGNmZGYxNzg0NmJjNGQzYzU3MzU4YmRjMWIwYzMwYTQ4NGZiODlkYzkyOGU2MDgyZQ%3D%3D&v=1&f=sd', 'duration': 5, 'fallback_url': 'https://v.redd.it/2icufxvn82jg1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/2icufxvn82jg1/HLSPlaylist.m3u8?a=1773492708%2CMjQwNDA2ZDQ4MzQ1NGI0MzBlYTZhZGY0ODc2ZThhNzM2OTkyYjY5OTA3NjdhNGNjYWMxNjAxNzU2NjhjZGYzMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2icufxvn82jg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 852}}
t3_1r2slnz
/r/LocalLLaMA/comments/1r2slnz/switching_back_to_local_i_am_done/
false
false
https://external-preview…7975f62e9806ff35
44
{'enabled': False, 'images': [{'id': 'c3luc2Vid244MmpnMbGdfNZqaswdb4rlQku0rSvAiCJJUY-RFtBWh5lemTEV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c3luc2Vid244MmpnMbGdfNZqaswdb4rlQku0rSvAiCJJUY-RFtBWh5lemTEV.png?width=108&crop=smart&format=pjpg&auto=webp&s=5c991e6d7e20ad805d62f1dc3d6ab6875bb7c6cb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c3luc2Vid244MmpnMbGdfNZqaswdb4rlQku0rSvAiCJJUY-RFtBWh5lemTEV.png?width=216&crop=smart&format=pjpg&auto=webp&s=0dd3a22d425ffc2891e9578e19904c4fa7e804cb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c3luc2Vid244MmpnMbGdfNZqaswdb4rlQku0rSvAiCJJUY-RFtBWh5lemTEV.png?width=320&crop=smart&format=pjpg&auto=webp&s=dc8f61d3685a328e2f5bebfc85190a25a759219e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c3luc2Vid244MmpnMbGdfNZqaswdb4rlQku0rSvAiCJJUY-RFtBWh5lemTEV.png?width=640&crop=smart&format=pjpg&auto=webp&s=25dd95d65196959fd24b39437b3b36c451512f90', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/c3luc2Vid244MmpnMbGdfNZqaswdb4rlQku0rSvAiCJJUY-RFtBWh5lemTEV.png?format=pjpg&auto=webp&s=40ac2f41c1b32ab3b7877717ba719500b41ef2c4', 'width': 852}, 'variants': {}}]}