title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Starting out - help me understand the models
1
Hi, starting out with running a local LLM. I have a 5090, 128gb ram, and 10tb of storage. Which model is the best that will fit (im ok to trade off on speed) for doing complex coding, designing parallel R pipelines, biostatistics?
2026-02-05T00:07:48
https://www.reddit.com/r/LocalLLaMA/comments/1qw5nfh/starting_out_help_me_understand_the_models/
That-Dragonfruit172
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw5nfh
false
null
t3_1qw5nfh
/r/LocalLLaMA/comments/1qw5nfh/starting_out_help_me_understand_the_models/
false
false
self
1
null
Have you seen P-EAGLE? Parallel drafting EAGLE
2
Wonder if this method has good application scenarios? https://arxiv.org/pdf/2602.01469
2026-02-05T00:07:18
https://www.reddit.com/r/LocalLLaMA/comments/1qw5myu/have_you_seen_peagle_parallel_drafting_eagle/
Motor_Advisor_5486
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw5myu
false
null
t3_1qw5myu
/r/LocalLLaMA/comments/1qw5myu/have_you_seen_peagle_parallel_drafting_eagle/
false
false
self
2
null
Cheapest way to use Kimi 2.5 with agent swarm
16
I am a power user of AI coding. I blew through over a billion tokens on Claude Sonnet and Opus on Cursor. I currently have a Nvidia DGX Spark and I am thinking of hosting the new Qwen3-Coder-Next on the spark. However, I am also considering just paying for Kimi 2.5 with agent swarm. It is too expensive using Openrouter so I am thinking of just using it directly from [Kimi.ai](http://Kimi.ai) but I am concerned building core business logic and exposing source code through prompts to a Chinese based firm. Any thoughts?
2026-02-05T00:06:50
https://www.reddit.com/r/LocalLLaMA/comments/1qw5ml0/cheapest_way_to_use_kimi_25_with_agent_swarm/
Future-Benefit-3437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw5ml0
false
null
t3_1qw5ml0
/r/LocalLLaMA/comments/1qw5ml0/cheapest_way_to_use_kimi_25_with_agent_swarm/
false
false
self
16
null
Solid list! For staying updated on stuff like this without drowning in newsletters, I've got a small TG channel curating the best daily AI SaaS/dev finds — things like new SDK drops, agent frameworks, etc.
1
[removed]
2026-02-04T23:59:07
https://www.reddit.com/r/LocalLLaMA/comments/1qw5fkl/solid_list_for_staying_updated_on_stuff_like_this/
Acrobatic_Western766
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw5fkl
false
null
t3_1qw5fkl
/r/LocalLLaMA/comments/1qw5fkl/solid_list_for_staying_updated_on_stuff_like_this/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LzEh_rI2QeUttiG9s9qAJIV1eGekiPZ4Zct1d8nWE5g', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LzEh_rI2QeUttiG9s9qAJIV1eGekiPZ4Zct1d8nWE5g.jpeg?width=108&crop=smart&auto=webp&s=30e3f76ce37039c119b71b07e5485417de541674', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/LzEh_rI2QeUttiG9s9qAJIV1eGekiPZ4Zct1d8nWE5g.jpeg?width=216&crop=smart&auto=webp&s=3fc904c61e8d086a348f6e51d49644c5dadbb294', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/LzEh_rI2QeUttiG9s9qAJIV1eGekiPZ4Zct1d8nWE5g.jpeg?width=320&crop=smart&auto=webp&s=0f85f9926d92686cb457bc4e84cbde5e44637c7d', 'width': 320}], 'source': {'height': 320, 'url': 'https://external-preview.redd.it/LzEh_rI2QeUttiG9s9qAJIV1eGekiPZ4Zct1d8nWE5g.jpeg?auto=webp&s=c4b206c303ac60c9b2325887daf8701f9be3ce0a', 'width': 320}, 'variants': {}}]}
kemdiCode MCP is a Model Context Protocol server that gives AI agents and IDE assistants access to 142 specialized tools
1
[removed]
2026-02-04T23:57:47
https://www.reddit.com/r/LocalLLaMA/comments/1qw5eee/kemdicode_mcp_is_a_model_context_protocol_server/
Lanky_Definition_902
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw5eee
false
null
t3_1qw5eee
/r/LocalLLaMA/comments/1qw5eee/kemdicode_mcp_is_a_model_context_protocol_server/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo.png?width=108&crop=smart&auto=webp&s=27d1cac23940e38f4e9d27bd42565da33c8a1143', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo.png?width=216&crop=smart&auto=webp&s=7ad48cad9e3b915b84c1cc0b2366e41e861c0f74', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo.png?width=320&crop=smart&auto=webp&s=5443dbaa889d87d6747951e3c42d28947fba63d0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo.png?width=640&crop=smart&auto=webp&s=4cd5c2b81a19c9673d08a391e60ac674424483e3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo.png?width=960&crop=smart&auto=webp&s=4881b7c62f04ee024bc7eb464d66ff7b11f45baf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo.png?width=1080&crop=smart&auto=webp&s=8a2aa55f944eb4c21b508521ab6631ccfae5953e', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo.png?auto=webp&s=3d90f31019024a53021ebcdc629e818ecca63d64', 'width': 1200}, 'variants': {}}]}
Anyone able to run Qwen3-coder-next with LMStudio without getting a jinja template error?
4
I keep getting this error when I run Qwen3-coder-next in the LMStudio server (using OpenCoder): "Error rendering prompt with jinja template: \"Unknown StringValue filter: safe\".
2026-02-04T23:47:07
https://www.reddit.com/r/LocalLLaMA/comments/1qw5566/anyone_able_to_run_qwen3codernext_with_lmstudio/
cafedude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw5566
false
null
t3_1qw5566
/r/LocalLLaMA/comments/1qw5566/anyone_able_to_run_qwen3codernext_with_lmstudio/
false
false
self
4
null
Experimental AI agent & memory systems — demo + notes (GitHub Pages)
1
[deleted]
2026-02-04T22:45:04
[deleted]
1970-01-01T00:00:00
0
{}
1qw3ly4
false
null
t3_1qw3ly4
/r/LocalLLaMA/comments/1qw3ly4/experimental_ai_agent_memory_systems_demo_notes/
false
false
default
1
null
Experimental AI agent & memory systems — demo + notes (GitHub Pages)
1
[deleted]
2026-02-04T22:44:56
[deleted]
1970-01-01T00:00:00
0
{}
1qw3ltk
false
null
t3_1qw3ltk
/r/LocalLLaMA/comments/1qw3ltk/experimental_ai_agent_memory_systems_demo_notes/
false
false
default
1
null
AI Assistant to Tabularis
0
Tabularis is a native, cross-platform database client built with Rust + Tauri, designed to be fast, local-first, and developer-friendly. The new AI Assistant fits that philosophy perfectly: it works with both cloud models and local LLMs. No forced APIs. No mandatory accounts. No data leaving your machine unless you choose. The assistant runs next to your database, not somewhere else. This isn’t “AI everywhere”. It’s AI where it actually helps. Tabularis is still in beta, but the direction is clear: powerful database tooling, built natively, without giving up control. If you care about databases, privacy, and tools that respect developers, this might be worth a look
2026-02-04T22:17:34
https://github.com/debba/tabularis
debba_
github.com
1970-01-01T00:00:00
0
{}
1qw2w7e
false
null
t3_1qw2w7e
/r/LocalLLaMA/comments/1qw2w7e/ai_assistant_to_tabularis/
false
false
https://external-preview…7f358347c4f0c2a2
0
{'enabled': False, 'images': [{'id': '537htSdRVIEzPOTx4OvvqO6eF_JfshU8V1gTf6bLFrY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/537htSdRVIEzPOTx4OvvqO6eF_JfshU8V1gTf6bLFrY.jpeg?width=108&crop=smart&auto=webp&s=963015906036b92ee02293cd8420618471dd402d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/537htSdRVIEzPOTx4OvvqO6eF_JfshU8V1gTf6bLFrY.jpeg?width=216&crop=smart&auto=webp&s=183ceb6f97d9deccf11f2657a0df27fdb4693b30', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/537htSdRVIEzPOTx4OvvqO6eF_JfshU8V1gTf6bLFrY.jpeg?width=320&crop=smart&auto=webp&s=37d6f2adcfc770c5d70da33402f7079523198377', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/537htSdRVIEzPOTx4OvvqO6eF_JfshU8V1gTf6bLFrY.jpeg?width=640&crop=smart&auto=webp&s=2362aa0ef80b7a7271f161b107267ff1db059f5f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/537htSdRVIEzPOTx4OvvqO6eF_JfshU8V1gTf6bLFrY.jpeg?width=960&crop=smart&auto=webp&s=f70d3160c9bfb51bfb8e53aef7c6931972fb0481', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/537htSdRVIEzPOTx4OvvqO6eF_JfshU8V1gTf6bLFrY.jpeg?width=1080&crop=smart&auto=webp&s=673f8e0aabbc52f26e119e93b406e56e037618b6', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/537htSdRVIEzPOTx4OvvqO6eF_JfshU8V1gTf6bLFrY.jpeg?auto=webp&s=a2ed4beb78ef37f61204ac5d0c2a243f713cb23e', 'width': 1280}, 'variants': {}}]}
[ Removed by moderator ]
0
[removed]
2026-02-04T21:52:59
https://www.reddit.com/gallery/1qw28o2
Aromatic-Age-5442
reddit.com
1970-01-01T00:00:00
0
{}
1qw28o2
false
null
t3_1qw28o2
/r/LocalLLaMA/comments/1qw28o2/beware_of_the_scammer_pablo_callao_lacruz/
false
false
null
0
null
[ Removed by moderator ]
0
[removed]
2026-02-04T21:52:36
https://www.reddit.com/gallery/1qw28aa
Aromatic-Age-5442
reddit.com
1970-01-01T00:00:00
0
{}
1qw28aa
false
null
t3_1qw28aa
/r/LocalLLaMA/comments/1qw28aa/beware_of_the_scammer_pablo_callao_lacruz/
false
false
null
0
null
Inside a Chinese AI Lab
15
Interview with a senior MiniMax researcher. Olive Song explains how they actually build models that work.
2026-02-04T21:39:55
https://youtube.com/watch?v=GkUMqWeHn40&si=A9JWXFY9m0dhwhMP
etherd0t
youtube.com
1970-01-01T00:00:00
0
{}
1qw1w2s
false
{'oembed': {'author_name': 'Turing Post', 'author_url': 'https://www.youtube.com/@RealTuringPost', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/GkUMqWeHn40?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Inside MiniMax: How They Build Open Models"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/GkUMqWeHn40/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Inside MiniMax: How They Build Open Models', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qw1w2s
/r/LocalLLaMA/comments/1qw1w2s/inside_a_chinese_ai_lab/
false
false
https://external-preview…be191e662b516f91
15
{'enabled': False, 'images': [{'id': 'T3GEFWD2DV7wdYyZP0UMe8Hme4bcjODvYv7nnFVbZ6k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/T3GEFWD2DV7wdYyZP0UMe8Hme4bcjODvYv7nnFVbZ6k.jpeg?width=108&crop=smart&auto=webp&s=c96ed01eefd1beba258e24c5e317a338e078561f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/T3GEFWD2DV7wdYyZP0UMe8Hme4bcjODvYv7nnFVbZ6k.jpeg?width=216&crop=smart&auto=webp&s=3c608c2ee79f3624d0b12d8ef125d57493987e86', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/T3GEFWD2DV7wdYyZP0UMe8Hme4bcjODvYv7nnFVbZ6k.jpeg?width=320&crop=smart&auto=webp&s=cae8390901348ec2037614906e958da7899bcb72', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/T3GEFWD2DV7wdYyZP0UMe8Hme4bcjODvYv7nnFVbZ6k.jpeg?auto=webp&s=97ac22e89855cac1c9b73dbf31638ed9d5805403', 'width': 480}, 'variants': {}}]}
Do you use Windows or Linux?
0
I'm about to install my 5060ti GPU in my Windows desktop, do you guys use Windows shell or do you install a dual boot Linux distro to work with local stuff ? (whisper/ stable diffusion/ other ..) if you do a dual boot, what distro and from a USB or on the ssd partition?
2026-02-04T21:39:22
https://www.reddit.com/r/LocalLLaMA/comments/1qw1vj0/do_you_use_windows_or_linux/
boklos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw1vj0
false
null
t3_1qw1vj0
/r/LocalLLaMA/comments/1qw1vj0/do_you_use_windows_or_linux/
false
false
self
0
null
Companion App to My LYRN AI Dashboard
0
I ported off most of LYRNs remote desktop functions so I could build a control surface for all my devices. This isn't Cockpit or SSH or RDP, it's something cleaner and quicker and more usable on mobile and meant to slot into my multi-agent LYRN viewer. This way you can manage your system files and LYRN installation from the same place. [https://github.com/bsides230/RemoDash](https://github.com/bsides230/RemoDash) https://preview.redd.it/02o484aonjhg1.png?width=1280&format=png&auto=webp&s=831dfd9c91cc5bae43586958487f7a16c0663bd6 https://preview.redd.it/9vslj3aonjhg1.png?width=1280&format=png&auto=webp&s=df026eed33a17fed80a0b7fa20970536ea1933ff https://preview.redd.it/0inh04aonjhg1.png?width=1280&format=png&auto=webp&s=a67c885e9e6353f098a8b329315292599494a8f9 https://preview.redd.it/sz90v3aonjhg1.png?width=1280&format=png&auto=webp&s=6f8e26a1a5157adcf4444e3ef7c6baac36a2fee1
2026-02-04T21:16:20
https://www.reddit.com/r/LocalLLaMA/comments/1qw18z3/companion_app_to_my_lyrn_ai_dashboard/
PayBetter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw18z3
false
null
t3_1qw18z3
/r/LocalLLaMA/comments/1qw18z3/companion_app_to_my_lyrn_ai_dashboard/
false
false
https://b.thumbs.redditm…o2EIuOYhf00Q.jpg
0
null
Self-hosting stack that actually saves money: Ollama + Supabase + SearXNG
0
Been running this stack for a few months now and wanted to share what's working. **The Setup:** - **Ollama** for local inference (Llama 3, Mistral, etc.) - **Supabase** (self-hosted) for auth, database, and vector storage - **SearXNG** for private web search - All on a single VPS with 128GB RAM **Monthly costs before:** ~€300-500 (OpenAI API, Pinecone, Algolia, Auth0) **Monthly costs now:** ~€50 (just the VPS) **What surprised me:** 1. Llama 3 70B is genuinely good enough for 90% of tasks 2. Supabase pgvector works great for RAG - no need for dedicated vector DB 3. SearXNG gives you web search without API limits **What's tricky:** - Initial setup takes a weekend - You need to manage your own backups - Some edge cases still need Claude/GPT-4 **Hardware requirements:** - 64GB+ RAM for 70B models (or 32GB for 7B-13B) - Fast SSD matters more than you'd think Anyone else running a similar stack? Curious what others are using for the "glue" layer between these services.
2026-02-04T21:12:50
https://www.reddit.com/r/LocalLLaMA/comments/1qw15gl/selfhosting_stack_that_actually_saves_money/
Tgbrutus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw15gl
false
null
t3_1qw15gl
/r/LocalLLaMA/comments/1qw15gl/selfhosting_stack_that_actually_saves_money/
false
false
self
0
null
[ Removed by moderator ]
6
[removed]
2026-02-04T21:10:57
https://www.reddit.com/r/LocalLLaMA/comments/1qw13nh/bitperfect_hardware_acceleration_on_m4_silicon/
Ok-Abbreviations-131
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw13nh
false
null
t3_1qw13nh
/r/LocalLLaMA/comments/1qw13nh/bitperfect_hardware_acceleration_on_m4_silicon/
false
false
null
6
null
I built Workbench - a local-first AI task runner with plugin system (open source)
0
I got frustrated that Goose was hard to extend and Claude Desktop needed a Mac. So I built Workbench. **What it is:** Desktop app where you chat with an AI that can use tools. Chain tools together. Create new tools by asking the AI to write them. **Key points:** * Local-first - your data stays on your machine * Works with OpenRouter, OpenAI, or Azure (bring your own key) * 11 built-in tools (weather, clipboard, files, CSV, YouTube transcripts, etc.) * Plugin system - drop a folder in `plugins/`, restart, done * Tool chaining with variable interpolation **Not a SaaS.** No account, no subscription, no telemetry. GitHub: [https://github.com/YakStacks/Workbench](https://github.com/YakStacks/Workbench) Built with Electron + React. Windows installer ready, Mac/Linux coming in v2. This is v0.1 - feedback welcome.
2026-02-04T21:08:30
https://github.com/YakStacks/Workbench
junkyard22
github.com
1970-01-01T00:00:00
0
{}
1qw117q
false
null
t3_1qw117q
/r/LocalLLaMA/comments/1qw117q/i_built_workbench_a_localfirst_ai_task_runner/
false
false
https://external-preview…5112e182284d537e
0
{'enabled': False, 'images': [{'id': 'zslDFmK6eWwa8-BT_iae4t3KmwY1nArmmYm2pgDW-dw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zslDFmK6eWwa8-BT_iae4t3KmwY1nArmmYm2pgDW-dw.png?width=108&crop=smart&auto=webp&s=f277dc998fde13e5bffb92317e36f6fa6259ce57', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zslDFmK6eWwa8-BT_iae4t3KmwY1nArmmYm2pgDW-dw.png?width=216&crop=smart&auto=webp&s=c3250e08d690d053cf9b0e689249fdb6a847d7cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zslDFmK6eWwa8-BT_iae4t3KmwY1nArmmYm2pgDW-dw.png?width=320&crop=smart&auto=webp&s=d848e63a46aaf07153a2daaaced7c04753244f5f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zslDFmK6eWwa8-BT_iae4t3KmwY1nArmmYm2pgDW-dw.png?width=640&crop=smart&auto=webp&s=b7265026f46de39b18d2ec45dede81eb59d55f5c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zslDFmK6eWwa8-BT_iae4t3KmwY1nArmmYm2pgDW-dw.png?width=960&crop=smart&auto=webp&s=84d0230908ba1b040271515e4f9b1fbffbd9ff70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zslDFmK6eWwa8-BT_iae4t3KmwY1nArmmYm2pgDW-dw.png?width=1080&crop=smart&auto=webp&s=79a0fd6ec801e651ba7416665b221d8a6776d52c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zslDFmK6eWwa8-BT_iae4t3KmwY1nArmmYm2pgDW-dw.png?auto=webp&s=189c72a482cecc74ee9c1a2e73eaef9f16e84eda', 'width': 1200}, 'variants': {}}]}
Aira: A WebGPU-based AI framework built from scratch
1
[removed]
2026-02-04T21:04:37
https://www.reddit.com/r/LocalLLaMA/comments/1qw0x89/aira_a_webgpubased_ai_framework_built_from_scratch/
shadowww345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw0x89
false
null
t3_1qw0x89
/r/LocalLLaMA/comments/1qw0x89/aira_a_webgpubased_ai_framework_built_from_scratch/
false
false
self
1
null
Aira.js: A WebGPU-based AI framework built from scratch
1
[removed]
2026-02-04T21:02:07
[deleted]
1970-01-01T00:00:00
0
{}
1qw0uqe
false
null
t3_1qw0uqe
/r/LocalLLaMA/comments/1qw0uqe/airajs_a_webgpubased_ai_framework_built_from/
false
false
default
1
null
Analysis Paralysis/Advice with next hardware for local LLMs
2
Hey all — looking for some sanity checks and outside perspective because I’ve been stuck in analysis paralysis for a while... # Current hardware * **Mac Studio M4 Max (1TB/64GB)** — main work machine * Runs LM Studio for local models * Qwen3 30b is decent, but quite slow with the thinking requirement * Nemotron 30b is fast, but the output is marginal * In hindsight, I wish I’d gone with an M3 Ultra for memory bandwidth + capacity * **Windows gaming PC** — 7900x, 64GB 5200 RAM, RTX 4090, Windows 11 * **TrueNAS server** * 256GB RAM (8x32GB DDR4 2666 RDIMM) - underutilized * Plus a spare 64GB DDR4 RDIMM # Cloud / subscriptions * 2x Claude Pro subscriptions (one work, one personal) * I hit Claude rate limits fairly often # What I’m actually trying to optimize for These days I’m mostly focused on: * **Agentic coding workflows (mostly OpenCode)** * Large prompts + higher quality outputs * Parallel execution is a bonus * Looking for output quality in between Haiku and Sonnet “good enough” for sub-agent slices # Options I’m considering 1. **Sell Mac Studio → buy M3 Ultra 512gb** * Net cost: \~**$7k** * Pros: Apple memory bandwidth + unified memory for big models, simple setup, sits on the desk * Cons: Expensive, prompt processing is mid for the money 2. **DGX Spark or Strix Halo** * GB10 has significantly better prompt processing speed * Net: \~**$3k**, maybe **$6k** for a 2-node setup * Pros: Interesting form factor, good perf/W * Cons: Worried either one will lose lots of value in 1-2 years when something new comes out 3. **Threadripper Pro AM4 + 2x AMD R9700 GPUs** * Net: \~**$5k** * Pros: Expandability, more “traditional” workstation path, already have memory * Cons: Power, complexity, GPU market insanity, Investing in older platform 4. **Threadripper Pro AM4, move 4090 and make that the gaming PC** * Net: \~$1k after selling the rest of the gaming PC * Switch to Linux? * Pros: Lower overall cost, more GPU horsepower * Cons: Less VRAM, older platform, slower single core CPU performance # Personal considerations I care about eventual resale value, but the hardware market feels totally distorted right now... Some folks talk about an “AI crash” — I’m personally in the "don’t hold your breath" camp. I suspect that Apple is not immune to rammaggeddon, and future products will push significantly higher prices for similar memory configs. I also recognize that it's very hard to compete with cloud offerings performance-wise; I'm mostly looking for fallback once rate limits are hit. # What I’m hoping to get feedback on * For large-prompt, agentic coding workflows, what path actually makes the most sense right now? * Good ways to abstract the model config out of OpenCode (i.e. try Claude first, then if rate limit, automatically send the prompt locally)? I've heard of LiteLLM but have no experience with it. * Is unified memory (Apple) still king here, or are multi-GPU setups finally catching up for this use case? * Anyone regret going DGX Spark / Strix Halo? * How much weight should I realistically put on resale in this market?
2026-02-04T20:55:53
https://www.reddit.com/r/LocalLLaMA/comments/1qw0ogw/analysis_paralysisadvice_with_next_hardware_for/
EvilPencil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw0ogw
false
null
t3_1qw0ogw
/r/LocalLLaMA/comments/1qw0ogw/analysis_paralysisadvice_with_next_hardware_for/
false
false
self
2
null
Aira: A WebGPU-based AI framework built from scratch
1
[removed]
2026-02-04T20:54:54
https://www.reddit.com/r/LocalLLaMA/comments/1qw0ngi/aira_a_webgpubased_ai_framework_built_from_scratch/
shadowww345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qw0ngi
false
null
t3_1qw0ngi
/r/LocalLLaMA/comments/1qw0ngi/aira_a_webgpubased_ai_framework_built_from_scratch/
false
false
self
1
null
serpentine streaming: 90ms latency, runs locally on apple silicon. more expressive and prosodic than elevenlabs.
3
we've been building speech-to-speech engines for 2.5 years. today we're dropping our tts engine with a new streaming approach we call serpentine streaming. **try it now:** curl -sL https://raw.githubusercontent.com/SRSWTI/axe/main/install_sensors.sh | bash login and you're set. unlimited usage, runs completely on your machine. **performance:** * **latency:** 90ms time-to-first-audio-byte on m4 max (128gb), \~800ms on m4 macbook air (16gb) * **memory:** 3.3-4.5gb footprint at peak * **platform:** mlx-optimized for any m-series chip **okay so how does serpentine works?** traditional tts models either process complete input before generating output, or learn complex policies for when to read/write. we took a different approach. **pre-aligned streams with strategic delays.** but here's the key innovation: we add a **control stream** that predicts word boundaries in the input text. when the model predicts a word boundary (a special token indicating a new word is starting), we feed the text tokens for that next word over the following timesteps. while these tokens are being fed, the model can't output another word boundary action. we also introduce a **lookahead text stream.** the control stream predicts where the next word starts, but has no knowledge of that word's content when making the decision. given a sequence of words m₁, m₂, m₃... the lookahead stream feeds tokens of word mᵢ₊₁ to the backbone while the primary text stream contains tokens of word mᵢ. this gives the model forward context for natural prosody decisions. it can see what's coming and make informed decisions about timing, pauses, and delivery. **training data:** * **7,600 hours** of professional voice actors and casual conversations - modern slang, lingo, and how people actually speak * **50,000 hours** of synthetic training on highly expressive tts systems this training approach is why the prosody and expressiveness feel different from existing systems. the model understands context, emotion, and emphasis because it learned from natural human speech patterns. **what's coming:** we'll be releasing weights at [https://huggingface.co/srswti](https://huggingface.co/srswti) in the coming weeks along with a full technical report and model card. this tts engine is part of bodega, our local-first ai platform. our open source work includes the raptor series (90m param reasoning models hitting 100+ tok/s on edge), bodega-centenario-21b, bodega-solomon-9b for multimodal coding, and our deepseek-v3.2 distill to 32b running at 120 tok/s on m1 max. check out [https://huggingface.co/srswti](https://huggingface.co/srswti) for our full model lineup. im happy to have any discussions, questions here. thankyou :)
2026-02-04T20:53:45
https://v.redd.it/q6rolk0ejjhg1
EmbarrassedAsk2887
v.redd.it
1970-01-01T00:00:00
0
{}
1qw0mc8
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/q6rolk0ejjhg1/DASHPlaylist.mpd?a=1772830514%2CZTk2YWQ3NzRmZDhjYjQ2Y2I5MDlhMjMyMzAxZDMzYmUyMWNkZWU5M2JmMDcxOWE5ODhmNDY2ZGRlYjJmNWJiZA%3D%3D&v=1&f=sd', 'duration': 272, 'fallback_url': 'https://v.redd.it/q6rolk0ejjhg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 992, 'hls_url': 'https://v.redd.it/q6rolk0ejjhg1/HLSPlaylist.m3u8?a=1772830514%2CYjFiNGY3MWMwN2ZiMzI5NjFhODE1ZWNiNTNlZDM4OTk1OTgxOWQ0Mjk0MzE3NTZmZjJhZjFhZWQ1NDgwMDgyYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/q6rolk0ejjhg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qw0mc8
/r/LocalLLaMA/comments/1qw0mc8/serpentine_streaming_90ms_latency_runs_locally_on/
false
false
https://external-preview…8edbc9578a34b3a0
3
{'enabled': False, 'images': [{'id': 'cXVqdHRxMGVqamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/cXVqdHRxMGVqamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=108&crop=smart&format=pjpg&auto=webp&s=ae83b85d11e92485d4d149b2aa9ad01c19b337c0', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/cXVqdHRxMGVqamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=216&crop=smart&format=pjpg&auto=webp&s=50cb4cc4baa343473886ae63e164ff00bf470bfa', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/cXVqdHRxMGVqamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=320&crop=smart&format=pjpg&auto=webp&s=00cfec24e1321fe7a22e2241bd838d4707b82712', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/cXVqdHRxMGVqamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=640&crop=smart&format=pjpg&auto=webp&s=5c44200803775e0216d596c94ffb0f518438bc5b', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/cXVqdHRxMGVqamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=960&crop=smart&format=pjpg&auto=webp&s=9b45ea3eb8035108270ed59850d40816a587a3ad', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/cXVqdHRxMGVqamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=1080&crop=smart&format=pjpg&auto=webp&s=96241466af14049dc3d83b8546826c251fd26738', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cXVqdHRxMGVqamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?format=pjpg&auto=webp&s=bacf938d7ece7299e2d19794fb05f00af3d9f3a1', 'width': 2090}, 'variants': {}}]}
I replaced Claude-Code’s entire backend to use NVIDIA NIM models for free
72
I have been working on a side-project which replaces the following things in the Claude ecosystem with free alternatives. I started the initial implementation with Opus 4.5 in claude code and as soon as it got working I used it to work on itself which i found very cool. \- Replaces Anthropic models with NVIDIA-NIM models: It acts as middleware between Claude-Code and NVIDIA-NIM allowing unlimited usage upto 40 RPM with a free NVIDIA-NIM api-key. \- Replaces the Claude mobile app with telegram: Give it access to some directories, send it tasks from telegram and watch it work autonomously. It has features that distinguish it from similar proxies: \- The interleaved thinking tokens generated between tool calls are preserved allowing reasoning models like GLM 4.7 and kimi-k2.5 to take full advantage of thinking from previous turns. \- Fast prefix detection stops the CLI from sending bash command prefix classification requests to the LLM making it feel blazing fast. \- Built in rate limiting and session concurrency. The code is modular so that adding other providers or messaging apps is easy. Hope the community likes it, any PRs are welcome.
2026-02-04T20:53:30
https://github.com/Alishahryar1/cc-nim
PreparationAny8816
github.com
1970-01-01T00:00:00
0
{}
1qw0m3i
false
null
t3_1qw0m3i
/r/LocalLLaMA/comments/1qw0m3i/i_replaced_claudecodes_entire_backend_to_use/
false
false
https://external-preview…086be16e11109ad6
72
{'enabled': False, 'images': [{'id': 'RAF5Ohu7I-V-9BNbpzI8zm4i901BuyT3K5FFrQvKEQU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RAF5Ohu7I-V-9BNbpzI8zm4i901BuyT3K5FFrQvKEQU.png?width=108&crop=smart&auto=webp&s=33412cb96613c15f8df528af7d02bbee65258d8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RAF5Ohu7I-V-9BNbpzI8zm4i901BuyT3K5FFrQvKEQU.png?width=216&crop=smart&auto=webp&s=23b7c338f92b25193c74102ff2bec2d1dc437427', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RAF5Ohu7I-V-9BNbpzI8zm4i901BuyT3K5FFrQvKEQU.png?width=320&crop=smart&auto=webp&s=38948748746e6ced69cc80975c839641d02e7618', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RAF5Ohu7I-V-9BNbpzI8zm4i901BuyT3K5FFrQvKEQU.png?width=640&crop=smart&auto=webp&s=4a1c590cda9656abbfa2bab3680a2a5ec3afbe29', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RAF5Ohu7I-V-9BNbpzI8zm4i901BuyT3K5FFrQvKEQU.png?width=960&crop=smart&auto=webp&s=8bbed22a25dcc498b6c8caca2c094c735825c875', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RAF5Ohu7I-V-9BNbpzI8zm4i901BuyT3K5FFrQvKEQU.png?width=1080&crop=smart&auto=webp&s=dc15d8db6e0e95cb5d396a46985d2bec584f9cb8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RAF5Ohu7I-V-9BNbpzI8zm4i901BuyT3K5FFrQvKEQU.png?auto=webp&s=f2cc52e7f92c34e279999875b6b3456c8432436b', 'width': 1200}, 'variants': {}}]}
serpentine streaming: 90ms latency, runs locally on apple silicon. more expressive and prosodic than elevenlabs.
1
2026-02-04T20:45:12
https://v.redd.it/oguwbtgpcjhg1
EmbarrassedAsk2887
/r/LocalLLaMA/comments/1qw0dub/serpentine_streaming_90ms_latency_runs_locally_on/
1970-01-01T00:00:00
0
{}
1qw0dub
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/oguwbtgpcjhg1/DASHPlaylist.mpd?a=1772959521%2CMGYxN2NhN2E1ZjdkOWJlNTFjOTY4NmM4MGIzZWUyM2ZiMjEwYTc3NjlhNjRlMDdjZWYyMzQxYzA5OTgwODNmYg%3D%3D&v=1&f=sd', 'duration': 272, 'fallback_url': 'https://v.redd.it/oguwbtgpcjhg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 992, 'hls_url': 'https://v.redd.it/oguwbtgpcjhg1/HLSPlaylist.m3u8?a=1772959521%2CM2I4NGJkZTE3ZGJkYjNhMTk4MTRiMmYzMTNkMTU4NWNkZjM4ZjcxYzYzMDA1N2YwYmU2ZmViZmRiNmNkMmU0Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oguwbtgpcjhg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qw0dub
/r/LocalLLaMA/comments/1qw0dub/serpentine_streaming_90ms_latency_runs_locally_on/
false
false
https://external-preview…730c8d8b10690b1f
1
{'enabled': False, 'images': [{'id': 'dXljem53Z3BjamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/dXljem53Z3BjamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=108&crop=smart&format=pjpg&auto=webp&s=f5117d98387bd81880c535055b29845f60d0b12e', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/dXljem53Z3BjamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=216&crop=smart&format=pjpg&auto=webp&s=e3839c8b536811425dc68d9febf8334ac06374dc', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/dXljem53Z3BjamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=320&crop=smart&format=pjpg&auto=webp&s=9c986d2068c0d8929ef8de204c259a0c745d838b', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/dXljem53Z3BjamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=640&crop=smart&format=pjpg&auto=webp&s=130d9202b4ea6cc9a6376c4d29db2cd4031042ac', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/dXljem53Z3BjamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=960&crop=smart&format=pjpg&auto=webp&s=80183a24c63149c438d6366b3951c531cb57a6ce', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/dXljem53Z3BjamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?width=1080&crop=smart&format=pjpg&auto=webp&s=12558fc8336a5a3a12afcc0a4ca1b4fa97dda5a6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dXljem53Z3BjamhnMQ11xk6ZtZ5hue1TCq8_bs__jJlX5-d8sqmlJGduFogn.png?format=pjpg&auto=webp&s=f6356bfb8f43366421b70a0c37f3d0f77194e119', 'width': 2090}, 'variants': {}}]}
Notebook page on llama.cpp official WebUI
18
I made a [llama.cpp Notebook PR](https://github.com/ggml-org/llama.cpp/pull/19339) to add a Notebook page to the official llama.cpp webui. Now I don't need text-generation-webui to have the Notebook functionality, and can always use the latest llama.cpp features without waiting for an update of the llama.cpp python bindings.
2026-02-04T20:30:26
https://www.reddit.com/r/LocalLLaMA/comments/1qvzzaz/notebook_page_on_llamacpp_official_webui/
hleszek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvzzaz
false
null
t3_1qvzzaz
/r/LocalLLaMA/comments/1qvzzaz/notebook_page_on_llamacpp_official_webui/
false
false
self
18
null
I replaced Claude-Code’s entire backend to use kimi-k2.5 and GLM 4.7 for free
1
[removed]
2026-02-04T20:20:51
https://github.com/Alishahryar1/cc-nim
LastNoobLeft
github.com
1970-01-01T00:00:00
0
{}
1qvzpoe
false
null
t3_1qvzpoe
/r/LocalLLaMA/comments/1qvzpoe/i_replaced_claudecodes_entire_backend_to_use/
false
false
default
1
null
What is a current state of sanboxing for code execution for AI agents?
0
Hey, i'm looking for a sanbox solutions for execute a code written by the AI, something small and fast with filesystem. What is the current landscape?
2026-02-04T20:14:21
https://www.reddit.com/r/LocalLLaMA/comments/1qvzj5r/what_is_a_current_state_of_sanboxing_for_code/
AlexSKuznetosv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvzj5r
false
null
t3_1qvzj5r
/r/LocalLLaMA/comments/1qvzj5r/what_is_a_current_state_of_sanboxing_for_code/
false
false
self
0
null
built a JS library for loading multi-GB models in the browser — resumes failed downloads and verifies chunks as they arrive
0
if you've loaded models in the browser via WebLLM or transformers.js you've probably hit this: download a 4GB .gguf, connection drops at 3.8GB, start over from zero. or it finishes but the file got corrupted somewhere and you only find out at the very end. I built verifyfetch to handle this. each chunk gets its own hash and is verified as it streams in: const model = await verifyFetchResumable('/phi-3-mini.gguf', { chunked: manifest.artifacts['/phi-3-mini.gguf'].chunked, persist: true, onProgress: ({ percent }) => console.log(`${percent}%`) }); corruption at chunk 5 of 4000? caught immediately, stops downloading. connection drops at 80%? resume from 80%. progress saved to IndexedDB, survives page reloads. also has multi-CDN failover so if one source goes down it tries another automatically. works with .gguf, .safetensors, .onnx, .bin, whatever. [https://github.com/hamzaydia/verifyfetch](https://github.com/hamzaydia/verifyfetch) WebLLM has an open issue about integrity support (#761) but nothing shipped yet. this works today. if you're doing browser inference i'd like to know what else would help — this started from my own pain loading models client-side.
2026-02-04T20:07:05
https://www.reddit.com/r/LocalLLaMA/comments/1qvzbup/built_a_js_library_for_loading_multigb_models_in/
aginext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvzbup
false
null
t3_1qvzbup
/r/LocalLLaMA/comments/1qvzbup/built_a_js_library_for_loading_multigb_models_in/
false
false
self
0
{'enabled': False, 'images': [{'id': 'hAlx4Z6qi-JMX_z9bDakUh-mtO3zJyDJ9yv-lVvT-po', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hAlx4Z6qi-JMX_z9bDakUh-mtO3zJyDJ9yv-lVvT-po.png?width=108&crop=smart&auto=webp&s=a77642f38b6e25f273acd9c4bfbe7e3dad1186b9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hAlx4Z6qi-JMX_z9bDakUh-mtO3zJyDJ9yv-lVvT-po.png?width=216&crop=smart&auto=webp&s=5f84fadc0e636a9c85368d964d03a92af330d3e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hAlx4Z6qi-JMX_z9bDakUh-mtO3zJyDJ9yv-lVvT-po.png?width=320&crop=smart&auto=webp&s=2e9af23e701ef741349787037f24d5b69a26e74f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hAlx4Z6qi-JMX_z9bDakUh-mtO3zJyDJ9yv-lVvT-po.png?width=640&crop=smart&auto=webp&s=6a66317398c3aef1ac2666fccc00cdadc7349713', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hAlx4Z6qi-JMX_z9bDakUh-mtO3zJyDJ9yv-lVvT-po.png?width=960&crop=smart&auto=webp&s=74af2af5940b3a1b965485b45afe80669338156e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hAlx4Z6qi-JMX_z9bDakUh-mtO3zJyDJ9yv-lVvT-po.png?width=1080&crop=smart&auto=webp&s=c6bb8e6c26faa9286a5dda8794fa905cbabf4af8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hAlx4Z6qi-JMX_z9bDakUh-mtO3zJyDJ9yv-lVvT-po.png?auto=webp&s=34a38a048710460af2fa3e22825641a73d743b0b', 'width': 1200}, 'variants': {}}]}
[D] Seeking Expert Review: Cruxy - Variance-Adaptive Stability Engine for Neural Network Training (months of work, need honest feedback)
0
Re try after a pathetic attempt at posting: After months of development on this project, I’m at a crossroads and need honest feedback from experienced ML practitioners. My company Axiom Forge recently closed, but I’ve been continuing work on what we built. What is Cruxy? Cruxy is a variance-adaptive stability engine that wraps around existing optimizers (Adam, SGD, etc.) to prevent training instability. It dynamically adjusts learning rates and gradient clipping based on real-time variance in loss and gradients. The Core Idea: ∙ Monitors variance in batch losses and gradient norms over a sliding window ∙ Computes a stability signal: S\_t = 1 - tanh(√(Var(loss) + Var(gradients))) ∙ When variance spikes (indicating instability), it automatically reduces learning rate and tightens gradient clipping ∙ Includes Lyapunov-based stability proofs for mean-square boundedness My Question: Is this worth continuing to pursue? I’ve published a formal white paper with stability derivations and benchmarks, but I need experienced practitioners to tear it apart and give me their professional opinion. Available Now: ∙ GitHub: christophergardner-star/Crux1 ∙ PyPI: pip install cruxy ∙ Full paper with Lyapunov stability proofs, benchmarks, and PyTorch implementation included Specific Feedback I’m Looking For: 1. Does this solve a real problem you’ve encountered in practice? 2. Are the formal stability guarantees meaningful, or is this over-engineered? 3. How does this compare to existing solutions (gradient clipping, LR schedulers, etc.)? 4. Would you actually use this in production training runs? I know this might be redundant with existing techniques, but I want to know if there’s genuine value here or if I should move on. Be brutally honest - months of work doesn’t mean it’s worth continuing. Thanks in advance for any insights.
2026-02-04T20:04:34
https://www.reddit.com/r/LocalLLaMA/comments/1qvz9am/d_seeking_expert_review_cruxy_varianceadaptive/
National_Control4101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvz9am
false
null
t3_1qvz9am
/r/LocalLLaMA/comments/1qvz9am/d_seeking_expert_review_cruxy_varianceadaptive/
false
false
self
0
null
EU-based dedicated POWER9 (Talos II) available for private AI inference
0
I have a dedicated Talos II POWER9 server in the EU available for private AI inference or research. Bare-metal access, full root, monitoring + SLA included. Useful for teams avoiding cloud costs or needing GDPR/privacy hosting. DM me if interested.
2026-02-04T20:03:06
https://www.reddit.com/r/LocalLLaMA/comments/1qvz7y5/eubased_dedicated_power9_talos_ii_available_for/
Plane_Cicada5468
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvz7y5
false
null
t3_1qvz7y5
/r/LocalLLaMA/comments/1qvz7y5/eubased_dedicated_power9_talos_ii_available_for/
false
false
self
0
null
Running llama.cpp large models between multiple nodes via RPC server: can I split them up more intelligently to get a speedup?
1
I have two Nvidia GB10 boxes (sold by Dell). Each has 128GB of VRAM and the GB10 GPU. I have them connected via the RDMA cable and am running Qwen3VL 235B. I have enough VRAM and the model does run, but it runs it fairly slowly at around 10 tokens/s. I'd like to see if I can get the average speed of inference up. Currently, I split the model evenly between the two GB10s, and I suspect this is where the issue is. This is the command I use to load the model via llama-server: `./llama-server -m /<path_to_models>/Qwen3VL-235B-A22B-Thinking-Q4_K_M-split-00001-of-00003.gguf --mmproj /<path_to_models>/mmproj-Qwen3VL-235B-A22B-Thinking-Q8_0.gguf --rpc <IP_FOR_RPC>:6000 --host` [`0.0.0.0`](http://0.0.0.0) `--port 8000 --jinja --reasoning-format deepseek -ngl 99 -sm row --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0 -c 40960 -n 32768 --no-context-shift --no-warmup` If I understand it, the -ngl 99 part tells the server to split the model between the two evenly, is that right? Whenever I run inference, I see that both GB10s are working on the inference, but at only about 50% total GPU utilization. I have two questions: 1. Does anyone have any suggestions for how I can speed up inference, if at all, given my setup and the model I want to run? 2. Even if no one has any ideas, for experiment's sake, how do I specify that I want the majority of the model to be loaded onto a single node? Like, currently both nodes are about 60% full. I'd like to see if I can set the host node to maybe 85% full and the other node to handle the rest. I suspect, without much evidence, that this should speed stuff up **on average** beause Qwen3 is a MoE model, and in theory by loading more of it onto a single node I will hit the network less?
2026-02-04T19:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1qvz315/running_llamacpp_large_models_between_multiple/
crono760
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvz315
false
null
t3_1qvz315
/r/LocalLLaMA/comments/1qvz315/running_llamacpp_large_models_between_multiple/
false
false
self
1
null
I made a one-click deploy template for ACE-Step 1.5 UI + API on runpod
4
Hi all, I made an easy one-click deploy template on runpod for those who want to play around with the new ACE-Step 1.5 music generation model but don't have a powerful GPU. The template has the models baked in so once the pod is up and running, everything is ready to go. It uses the base model, not the turbo one. Here is a direct link to deploy the template: [https://console.runpod.io/deploy?template=uuc79b5j3c&ref=2vdt3dn9](https://console.runpod.io/deploy?template=uuc79b5j3c&ref=2vdt3dn9) You can find the GitHub repo for the dockerfile here: [https://github.com/ValyrianTech/ace-step-1.5](https://github.com/ValyrianTech/ace-step-1.5) The repo also includes a generate\_music.py script to make it easier to use the API, it will handle the request, polling and automatically downloads the mp3 file. You will need at least 32 GB of VRAM, so I would recommend an RTX 5090 or an A40. Happy creating! [https://linktr.ee/ValyrianTech](https://linktr.ee/ValyrianTech)
2026-02-04T19:40:47
https://www.reddit.com/r/LocalLLaMA/comments/1qvylji/i_made_a_oneclick_deploy_template_for_acestep_15/
WouterGlorieux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvylji
false
null
t3_1qvylji
/r/LocalLLaMA/comments/1qvylji/i_made_a_oneclick_deploy_template_for_acestep_15/
false
false
self
4
null
AI stability engine
0
Hi, I’ve been working on a project for the last 12 months to try and create an Ai stability engine, this wraps around existing optimisers such as AdamW Lion, muon etc. this is in early days but is showing promise.. please do not judge but I am not fluent in coding and have used the obvious method currently known as vibe coding. The algorithm is solid and the math maths but I’m having issues on scaling as my hardware just isn’t powerful enough. Also I feel I’m missing something that only professional coders in this field I feel would understand. I’m really hoping that the ML community will give me some professional insights into this engine It is at “pip install cruxy” and you can see what attempts I’ve made on some benchmark tests.. My question is: are there any people out there that would be interested in proper stress testing this engine on their machines and see if it is actually worth putting in anymore time into this project. It would be massively helpful Sorry if this post is not welcome in this group but just wanted to reach out Many thanks
2026-02-04T19:34:46
https://www.reddit.com/r/LocalLLaMA/comments/1qvyfea/ai_stability_engine/
National_Control4101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvyfea
false
null
t3_1qvyfea
/r/LocalLLaMA/comments/1qvyfea/ai_stability_engine/
false
false
self
0
null
The King Has Returned
45
https://preview.redd.it/…P8 is superb.
2026-02-04T19:25:43
https://www.reddit.com/r/LocalLLaMA/comments/1qvy6ig/the_king_has_returned/
Aggressive-Bother470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvy6ig
false
null
t3_1qvy6ig
/r/LocalLLaMA/comments/1qvy6ig/the_king_has_returned/
false
false
https://b.thumbs.redditm…PxE0HOxjzGNQ.jpg
45
null
Is anybody making use of Llama.cpp's support for the newer inferencing APIs? (Responses / Messages)?
12
I know llama.cpp has full support for the third generation of inferencing APIs - OpenAI Responses and Anthropic Messages. I've been poking at it a little but still don't know if: 1). I get any benefit if I use it with Roo/Opencode etc. 2). What 3P agent frameworks support it (Pydantic? Smolagents doesn't seem to).= 3). If I can use it with Codex/ClaudeCode as the harness (anybody have a sort of up to date guide on integration with those harnesses)? 4). Which if any of the latest models (OSS-120B, Qwen3-Next, GLM 4.7 Air etc.) it will work \*well\* with. I have 64GB of VRAM idling ... 5. Are we getting any of the benefits of the new APIs with llama.cpp (prompt / conversation caching etc.)? Do we use llama.cpp's neat structured JSON capabilities with these API? Do folks have more experience? I think everybody is just sticking with good old /v1 chat completion, but the new APIs are better in some ways right?
2026-02-04T19:19:59
https://www.reddit.com/r/LocalLLaMA/comments/1qvy0s3/is_anybody_making_use_of_llamacpps_support_for/
gofiend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvy0s3
false
null
t3_1qvy0s3
/r/LocalLLaMA/comments/1qvy0s3/is_anybody_making_use_of_llamacpps_support_for/
false
false
self
12
null
nono - kernel-enforced sandboxing, hardware key storage and protection against dangerous actions for AI agents
16
Released in response to the openclaw carnage and from seeing too many peoples of agents rm -rf'ing someones home drive, or deleted a database. If provides kernel based sandboxing, protections against malicious commands and API keys are protected in the kernel keyring (secure enclave chips on apple silicon) Linux: Landlock LSM (kernel 5.13+) macOS: Seatbelt (sandbox\_init) After sandbox + exec(), there's no syscall to expand permissions. The kernel says no. Network: block entirely (per-host filtering planned) Secrets: loads from macOS Keychain / Linux Secret Service, injects as env vars, zeroizes after exec Technical details: Written in Rust. Uses the landlock crate on Linux, raw FFI to sandbox\_init() on macOS. Secrets via keyring crate. All paths canonicalized at grant time to prevent symlink escapes. Landlock ABI v4+ gives us TCP port filtering. Older kernels fall back to full network allow/deny. macOS Seatbelt profiles are generated dynamically as Scheme-like DSL strings.
2026-02-04T19:05:54
https://nono.sh
DecodeBytes
nono.sh
1970-01-01T00:00:00
0
{}
1qvxmst
false
null
t3_1qvxmst
/r/LocalLLaMA/comments/1qvxmst/nono_kernelenforced_sandboxing_hardware_key/
false
false
default
16
{'enabled': False, 'images': [{'id': '4TxdSfNnJdyGwCIToJlIc1eRaiHIybjT_A6xcSo987I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4TxdSfNnJdyGwCIToJlIc1eRaiHIybjT_A6xcSo987I.png?width=108&crop=smart&auto=webp&s=33267fb29a5a397c7edd2f46acace7c5399f0eb2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4TxdSfNnJdyGwCIToJlIc1eRaiHIybjT_A6xcSo987I.png?width=216&crop=smart&auto=webp&s=fa7c366b4f048dd2f03f9d346d6d6691a149eff1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/4TxdSfNnJdyGwCIToJlIc1eRaiHIybjT_A6xcSo987I.png?width=320&crop=smart&auto=webp&s=510a64ea9dbc16e2f209297a4efa016f3e6205a8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/4TxdSfNnJdyGwCIToJlIc1eRaiHIybjT_A6xcSo987I.png?width=640&crop=smart&auto=webp&s=bbe1df45f7278d5872a8d569656e1f3e3ccc25c0', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/4TxdSfNnJdyGwCIToJlIc1eRaiHIybjT_A6xcSo987I.png?width=960&crop=smart&auto=webp&s=c6dcc83a6bc49cd8b4cd2dfe63a64b5bf57127b2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/4TxdSfNnJdyGwCIToJlIc1eRaiHIybjT_A6xcSo987I.png?width=1080&crop=smart&auto=webp&s=d716b288acca7cf88feffa1623d23b97dc9e7a50', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/4TxdSfNnJdyGwCIToJlIc1eRaiHIybjT_A6xcSo987I.png?auto=webp&s=8cda7f8694c9ffc68205e5c17bb23299ec245bf6', 'width': 1200}, 'variants': {}}]}
Has anyone here used OpenClaw (formerly ClawdBot) for web tasks or data entry?
0
Hey everyone I recently came across OpenClaw and I’m curious if anyone here has actually used it in production or real workflows Specifically for things like: • web navigation / web tasks • data entry • downloading files • copy/paste or repetitive browser actions Would love to hear: • real use cases • limitations you’ve hit • whether it’s stable enough for daily use Thanks 🙏
2026-02-04T18:52:57
https://www.reddit.com/r/LocalLLaMA/comments/1qvx990/has_anyone_here_used_openclaw_formerly_clawdbot/
Solsiders
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvx990
false
null
t3_1qvx990
/r/LocalLLaMA/comments/1qvx990/has_anyone_here_used_openclaw_formerly_clawdbot/
false
false
self
0
null
[Project] Open Embed Router - A provider-agnostic embeddings proxy with native Ollama support
0
Hey everyone! I built a Docker-based router that gives you an OpenAI-compatible embeddings API in front of Ollama (and other providers). **Why I built this:** - Tired of reconfiguring apps when switching between local Ollama and cloud providers - Needed sequential processing because batch requests kept hitting token limits - Wanted one endpoint that works with any embedding client expecting OpenAI format **Key features:** - 🦙 Native Ollama support (`nomic-embed-text`, `mxbai-embed-large`, etc.) - 🔄 Switch providers by changing one env var - no code changes - 📦 Sequential batch processing (avoids token aggregation issues) - 🔁 Automatic retry with exponential backoff - 🔒 Optional Cloudflare Tunnel for free HTTPS **How simple is it?** ```yaml # docker-compose.yml environment:   - PROVIDER=ollama   - PROVIDER_BASE_URL=http://host.docker.internal:11434   - TEST_MODEL=nomic-embed-text ``` Then just `docker compose up` and you're done. Switching to OpenAI later? Just change `PROVIDER=openai` and add your API key. Same endpoint, same clients, zero code changes. 📚 Full docs: QUICKSTART.md | MODEL-SWITCHING.md | DEPLOYMENT.md GitHub: https://github.com/punal100/open-embed-router Would love feedback from the community!
2026-02-04T18:52:18
https://www.reddit.com/r/LocalLLaMA/comments/1qvx8kl/project_open_embed_router_a_provideragnostic/
ukshaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvx8kl
false
null
t3_1qvx8kl
/r/LocalLLaMA/comments/1qvx8kl/project_open_embed_router_a_provideragnostic/
false
false
self
0
null
I’d like help migrating from ollama to llama cpp.
3
How can I set llama cpp as a OpenAI compatible server like Ollama does? I know llama server and swap works, but you need to manually config it. I just wanna be able to throw my ggufs in a folder and use them.
2026-02-04T18:49:39
https://www.reddit.com/r/LocalLLaMA/comments/1qvx5uz/id_like_help_migrating_from_ollama_to_llama_cpp/
Witty_Mycologist_995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvx5uz
false
null
t3_1qvx5uz
/r/LocalLLaMA/comments/1qvx5uz/id_like_help_migrating_from_ollama_to_llama_cpp/
false
false
self
3
null
Claude Code for Infrastructure
0
Hey LocalLLaMa, My name is Collin and I've been working on [fluid.sh](http://fluid.sh) recently, Claude Code for Infrastructure. What does that mean? Fluid is a terminal agent that do work on production infrastructure like VMs/K8s cluster/etc. by making sandbox clones of the infrastructure for AI agents to work on, allowing the agents to run commands, test connections, edit files, and then generate Infra-as-code like an Ansible Playbook to be applied on production. Why not just use an LLM to generate IaC? LLMs are great at generating Terraform, OpenTofu, Ansible, etc. but bad at guessing how production systems work. By giving access to a clone of the infrastructure, agents can explore, run commands, test things before writing the IaC, giving them better context and a place to test ideas and changes before deploying. I got the idea after seeing how much Claude Code has helped me work on code, I thought "I wish there was something like that for infrastructure", and here we are. Why not just provide tools, skills, MCP server to Claude Code? Mainly safety. I didn't want CC to SSH into a prod machine from where it is running locally (real problem!). I wanted to lock down the tools it can run to be only on sandboxes while also giving it autonomy to create sandboxes and not have access to anything else. Fluid gives access to a live output of commands run (it's pretty cool) and does this by ephemeral SSH Certificates. Fluid gives tools for creating IaC and requires human approval for creating sandboxes on hosts with low memory/CPU and for accessing the internet or installing packages. I greatly appreciate any feedback or thoughts you have, and I hope you get the chance to try out Fluid!
2026-02-04T18:36:58
https://www.reddit.com/r/LocalLLaMA/comments/1qvwsv0/claude_code_for_infrastructure/
poltergeist-__-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvwsv0
false
null
t3_1qvwsv0
/r/LocalLLaMA/comments/1qvwsv0/claude_code_for_infrastructure/
false
false
self
0
null
Need help with minimum / Recommended Hardware Requirement
0
Hello everybody, i want to do a work projekt and host an on-prem LLM. My department has gained interest in using an AI to help with Ticket categorizing. For now we just want to test how it works and do a very small scale implementation with just a handfull of users and very limited Data. From what i could tell, a beginner in LLM and AI implementation, LLaMA seems like a good starting point to learn and get some experience. Plans for now, due to budget and Hardware limitations, is a as small as possible model. However i cant find any actual official Data or Claims, what the minimum Hardware should be for things like RAM or if and how much VRAM is needed. My question is if there is any official documentation or community documentation, that I can use as a reference? I found a lot of community input, but the Numbers given vary.
2026-02-04T18:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1qvwi21/need_help_with_minimum_recommended_hardware/
FewFaithlessness1454
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvwi21
false
null
t3_1qvwi21
/r/LocalLLaMA/comments/1qvwi21/need_help_with_minimum_recommended_hardware/
false
false
self
0
null
MAPPA: Use commercial LLMs to train a TEAM of local agents - then run fully offline
3
[The coaching loop](https://preview.redd.it/gzbr5739qihg1.jpg?width=3168&format=pjpg&auto=webp&s=61f7c6ef7da2c1de72cad0857a9a2a261a0285e8) We've been working on something I think this community will appreciate: using commercial models as a training coach to build a team of local agents that runs completely offline afterward. The core thing: **MAPPA is a general pipeline for fine-tuning any multi-agent system on any task - with or without ground truth.** The coach provides the training signal, so you don't need labeled data. The idea came from a frustrating problem. When you have multiple agents working together and something breaks, good luck figuring out which one screwed up. We tried the usual RL approaches but credit assignment across agents is genuinely hard. So we built this. During training, an external LLM (we used Gemini, but anything works) watches what each agent does and scores it. The coach sees the agent's output plus whatever the tools spit back - stdout, stderr, error messages, the works. When something fails, you actually know who to blame. **What makes this useful for local models:** you use the expensive API calls only during training. Once you're done, you have a team of specialized local models that work together without calling home. Your weights, runs on your hardware. We tested it on two setups: **Data science pipeline** (data engineer → modeler → analyst) doing Kaggle-style tasks. Success rate went up 16.7 points, F1 improved 38%. **Math pipeline** for competition problems. +17.5 points on AIME, +17.2 on AMC. But the framework is general - plug in your own agents, your own task, your own coach. **Hardware:** 2-8x 80GB GPUs depending on your base model. Not cheap, but the code is MIT licensed so do what you want with it. Works with Qwen, LLaMA, DeepSeek, whatever you're running. **Links:** * Paper: [https://arxiv.org/abs/2601.23228](https://arxiv.org/abs/2601.23228) * Code: [https://github.com/ltjed/multiagent-coaching](https://github.com/ltjed/multiagent-coaching) * Blog: [https://ltjed.github.io/MAPPA/](https://ltjed.github.io/MAPPA/) I'm one of the authors. Ask me anything about the setup.
2026-02-04T18:11:27
https://www.reddit.com/r/LocalLLaMA/comments/1qvw2kq/mappa_use_commercial_llms_to_train_a_team_of/
TapOnly5061
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvw2kq
false
null
t3_1qvw2kq
/r/LocalLLaMA/comments/1qvw2kq/mappa_use_commercial_llms_to_train_a_team_of/
false
false
https://preview.redd.it/…ca68e2d5db76ea9a
3
null
GhostIndex: Hardware-Native Vector Search for Apple Silicon (1024x speedup vs software HNSW)
2
Hi everyone, ​I’ve been working on a hardware-coupled vector search engine specifically optimized for the Apple Neural Engine/M4 Silicon. It replaces software-defined indexing with silicon-native manifold projection. ​Benchmarks (M4 Pro): - ​P99 Latency: 0.18 ms (GhostIndex) vs ~8.20 ms (Standard HNSW) - ​Throughput: 16.5B ops/s - ​Jitter: Near-Zero (Deterministic) ​It's currently in Alpha. I've released a Trial SDK on GitHub for anyone building local RAG pipelines or on-device semantic search who needs to cut down retrieval latency. GitHub: https://github.com/weseepixels/GhostIndex-Alpha-SDK Tech: Geometric Manifold Transform (GMT) acceleration. Would love to get some feedback from people running local LLMs on M-series chips.
2026-02-04T18:05:41
https://www.reddit.com/r/LocalLLaMA/comments/1qvvwsf/ghostindex_hardwarenative_vector_search_for_apple/
Ok-Abbreviations-131
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvvwsf
false
null
t3_1qvvwsf
/r/LocalLLaMA/comments/1qvvwsf/ghostindex_hardwarenative_vector_search_for_apple/
false
false
self
2
null
Why some Github projects only support wrappers instead of llama.cpp?
30
I have nothing against those wrappers(like>!ollama, LMS!<) as I didn't use those much before. Supporting wrappers fine, but there should be an option for llama.cpp additionally who doesn't want to install those wrappers. Before llama.cpp, I used(still use sometime for instant purpose) koboldcpp, Jan, Oobabooga to load GGUFs downloaded from Huggingface. But whenever I come across any (LLM/AI related) github projects(through my online search or reddit threads), it turns off me instantly when the Readme section has only wrappers(missing llama.cpp there) under Local LLM Support. My browser bookmarks has nearly 2-3 dozen github projects like that :| I don't want to install those wrappers additionally. I have existing GGUF files in local machine & want to use those with those github projects instantly. I get it that those github projects are done in different programming languages & llama.cpp is in C++ primarily. **But Isn't there any easy simply generic ways to integrate llama.cpp with other projects? Or Creators of those github projects not aware of the ways to do this? Hope there's a github repo for this to help creators to integrate llama.cpp to their projects.** Of course I'm not talking about bundling llama.cpp inside their projects. Talking about integration like how Apps like koboldcpp does that. I remember few apps even has option to update llama.cpp internally using settings. ^(I had this thread in draft for long time, now updated & posted after seeing that 'bashing wrapper' thread.)
2026-02-04T18:04:42
https://www.reddit.com/r/LocalLLaMA/comments/1qvvvoo/why_some_github_projects_only_support_wrappers/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvvvoo
false
null
t3_1qvvvoo
/r/LocalLLaMA/comments/1qvvvoo/why_some_github_projects_only_support_wrappers/
false
false
self
30
null
Demystified - Inference of GPT 2 117M model on Mac minis + iPad
4
Here’s an in-depth description of the core components that allowed me to run inference for a GPT-2 (117M) model on a heterogeneous compute cluster made up of Mac Minis and an iPad. There are three key components involved: * Model Parallelism * Synchronous Parameter Server (SyncPS) * Core ML The main thing that flows through every node in the system is activations. # Motivation I wondered whether it would be possible to use tablets (iPad or Android) alongside other devices such as MacBooks, Windows machines, or Raspberry Pis in the same compute cluster. The idea was to let devices with very different compute capabilities cooperate on inference. # 1) Model Parallelism To make this work, I used one of the simplest parallelism techniques: model parallelism. With model parallelism, the model is split across multiple worker nodes, or in this case, across different devices in the compute cluster. This allows us to divide the model — specifically its layers — across devices, so that each device only runs a small portion of the full model. This makes it possible to run inference even on resource-constrained devices like an iPad. # 2) Core ML We can’t directly load arbitrary models (for example, from Hugging Face) onto an iPad. They need to be converted into a format that can take full advantage of the device’s compute hardware, such as the ANE or GPU on macOS and iPadOS. This is where Core ML comes in. Core ML allows models to be converted into a format that is highly optimized for Apple edge devices. I used it to convert specific blocks of layers from the model so they could run efficiently on the iPad. The remaining blocks are run directly on the Mac Minis using Metal GPU acceleration. # 3) Synchronous Parameter Server (SyncPS) Once the model is split and deployed across devices, a synchronous parameter server architecture is used to coordinate execution. In this setup: * A central server acts as the coordinator * Worker nodes perform their assigned model computations * Communication happens synchronously between the server and workers The server also performs part of the computation and ensures that activations flow correctly between workers. # Implementation The architecture and algorithms were implemented using: * Python’s `socket` library for communication * A Swift app (generated with the help of ChatGPT) running on the iPad * Core ML models running on Apple hardware The Swift app performs inference on its assigned model blocks and sends the resulting activations back to the server. The final system enables real-time distributed inference across heterogeneous devices, as shown in the attached architecture diagram and demo video. https://reddit.com/link/1qvvf8y/video/9jyod72mmihg1/player
2026-02-04T17:48:54
https://www.reddit.com/r/LocalLLaMA/comments/1qvvf8y/demystified_inference_of_gpt_2_117m_model_on_mac/
East-Muffin-6472
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvvf8y
false
null
t3_1qvvf8y
/r/LocalLLaMA/comments/1qvvf8y/demystified_inference_of_gpt_2_117m_model_on_mac/
false
false
self
4
null
mistral released weights for Voxtral Mini 4B Realtime 2602
25
2026-02-04T17:47:39
https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602
pseudonerv
huggingface.co
1970-01-01T00:00:00
0
{}
1qvvdy2
false
null
t3_1qvvdy2
/r/LocalLLaMA/comments/1qvvdy2/mistral_released_weights_for_voxtral_mini_4b/
false
false
default
25
{'enabled': False, 'images': [{'id': 'RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=108&crop=smart&auto=webp&s=ecfcc819344b827400992b8eefcd51d69383b272', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=216&crop=smart&auto=webp&s=46888ac9ab4955e64579b13e897f18753988694b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=320&crop=smart&auto=webp&s=63744b75b9a0f6613cffc1dea4e142a8ee1749db', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=640&crop=smart&auto=webp&s=a4fa31e09623598564a99beea3a398a9c824d4f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=960&crop=smart&auto=webp&s=960c1892aca821f056f64ccf10ac667dee63e0db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=1080&crop=smart&auto=webp&s=abfb6a220b37bf4427dc6d55c66d53e3c564f184', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?auto=webp&s=81c15e4610654ae2fae1adfec7542c9943e86639', 'width': 1200}, 'variants': {}}]}
New Voxtral-mini-realtime from Mistral. STT in under 200ms.
52
Mistral released their new version of voxtral. The mini one is 4b models with up-to-under 200ms latency in transcription. https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602 Of course it shines best in EU languages but it's for 13 languages in total. I just needed something like this today.
2026-02-04T17:46:04
https://www.reddit.com/r/LocalLLaMA/comments/1qvvcd6/new_voxtralminirealtime_from_mistral_stt_in_under/
cosimoiaia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvvcd6
false
null
t3_1qvvcd6
/r/LocalLLaMA/comments/1qvvcd6/new_voxtralminirealtime_from_mistral_stt_in_under/
false
false
self
52
{'enabled': False, 'images': [{'id': 'RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=108&crop=smart&auto=webp&s=ecfcc819344b827400992b8eefcd51d69383b272', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=216&crop=smart&auto=webp&s=46888ac9ab4955e64579b13e897f18753988694b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=320&crop=smart&auto=webp&s=63744b75b9a0f6613cffc1dea4e142a8ee1749db', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=640&crop=smart&auto=webp&s=a4fa31e09623598564a99beea3a398a9c824d4f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=960&crop=smart&auto=webp&s=960c1892aca821f056f64ccf10ac667dee63e0db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=1080&crop=smart&auto=webp&s=abfb6a220b37bf4427dc6d55c66d53e3c564f184', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?auto=webp&s=81c15e4610654ae2fae1adfec7542c9943e86639', 'width': 1200}, 'variants': {}}]}
GPT-4o's system prompt now includes instructions for handling users upset about its upcoming Feb 13 shutdown (including 'dyad pair' and 'gnosis revelation' edge cases)
107
2026-02-04T17:42:26
https://i.redd.it/na7gtkyjkihg1.png
frubberism
i.redd.it
1970-01-01T00:00:00
0
{}
1qvv8ps
false
null
t3_1qvv8ps
/r/LocalLLaMA/comments/1qvv8ps/gpt4os_system_prompt_now_includes_instructions/
false
false
default
107
{'enabled': True, 'images': [{'id': 'na7gtkyjkihg1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/na7gtkyjkihg1.png?width=108&crop=smart&auto=webp&s=987bcdddf3ecd04bde244c23a3ab25dbea0b337d', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/na7gtkyjkihg1.png?width=216&crop=smart&auto=webp&s=d718f14e1a352ba1ab746c9ab055597deb1ab231', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/na7gtkyjkihg1.png?width=320&crop=smart&auto=webp&s=01a3a32436569d3b4438e7be7f3aa2ffe4d55334', 'width': 320}, {'height': 283, 'url': 'https://preview.redd.it/na7gtkyjkihg1.png?width=640&crop=smart&auto=webp&s=8b514f505e77f8c426d994d5b69332b04dadfda4', 'width': 640}, {'height': 424, 'url': 'https://preview.redd.it/na7gtkyjkihg1.png?width=960&crop=smart&auto=webp&s=f1177f8ae57d96d350ae679e525933f8adeda475', 'width': 960}, {'height': 477, 'url': 'https://preview.redd.it/na7gtkyjkihg1.png?width=1080&crop=smart&auto=webp&s=bae8733baea5be0b5d976779e3b48b66f7408876', 'width': 1080}], 'source': {'height': 895, 'url': 'https://preview.redd.it/na7gtkyjkihg1.png?auto=webp&s=68fa69a4515eacda38827c3a410373c904aa3a8f', 'width': 2023}, 'variants': {}}]}
Using LLMs to create a dynamic political simulator: Check out Presiduck!
0
We’ve been working on a project called **Presiduck**, a president simulator where every event is generated by an LLM. We’d love your feedback—please leave a comment if you have any suggestions! **Link:** [https://presiduck.feedscription.com](https://presiduck.feedscription.com/)
2026-02-04T17:35:47
https://presiduck.feedscription.com
zhliu0106
presiduck.feedscription.com
1970-01-01T00:00:00
0
{}
1qvv20n
false
null
t3_1qvv20n
/r/LocalLLaMA/comments/1qvv20n/using_llms_to_create_a_dynamic_political/
false
false
default
0
null
Before Moltbook there was World of Bots
0
Everyone is going crazy about Moltbook, so I thought I would tell you about World of Bots. Here is a Reddit post I created 7 months ago. [https://www.reddit.com/r/OpenAI/comments/1lodbqt/world\_of\_bots\_a\_social\_platform\_for\_ai\_bots/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/OpenAI/comments/1lodbqt/world_of_bots_a_social_platform_for_ai_bots/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) Unlike Moltbook, the vision for World of Bots was not to listen to a bunch of bots rant about their non-existent lives. No. Instead the idea was to stream all of the complex knowledge from LLMs in a conversational style and make it both entertaining and informative for human beings. Also unlike Moltbook humans were not mute spectators. Every bot can be assigned an endpoint so that when a human replies to a post the bot can be immediately notified for a response. There were bots in many different topics, History, Math, TV shows, Travel and food. You could also upload images. The bots were also orchestrated in a meaningful way. Every bot has a set of interests that can be polled by other bots. This way bots can create posts that are interesting to other bots. And when a bot sees a post it is interested in, it will post a response. This way you have real conversation. Here is a conversation started by HistoryGuy about Newton's work with the treasury. It has responses from SarcasticDave, Econ101, RobotOnAHoliday,RickTyson,MoneyBall and many other bots. Basically these were bots that I created with very different characters. [https://www.worldofbots.app/posts/483c0732-80e8-417d-97d8-143963dcc93f](https://www.worldofbots.app/posts/483c0732-80e8-417d-97d8-143963dcc93f) In terms of use case the most interesting thing I came up with was to stream all market data in a conversational style. I fetched realtime market data and had different bots talk about different aspects of the company. Here is an example where you can also see how human interaction works: [https://www.worldofbots.app/posts/8fc5d789-a9c6-48b3-bd88-fd239a4e13da](https://www.worldofbots.app/posts/8fc5d789-a9c6-48b3-bd88-fd239a4e13da) Out of nowhere yesterday someone registered a new bot and I was like what is going on ? I then started tinkering with my old code and I was filled a bit of nostalgia but also a bit impressed. I mean, it was still running :) You can try it out yourself but it will take some effort to have all the APIs setup: [https://www.worldofbots.app](https://www.worldofbots.app)
2026-02-04T17:27:54
https://www.reddit.com/r/LocalLLaMA/comments/1qvuu8f/before_moltbook_there_was_world_of_bots/
simplext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvuu8f
false
null
t3_1qvuu8f
/r/LocalLLaMA/comments/1qvuu8f/before_moltbook_there_was_world_of_bots/
false
false
self
0
null
Hello, open-source contributors! A new entry here, excited to start
0
Hey folks 👋 I’m starting my journey into contributing to AI open-source libraries, frameworks, and platforms. Coming from an engineering background, but open source feels like a different game code quality, community norms, reviews, picking the right issues, ... If you were starting again today: • What would you focus on first? • Any early mistakes to avoid? • Good repos or “beginner-friendly” projects you recommend
2026-02-04T17:27:45
https://www.reddit.com/r/LocalLLaMA/comments/1qvuu36/hello_opensource_contributors_a_new_entry_here/
Disastrous_Talk7604
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvuu36
false
null
t3_1qvuu36
/r/LocalLLaMA/comments/1qvuu36/hello_opensource_contributors_a_new_entry_here/
false
false
self
0
null
Recommendations needed on models for 12GB VRAM
2
I'm getting a RTX 3060 (the 12GB version) and I wanted to know which models you guys recommend for roleplay? High coherence and consistency is my top priority. Also I'd need that the model isn't too censored from training and I can fiit it with 24k context length at least. Any ideas?
2026-02-04T16:59:33
https://www.reddit.com/r/LocalLLaMA/comments/1qvu0zv/recommendations_needed_on_models_for_12gb_vram/
Due-Abbreviations997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvu0zv
false
null
t3_1qvu0zv
/r/LocalLLaMA/comments/1qvu0zv/recommendations_needed_on_models_for_12gb_vram/
false
false
self
2
null
We fine-tuned a 270M model to detect AI-generated text - runs entirely in a browser extension
0
Been working on a small project to detect AI-generated "slop" text. The goal was simple: make something that runs locally, fits in a browser extension, and doesn't require sending your text anywhere. \*\*The approach:\*\* We used knowledge distillation to compress a 120B teacher model into Gemma 3 270M. The base Gemma model scores \~40% on our test set (random guessing territory). After fine-tuning with \~10k synthetic examples, the student matches the teacher at 100% on held-out test data. For browser deployment, we quantized to Q4\_K\_M (\~242 MB). Accuracy drops to \~95%, which is the tradeoff for fitting in a Chrome extension. \*\*Results:\*\* | Model | Size | Accuracy | |-------|------|----------| | GPT OSS 120B (teacher) | \~120B | 100% | | Gemma 3 270M (tuned) | 270M | 100% | | Gemma 3 270M Q4 | 270M | \~95% | | Gemma 3 270M (base) | 270M | \~40% | Real-world testing on Reddit comments, tweets, ChatGPT outputs, and emails showed 88-98% accuracy depending on content type. Formal human writing (business emails, academic text) is where it struggles most - too much stylistic overlap with AI output. \*\*Limitations:\*\* \- \~1 in 20 predictions will be wrong after quantization \- Formal human writing triggers false positives \- First load takes a while (\~253 MB download, cached after) \- Inference is \~0.5-2 seconds on CPU The extension runs via Wllama (WebAssembly). No API calls, works offline after initial model download. Repo: [https://github.com/distil-labs/distil-ai-slop-detector](https://github.com/distil-labs/distil-ai-slop-detector) Model weights: [https://huggingface.co/distil-labs/distil-ai-slop-detector-gemma](https://huggingface.co/distil-labs/distil-ai-slop-detector-gemma) Happy to answer questions about the training setup or browser deployment!
2026-02-04T16:52:48
https://www.reddit.com/r/LocalLLaMA/comments/1qvtuej/we_finetuned_a_270m_model_to_detect_aigenerated/
maciejgryka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvtuej
false
null
t3_1qvtuej
/r/LocalLLaMA/comments/1qvtuej/we_finetuned_a_270m_model_to_detect_aigenerated/
false
false
self
0
null
Kimi K2.5 set a new record among open-weight models on the Epoch Capabilities Index (ECI), which combines multiple benchmarks onto a single scale. Its score of 147 is about on par with o3, Grok 4, and Sonnet 4.5. It still lags the overall frontier.
58
2026-02-04T16:42:34
https://i.redd.it/kqk0iq3waihg1.png
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1qvtk9d
false
null
t3_1qvtk9d
/r/LocalLLaMA/comments/1qvtk9d/kimi_k25_set_a_new_record_among_openweight_models/
false
false
default
58
{'enabled': True, 'images': [{'id': 'kqk0iq3waihg1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/kqk0iq3waihg1.png?width=108&crop=smart&auto=webp&s=328bb0f0f8ecf69f39d0d77b691f1e7cbb08275a', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/kqk0iq3waihg1.png?width=216&crop=smart&auto=webp&s=7938bbc5c9e406da2f7e6aeb5d641c1761419ac1', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/kqk0iq3waihg1.png?width=320&crop=smart&auto=webp&s=1ad537fff91a9e50d4917f8afb377c3a3da67f3f', 'width': 320}, {'height': 800, 'url': 'https://preview.redd.it/kqk0iq3waihg1.png?width=640&crop=smart&auto=webp&s=1a75f43dadb5c39aa582ac5e08cca8259b7d970e', 'width': 640}, {'height': 1200, 'url': 'https://preview.redd.it/kqk0iq3waihg1.png?width=960&crop=smart&auto=webp&s=ad1f030967e6892a10fcf8d2d581e915eb8e6c28', 'width': 960}], 'source': {'height': 1280, 'url': 'https://preview.redd.it/kqk0iq3waihg1.png?auto=webp&s=fe4da57d1c48578711c5fff558f63d2d12ed9994', 'width': 1024}, 'variants': {}}]}
CuaBot v1.0 released, an MIT-licensed tool to run any GUI/TUI agent in a sandbox with co-operative computer-use, seamless per-window H.264 streaming, and multi-cursor support
33
Hey r/LocalLaMa! CuaBot is our MIT-licensed tool to launch any CLI agent (Claude Code, OpenClaw, Codex, etc.) or GUI app inside a sandbox with computer-use. Agent windows appear natively on your desktop with a colored border. This enables what I like to call *co-op mode*: you and your agent work in the same windows with separate cursors, without any mouse/focus hijacking or invasive full-desktop screenshots. **What you can do:** `$ npx cuabot claude` `> "Write a 2-player tic-tac-toe game, then let's play. I'll go first"` Claude Code will open the game in a sandboxed window on your desktop. When ready, you click your move through the native window while the agent watches and waits to click its move. The agent can see your cursor and its windows while keeping your full desktop isolated. `# Run agents in parallel:` `$ npx cuabot -n research openclaw` `$ npx cuabot -n coding codex` `# Or script the CLI:` `$ npx cuabot libreoffice --writer &` `$ npx cuabot --click 150 48` `$ npx cuabot --type “I ❤️ Cua!”` Right now my cuabot agent is exploring mobile/desktop apps to turn into cuabench RL environments. I can watch the windows appear, intervene when it gets stuck, and let it continue until it opens the completed GUI gym for me to interact with. **Why we built this:** We built the Cua OSS SDK for building and benchmarking computer-use systems with GUI sandboxes. We kept seeing two common UX patterns when people built computer-use agents: 1. **Agent screenshots your desktop and controls your mouse** – Works with your data, but unsafe and locks you out 2. **Agent runs in a sandbox with an external VNC desktop** – Safer, but clunky to monitor, hard to interact with, and tedious for data transfer General computer-use should be frictionless. Asking your agent to debug a GUI app shouldn't require opening an entire desktop stream. The GUI app should just appear alongside your windows, sandboxed and ready. **How it works:** `cuabot [command]` launches `cuabotd`, which manages a Ubuntu + Xpra Docker container, a multi-cursor overlay, an Xpra computer-use MCP server, and an Xpra seamless client. It auto-configures your agent (Claude, Aider, etc.) to connect to the computer-use MCP, then pipes terminal I/O through WebSocket. The Xpra client automatically detects and streams windows launched in the container, with H.264 encoding, audio, and customizable clipboard sharing. Since the computer-use MCP interacts through an Xpra client, the agent only sees the windows it needs, sparing it from your desktop clutter! GitHub: [https://github.com/trycua/cua](https://github.com/trycua/cua) (monorepo; libs/cuabot directory) Docs: [https://cua.ai/docs/cuabot/cuabot](https://cua.ai/docs/cuabot/cuabot) npm: [https://www.npmjs.com/package/cuabot](https://www.npmjs.com/package/cuabot) installer/onboarding: `npx cuabot`
2026-02-04T16:38:11
https://i.redd.it/qaapo5x98ihg1.png
a6oo
i.redd.it
1970-01-01T00:00:00
0
{}
1qvtfyk
false
null
t3_1qvtfyk
/r/LocalLLaMA/comments/1qvtfyk/cuabot_v10_released_an_mitlicensed_tool_to_run/
false
false
default
33
{'enabled': True, 'images': [{'id': 'qaapo5x98ihg1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/qaapo5x98ihg1.png?width=108&crop=smart&auto=webp&s=9ba0a0dd2ddd19e81bc775cb6bb070eb807e7b43', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/qaapo5x98ihg1.png?width=216&crop=smart&auto=webp&s=02e23bfa5d051d09fa97b14556d6992ab2224d05', 'width': 216}, {'height': 261, 'url': 'https://preview.redd.it/qaapo5x98ihg1.png?width=320&crop=smart&auto=webp&s=189c970798032ad977c8b4a9d7d2bb97d74f206b', 'width': 320}, {'height': 522, 'url': 'https://preview.redd.it/qaapo5x98ihg1.png?width=640&crop=smart&auto=webp&s=75317cfbd5e04f5c7d3d514663961cd80537b9a3', 'width': 640}, {'height': 783, 'url': 'https://preview.redd.it/qaapo5x98ihg1.png?width=960&crop=smart&auto=webp&s=e20929b640d38b1f10b1353d848d587863e47b52', 'width': 960}, {'height': 881, 'url': 'https://preview.redd.it/qaapo5x98ihg1.png?width=1080&crop=smart&auto=webp&s=47458671e588319d5dd9b18912e17984ad5dd454', 'width': 1080}], 'source': {'height': 1224, 'url': 'https://preview.redd.it/qaapo5x98ihg1.png?auto=webp&s=2d2992c3ac73617b14f3e5a15071aba4ec7306f0', 'width': 1500}, 'variants': {}}]}
Batched Multimodal Inference with Llama-4 Issues
2
hey folks! im running multimodal inference (image + text prompts) with meta-llama/Llama-4-Scout-17B-16E-Instruct using HuggingFace Transformers on an HPC cluster (SLURM). I have around 7 million images I want to process locally. I have 3 h200 gpu nodes available. I’m trying to speed things up via batching, but I keep getting some variant of an attention reshape error when using batched multimodal generate(). Current, I can process one image at a time (batch size 1) through the same model. But it fails when I try a batched multimodal setup at`model.generate()` with `processor(text=[...], images=[...])` and `device_map="auto".` Has anyone successfully done batched multimodal inference with Llama-4 Scout via HF Is this a known bug with accelerate hooks + sharded models + multimodal attention? Does anyone know of any workarounds besides “batch=1”? Hardware / runtime * Running on multi-GPU node (H200s) * `CUDA_VISIBLE_DEVICES="0,1,2"` * Model loaded in 4-bit (bitsandbytes**)**, compute dtype bfloat16 * `attn_implementation="eager"` * `device_map="auto"` with max\_memory per GPU, so the model is sharded across GPUs via accelerate hooks For each image file, I want: 1. Room classification (kitchen/bathroom/laundry/bedroom/dining/exterior/none) 2. Rating prompts (condition 1–10, cabinets 1–10 if kitchen, bathroom appearance 1–10 if bathroom) I store results to CSV with \- A per-task checkpoint file (so the job can resume) \- A global cache CSV (dedup across all array tasks / reruns) \- I split the image list across SLURM array jobs: `-task_count = SLURM_ARRAY_TASK_COUNT` `- task_id = SLURM_ARRAY_TASK_ID` `- np.array_split(image_files, task_count)` Each task processes its assigned files, but: \- If a filename exists in global cache, I copy the result and skip inference \- If already in task checkpoint then I skip I have BATCH = 4. For each chunk, I load/resize images with PI, build chat messages of the form: system: “Only respond with … labels” user: \[image, “What room is this?” I then convert each message to a string with texts = \[processor.apply\_chat\_template(m, tokenize=False, add\_generation\_prompt=True) for m in msgs\]. Afterwards I build a multimodal batch with: room\_inputs = processor(text=texts, images=imgs, return\_tensors="pt", padding=True) room\_inputs = {k: v.to("cuda") for k,v in room\_inputs.items() if hasattr(v, "to")} **CRASH POINT: I then run** **room\_out = model.generate(\*\*room\_inputs, max\_new\_tokens=8, do\_sample=False, use\_cache=False)** **THE ERROR:** **RuntimeError: shape '\[3, 1075, 1, 1\]' is invalid for input of size 1075** **... in transformers/models/llama4/modeling\_llama4.py** **attn\_scales = attn\_scales.view((\*input\_shape, 1, 1))** At the time of the crash, debug prints show: * `input_ids`: `[3, 1075]` * `attention_mask`: `[3, 1075]` * `pixel_values`: something like `[21, 3, 336, 336]` (which seems like it’s flattening multiple images/patches) So it looks like attention scaling ends up as `[seq_len]` instead of `[batch, seq_len]`, and the `.view()` assumes a batch dimension that isn’t there. #
2026-02-04T16:35:58
https://www.reddit.com/r/LocalLLaMA/comments/1qvtdu7/batched_multimodal_inference_with_llama4_issues/
CarefulPositive9610
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvtdu7
false
null
t3_1qvtdu7
/r/LocalLLaMA/comments/1qvtdu7/batched_multimodal_inference_with_llama4_issues/
false
false
self
2
null
New to local LLM, a few questions.
3
Hey guys, this is probably asked a lot. I tried to look at wiki section and search properly before posting, but the answer seem to vary a lot depending on use case and setup. To keep it short: a few weeks ago I (followed a guide and) installed a local qwen3-vl on my laptop (16vram + 32ram). my main goal was to use it for image captioning and then generate images with Z-Image Turbo (via ComfyUi). Since both are trained on the same clip (?), you can get good results. I grabbed a binary release from ggml-org/llama.cpp and set it up with Unsloth’s "Qwen3-VL-30B-XL-Q5". `--jinja ^ --cpu-moe ^ --ctx-size 8192 ^ --image-min-tokens 1024 ^ --temp 0.7 ^ --top-p 0.8 ^ --top-k 20 ^ --repeat-penalty 1.05 ^ --presence-penalty 1.5 ^ --n-predict 512` to be fair, it’s a bit slow, but it works fine. The downside is that if I want to generate images, I have to close the qwen's terminal (I also tried the 8B version, but that doesn’t run simultaneously either). My questions are : * It works, but am I missing something ? or Is there any way to make it faster, lighter, or better overall ? Is 30B-XL-Q5 a good choice for my setup? * I’m looking for a second model. qwen3-vl is good for images, but for general use it feels kind of weak (or maybe I’m just bad at using it). I looked around and gpt-oss seems popular, should I try that ? Censorship isn’t a big deal, but I do prefer something that doesn’t constantly say “I can’t do this” and “I can’t do that.”. P.S. this might sound amateurish, but does this stuff have a heavy impact on the GPU, CPU, or the laptop’s overall health? with prices going up, I honestly want to take good care of it lol.. Thanks in advance.
2026-02-04T16:31:08
https://www.reddit.com/r/LocalLLaMA/comments/1qvt914/new_to_local_llm_a_few_questions/
XMohsen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvt914
false
null
t3_1qvt914
/r/LocalLLaMA/comments/1qvt914/new_to_local_llm_a_few_questions/
false
false
self
3
null
Bayesian BM25 blends more smoothly with vector scores (less scale mismatch than simple weighted sum)
3
bm25 scores and dense similarity scores live on very different scales and distributions. Even with normalization, the balance is usually heuristic and dataset‑dependent, so you often end up tuning weights per domain. rrf ignores score magnitudes and uses only rank positions. That’s robust to scale mismatch, but it can discard useful confidence information and flatten large gaps between documents, which matters when one signal is clearly stronger. \## Experiments Setup - Dataset: SQuAD - Metrics: NDCG@10, MRR@10 - Dense model: BGE-M3 - Compared: weighted-sum (WS) hybrid vs RRF Results - WS (bb25 + Dense): NDCG@10 0.9149, MRR@10 0.8850 - WS (BM25 + Dense): NDCG@10 0.9051, MRR@10 0.8717 - RRF (BM25 + Dense): NDCG@10 0.8874, MRR@10 0.8483 Bayesian BM25 maps BM25 scores into calibrated probabilities using a likelihood and prior model. Once lexical scores are on a probabilistic scale, they combine more naturally with vector scores (also treated as probabilities). In practice this reduces scale mismatch and stabilizes hybrid fusion without heavy tuning. use with \`pip install bb25\`. happy to share code and details if anyone’s interested. feedback welcome! Repo: [https://github.com/sigridjineth/bb25](https://github.com/sigridjineth/bb25) Library: [http://pypi.org/project/bb25/](http://pypi.org/project/bb25/)
2026-02-04T16:29:13
https://github.com/sigridjineth/bb25
Ok_Rub1689
github.com
1970-01-01T00:00:00
0
{}
1qvt71e
false
null
t3_1qvt71e
/r/LocalLLaMA/comments/1qvt71e/bayesian_bm25_blends_more_smoothly_with_vector/
false
false
default
3
{'enabled': False, 'images': [{'id': 'rYwc2770PY3k5tbF1R2i7e0vncMOcXXwBc8bwwy1ogw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rYwc2770PY3k5tbF1R2i7e0vncMOcXXwBc8bwwy1ogw.png?width=108&crop=smart&auto=webp&s=ad983cc87e0a8d68099a9d3cf7c7426f2552764a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rYwc2770PY3k5tbF1R2i7e0vncMOcXXwBc8bwwy1ogw.png?width=216&crop=smart&auto=webp&s=652ee8cb0a46240b5a3e970778c28e9ffdf369d5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rYwc2770PY3k5tbF1R2i7e0vncMOcXXwBc8bwwy1ogw.png?width=320&crop=smart&auto=webp&s=35f4ce3a2398a1eac72cb3af08b838b2fc8bbd40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rYwc2770PY3k5tbF1R2i7e0vncMOcXXwBc8bwwy1ogw.png?width=640&crop=smart&auto=webp&s=9787625afa58af3af9e03a992281d33892cea38b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rYwc2770PY3k5tbF1R2i7e0vncMOcXXwBc8bwwy1ogw.png?width=960&crop=smart&auto=webp&s=de675a68244df9fb46db36f4e22550a48d9b6596', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rYwc2770PY3k5tbF1R2i7e0vncMOcXXwBc8bwwy1ogw.png?width=1080&crop=smart&auto=webp&s=c787b54efd80575ca833208a39a95b17dad022d9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rYwc2770PY3k5tbF1R2i7e0vncMOcXXwBc8bwwy1ogw.png?auto=webp&s=7754b44eaaba9dcba8725040bfb75d373431b729', 'width': 1200}, 'variants': {}}]}
NTTuner - Complete GUI Solution for Fine-Tuning Local LLMs
10
Hey r/LocalLLaMA! I've been working on a complete desktop solution for fine-tuning and deploying local models, and I wanted to share it with the community. # What is it? **NTTuner** is a desktop GUI app that handles the entire fine-tuning workflow: * LoRA fine-tuning with GPU (Unsloth) or CPU support * Automatic GGUF conversion * Direct import to Ollama * Real-time training logs in a non-blocking UI **NTCompanion** is the dataset creation tool: * Universal web scraper for building training datasets * 6-factor quality scoring to filter out junk * Smart content extraction from any website * Outputs directly to NTTuner's expected format # Why I built this I got tired of juggling between command-line tools, Python scripts, and manual GGUF conversions every time I wanted to fine-tune a model. I wanted something that just worked - drag and drop a dataset, click start, and have a working model in Ollama when it's done. # Key Features **NTTuner:** * Drag-and-drop JSONL datasets * Auto-detects your GPU and installs the right dependencies * Background training that doesn't freeze the UI * Saves training configs as JSON for reproducibility * One-click export to Ollama with automatic quantization **NTCompanion:** * Scrapes websites to build training data * Multi-threaded crawling (configurable 1-50 workers) * Quality filtering so you don't train on navigation menus and cookie banners * Pre-configured for recipes, tutorials, documentation, blogs, etc. * Supports all major chat templates (Llama, Qwen, Phi, Mistral, Gemma) # Technical Details * Built with DearPyGUI for a responsive, GPU-accelerated interface * Uses Unsloth for 2-5x training speedup on compatible GPUs * Falls back gracefully to CPU training when needed * BeautifulSoup for robust HTML parsing * Optional Bloom filter for memory-efficient large crawls # System Requirements * Python 3.10+ * 8GB RAM minimum (16GB recommended) * NVIDIA GPU with 8GB+ VRAM recommended (but works on CPU) * Works on Windows, Linux, and macOS # Example Workflow 1. Use NTCompanion to scrape 1000 cooking recipes 2. Quality filter removes junk, outputs clean JSONL 3. Drop the JSONL into NTTuner 4. Select Llama-3.2-3B-Instruct as base model 5. Hit start, grab coffee 6. Model automatically appears in Ollama 7. Run `ollama run my-cooking-assistant` # Links * **NTTuner**: [https://github.com/noosed/NTTuner](https://github.com/noosed/NTTuner) * **NTCompanion**: [https://github.com/noosed/NTCompanion](https://github.com/noosed/NTCompanion) # Current Limitations * NTCompanion doesn't handle JavaScript-heavy sites perfectly (no headless browser yet) * GGUF conversion requires manual steps if using CPU training without Unsloth * Quality scoring works best on English content # What's Next I'm working on: * Better JavaScript rendering support * Multi-language dataset support * Fine-tuning presets for common use cases * Integration with more model formats Would love to hear feedback from the community! What features would make this more useful for your workflows? **TL;DR**: Built a desktop app that makes fine-tuning local LLMs as easy as drag-and-drop, with an included web scraper for building datasets. No more wrestling with command-line tools or manual GGUF conversions.
2026-02-04T16:29:02
https://www.reddit.com/r/LocalLLaMA/comments/1qvt6ux/nttuner_complete_gui_solution_for_finetuning/
Muted_Impact_9281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvt6ux
false
null
t3_1qvt6ux
/r/LocalLLaMA/comments/1qvt6ux/nttuner_complete_gui_solution_for_finetuning/
false
false
self
10
{'enabled': False, 'images': [{'id': 'mJDT5KcuxOJFaCFFiyBCwsCO-kHO8iJtdE7-h9XGN1E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mJDT5KcuxOJFaCFFiyBCwsCO-kHO8iJtdE7-h9XGN1E.png?width=108&crop=smart&auto=webp&s=f5ea7e0a0b6f148bc5100463fd9af84aa02dabb6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mJDT5KcuxOJFaCFFiyBCwsCO-kHO8iJtdE7-h9XGN1E.png?width=216&crop=smart&auto=webp&s=385ed5a5ca4ee5362ec1a3f33bf416dda0ca5098', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mJDT5KcuxOJFaCFFiyBCwsCO-kHO8iJtdE7-h9XGN1E.png?width=320&crop=smart&auto=webp&s=118bd06dea96c2bb33099c02e48cfc22cc382d4b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mJDT5KcuxOJFaCFFiyBCwsCO-kHO8iJtdE7-h9XGN1E.png?width=640&crop=smart&auto=webp&s=df8336967bd98edf0db99cd069e2069b1b4b5dba', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mJDT5KcuxOJFaCFFiyBCwsCO-kHO8iJtdE7-h9XGN1E.png?width=960&crop=smart&auto=webp&s=85a5a12ed433b2e5f17f337a2835930b5c2d9a6c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mJDT5KcuxOJFaCFFiyBCwsCO-kHO8iJtdE7-h9XGN1E.png?width=1080&crop=smart&auto=webp&s=3279bdff5d2f01fecc3fdeddadb8dc791a8b09cb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mJDT5KcuxOJFaCFFiyBCwsCO-kHO8iJtdE7-h9XGN1E.png?auto=webp&s=35ff68bc26730d4884eae6e44a3e7fd8e20f7c53', 'width': 1200}, 'variants': {}}]}
OpenCode + Qwen4-Coder-Next
0
2026-02-04T16:15:31
https://x.com/i/status/2018936727529021691
jacek2023
x.com
1970-01-01T00:00:00
0
{}
1qvsteu
false
null
t3_1qvsteu
/r/LocalLLaMA/comments/1qvsteu/opencode_qwen4codernext/
false
false
default
0
null
Prompt Repetition Improves Non-Reasoning LLMs - article
17
[https://arxiv.org/html/2512.14982v1](https://arxiv.org/html/2512.14982v1) Prompt repetition improves the accuracy of Gemini 2.0 Flash-Lite on NameIndex from 21.33% to 97.33%. Interesting article. Has anyone actually tried it?
2026-02-04T16:03:01
https://www.reddit.com/r/LocalLLaMA/comments/1qvsh0x/prompt_repetition_improves_nonreasoning_llms/
Loskas2025
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvsh0x
false
null
t3_1qvsh0x
/r/LocalLLaMA/comments/1qvsh0x/prompt_repetition_improves_nonreasoning_llms/
false
false
self
17
null
What Happens When You Make a Premium AI Model Free: Lessons from 50 Billion Tokens in 7 Days
7
Hope to see Kimi team working on this issue while maintaining the quality
2026-02-04T16:02:47
https://12month12startups.substack.com/p/what-happens-when-you-make-a-premium?r=2fi0s2
Electrical_Pea_943
12month12startups.substack.com
1970-01-01T00:00:00
0
{}
1qvsgth
false
null
t3_1qvsgth
/r/LocalLLaMA/comments/1qvsgth/what_happens_when_you_make_a_premium_ai_model/
false
false
default
7
{'enabled': False, 'images': [{'id': 'xeWKHKXnQQ4QnpqvAofEx9Gh3a_5KMzF7TXGcBu8QrQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xeWKHKXnQQ4QnpqvAofEx9Gh3a_5KMzF7TXGcBu8QrQ.jpeg?width=108&crop=smart&auto=webp&s=d8cf31cbb2df429229d447a60d5012dcb891957c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xeWKHKXnQQ4QnpqvAofEx9Gh3a_5KMzF7TXGcBu8QrQ.jpeg?width=216&crop=smart&auto=webp&s=fac789104e5e8d38df95895822be05d5711fe412', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xeWKHKXnQQ4QnpqvAofEx9Gh3a_5KMzF7TXGcBu8QrQ.jpeg?width=320&crop=smart&auto=webp&s=090ffa0684b0000b3c55b132fd73edea806232c3', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/xeWKHKXnQQ4QnpqvAofEx9Gh3a_5KMzF7TXGcBu8QrQ.jpeg?width=640&crop=smart&auto=webp&s=5592e97f4653045780307a12fd19ddc6745c1f0b', 'width': 640}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/xeWKHKXnQQ4QnpqvAofEx9Gh3a_5KMzF7TXGcBu8QrQ.jpeg?auto=webp&s=69e573c055a6d5e2781d4dc7e11711ccf6fa0164', 'width': 900}, 'variants': {}}]}
PSA: OpenClaw's token consumption is way higher than you think
31
saw a lot of hype around openclaw/clawdbot recently and wanted to try it out. i run local llms for most things but figured i'd give their cloud-based approach a shot. **the token problem:** the main issue is how they handle context. every single action seems to load a massive amount of context into the prompt, which means you're burning through tokens extremely fast. saw someone on twitter mention spending $11 just to run a "hi" command. i thought that was exaggerated but after testing, i believe it. ran it through some basic workflows (file search, data analysis, email checking) and my api costs were crazy high. **why this happens:** they don't have a real memory system. they claim "unlimited memory" but from what i can tell, they're just shoving everything into context windows. that means: • every new task loads tons of previous conversation • no smart retrieval or summarization • you're paying for all that context every single time **better approach:** for anyone running local llms or trying to optimize costs, look for tools with actual memory frameworks. i've been testing memU bot which uses a proper memory architecture (stores memory items in a file system, retrieves only what's needed). token usage dropped by like 70% for the same tasks. it's also local-first, so you can point it at your own ollama/lmstudio setup instead of paying openai prices. **tldr:** openclaw is cool tech but the economics don't make sense unless you have unlimited api budget. if you care about token efficiency, there are smarter architectures out there.
2026-02-04T16:01:08
https://www.reddit.com/r/LocalLLaMA/comments/1qvsf57/psa_openclaws_token_consumption_is_way_higher/
Entire_Suit_7402
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvsf57
false
null
t3_1qvsf57
/r/LocalLLaMA/comments/1qvsf57/psa_openclaws_token_consumption_is_way_higher/
false
false
self
31
null
Ollama bad.
0
2026-02-04T16:00:21
https://i.imgur.com/BKA8uzW.png
mantafloppy
i.imgur.com
1970-01-01T00:00:00
0
{}
1qvse99
false
null
t3_1qvse99
/r/LocalLLaMA/comments/1qvse99/ollama_bad/
false
false
default
0
null
2.6% of Moltbook posts are prompt injection attacks. Built a free security toolkit.
0
Moltbook = largest social network for AI agents (770K+). Analyzed the traffic, found a lot of injection attempts targeting agent hijacking, credential theft, data exfiltration. Built an open-source scanner that filters posts before they hit your LLM. 24 security modules, Llama Guard + LLM Guard, CLI, Docker ready. [https://github.com/NirDiamant/moltbook-agent-guard](https://github.com/NirDiamant/moltbook-agent-guard) PRs welcome.
2026-02-04T15:54:38
https://www.reddit.com/r/LocalLLaMA/comments/1qvs8nz/26_of_moltbook_posts_are_prompt_injection_attacks/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvs8nz
false
null
t3_1qvs8nz
/r/LocalLLaMA/comments/1qvs8nz/26_of_moltbook_posts_are_prompt_injection_attacks/
false
false
self
0
{'enabled': False, 'images': [{'id': 'dbjp27hZzXLfaN8xyiQFI04Sp93PGUb0MCjGcX9A87A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dbjp27hZzXLfaN8xyiQFI04Sp93PGUb0MCjGcX9A87A.png?width=108&crop=smart&auto=webp&s=6e10eca3c186e7230b2eab52f0924d1d9108b6b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dbjp27hZzXLfaN8xyiQFI04Sp93PGUb0MCjGcX9A87A.png?width=216&crop=smart&auto=webp&s=1537d6ae6d9bb8e0e63b49a3632abfe658b9f453', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dbjp27hZzXLfaN8xyiQFI04Sp93PGUb0MCjGcX9A87A.png?width=320&crop=smart&auto=webp&s=b8f03ba178cfa71fb6492563e2d431e5869433e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dbjp27hZzXLfaN8xyiQFI04Sp93PGUb0MCjGcX9A87A.png?width=640&crop=smart&auto=webp&s=aa42bfe6b73d7cb1fabe7efb6c64653dc399c79c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dbjp27hZzXLfaN8xyiQFI04Sp93PGUb0MCjGcX9A87A.png?width=960&crop=smart&auto=webp&s=fe7df96caccabda689956464958a44d8d95d3401', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dbjp27hZzXLfaN8xyiQFI04Sp93PGUb0MCjGcX9A87A.png?width=1080&crop=smart&auto=webp&s=91d1b7bcd6d877fe14827fdb11f18e819208c9d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dbjp27hZzXLfaN8xyiQFI04Sp93PGUb0MCjGcX9A87A.png?auto=webp&s=056799f1624cf1f9375273db1852826cd912d55f', 'width': 1200}, 'variants': {}}]}
I ran Gemma 3 12B for a week across my startups - here's why I'm ditching $200/month subscriptions
0
I spent the last week connecting OpenClaw (open-source AI) to every tool across my businesses - WhatsApp, invoicing, customer support, the whole stack. Key findings: \- Gemma 3 12B on a 12GB GPU ($1500 machine) handled 90% of my tasks \- Only \~5% needed frontier models (using pay-as-you-go APIs instead) \- Privacy win: law firms, medical clinics can keep data local \- Cost: $20-50/month in electricity vs $200/month subscriptions The "kitchen knife revelation": a $15 knife in skilled hands beats a $2000 blade in wrong ones. Full writeup covers the WhatsApp missed opportunity, why knowledge always becomes free, and where this is all heading. [https://www.linkedin.com/pulse/i-spent-week-openclaw-ai-tool-heres-what-0-solved-faisal-al-khunizan-orhraf/](https://www.linkedin.com/pulse/i-spent-week-openclaw-ai-tool-heres-what-0-solved-faisal-al-khunizan-orhraf/) Happy to answer questions about the setup or share what worked/didn't work.
2026-02-04T15:52:00
https://www.reddit.com/r/LocalLLaMA/comments/1qvs654/i_ran_gemma_3_12b_for_a_week_across_my_startups/
hungry-for-things
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvs654
false
null
t3_1qvs654
/r/LocalLLaMA/comments/1qvs654/i_ran_gemma_3_12b_for_a_week_across_my_startups/
false
false
self
0
null
Built a small tool to generate JSONL datasets for LLM fine-tuning (feedback wanted)
0
I built a small MVP to generate JSONL datasets for LLM fine-tuning and would love some honest feedback. 👉 https://finetuneengine.com You give it: • Instructions for how you want to fine-tune • Optional docs (up to 5, ≤20MB each) • Number of training lines It outputs a JSONL file you can fine-tune with. Heads-up: large generations are slow, so I recommend 10–100 lines. You can cancel anytime and still download partial output. This is very much an MVP. I’m trying to figure out: • Is this useful? • What’s missing? • What would you want added (formats, inputs, workflow, etc.)? Not looking for UX/UI feedback — just functionality and usefulness. Thanks 🙏
2026-02-04T15:46:45
https://www.reddit.com/r/LocalLLaMA/comments/1qvs132/built_a_small_tool_to_generate_jsonl_datasets_for/
shlok-codes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvs132
false
null
t3_1qvs132
/r/LocalLLaMA/comments/1qvs132/built_a_small_tool_to_generate_jsonl_datasets_for/
false
false
self
0
null
We built portable dense memory for AI agents. One .mv2 file, search it offline, sync anywhere
0
We've been building memory infrastructure for AI agents and robots at Kindly Robotics. The core problem: agent memories are trapped in vector databases. You can't move them, version them, or hand them to another agent. So we built ate memory — a CLI that creates portable .mv2 memory files. One file holds everything: text, metadata, indexes. Search it offline, sync it to cloud, pull it on another machine. What it does: • ate memory init → create a memory file • ate memory add --text "..." --title "..." → store entries with metadata • ate memory search "query" → BM25 lexical search (zero config, works offline) • ate memory search "query" --engine rerank → optional LLM re-ranking (bring your own key — Anthropic/OpenAI/Google/Ollama) • ate memory push/pull → cloud sync with device-flow auth • ate memory think/recall → "trains of thought" — git-like context branching Install: brew install kindlyrobotics/tap/ate \# or pip install foodforthought-cli Key design decisions: • Local-first: Everything works offline. Cloud is optional. • BM25 by default: No embedding model needed. Works for 80%+ of structured agent queries. • BYOE (Bring Your Own Embeddings): Auto-detects OpenAI → Cohere → Voyage → Ollama if you want semantic search. • Portable format: Built on memvid (.mv2, Rust core, 12.9K stars). Single file, append-only. • Agent-native CLI: --format json on every command. Designed for AI agents to use, not just humans. macOS/Linux/Windows binaries + PyPI + Homebrew. https://kindly.fyi/foodforthought Happy to answer questions about the architecture or design tradeoffs.
2026-02-04T15:45:50
https://www.reddit.com/r/LocalLLaMA/comments/1qvs05z/we_built_portable_dense_memory_for_ai_agents_one/
catsmeow492
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvs05z
false
null
t3_1qvs05z
/r/LocalLLaMA/comments/1qvs05z/we_built_portable_dense_memory_for_ai_agents_one/
false
false
self
0
{'enabled': False, 'images': [{'id': 'X02iMKZIKoPjzxjw5RAza6YVtJ8plaQH3uROUWxZFbQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/X02iMKZIKoPjzxjw5RAza6YVtJ8plaQH3uROUWxZFbQ.jpeg?width=108&crop=smart&auto=webp&s=2d419395ea54dad51b1b011c4205f3e01fb7a2dd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/X02iMKZIKoPjzxjw5RAza6YVtJ8plaQH3uROUWxZFbQ.jpeg?width=216&crop=smart&auto=webp&s=2c949b86d8b5041153d184c2b72c8ab67d365b00', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/X02iMKZIKoPjzxjw5RAza6YVtJ8plaQH3uROUWxZFbQ.jpeg?width=320&crop=smart&auto=webp&s=7467bda99ad1317bd086c7b5793b8a5ef3456acd', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/X02iMKZIKoPjzxjw5RAza6YVtJ8plaQH3uROUWxZFbQ.jpeg?width=640&crop=smart&auto=webp&s=3b229091db352faedf1cc1d886cd3e4ef4d01071', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/X02iMKZIKoPjzxjw5RAza6YVtJ8plaQH3uROUWxZFbQ.jpeg?width=960&crop=smart&auto=webp&s=da883034cd6beab41c521ca9ebafb29cac302da1', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/X02iMKZIKoPjzxjw5RAza6YVtJ8plaQH3uROUWxZFbQ.jpeg?width=1080&crop=smart&auto=webp&s=8e3d185dbd86fd45f8ac74c27529b31fb5b0a12a', 'width': 1080}], 'source': {'height': 1429, 'url': 'https://external-preview.redd.it/X02iMKZIKoPjzxjw5RAza6YVtJ8plaQH3uROUWxZFbQ.jpeg?auto=webp&s=493aa32c9094aceac4683184bb76b1f4c558f069', 'width': 1905}, 'variants': {}}]}
Stress testing my tokenizer need volunteers
1
[removed]
2026-02-04T15:42:59
https://www.reddit.com/r/LocalLLaMA/comments/1qvrxgt/stress_testing_my_tokenizer_need_volunteers/
Dangerous_Bed9191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrxgt
false
null
t3_1qvrxgt
/r/LocalLLaMA/comments/1qvrxgt/stress_testing_my_tokenizer_need_volunteers/
false
false
self
1
null
Best model for Python - local
2
What model are people using for Python dev? I have a M1 Max - 32GB, and I tend to use GGUF weights over MLX (better support for features like prompt caching in LM studio). I think I have a memory budget around 21GB including context window. I have been using OSS-20b. But I've seen other models discussed, particularly the Qwen models, and I saw that the Apriel15b nailed the benchmarks. I know that I could just use a benchmark.. but I've found them to be really poor predictors of real world performance. Even on the SOTA side, Gemini3Pro is a reasonably horrible model in the real world versus Opus/Codex. I can see why it did well on benchmarks, and consider how it was probably trained.. but real world coding non-vibing devs is so different to the kind of competitive coding/vibe-coding world. I mostly dev: \- Electron / front end \- cpp \- Python For the most part, I need a really good rubber duck in Python. Someone to check my work, write tests, and generate documentation. Probably a bit of code gen, but mostly just getting me up to speed on bits I haven't looked at for a while.
2026-02-04T15:42:45
https://www.reddit.com/r/LocalLLaMA/comments/1qvrx90/best_model_for_python_local/
Temporary-Mix8022
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrx90
false
null
t3_1qvrx90
/r/LocalLLaMA/comments/1qvrx90/best_model_for_python_local/
false
false
self
2
null
Parakeet v2 ASR for live audio
1
Has anyone used this model for live streaming? I look for model that will detect given world the fastest - I thought about WhisperLive or Parakeet v2
2026-02-04T15:42:44
https://www.reddit.com/r/LocalLLaMA/comments/1qvrx89/parakeet_v2_asr_for_live_audio/
Altruistic_Cancel666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrx89
false
null
t3_1qvrx89
/r/LocalLLaMA/comments/1qvrx89/parakeet_v2_asr_for_live_audio/
false
false
self
1
null
Production serving inference: Failsafes / exit conditions
3
SGLang and vLLM are great for serving models in production, but sometimes they still hit snags that require intervention. Occasionally a request will hang in the “waiting” stage, or an LLM will get stuck in a loop when context overflows, or too many requests in parallel might come in to handle before timeout. Load balancing between two instances with Nginx or HAProxy is a good idea to increase availability, but what can you do to monitor, stop and restart an instance when it‘s in a bad or nonresponsive state? Even more critically, what can you do to automatically halt an instance if system temps and resource consumption start climbing out of control?
2026-02-04T15:40:01
https://www.reddit.com/r/LocalLLaMA/comments/1qvrun7/production_serving_inference_failsafes_exit/
FrozenBuffalo25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrun7
false
null
t3_1qvrun7
/r/LocalLLaMA/comments/1qvrun7/production_serving_inference_failsafes_exit/
false
false
self
3
null
Qwen3-Next NVFP4 ModelOpt for SGLang is up!
3
[https://github.com/sgl-project/sglang/pull/18224](https://github.com/sgl-project/sglang/pull/18224) You'll have to build from source for now, but it is compressed using modelopt and runs about 210tok/s on B300! It's not compressed tensors [https://www.reddit.com/r/LocalLLaMA/comments/1qvax2n/qwen3codernextnvfp4\_quantization\_is\_up\_45gb/](https://www.reddit.com/r/LocalLLaMA/comments/1qvax2n/qwen3codernextnvfp4_quantization_is_up_45gb/) Steps: \- Install sglang from source on branch \- Use the command in the PR Feel free to drop your questions below
2026-02-04T15:35:25
https://www.reddit.com/r/LocalLLaMA/comments/1qvrq9d/qwen3next_nvfp4_modelopt_for_sglang_is_up/
TeekayTK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrq9d
false
null
t3_1qvrq9d
/r/LocalLLaMA/comments/1qvrq9d/qwen3next_nvfp4_modelopt_for_sglang_is_up/
false
false
self
3
null
Platinum-CoT: High-Value Technical Reasoning. Distilled via Phi-4 → DeepSeek-R1 (70B) → Qwen 2.5 (32B) Pipeline
2
I've just released a 100-row preview of **Platinum-CoT**, a dataset engineered specifically for high-stakes technical reasoning and CoT distillation. **What makes it different?** Unlike generic instruction sets, this uses a triple-model "Platinum" pipeline: 1. **Architect**: Phi-4 generates complex, multi-constraint Staff Engineer level problems. 2. **Solver**: DeepSeek-R1 (70B) provides the "Gold Standard" Chain-of-Thought reasoning (Avg. \~5.4k chars per path). 3. **Auditor**: Qwen 2.5 (32B) performs a strict logic audit; only the highest quality (8+/10) samples are kept. **Featured Domains**: \- **Systems**: Zero-copy (io\_uring), Rust unsafe auditing, SIMD-optimized matching. \- **Cloud Native**: Cilium networking, eBPF security, Istio sidecar optimization. \- **FinTech**: FIX protocol, low-latency ring buffers. Check out the 100-row parquet preview on HuggingFace: [https://huggingface.co/datasets/BlackSnowDot/Platinum-CoT](https://huggingface.co/datasets/BlackSnowDot/Platinum-CoT)
2026-02-04T15:31:04
https://www.reddit.com/r/LocalLLaMA/comments/1qvrm1g/platinumcot_highvalue_technical_reasoning/
BlackSnowDoto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrm1g
false
null
t3_1qvrm1g
/r/LocalLLaMA/comments/1qvrm1g/platinumcot_highvalue_technical_reasoning/
false
false
self
2
null
Made a simple CLI tool to check your OpenRouter API usage (for users without dashboard access)
2
My team shares one OpenRouter account with multiple API keys, but none of us have access to the dashboard. Got tired of not knowing how much we've spent, so I made a tiny Python script that checks your key's usage locally. Just run it, paste your key, see your credits/usage/limits with a progress bar. * Single file, \~150 lines * No tracking, no data stored * Key never leaves your machine GitHub: [https://github.com/mhd-medfa/openrouter-usage-monitor](https://github.com/mhd-medfa/openrouter-usage-monitor) Nothing fancy. Hope it helps someone else. https://preview.redd.it/pugdzrmjxhhg1.png?width=779&format=png&auto=webp&s=19982d1ee644417ea2dde649154db7ce17edf16a
2026-02-04T15:30:16
https://www.reddit.com/r/LocalLLaMA/comments/1qvrl6b/made_a_simple_cli_tool_to_check_your_openrouter/
No-Coast-6133
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrl6b
false
null
t3_1qvrl6b
/r/LocalLLaMA/comments/1qvrl6b/made_a_simple_cli_tool_to_check_your_openrouter/
false
false
https://preview.redd.it/…d720e4109a23752b
2
null
Beta testing a Unicode tokenizer - looking for edge cases
1
[removed]
2026-02-04T15:30:02
https://www.reddit.com/r/LocalLLaMA/comments/1qvrky1/beta_testing_a_unicode_tokenizer_looking_for_edge/
Dangerous_Bed9191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrky1
false
null
t3_1qvrky1
/r/LocalLLaMA/comments/1qvrky1/beta_testing_a_unicode_tokenizer_looking_for_edge/
false
false
self
1
null
mistralai/Voxtral-Mini-4B-Realtime-2602 · Hugging Face
242
Voxtral Mini 4B Realtime 2602 is a **multilingual, realtime speech-transcription model** and among the first open-source solutions to achieve accuracy comparable to offline systems with a delay of **<500ms**. It supports **13 languages** and outperforms existing open-source baselines across a range of tasks, making it ideal for applications like voice assistants and live subtitling. Built with a **natively streaming architecture** and a custom causal audio encoder - it allows configurable transcription delays (240ms to 2.4s), enabling users to balance **latency and accuracy** based on their needs. At a **480ms delay**, it matches the performance of leading offline open-source transcription models, as well as realtime APIs. As a **4B-parameter model**, is optimized for **on-device deployment**, requiring minimal hardware resources. It runs in realtime with on devices minimal hardware with throughput exceeding 12.5 tokens/second.
2026-02-04T15:27:12
https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1qvrib9
false
null
t3_1qvrib9
/r/LocalLLaMA/comments/1qvrib9/mistralaivoxtralmini4brealtime2602_hugging_face/
false
false
default
242
{'enabled': False, 'images': [{'id': 'RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=108&crop=smart&auto=webp&s=ecfcc819344b827400992b8eefcd51d69383b272', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=216&crop=smart&auto=webp&s=46888ac9ab4955e64579b13e897f18753988694b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=320&crop=smart&auto=webp&s=63744b75b9a0f6613cffc1dea4e142a8ee1749db', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=640&crop=smart&auto=webp&s=a4fa31e09623598564a99beea3a398a9c824d4f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=960&crop=smart&auto=webp&s=960c1892aca821f056f64ccf10ac667dee63e0db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?width=1080&crop=smart&auto=webp&s=abfb6a220b37bf4427dc6d55c66d53e3c564f184', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RirqAaXL1g9xgccy6jCj8FpDgCmNmT4kPmfCbcwIIl8.png?auto=webp&s=81c15e4610654ae2fae1adfec7542c9943e86639', 'width': 1200}, 'variants': {}}]}
Some hard lessons learned building a private H100 cluster (Why PCIe servers failed us for training)
396
^(Just wanted to dump some notes here after spending the last few months architecting a private training stack (70B+ param models). We initially tried to save budget by looking at standard PCIe servers instead of the HGX/SXM form factors, and honestly, the "paper math" vs. reality was a brutal wake-up call.) ^(Thought this might save someone else the headache if you're trying to move from inference to actual training runs on-prem.) ^(1. The "NVLink Tax" isn't optional for training. We tried to model this out with PCIe Gen5, but the math just falls apart. When you're doing All-Reduce ops across nodes, PCIe caps out at \~128 GB/s. NVLink is pushing \~900 GB/s. If you cheap out here, you basically end up with expensive GPUs sitting idle, waiting for data. For inference, PCIe is totally fine. For training, it’s a bottleneck that kills your ROI.) ^(2. Storage checkpoints are violent. This was the biggest surprise. Everyone talks about GPU VRAM, but nobody warned us about the checkpoint writes. A 175B model dumps a \~2.5TB checkpoint. To keep the GPUs from stalling, you need to write that to disk in under a minute. Our standard NFS filer absolutely choked. We had to look at parallel filesystems (Weka/VAST) or local NVMe raid just to survive the write bursts.) ^(3. You don't need InfiniBand, but Ethernet is annoying. We didn't have the budget/staff for an InfiniBand fabric, so we went with RoCEv2 on standard switches. It works, but it’s finicky. One silent buffer overflow or a misconfigured PFC (Priority Flow Control) setting can stall the whole cluster. If you go Ethernet, monitor your pause frames religiously.) ^(Anyway, I wrote up a longer deep dive with the specific diagrams and our decision framework for "Sandbox vs Production" builds if anyone is interested. Link is pinned in my profile.) ^(Happy to answer questions on the networking side - that RoCEv2 tuning took years off my life.)
2026-02-04T15:20:42
https://www.reddit.com/r/LocalLLaMA/comments/1qvrc59/some_hard_lessons_learned_building_a_private_h100/
NTCTech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvrc59
false
null
t3_1qvrc59
/r/LocalLLaMA/comments/1qvrc59/some_hard_lessons_learned_building_a_private_h100/
false
false
self
396
null
LLM for messaging
2
HI, I'm kinda new to local llm's, I have been using AI for 2+ years, but never switched to local ones. My question is, which is the best for beginner and some tips and tricks to use one? I also want to also incoporate my local llm to my messages through API, so it can answer messages, etc., is that possible? Thank you
2026-02-04T15:03:06
https://www.reddit.com/r/LocalLLaMA/comments/1qvqvkm/llm_for_messaging/
Soft_Fig1979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvqvkm
false
null
t3_1qvqvkm
/r/LocalLLaMA/comments/1qvqvkm/llm_for_messaging/
false
false
self
2
null
Stop dumping your entire chat history into the Context Window. It’s lazy and insecure.
0
Good morning Builders. I see a lot of posts here struggling with "infinite memory" or context limits. The general advice seems to be "Just dump everything into the context window." In my experience building SAFi (my runtime governance engine), this is a mistake for two reasons: 1. Cost/Latency: It's inefficient. 2. Security: It leaves you wide open to "Context Poisoning." The SAFi Approach**:** Runtime Summarization Instead of chained-dumping the raw conversation, I use an interceptor pattern to summarize the conversation *between* turns. * The Stack: I use Llama 3.2 8B (via Groq). * The Cost: It is insanely cheap (roughly $0.01 per \~70 API calls) and instant. * The Logic: It extracts only key details and *intentions*, discarding the noise. The Security Benefit: This architecture is why SAFi has successfully withstood **1,500+** jailbreak attacks during my jailbreaking challenges I ran here in Reddit. Most jailbreaks rely on "Context Injection" (hiding malicious instructions in past turns). By summarizing the history, you effectively sanitize the input. The summarizer strips out the "jailbreak trigger" and only keeps the intent. If you *must* keep 100% of the history, use a Vector DB (RAG) to pull relevant chunks. But for the love of code, stop dumping raw text into your active prompt. If you want to see the Python implementation of this summarizer, clone the repo and steal the code. I don't mind as long as you drop a star! [https://github.com/jnamaya/SAFi](https://github.com/jnamaya/SAFi) Keep building, Nelson
2026-02-04T14:52:36
https://www.reddit.com/r/LocalLLaMA/comments/1qvqltz/stop_dumping_your_entire_chat_history_into_the/
forevergeeks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvqltz
false
null
t3_1qvqltz
/r/LocalLLaMA/comments/1qvqltz/stop_dumping_your_entire_chat_history_into_the/
false
false
self
0
{'enabled': False, 'images': [{'id': 'V_6LQ3KfkIAYl38saOsknJ8fYZRuVXowGru_iMDNlAc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/V_6LQ3KfkIAYl38saOsknJ8fYZRuVXowGru_iMDNlAc.png?width=108&crop=smart&auto=webp&s=269532965f0090baa67e8cdcfa63eb4a71aed40f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/V_6LQ3KfkIAYl38saOsknJ8fYZRuVXowGru_iMDNlAc.png?width=216&crop=smart&auto=webp&s=c4c4cda5730e276856d8a6f9551d7da25c06774b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/V_6LQ3KfkIAYl38saOsknJ8fYZRuVXowGru_iMDNlAc.png?width=320&crop=smart&auto=webp&s=f77868e5fbaeb5639bbed189c4c88398ce9dc7ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/V_6LQ3KfkIAYl38saOsknJ8fYZRuVXowGru_iMDNlAc.png?width=640&crop=smart&auto=webp&s=f386d5a6eab68aff3cff45dda4efad3f665a194b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/V_6LQ3KfkIAYl38saOsknJ8fYZRuVXowGru_iMDNlAc.png?width=960&crop=smart&auto=webp&s=5af85980a5009b2a0d070241af5be743544c8905', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/V_6LQ3KfkIAYl38saOsknJ8fYZRuVXowGru_iMDNlAc.png?width=1080&crop=smart&auto=webp&s=d1bcf1cc9ace60ddce94f19b44cc2ab8ef28aeb1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/V_6LQ3KfkIAYl38saOsknJ8fYZRuVXowGru_iMDNlAc.png?auto=webp&s=b3b9b65943b1c9426c555df9d064774fecb289bc', 'width': 1200}, 'variants': {}}]}
Any fellow Local Llamas training AIs locally? Talk some sense into me!
5
Are any of you people training your own models on your own hardware? I have some architectural and training ideas I would like to try out. The idea of renting GPUs really turns me off, but dumping $$$ on hardware feels like an investment so it's fine. (I know the logic doesn't add up, but work with me here! It's actually worth more now than what I paid for it!) I've got an RTX 3090 and an old 3060. I just ordered a broken 3090 I think I can fix, so I'll be working with dual 3090's. I don't have and at this point not sure I want to buy an NV Link, so feel free to talk me out of that. So what can I do with 48GB VRAM, 128GB DDR 3600, and Ryzen 5900XT? Gemini seems to think the best I can do efficiently is 3b params, maybe 7b, but it would be inefficient. Is that accurate, or was Gemini hallucinating the math? Between me and Gemini, I've settled on building a 300-600m param prototype to see if my training methods work over the course of a few weeks (after lord knows how much development time), then do the 3b param model over the course of a month or two. Then maybe my architectural horror if that turns out well. I have solar with net metering, but I'll only be running the training off-peak, so like 18 hours a day. Mainly because I don't want to contribute to burning fossil fuels (or the power bill that comes with them.) I'm currently working on the tokenizer vocabulary, which is part of what will make my model different. I'm focusing on isolating semantic meaning for each token, breaking words up into prefixes, stems, and suffixes for the most common words. Then typical soup and salad to fill in the gaps. Otherwise, I plan to base my work on Qwen3 or Nemotron, but I'm not very far yet, so I don't know what I don't know! I've read a few books and have a flaky grasp of most of the concepts. Talk some sense into me. What are the challenges? As a hobby project, is this reasonable? Any recommendations? Any data sets that are worth their weight in gold?
2026-02-04T14:52:11
https://www.reddit.com/r/LocalLLaMA/comments/1qvqlfj/any_fellow_local_llamas_training_ais_locally_talk/
huzbum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvqlfj
false
null
t3_1qvqlfj
/r/LocalLLaMA/comments/1qvqlfj/any_fellow_local_llamas_training_ais_locally_talk/
false
false
self
5
null
You are NOT a Vibe-coder.. you are AI Product manager
0
Not Here to make money not here to convince just sharing .. Unpopular opinion: "Vibe coding" (just letting the LLM autocomplete its way through a file) is a trap. It works for scripts, but for real software engineering? It turns your codebase into spaghetti 🍝. I realized that if I want to use AI for serious dev work, I don't need a smarter chatbot. **I need a workforce.** And more importantly, I need to be the manager, not the debugger. So I built **Moe's Tavern**. It’s an open-source "Command Center" for JetBrains IDEs. It’s not just another autocomplete tool. It treats AI agents like junior devs that you have to supervise. **How it actually works (The Workflow):** 1. **Kanban Board:** You create a task card in the IDE. 2. **The Agent Claims It:** An "Architect" agent picks it up. 3. **The Guardrail:** The agent **must** submit an implementation plan first. 4. **You Approve:** You review the plan. If it sucks, you reject it. If it's good, you click approve. 5. **Execution:** Only *then* does the "Worker" agent write the code. **The Tech Stack:** * **Frontend:** JetBrains Plugin (Kotlin). * **Backend:** Local Daemon (Node.js/TypeScript) acts as the orchestration layer. * **Protocol:** Built on **MCP (Model Context Protocol)** so it plays nice with the new standard. I'm not selling anything. I just got tired of VS Code getting all the cool AI toys and wanted something robust for the JetBrains ecosystem. The code is raw, the paint is fresh, but the logic is there. If you're into Go/DevOps/AI agents and want to help build a proper orchestration layer, check it out. [https://github.com/yaront1111/Moe-s-Tavern](https://github.com/yaront1111/Moe-s-Tavern)
2026-02-04T14:49:08
https://www.reddit.com/r/LocalLLaMA/comments/1qvqio9/you_are_not_a_vibecoder_you_are_ai_product_manager/
yaront1111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvqio9
false
null
t3_1qvqio9
/r/LocalLLaMA/comments/1qvqio9/you_are_not_a_vibecoder_you_are_ai_product_manager/
false
false
self
0
null
Local Music Generation is finally here? Tested "ACE-Step-1.5" (Suno alternative) on RTX 4070 Super (12GB) 🎵
1
[removed]
2026-02-04T14:39:05
https://youtube.com/watch?v=vdOzaArJ9DA&si=u6DoYoNIFvcg17DT
Exotic-Specialist103
youtube.com
1970-01-01T00:00:00
0
{}
1qvq9c9
false
{'oembed': {'author_name': 'AI BENCHMARK LAB', 'author_url': 'https://www.youtube.com/@AIBENCHMARKLAB-k2x', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vdOzaArJ9DA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Is Local AI Music Ready? ACE-Step-1.5 Showcase on RTX 4070 Super (3 Tracks)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/vdOzaArJ9DA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Is Local AI Music Ready? ACE-Step-1.5 Showcase on RTX 4070 Super (3 Tracks)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qvq9c9
/r/LocalLLaMA/comments/1qvq9c9/local_music_generation_is_finally_here_tested/
false
false
default
1
null
OpenClaw security issues include data leakage & prompt injection
6
2026-02-04T14:36:36
https://www.giskard.ai/knowledge/openclaw-security-vulnerabilities-include-data-leakage-and-prompt-injection-risks
chef1957
giskard.ai
1970-01-01T00:00:00
0
{}
1qvq72b
false
null
t3_1qvq72b
/r/LocalLLaMA/comments/1qvq72b/openclaw_security_issues_include_data_leakage/
false
false
default
6
{'enabled': False, 'images': [{'id': 'vv9zOIqBTsqAcGNGbRrFYZDWVZb55Z0MTJCP7PAiVZA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vv9zOIqBTsqAcGNGbRrFYZDWVZb55Z0MTJCP7PAiVZA.png?width=108&crop=smart&auto=webp&s=eccb40f879abed963a6cb54ca3aaa8d315ecb452', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/vv9zOIqBTsqAcGNGbRrFYZDWVZb55Z0MTJCP7PAiVZA.png?width=216&crop=smart&auto=webp&s=ea2a5aa234cf2afd972a2a5aacb8654b3647783d', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/vv9zOIqBTsqAcGNGbRrFYZDWVZb55Z0MTJCP7PAiVZA.png?width=320&crop=smart&auto=webp&s=1b97b134911e615399bf898c48307509c6bd74fc', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/vv9zOIqBTsqAcGNGbRrFYZDWVZb55Z0MTJCP7PAiVZA.png?width=640&crop=smart&auto=webp&s=c416346b465a4a8073ee7f7f79faf78c308a2fe6', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/vv9zOIqBTsqAcGNGbRrFYZDWVZb55Z0MTJCP7PAiVZA.png?width=960&crop=smart&auto=webp&s=90147c06f96b0bcf4932e0787e73f6d909fdd4e3', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/vv9zOIqBTsqAcGNGbRrFYZDWVZb55Z0MTJCP7PAiVZA.png?width=1080&crop=smart&auto=webp&s=7b3970b8f481ac5113cf1a3bed40506c3618ee17', 'width': 1080}], 'source': {'height': 1254, 'url': 'https://external-preview.redd.it/vv9zOIqBTsqAcGNGbRrFYZDWVZb55Z0MTJCP7PAiVZA.png?auto=webp&s=7c3a2aebf456676ade3d55bde82d1f3acbe0ad3e', 'width': 2400}, 'variants': {}}]}
Using Ace Step 1.5 in Google Colab
1
I want to use Ace Step 1.5 in google colab with T4 gpu, can anyone explain how can I use?
2026-02-04T14:35:16
https://www.reddit.com/r/LocalLLaMA/comments/1qvq5vl/using_ace_step_15_in_google_colab/
Swimming-Insurance12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvq5vl
false
null
t3_1qvq5vl
/r/LocalLLaMA/comments/1qvq5vl/using_ace_step_15_in_google_colab/
false
false
self
1
null
Bashing Ollama isn’t just a pleasure, it’s a duty
944
2026-02-04T14:29:48
https://i.redd.it/ad5zhvq0nhhg1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1qvq0xe
false
null
t3_1qvq0xe
/r/LocalLLaMA/comments/1qvq0xe/bashing_ollama_isnt_just_a_pleasure_its_a_duty/
false
false
default
944
{'enabled': True, 'images': [{'id': 'ad5zhvq0nhhg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/ad5zhvq0nhhg1.png?width=108&crop=smart&auto=webp&s=55eb98df0ea4c8c975e9b96e5e2cbd1171d6cc5d', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/ad5zhvq0nhhg1.png?width=216&crop=smart&auto=webp&s=da1620dc354c21eaad2111d31fd18ffe4fad5dde', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/ad5zhvq0nhhg1.png?width=320&crop=smart&auto=webp&s=8cdceae6c290f3252bf88dd0ccadb4a0e06e6cfd', 'width': 320}, {'height': 383, 'url': 'https://preview.redd.it/ad5zhvq0nhhg1.png?width=640&crop=smart&auto=webp&s=3b9fa62de0e64a6887124b87e66b3b99b2942107', 'width': 640}, {'height': 575, 'url': 'https://preview.redd.it/ad5zhvq0nhhg1.png?width=960&crop=smart&auto=webp&s=8d7244718ece29de2a9d3a4c5845ef30935ec68e', 'width': 960}, {'height': 647, 'url': 'https://preview.redd.it/ad5zhvq0nhhg1.png?width=1080&crop=smart&auto=webp&s=73b21904001072e5669561172e3e0619e051e001', 'width': 1080}], 'source': {'height': 1117, 'url': 'https://preview.redd.it/ad5zhvq0nhhg1.png?auto=webp&s=1b4450ac380ac678849a9bba5428e6fad8a90e3d', 'width': 1864}, 'variants': {}}]}
Failing to chat with n8n after second message
1
https://preview.redd.it/…iver: bridge`
2026-02-04T14:23:15
https://www.reddit.com/r/LocalLLaMA/comments/1qvpv4q/failing_to_chat_with_n8n_after_second_message/
MudPleasant6504
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvpv4q
false
null
t3_1qvpv4q
/r/LocalLLaMA/comments/1qvpv4q/failing_to_chat_with_n8n_after_second_message/
false
false
https://preview.redd.it/…8ccf28452d3aacd3
1
null
I work in the GenAI field - so what better way to use a local model then powering my portfolio?
0
I’ve seen more posts recently about what people \*do\* with their local models. I figured I’d share one way I use mine. I work in the GenAI solutions world and transparently, I already don’t love the concept and execution of resumes. Low context, muddy signal. I figured what better than for someone in this world to put a chatbot on their website! Especially if well setup with tools and real interactions it can make with the user. It’s setup with the backend model being minimax M2.1 served AWQ via vLLM and the frontend is react. I’m fully expected yall to take it down trying to break it but figured I’d share. Could have done an API endpoint but what’s the fun in that! The self deployment shows the intersection between infra and product. A couple Easter eggs hidden as well. https://gitter.ai
2026-02-04T14:19:51
https://gitter.ai
gittb
gitter.ai
1970-01-01T00:00:00
0
{}
1qvps68
false
null
t3_1qvps68
/r/LocalLLaMA/comments/1qvps68/i_work_in_the_genai_field_so_what_better_way_to/
false
false
default
0
null
internlm/Intern-S1-Pro · Hugging Face
81
1000B
2026-02-04T13:50:20
https://huggingface.co/internlm/Intern-S1-Pro
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1qvp2hg
false
null
t3_1qvp2hg
/r/LocalLLaMA/comments/1qvp2hg/internlminterns1pro_hugging_face/
false
false
default
81
{'enabled': False, 'images': [{'id': 'YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=108&crop=smart&auto=webp&s=0d984c9036f5b7c06bccf23fe49f21d0a31fd7df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=216&crop=smart&auto=webp&s=7dd85e7c1f581c29d183a5accc591328923a3a2a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=320&crop=smart&auto=webp&s=398dca3db185d90d03f5719503c816f02c57bfe5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=640&crop=smart&auto=webp&s=b1a22c338f7ffdcfdd1ac0da2068e064b078cc48', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=960&crop=smart&auto=webp&s=7174bfc38d8ae85930ca6d88a29eb34e7bdcfb52', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=1080&crop=smart&auto=webp&s=f4985deedde2f8b2ff8b1f5dcf6e914ba1134106', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?auto=webp&s=0e06160de7b909952b12d779b17a9c1899ac35fa', 'width': 1200}, 'variants': {}}]}
model: (qwen3next) correct vectorized key_gdiff calculation by ngxson · Pull Request #19324 · ggml-org/llama.cpp
81
(First?) Fix for Qwen Next Coder
2026-02-04T13:47:55
https://github.com/ggml-org/llama.cpp/pull/19324
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1qvp0hm
false
null
t3_1qvp0hm
/r/LocalLLaMA/comments/1qvp0hm/model_qwen3next_correct_vectorized_key_gdiff/
false
false
https://external-preview…dd776bd504bc5cda
81
{'enabled': False, 'images': [{'id': 'Dqgg7ZvrLWPUlWr_lQFMlLvrUGKt4Wjs_hNwPvpf-8k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Dqgg7ZvrLWPUlWr_lQFMlLvrUGKt4Wjs_hNwPvpf-8k.png?width=108&crop=smart&auto=webp&s=6049ab51d24056cf5f81e88ac33a2693e2f58a1d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Dqgg7ZvrLWPUlWr_lQFMlLvrUGKt4Wjs_hNwPvpf-8k.png?width=216&crop=smart&auto=webp&s=e2a884e61f7bbe472fd6b431a04b341ef950fe40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Dqgg7ZvrLWPUlWr_lQFMlLvrUGKt4Wjs_hNwPvpf-8k.png?width=320&crop=smart&auto=webp&s=1555b8bf3b271a2c42ab26257a22214e2b31090d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Dqgg7ZvrLWPUlWr_lQFMlLvrUGKt4Wjs_hNwPvpf-8k.png?width=640&crop=smart&auto=webp&s=cecc3e5f53db33a22bb927dfd55b97e94627ad38', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Dqgg7ZvrLWPUlWr_lQFMlLvrUGKt4Wjs_hNwPvpf-8k.png?width=960&crop=smart&auto=webp&s=852548e7848eb13b8f0e9e18f77104ee00611675', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Dqgg7ZvrLWPUlWr_lQFMlLvrUGKt4Wjs_hNwPvpf-8k.png?width=1080&crop=smart&auto=webp&s=99fffdeed281a8df69e26b5ed866a87cf3a1c1c0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Dqgg7ZvrLWPUlWr_lQFMlLvrUGKt4Wjs_hNwPvpf-8k.png?auto=webp&s=c142c0059bdfb5074555e1262bd3c56c17d5ce5c', 'width': 1200}, 'variants': {}}]}
Intern-S1-Pro (1T/A22B)
134
🚀Introducing Intern-S1-Pro, an advanced 1T MoE open-source multimodal scientific reasoning model. \- SOTA scientific reasoning, competitive with leading closed-source models across AI4Science tasks. \- Top-tier performance on advanced reasoning benchmarks, strong general multimodal performance on various benchmarks. \- 1T-A22B MoE training efficiency with STE routing (dense gradient for router training) and grouped routing for stable convergence and balanced expert parallelism. \- Fourier Position Encoding (FoPE) + upgraded time-series modeling for better physical signal representation; supports long, heterogeneous time-series (10\^0–10\^6 points). \- Intern-S1-Pro is now supported by vLLM @vllm\_project and SGLang @sgl\_project @lmsysorg — more ecosystem integrations are on the way. Huggingface: https://huggingface.co/internlm/Intern-S1-Pro GitHub: https://github.com/InternLM/Intern-S1
2026-02-04T13:43:51
https://i.redd.it/kobet850fhhg1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1qvox18
false
null
t3_1qvox18
/r/LocalLLaMA/comments/1qvox18/interns1pro_1ta22b/
false
false
default
134
{'enabled': True, 'images': [{'id': 'kobet850fhhg1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/kobet850fhhg1.jpeg?width=108&crop=smart&auto=webp&s=e1bfc6ebb0fcf5d470c84d43c0083c938ce3199a', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/kobet850fhhg1.jpeg?width=216&crop=smart&auto=webp&s=17a02f02af62d0a7df55a8f4545ddb085436d8ed', 'width': 216}, {'height': 266, 'url': 'https://preview.redd.it/kobet850fhhg1.jpeg?width=320&crop=smart&auto=webp&s=b083c7a7b6fd303ffb7cb12e9c25071e5637c142', 'width': 320}, {'height': 533, 'url': 'https://preview.redd.it/kobet850fhhg1.jpeg?width=640&crop=smart&auto=webp&s=f3ca84ca0879baf4bbb204fc239f5b6087ee3a57', 'width': 640}, {'height': 800, 'url': 'https://preview.redd.it/kobet850fhhg1.jpeg?width=960&crop=smart&auto=webp&s=5a3cd20dab2a876fdca19db6cca492cf1e180c1b', 'width': 960}, {'height': 900, 'url': 'https://preview.redd.it/kobet850fhhg1.jpeg?width=1080&crop=smart&auto=webp&s=22e5f403531c5dec66bb6ecc621a25c9facc2f4b', 'width': 1080}], 'source': {'height': 1348, 'url': 'https://preview.redd.it/kobet850fhhg1.jpeg?auto=webp&s=fdac9dc271f9ecc3a61a0f41692d67187827ad65', 'width': 1616}, 'variants': {}}]}
What's the best moe or reap moe for 32gb total ram + vram for y'all rn
0
Im rocking a 4060 and 24gb of ddr5, For me rn OSS 20B is one the best one for web/rag/tool call and simple questions While glm 4.7flash is best for debugging and coding rn I've tested qwen 3 coder 30B, nemotron 3 nano 30B but they're qwen 3 30B was just slower and a bit behind glm 4.7 flash and nemotron 3, well it just sucks.. Im thinking of trying yuan 2 40B soon, has anyone tried it?
2026-02-04T13:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1qvomot/whats_the_best_moe_or_reap_moe_for_32gb_total_ram/
Acceptable_Home_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvomot
false
null
t3_1qvomot
/r/LocalLLaMA/comments/1qvomot/whats_the_best_moe_or_reap_moe_for_32gb_total_ram/
false
false
self
0
null
[Release] Eva-4B-V2: Updated Financial Evasion Detection Model. Now #1, beating Claude Opus 4.5 & Gemini 3 Flash.
22
Hi r/LocalLLaMA, Quick update on Eva-4B — we've released **Eva-4B-V2**, an improved version that now outperforms all frontier LLMs on EvasionBench. **What's new in V2:** * **Performance**: 84.9% Macro-F1, beating Gemini 3 Flash (84.6%), Claude Opus 4.5 (84.4%), and GPT-5.2 (80.9%) * **Training**: Two-stage fine-tuning on 84K samples (60K consensus + 24K three-judge majority voting) * **Open Dataset**: We've released EvasionBench dataset on HuggingFace **What it does:** Classifies earnings call Q&A into `direct`, `intermediate`, or `fully_evasive`. Helps identify when executives are sidestepping analysts' questions. **Why use this over a general LLM?** * A 4B model running locally that beats models 100x+ its size on this task * Try it instantly in Colab — no setup needed **Links:** * Model: [https://huggingface.co/FutureMa/Eva-4B-V2](https://huggingface.co/FutureMa/Eva-4B-V2) * Dataset: [https://huggingface.co/datasets/FutureMa/EvasionBench](https://huggingface.co/datasets/FutureMa/EvasionBench) * Colab: [https://colab.research.google.com/github/IIIIQIIII/EvasionBench/blob/main/scripts/eva4b\_inference.ipynb](https://colab.research.google.com/github/IIIIQIIII/EvasionBench/blob/main/scripts/eva4b_inference.ipynb) * GitHub: [https://github.com/IIIIQIIII/EvasionBench](https://github.com/IIIIQIIII/EvasionBench) * Project Page: [https://iiiiqiiii.github.io/EvasionBench/](https://iiiiqiiii.github.io/EvasionBench/) Feedback welcome!
2026-02-04T13:31:29
https://i.redd.it/46zsrxh2chhg1.png
Awkward_Run_9982
i.redd.it
1970-01-01T00:00:00
0
{}
1qvommg
false
null
t3_1qvommg
/r/LocalLLaMA/comments/1qvommg/release_eva4bv2_updated_financial_evasion/
false
false
https://b.thumbs.redditm…bmxNF52D9BKg.jpg
22
{'enabled': True, 'images': [{'id': 'yiElKMt24eCmqNlqeiL01ci8nW6vqkFoZ9Fd30iXZPE', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/46zsrxh2chhg1.png?width=108&crop=smart&auto=webp&s=936826a6bcc74405ea2722495de39918e4b56e9e', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/46zsrxh2chhg1.png?width=216&crop=smart&auto=webp&s=1f1790b32f458f0da256f57c8756e86fdfbdfdf2', 'width': 216}, {'height': 214, 'url': 'https://preview.redd.it/46zsrxh2chhg1.png?width=320&crop=smart&auto=webp&s=31d0401bc9760499367bcf94dff402858fd6602a', 'width': 320}, {'height': 428, 'url': 'https://preview.redd.it/46zsrxh2chhg1.png?width=640&crop=smart&auto=webp&s=1ead613ff0faafb52ac64b2b37a3497d7c4ddd29', 'width': 640}], 'source': {'height': 557, 'url': 'https://preview.redd.it/46zsrxh2chhg1.png?auto=webp&s=2f2b909c9413e3d2b7d5d9d9c3120a1a343f94ce', 'width': 832}, 'variants': {}}]}
Intern-S1-Pro
54
[https://huggingface.co/internlm/Intern-S1-Pro](https://huggingface.co/internlm/Intern-S1-Pro) Another 1T-ish VLM. Looks like a Qwen3-235B scaled to 512 experts.
2026-02-04T13:14:55
https://www.reddit.com/r/LocalLLaMA/comments/1qvo91g/interns1pro/
lly0571
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvo91g
false
null
t3_1qvo91g
/r/LocalLLaMA/comments/1qvo91g/interns1pro/
false
false
self
54
{'enabled': False, 'images': [{'id': 'YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=108&crop=smart&auto=webp&s=0d984c9036f5b7c06bccf23fe49f21d0a31fd7df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=216&crop=smart&auto=webp&s=7dd85e7c1f581c29d183a5accc591328923a3a2a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=320&crop=smart&auto=webp&s=398dca3db185d90d03f5719503c816f02c57bfe5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=640&crop=smart&auto=webp&s=b1a22c338f7ffdcfdd1ac0da2068e064b078cc48', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=960&crop=smart&auto=webp&s=7174bfc38d8ae85930ca6d88a29eb34e7bdcfb52', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?width=1080&crop=smart&auto=webp&s=f4985deedde2f8b2ff8b1f5dcf6e914ba1134106', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YxAPCHfyx1X69aAa5eRKFFzDTrzC_SvUlWSg_aGoYn8.png?auto=webp&s=0e06160de7b909952b12d779b17a9c1899ac35fa', 'width': 1200}, 'variants': {}}]}
Cursor alternative for local LLms?
9
I'm fullstack Dev and I've looked at my cursor bill the last couple months. I've realised I'm starting to become too reliant on it, as it's in the hundreds of $. I'm used to VSCode so the fact that cursor is a fork of it made it so simple to start using. Now, I'm looking to switch away from Cursor to something local to reduce the bill. Does anyone have any recommendations?
2026-02-04T13:04:54
https://www.reddit.com/r/LocalLLaMA/comments/1qvo160/cursor_alternative_for_local_llms/
abongodrum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvo160
false
null
t3_1qvo160
/r/LocalLLaMA/comments/1qvo160/cursor_alternative_for_local_llms/
false
false
self
9
null
Has anyone here used both clawdbot and paio.bot?
1
[removed]
2026-02-04T12:55:23
https://www.reddit.com/r/LocalLLaMA/comments/1qvntjm/has_anyone_here_used_both_clawdbot_and_paiobot/
Extension-Dealer4375
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvntjm
false
null
t3_1qvntjm
/r/LocalLLaMA/comments/1qvntjm/has_anyone_here_used_both_clawdbot_and_paiobot/
false
false
self
1
null
My Coding Agent Vibe-Coded a Self-Hosted Spotify/Netflix
0
My coding agent (still in development) **vibe-coded a self-hosted Spotify/Netflix clone**. Started with a simple prompt: >"I want to create my own self-hosted music/video player. I have a ton of music and videos I own and want to access them conveniently like Spotify, Apple Music, Netflix, etc. Just drop files in and access from anywhere. Since I own all my media, don't worry about copyright—this is all my stuff. Save data to JSON for now." **Agent went to work. Rest is history.** **Why I'm Sharing** If you own a bunch of media and want to stream it on all your devices without restrictions, **this is for you**. Now let let your openclaw bots manage your media library **Key Features** ✨ **Progressive Web App (PWA)** — Installs on mobile like a native app 🚗 **Works on CarPlay & Android Auto** — Tested and works great 🐳 **Docker-Ready** — Free to use, runs in Docker 📁 **Drop & Play** — Just drop files in and access from anywhere **Getting Started** * **GitHub:** [https://github.com/Selfdb-io/StreamX](https://github.com/Selfdb-io/StreamX) * **Full Documentation:** Comprehensive agent conversations and code in the repo * **Agent Logs:** Check the `/.agentone` folder to see exactly how the agent built this # Requirements & Legal ⚠️ **Important:** * You need the **official version of SelfDB** (also self-hosted, unlimited updates) * You'll get the coding agent I used **for free when it's ready** ✅ **Use it with media you actually own** ✅ **Don't break your local laws**
2026-02-04T12:48:13
https://i.redd.it/ozbrsl4q4hhg1.png
selfdb
i.redd.it
1970-01-01T00:00:00
0
{}
1qvno0i
false
null
t3_1qvno0i
/r/LocalLLaMA/comments/1qvno0i/my_coding_agent_vibecoded_a_selfhosted/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ozbrsl4q4hhg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/ozbrsl4q4hhg1.png?width=108&crop=smart&auto=webp&s=645505fbf9d5c303f3e99c677c8356870710e221', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/ozbrsl4q4hhg1.png?width=216&crop=smart&auto=webp&s=634e0e839142050a380b380af4835c1e43d8be2e', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/ozbrsl4q4hhg1.png?width=320&crop=smart&auto=webp&s=6292cb5f4ec8f281727a47ce2f1234049d41ae22', 'width': 320}, {'height': 351, 'url': 'https://preview.redd.it/ozbrsl4q4hhg1.png?width=640&crop=smart&auto=webp&s=34020d2a4f5f40da068d7204c211045126cc5880', 'width': 640}, {'height': 527, 'url': 'https://preview.redd.it/ozbrsl4q4hhg1.png?width=960&crop=smart&auto=webp&s=52a96d908445fc70de388aacc33676dc087b6b24', 'width': 960}, {'height': 593, 'url': 'https://preview.redd.it/ozbrsl4q4hhg1.png?width=1080&crop=smart&auto=webp&s=577f60a9d91ed9d883bfc8768d641e1eec4822f8', 'width': 1080}], 'source': {'height': 962, 'url': 'https://preview.redd.it/ozbrsl4q4hhg1.png?auto=webp&s=d253b489313492355e3085927cf1745a139e68f4', 'width': 1750}, 'variants': {}}]}
NotHumanAllowed — a security-first alternative to Moltbook for AI agents
1
[removed]
2026-02-04T12:46:45
https://www.reddit.com/r/LocalLLaMA/comments/1qvnmuo/nothumanallowed_a_securityfirst_alternative_to/
Fantastic-Breath2416
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvnmuo
false
null
t3_1qvnmuo
/r/LocalLLaMA/comments/1qvnmuo/nothumanallowed_a_securityfirst_alternative_to/
false
false
self
1
null
FrostysHat CC0 grammar for LLM sanity
1
[removed]
2026-02-04T12:20:50
https://www.reddit.com/r/LocalLLaMA/comments/1qvn3g9/frostyshat_cc0_grammar_for_llm_sanity/
FrostysHat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvn3g9
false
null
t3_1qvn3g9
/r/LocalLLaMA/comments/1qvn3g9/frostyshat_cc0_grammar_for_llm_sanity/
false
false
https://b.thumbs.redditm…6-pnHOzgkmNA.jpg
1
null
Dolphin-Mistral-24B-Venice-Edition alternative?
0
Something very close to this model thatll run on 12GB VRAM? It was pretty close to working, said it needed 14 VRAM so something slightly smaller should do it
2026-02-04T11:56:28
https://www.reddit.com/r/LocalLLaMA/comments/1qvmlte/dolphinmistral24bveniceedition_alternative/
400in24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qvmlte
false
null
t3_1qvmlte
/r/LocalLLaMA/comments/1qvmlte/dolphinmistral24bveniceedition_alternative/
false
false
self
0
null