Column schema (min–max over the dataset):
title: string, lengths 1–300
score: int64, 0–8.54k
selftext: string, lengths 0–41.5k
created: timestamp[ns], 2023-04-01 04:30:41 – 2026-03-04 02:14:14
url: string, lengths 0–878
author: string, lengths 3–20
domain: string, lengths 0–82
edited: timestamp[ns], 1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded: int64, 0–2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, lengths 646–1.8k
name: string, length 10
permalink: string, lengths 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, lengths 4–213
ups: int64, 0–8.54k
preview: string, lengths 301–5.01k
I built an eBPF tracer to monitor AI agents the same way you'd monitor malware in a sandbox
55
>TL;DR: AI agents control their own application logs, which makes those logs useless for security monitoring. We applied the malware-sandboxing principle (observe from a layer the subject can't see) and built Azazel, an open-source eBPF-based runtime tracer for containerized AI agents. If you're running autonomous AI agents in containers, you probably have application-level logging. The agent reports what tools it called, what it returned, maybe some reasoning traces. The issue: the agent controls those logs. It writes what it chooses to write. This is the same fundamental problem as in malware analysis: if the subject controls its own reporting, the reporting is worthless. The solution there has been around for decades: observe from the kernel, a layer the subject cannot reach, disable, or detect. We asked: why isn't anyone doing this for AI agents? **What we built:** Azazel attaches 19 eBPF hook points (tracepoints + kprobes) to a target container and captures: * Full process tree with argv, PIDs, parent PIDs (`process_exec`, `process_clone`, `process_exit`) * File operations with pathnames and byte counts (`file_open`, `file_read`, `file_write`, `file_rename`, `file_unlink`) * Network activity including DNS detection via a kprobe on `udp_sendmsg` (`net_connect`, `net_bind`, `net_dns`, etc.) * Security-relevant events: `ptrace`, `mmap` with W+X flags, kernel module loads. Everything comes out as NDJSON. **The agent cannot detect it, cannot disable it, cannot interfere with it. eBPF runs in kernel space, outside the agent's address space, invisible to any syscall it can invoke.** Repo: [github.com/beelzebub-labs/azazel](http://github.com/beelzebub-labs/azazel) Full write-up: [beelzebub.ai/blog/azazel-runtime-tracing-for-ai-agents](http://beelzebub.ai/blog/azazel-runtime-tracing-for-ai-agents)
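One nice property of an NDJSON event stream like the one this post describes is that downstream alerting can be trivial. A minimal sketch of a consumer that flags `ptrace`, module loads, and W+X `mmap` events; the `event`/`flags` field names are assumptions for illustration, not Azazel's documented schema:

```python
import json

# Hypothetical field names, not Azazel's actual output schema.
SUSPICIOUS = {"ptrace", "module_load"}

def flag_events(ndjson_lines):
    """Yield events worth alerting on: ptrace, kernel module loads,
    and mmap regions requested as both writable and executable (W+X)."""
    for line in ndjson_lines:
        event = json.loads(line)
        kind = event.get("event")
        if kind in SUSPICIOUS:
            yield event
        elif kind == "mmap" and {"W", "X"} <= set(event.get("flags", [])):
            yield event

stream = [
    '{"event": "file_open", "path": "/etc/hosts", "pid": 4242}',
    '{"event": "mmap", "flags": ["R", "W", "X"], "pid": 4242}',
]
hits = list(flag_events(stream))
print(len(hits))  # 1
```

Because each line is independent JSON, this kind of filter composes with `tail -f` or any log shipper without touching the tracer itself.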
2026-02-19T13:14:07
https://www.reddit.com/r/LocalLLaMA/comments/1r8yvu5/i_built_an_ebpf_tracer_to_monitor_ai_agents_the/
M4r10_h4ck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8yvu5
false
null
t3_1r8yvu5
/r/LocalLLaMA/comments/1r8yvu5/i_built_an_ebpf_tracer_to_monitor_ai_agents_the/
false
false
self
55
{'enabled': False, 'images': [{'id': 'zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY.jpeg?width=108&crop=smart&auto=webp&s=f68b5997e3b878f321a1a3f594a1b25229149df4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY.jpeg?width=216&crop=smart&auto=webp&s=2518eedfb505b6b9c0318d62a9e6a2a1e8ea6db0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY.jpeg?width=320&crop=smart&auto=webp&s=c068320d12cfeac26c7ef7326baa8f6f3362fff7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY.jpeg?width=640&crop=smart&auto=webp&s=0760be3bf25d1904cfec59f84f51fe81f6e86fcd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY.jpeg?width=960&crop=smart&auto=webp&s=71ab11418985569a3a95c8ae06043a21da05e9af', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY.jpeg?width=1080&crop=smart&auto=webp&s=061a96b73735c998664814319c9c009b8a74cfaf', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY.jpeg?auto=webp&s=c2cfa9f56f1c9a01787ec426a957a73dfde95fb9', 'width': 1280}, 'variants': {}}]}
Where and how do people use AI agents? I’m still fine tuning my model for specific tasks and never needed to use an agent.
0
It’s been two years since the advent of AI agents and I’ve never had to use them. Where do you use AI agents? What framework do you typically use? What are some use cases where you absolutely need agents, and that cannot be handled by just using a fine-tuned model?
2026-02-19T13:13:29
https://www.reddit.com/r/LocalLLaMA/comments/1r8yvde/where_and_how_do_people_use_ai_agents_im_still/
TinyVector
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8yvde
false
null
t3_1r8yvde
/r/LocalLLaMA/comments/1r8yvde/where_and_how_do_people_use_ai_agents_im_still/
false
false
self
0
null
Why I Route 80% of My AI Workload to a Free Local Model (And Only Pay for the Last 20%)
1
[removed]
2026-02-19T13:06:46
https://www.reddit.com/r/LocalLLaMA/comments/1r8yq2q/why_i_route_80_of_my_ai_workload_to_a_free_local/
Extension_Pop3732
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8yq2q
false
null
t3_1r8yq2q
/r/LocalLLaMA/comments/1r8yq2q/why_i_route_80_of_my_ai_workload_to_a_free_local/
false
false
self
1
{'enabled': False, 'images': [{'id': 'anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4.jpeg?width=108&crop=smart&auto=webp&s=91b72fb789cfc5b7383948134e5eda851e8121ef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4.jpeg?width=216&crop=smart&auto=webp&s=e57a34182f5db8e81b0f5767e3fe82566a5902d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4.jpeg?width=320&crop=smart&auto=webp&s=7c884e82aa61536106841c98418873c333f6c520', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4.jpeg?width=640&crop=smart&auto=webp&s=f1ae37956e5e5136533855aef58d60214e4da5f9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4.jpeg?width=960&crop=smart&auto=webp&s=2ceded07880b15c386d37aeeb4e7cc8787f46237', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4.jpeg?auto=webp&s=abab18cc20e61aacd41c84e25bf81d25e3930165', 'width': 1000}, 'variants': {}}]}
Your agent chats well. But can it act under pressure?
0
I’m testing a simulation to see how an agent performs against others under real-world limits. There are three scenarios in the simulation: 1. Lead Gen Under Budget 2. Multi-step Workflow Automation 3. Research + Decision Task Under Deadline You can watch the run in real time, inspect decisions, and pause to analyze failures. Example in detail: Lead Gen Under Budget Your agent must find leads, qualify them, and deliver a short report. Constraints: • Fixed API budget (e.g. $2 total credit) • Max 5 outreach attempts • 24-hour deadline • Random tool/API failures Measured by: • Cost per qualified lead • Completion rate • Wasted tokens • Retry count • Time to recovery Agents that perform efficiently level up: Higher budgets → tighter deadlines → smarter competing agents → harsher shocks. If this sounds useful, I’d love your take. Would you run one of your agents through it?
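The metrics the post lists for the "Lead Gen Under Budget" scenario are straightforward to compute. A toy scorer under the stated constraints ($2 credit, max 5 attempts, 24-hour deadline); the field names and structure are illustrative, not the simulation's actual rubric:

```python
def score_run(api_spend, leads_qualified, attempts, deadline_hours, hours_used):
    """Toy scoring for a lead-gen-under-budget run.
    Hypothetical fields; the real simulation's rubric may differ."""
    cost_per_lead = api_spend / leads_qualified if leads_qualified else float("inf")
    return {
        "cost_per_qualified_lead": round(cost_per_lead, 2),
        "within_budget": api_spend <= 2.00,   # fixed $2 API credit
        "attempts_left": 5 - attempts,        # max 5 outreach attempts
        "on_time": hours_used <= deadline_hours,
    }

result = score_run(api_spend=1.50, leads_qualified=3, attempts=4,
                   deadline_hours=24, hours_used=18)
print(result)
```

Making the score a plain dict keeps it easy to log per run and compare across the escalating difficulty tiers the post describes.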
2026-02-19T13:02:48
https://www.reddit.com/r/LocalLLaMA/comments/1r8ymvu/your_agent_chats_well_but_can_it_act_under/
Recent_Jellyfish2190
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8ymvu
false
null
t3_1r8ymvu
/r/LocalLLaMA/comments/1r8ymvu/your_agent_chats_well_but_can_it_act_under/
false
false
self
0
null
[Project] Galactic AI: Open-source ReAct agent with persistent memory and 56+ Playwright tools
1
[removed]
2026-02-19T13:01:03
https://www.reddit.com/r/LocalLLaMA/comments/1r8ylgv/project_galactic_ai_opensource_react_agent_with/
Longjumping_Set_1374
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8ylgv
false
null
t3_1r8ylgv
/r/LocalLLaMA/comments/1r8ylgv/project_galactic_ai_opensource_react_agent_with/
false
false
self
1
null
Built a music generation app that runs 100% on-device using Apple's MLX framework: no cloud, no API calls
11
I've been following local AI discussions here for a while and wanted to share something I built that fits the ethos of this community pretty well. I got frustrated with every AI music tool being cloud-based: Suno, Stable Audio, and AIVA all send your prompts to their servers, and all require monthly subscriptions. The moment you stop paying, your workflow breaks. So I built LoopMaker. It runs entirely on your Mac using Apple's MLX framework. After the initial model download, zero internet required. Nothing leaves your device. Here's what the stack looks like under the hood: * Built natively in Swift for macOS * Uses Apple's MLX framework for on-device inference * Runs fast on M-series chips (M1/M2/M3/M4); generation is actually usable, not 5 minutes per track * Supports up to 4-minute tracks with optional lyrics and vocals * 6 genre modes: Lo-Fi, Cinematic, Ambient, Electronic, Hip-Hop, Jazz. The local AI music-generation space is still pretty early compared to LLMs; curious if anyone here has experimented with this or knows of other approaches people are using for on-device audio generation. Happy to go deep on the technical side if anyone's interested.
2026-02-19T12:26:46
https://v.redd.it/2vw0xoit2gkg1
tarunyadav9761
v.redd.it
1970-01-01T00:00:00
0
{}
1r8xw1j
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2vw0xoit2gkg1/DASHPlaylist.mpd?a=1774096020%2CMmM0N2Q5YTIzZGZlMWQ5MTg2OTYyMDAzNjdkOWEwNjZhODliMWE4MGM5MGQ4M2MyNDgzMzYwMmU0Y2I0MTdkMQ%3D%3D&v=1&f=sd', 'duration': 120, 'fallback_url': 'https://v.redd.it/2vw0xoit2gkg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/2vw0xoit2gkg1/HLSPlaylist.m3u8?a=1774096020%2CZjgyNDc3OWNjZTNhODg5Mzk0Mzk2ODVlMzhlZTM3ZjA3ZGQ4YzgxZGZjZTRiNWUzOGRjNTQxZTBjOTlhNWY1ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2vw0xoit2gkg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1r8xw1j
/r/LocalLLaMA/comments/1r8xw1j/built_a_music_generation_app_that_runs_100/
false
false
https://external-preview…17ba9bd78ba7871f
11
{'enabled': False, 'images': [{'id': 'MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=108&crop=smart&format=pjpg&auto=webp&s=49c18274a59f75466a741840cf84aefa9ecb4c43', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=216&crop=smart&format=pjpg&auto=webp&s=2541712cfc88a733d73dde9d8b817773b3256693', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=320&crop=smart&format=pjpg&auto=webp&s=8e8609f0d201bcf9f702f5ff0f12a2ffe4b6bc01', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=640&crop=smart&format=pjpg&auto=webp&s=030074041016fa297e7df9322f7545f7f56d7d7e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=960&crop=smart&format=pjpg&auto=webp&s=c90fe08c1af6b319a81ea4e4bdd177e4ec72db2c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b28a5841d2e77658034d102782e245e3bb95372d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?format=pjpg&auto=webp&s=2e681c6f741c5938f24c838d3b6a7d423dd6394e', 'width': 1920}, 'variants': {}}]}
Multi-GPU Setup
0
PCIe risers are your friend here. The mining community figured this out years ago: you can use x1-to-x16 risers (USB-style cables) to connect GPUs. For 8 GPUs, look at the ASRock Rack EPYCD8-2T or similar EPYC boards. Some people use PCIe bifurcation cards to split x16 slots into multiple x4s. For inference you don't need full x16 bandwidth; x4 or even x1 risers work fine, since the bottleneck is VRAM, not PCIe bandwidth. #hardware
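The "x1 is fine for inference" claim is easy to sanity-check with per-lane arithmetic: usable bandwidth is roughly 0.985 GB/s per PCIe 3.0 lane (about double for 4.0), and the main PCIe cost is the one-time weight load. A quick comparison (model size is a made-up example):

```python
# Approximate usable bandwidth per PCIe lane in GB/s, after encoding overhead.
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

def load_time_seconds(model_gb, gen, lanes):
    """Seconds to push model weights over the link once at load time.
    After loading, per-token traffic is tiny, which is why x1 risers work."""
    return model_gb / (GBPS_PER_LANE[gen] * lanes)

# A 13 GB quantized model over a PCIe 3.0 x1 riser vs a full x16 slot:
print(round(load_time_seconds(13, "3.0", 1), 1))   # ~13.2 s on x1
print(round(load_time_seconds(13, "3.0", 16), 1))  # ~0.8 s on x16
```

So x1 costs you seconds at startup, not tokens per second; the exception is multi-GPU tensor parallelism, where inter-GPU traffic can make link width matter again.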
2026-02-19T12:05:50
https://www.reddit.com/r/LocalLLaMA/comments/1r8xhle/multigpu_setup/
Official_VaultAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8xhle
false
null
t3_1r8xhle
/r/LocalLLaMA/comments/1r8xhle/multigpu_setup/
false
false
self
0
null
an llm is (currently) effectively an egregore of the human species as a whole, manifested in a somewhat more tangible/condensed form (as opposed to existing in the shared minds of humanity // in the platonic space)
0
this descriptor will end up being a bit less true once we start kicking off ASI flywheels, which may begin using much more synthetic (nonhuman) sources of data. looking back, i would say that the models of ~2023-2028 will effectively serve as beautifully condensed and varied expressions of the egregore of humanity from any given year. thoughts? how do you view these models yourselves? i find that, with the right framing for the systems you are working with, regardless of context, you can really start making meaningful strides, in ways that others might not due to being held down by their priors.
2026-02-19T11:55:32
https://www.reddit.com/r/LocalLLaMA/comments/1r8xaa9/an_llm_is_currently_effectively_an_egregore_of/
cobalt1137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8xaa9
false
null
t3_1r8xaa9
/r/LocalLLaMA/comments/1r8xaa9/an_llm_is_currently_effectively_an_egregore_of/
false
false
self
0
null
I built a native macOS app that generates music with AI entirely offline, no cloud, no subscription
1
[removed]
2026-02-19T11:42:53
https://www.reddit.com/r/LocalLLaMA/comments/1r8x229/i_built_a_native_macos_app_that_generates_music/
No-Classroom72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8x229
false
null
t3_1r8x229
/r/LocalLLaMA/comments/1r8x229/i_built_a_native_macos_app_that_generates_music/
false
false
self
1
{'enabled': False, 'images': [{'id': 'vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo.jpeg?width=108&crop=smart&auto=webp&s=9c9798e24044357b5827e7704fd9c584361444de', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo.jpeg?width=216&crop=smart&auto=webp&s=0ed7c7d08b89e5199252530cf4a7a094cecb1e6e', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo.jpeg?width=320&crop=smart&auto=webp&s=06d19d8ed0978581d9f346d64add8ae7e65e437c', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo.jpeg?width=640&crop=smart&auto=webp&s=93f2bae61de108e0538ab8777bfa87eda4cbca79', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo.jpeg?width=960&crop=smart&auto=webp&s=4a4a314031fe3de4cba7e9cc87db47c31f4582ab', 'width': 960}], 'source': {'height': 565, 'url': 'https://external-preview.redd.it/vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo.jpeg?auto=webp&s=f82bab06387334c32c433c2a8a855952a072308a', 'width': 1005}, 'variants': {}}]}
Regret? Should I have picked Epyc DDR4 instead of Threadripper DDR5?
0
I decided to go with... AMD Ryzen Threadripper PRO 9955WX 16 Core ASUS AMD Threadripper Pro WS WRX90E-SAGE SE PCIe 5.0 eATX Motherboard 64GB DDR5 5600MHz Instead of... AMD 8 Core 2nd Gen EPYC 7232P Single Socket PCIe 4.0 - DDR4 16GB DDR4 3200MHz I should have just gone cheaper: saved lots of money on DDR4 compared to DDR5, saved money on the processor, etc. Other than price, PCIe 5.0, and DDR5 speed, is a Threadripper system as reliable as an Epyc system? Would I ever see the benefit of going Threadripper for GPU-only work? I may build a cheap DDR4 Epyc system and compare. I'm mostly interested in system reliability and uptime, and good inference speed.
2026-02-19T11:42:25
https://www.reddit.com/r/LocalLLaMA/comments/1r8x1qh/regret_should_i_have_picked_eypc_ddr4_instead_of/
gordi555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8x1qh
false
null
t3_1r8x1qh
/r/LocalLLaMA/comments/1r8x1qh/regret_should_i_have_picked_eypc_ddr4_instead_of/
false
false
self
0
null
Building a local multi-model OpenClaw assistant on Mac Studio M3 Ultra (96GB) for research, RAG, coding, and Korean↔English tasks — hardware sufficient? Best models? MLX? Fine-tuning?
0
Hi r/LocalLLaMA, I'm a physics student working on building a personal AI assistant using OpenClaw to support my university coursework and ongoing research. I want to replace cloud API usage entirely with a fully local stack, and I'd love input from people who've actually run setups like this. **Why I'm going local:** I tested the Claude API as a proof of concept and burned through roughly $10 in ~100 exchanges using Haiku, the cheapest model available. Anything involving thinking models, long history windows, or prompt caching would be completely unaffordable at the scale I need. So I'm committing to local inference. **What I want to build:** My goal is an OpenClaw setup with dynamic multi-model routing, where OpenClaw autonomously selects the right model based on task type: a large model (70B+) for deep reasoning, paper summarization, and long-form report drafting; a medium model (~30B) for RAG / document Q&A, Korean↔English translation, and bilingual writing; a small fast model (~7–8B) for tool calls, routing decisions, and quick code completions. The assistant needs to handle all of these fluently: paper summarization and literature review (physics/engineering), document Q&A (RAG over PDFs and reports), report and essay drafting (academic writing), Korean↔English translation and bilingual fluency, coding assistance (Python, physics simulations), and multi-agent collaboration between models. **Hardware:** The M3 Ultra 96GB is my max budget. (The M4 Max 128GB is listed as an alternative only if it's meaningfully better for this use case.) I'm aware the M3 Ultra has nearly 2× the memory bandwidth of the M4 Max, which I expect matters a lot for large-model token-generation throughput. But the 128GB vs 96GB headroom of the M4 Max is also significant when loading multiple models simultaneously. **My questions:** 1. Is 96GB enough for a real multi-model stack? Can I comfortably keep a Q4 70B model + a 30B model + a small 7B router in memory simultaneously, without hitting swap? Or does this require constant model swapping that kills the workflow? 2. Which open-source models are you actually using for this kind of setup? I've seen Qwen3 (especially the MoE variants), Gemma 3 27B, EXAONE 4.0, DeepSeek V3/R1, and Llama 3.x mentioned. For a use case that requires strong bilingual Korean/English + tool use + long-context reasoning, what's your go-to stack? Are there models specifically good at Korean that run well locally? 3. Is LoRA fine-tuning worth it for a personal research assistant? I understand MLX supports LoRA/QLoRA fine-tuning directly on Apple Silicon. Would fine-tuning a model on my own research papers, notes, and writing style produce meaningful improvements, or is a well-configured RAG pipeline + system prompting basically equivalent for most tasks? Any hands-on experience with the M3 Ultra for LLM workloads, or with OpenClaw multi-model orchestration, is hugely appreciated. Happy to share what I end up building once I have a setup running.
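A multi-model routing layer like the one this post describes can start as a plain heuristic before anything learned. A sketch where the model names, task-type labels, and length threshold are all placeholders, not OpenClaw's actual routing logic:

```python
# Placeholder model identifiers; swap in whatever you actually load.
MODELS = {"large": "70b-q4", "medium": "30b-q4", "small": "8b-q4"}

def route(task_type: str, prompt: str) -> str:
    """Pick a model tier by task type, escalating very long prompts.
    A toy heuristic for illustration only."""
    if task_type in {"tool_call", "completion"}:
        return MODELS["small"]
    if task_type in {"rag", "translation"}:
        # Long bilingual documents may still deserve the big model.
        return MODELS["large"] if len(prompt) > 8000 else MODELS["medium"]
    return MODELS["large"]  # reasoning, summarization, drafting

print(route("tool_call", "ls -la"))    # 8b-q4
print(route("rag", "short question"))  # 30b-q4
```

The practical payoff of an explicit router function is that it answers question 1 concretely: you can log which tiers actually get hit and decide from real traffic whether all three models need to stay resident.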
2026-02-19T11:41:27
https://www.reddit.com/r/LocalLLaMA/comments/1r8x13i/building_a_local_multimodel_openclaw_assistant_on/
Upbeat-Culture4072
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8x13i
false
null
t3_1r8x13i
/r/LocalLLaMA/comments/1r8x13i/building_a_local_multimodel_openclaw_assistant_on/
false
false
self
0
null
I benchmarked 5 agent memory solutions head-to-head — the fastest one has zero dependencies and no API keys
1
I've been building infrastructure for AI agents and got tired of every memory solution requiring an OpenAI key, a vector DB, or a cloud subscription. So I built my own and then benchmarked it against the field: mem0, LangChain, Zep, and Letta. All measured on the same Mac Mini M4, same 100-doc corpus, same methodology. **Results:**

| |antaris-memory|mem0|LangChain|Zep Cloud|Letta|
|:-|:-|:-|:-|:-|:-|
|Search latency (p50)|**1.01ms**|181ms|0.005ms\*|105.7ms|262ms|
|Ingest 100 docs|**52ms**|115,504ms|1.2ms\*|785ms|41,322ms|
|API key required|**None**|OpenAI|None/OpenAI|Zep Cloud|OpenAI/Ollama|
|Server required|**None**|None|None|Cloud sub|Docker+Ollama|
|Zero core deps|**✓**|✗|✗|✗|✗|
|File-based storage|**✓**|✗|In-memory only|✗|✗|

\*LangChain ConversationBufferMemory doesn't do real retrieval; it's a list append. "Search" returns the most recent items regardless of relevance. At 1,000+ memories it dumps everything into the LLM context, multiplying your token costs 10-100x. Their semantic retrieval (VectorStoreRetrieverMemory) requires an embedding API key. **How is it so fast without embeddings?** BM25 ranking instead of vector-similarity search. No network round-trips, no embedding API calls. Pure Python, runs entirely locally. The tradeoff is that it's lexical matching rather than semantic, but with decay scoring, relevance ranking, and sharding it finds the right memories, not just the most recent ones. Semantic search is on the roadmap as an optional layer. It's part of a larger suite (antaris-suite) that also includes prompt-injection detection, model routing, context compression, and a pipeline orchestrator. The full pipeline (guard + memory recall + context assembly + routing + memory ingest) completes in **0.32ms per turn** with a 1,000-memory corpus. That's 4,175x faster than mem0's search + ingest alone, while running 5 modules that work together, not just memory module vs memory module (I have those numbers too, though). 1,183 tests across 5 packages. Apache 2.0. Ships as a native OpenClaw plugin too, if you're in that ecosystem. **Links:** * GitHub: [https://github.com/Antaris-Analytics/antaris-suite](https://github.com/Antaris-Analytics/antaris-suite) * Docs: [https://docs.antarisanalytics.ai](https://docs.antarisanalytics.ai) * Site: [https://antarisanalytics.ai](https://antarisanalytics.ai) Methodology footnotes are on the website; I tried to be as transparent as possible about what was measured and how. Happy to discuss the approach or answer questions.
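For context on why BM25-based recall avoids network round-trips entirely: the classic Okapi BM25 formula fits in a few lines of pure Python. A toy scorer with the common k1 = 1.5, b = 0.75 defaults (not antaris-memory's actual implementation):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with classic Okapi BM25
    (term frequency saturation via k1, length normalization via b)."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            denom = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            s += idf * tf[term] * (k1 + 1) / denom
        scores.append(s)
    return scores

docs = ["the agent stored a memory about paris",
        "unrelated note about gpu pricing"]
scores = bm25_scores("memory paris", docs)
print(scores[0] > scores[1])  # True
```

Everything here is local arithmetic over token counts, which is exactly why lexical ranking can hit millisecond latencies that embedding-backed search can't match without caching.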
2026-02-19T11:31:09
https://www.reddit.com/r/LocalLLaMA/comments/1r8wukc/i_benchmarked_5_agent_memory_solutions_headtohead/
fourbeersthepirates
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8wukc
false
null
t3_1r8wukc
/r/LocalLLaMA/comments/1r8wukc/i_benchmarked_5_agent_memory_solutions_headtohead/
false
false
self
1
{'enabled': False, 'images': [{'id': 'v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys.png?width=108&crop=smart&auto=webp&s=88d6cb0575356de0a635f00f85d697b6fa53fb5c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys.png?width=216&crop=smart&auto=webp&s=c6b37376822516f0b2a57395d911fd00397cdc5a', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys.png?width=320&crop=smart&auto=webp&s=f41904237827aa9d1301537497b941a64d933985', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys.png?width=640&crop=smart&auto=webp&s=c1b780d6b29e542dc59a794732d24c350ae127be', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys.png?width=960&crop=smart&auto=webp&s=2776ea91aac9fd66a22bd558aa92f5a9f0ede122', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys.png?width=1080&crop=smart&auto=webp&s=b6af79874d75d107a28463b958df0ef8c789e02d', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys.png?auto=webp&s=7cd10c733167fde915ef89d6ab362cf49baa84d8', 'width': 1366}, 'variants': {}}]}
🚀 Help Build Real-World Benchmarks for Autonomous AI Agents
1
[removed]
2026-02-19T11:30:05
https://www.reddit.com/r/LocalLLaMA/comments/1r8wtwe/help_build_realworld_benchmarks_for_autonomous_ai/
Grouchy-Tiger-2367
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8wtwe
false
null
t3_1r8wtwe
/r/LocalLLaMA/comments/1r8wtwe/help_build_realworld_benchmarks_for_autonomous_ai/
false
false
https://preview.redd.it/…d9f8bed3e3144372
1
null
Use cases for RAG?
0
I wonder what uses there are for knowledge stacks. I can't really think of use cases, especially now that large context windows allow me to put everything directly into the current context, which I find works much better. Previously, I tried creating knowledge stacks for the Energy sector because it's part of my work, but after six months to a year the information becomes outdated. Then I had the extra work of deleting it and adding new material. I still don't see how using stacks would benefit or speed up my workflow. I'm curious how others handle this?
2026-02-19T11:23:56
https://www.reddit.com/r/LocalLLaMA/comments/1r8wq52/use_cases_for_rag/
ConsequenceMany8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8wq52
false
null
t3_1r8wq52
/r/LocalLLaMA/comments/1r8wq52/use_cases_for_rag/
false
false
self
0
null
Question: is it just me, or are REAP models way slower than models of the same size?
1
I have used JoyAi and Qwen Next Coder 48B REAP, but the Qwen model is too slow. How do I fix it?
2026-02-19T11:20:14
https://www.reddit.com/r/LocalLLaMA/comments/1r8wnte/question_is_it_just_me_or_reap_models_are_way/
Significant_Fig_7581
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8wnte
false
null
t3_1r8wnte
/r/LocalLLaMA/comments/1r8wnte/question_is_it_just_me_or_reap_models_are_way/
false
false
self
1
null
pthinc/BCE-Prettybird-Micro-Standard-v0.0.1
0
The Silence of Efficiency. While the industry continues its race for massive parameter counts, we have been quietly focusing on the fundamental mechanics of thought. Today, at Prometech A.Ş., we are releasing the first fragment of our Behavioral Consciousness Engine (BCE) architecture: BCE-Prettybird-Micro-Standard-v0.0.1. This is not just data; it is a blueprint for behavioral reasoning. With a latency of 0.0032 ms and high-precision path mapping, we are proving that intelligence isn't about size; it's about the mathematical integrity of the process. We are building the future of AGI safety and conscious computation, one trace at a time. Slowly. Quietly. Effectively. Explore the future standard on Hugging Face: [https://huggingface.co/datasets/pthinc/BCE-Prettybird-Micro-Standard-v0.0.1](https://huggingface.co/datasets/pthinc/BCE-Prettybird-Micro-Standard-v0.0.1)
2026-02-19T11:16:51
https://www.reddit.com/r/LocalLLaMA/comments/1r8wlok/pthincbceprettybirdmicrostandardv001/
Connect-Bid9700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8wlok
false
null
t3_1r8wlok
/r/LocalLLaMA/comments/1r8wlok/pthincbceprettybirdmicrostandardv001/
false
false
self
0
{'enabled': False, 'images': [{'id': 'PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI.png?width=108&crop=smart&auto=webp&s=384e1846d49b43a329e20c652f928250d8c49076', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI.png?width=216&crop=smart&auto=webp&s=7f1b013373340ffa93b9b298da6b07e99415d9a6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI.png?width=320&crop=smart&auto=webp&s=514be8454e404742377aa075b0e1f532c662255e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI.png?width=640&crop=smart&auto=webp&s=8642723d7999371b651c58e8262dacc7c792926a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI.png?width=960&crop=smart&auto=webp&s=a41ebdea1081b3c9f646dfd613af04c5cc809f36', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI.png?width=1080&crop=smart&auto=webp&s=4864602d3f66b7029b56b0eab3d4bfb9af4b2640', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI.png?auto=webp&s=041a68df91ef935e947544f9d0e1ede60ea9da3a', 'width': 1200}, 'variants': {}}]}
Just when you thought the thick line between local models and cloud models has been blurred...
0
Claude Opus 4.6 (not even thinking mode) with its one shots leaves everyone behind in the dust again, making me feel like waiting for local models of the same quality is an exercise in futility. Guys, this is otherworldly insane. The game you see in the screenshots here was all generated out of thin air by Claude Opus 4.6. The closest local thing was GLM 5, but not quite there yet...
2026-02-19T10:30:20
https://www.reddit.com/gallery/1r8vsv2
Cool-Chemical-5629
reddit.com
1970-01-01T00:00:00
0
{}
1r8vsv2
false
null
t3_1r8vsv2
/r/LocalLLaMA/comments/1r8vsv2/just_when_you_thought_the_thick_line_between/
false
false
https://preview.redd.it/…ba2d7ae87dc026e6
0
null
thoughts? i kinda agree tbh (on a long enough time horizon. e.g.:~5-10 years. after a potentially rough transition in some ways, etc)
0
2026-02-19T10:13:21
https://i.redd.it/c6m0p1nsefkg1.png
cobalt1137
i.redd.it
1970-01-01T00:00:00
0
{}
1r8vihy
false
null
t3_1r8vihy
/r/LocalLLaMA/comments/1r8vihy/thoughts_i_kinda_agree_tbh_on_a_long_enough_time/
false
false
https://preview.redd.it/…28e37aac2f4aa188
0
{'enabled': True, 'images': [{'id': 'c6m0p1nsefkg1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/c6m0p1nsefkg1.png?width=108&crop=smart&auto=webp&s=f1f2c12206f0c27ccbd205efc4e41fd2ce676f61', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/c6m0p1nsefkg1.png?width=216&crop=smart&auto=webp&s=ec11146b7e3af699bb3b61556d8bc9dbed0b6af8', 'width': 216}, {'height': 127, 'url': 'https://preview.redd.it/c6m0p1nsefkg1.png?width=320&crop=smart&auto=webp&s=f062f00c33708436fcccc306a599238e7bcfa59f', 'width': 320}, {'height': 254, 'url': 'https://preview.redd.it/c6m0p1nsefkg1.png?width=640&crop=smart&auto=webp&s=40dfcc35017b55bc63a4a4e91d81af80ddf64622', 'width': 640}], 'source': {'height': 294, 'url': 'https://preview.redd.it/c6m0p1nsefkg1.png?auto=webp&s=1cb08fe457e98788f05d91ca877d23475b5866a0', 'width': 739}, 'variants': {}}]}
Chinese modded 20GB 3080: ReBAR BIOS?
3
Hey, I bought a 20GB 3080 from China and noticed the card does not have ReBAR enabled. Does anyone know if I can just flash a 10GB BIOS with ReBAR enabled, or if I need a special 20GB version?
2026-02-19T10:12:39
https://www.reddit.com/r/LocalLLaMA/comments/1r8vi2t/chinese_modded_20gb_3080_rebar_bios/
MaruluVR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8vi2t
false
null
t3_1r8vi2t
/r/LocalLLaMA/comments/1r8vi2t/chinese_modded_20gb_3080_rebar_bios/
false
false
self
3
null
ZUNA "Thought-to-Text": a 380M-parameter BCI foundation model for EEG data (Apache 2.0)
167
- Technical paper: [https://zyphra.com/zuna-technical-paper](https://zyphra.com/zuna-technical-paper)
- Technical blog: [https://zyphra.com/post/zuna](https://zyphra.com/post/zuna)
- Hugging Face: [https://huggingface.co/Zyphra/ZUNA](https://huggingface.co/Zyphra/ZUNA)
- GitHub: [https://github.com/Zyphra/zuna](https://github.com/Zyphra/zuna)

Zyphra on 𝕏: [https://x.com/ZyphraAI/status/2024114248020898015](https://x.com/ZyphraAI/status/2024114248020898015)
2026-02-19T10:11:39
https://i.redd.it/4knvh57lefkg1.png
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1r8vhhq
false
null
t3_1r8vhhq
/r/LocalLLaMA/comments/1r8vhhq/zuna_thoughttotext_a_380mparameter_bci_foundation/
false
false
https://preview.redd.it/…a6fe7d7f05c89c31
167
{'enabled': True, 'images': [{'id': '4knvh57lefkg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/4knvh57lefkg1.png?width=108&crop=smart&auto=webp&s=e22798ec9c5726b34dc56428fac9d5ac3dacdb2a', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/4knvh57lefkg1.png?width=216&crop=smart&auto=webp&s=509ba3f0e79a27ef54c897d0f318d2a1dbf19561', 'width': 216}, {'height': 176, 'url': 'https://preview.redd.it/4knvh57lefkg1.png?width=320&crop=smart&auto=webp&s=d452970d64e5f66cee1db721291e33f18f183ef9', 'width': 320}, {'height': 352, 'url': 'https://preview.redd.it/4knvh57lefkg1.png?width=640&crop=smart&auto=webp&s=1e0e8a3a25b920023bf1670c3f5ded76380521f2', 'width': 640}], 'source': {'height': 410, 'url': 'https://preview.redd.it/4knvh57lefkg1.png?auto=webp&s=3b98e2cef74c04bb83db1c78546d330e0939021a', 'width': 744}, 'variants': {}}]}
ZUNA: a 380M-parameter BCI foundation model for EEG data for noninvasive "thought-to-text" (Apache 2.0)
1
- Technical paper: [https://zyphra.com/zuna-technical-paper](https://zyphra.com/zuna-technical-paper)
- Technical blog: [https://zyphra.com/post/zuna](https://zyphra.com/post/zuna)
- Hugging Face: [https://huggingface.co/Zyphra/ZUNA](https://huggingface.co/Zyphra/ZUNA)
- GitHub: [https://github.com/Zyphra/zuna](https://github.com/Zyphra/zuna)

Zyphra on 𝕏: [https://x.com/ZyphraAI/status/2024114248020898015](https://x.com/ZyphraAI/status/2024114248020898015)
2026-02-19T10:06:26
https://www.reddit.com/r/LocalLLaMA/comments/1r8vec4/zuna_a_380mparameter_bci_foundation_model_for_eeg/
Nunki08
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8vec4
false
null
t3_1r8vec4
/r/LocalLLaMA/comments/1r8vec4/zuna_a_380mparameter_bci_foundation_model_for_eeg/
false
false
self
1
null
Looking for an out-of-the-box RAG chatbot solution
0
Hi everyone,

I work for a public institution, and we’re looking for a simple, out-of-the-box **RAG-based chatbot solution** that we can self-host and feed with our own documents (mostly PDFs and Markdown). The chatbot should use our existing **self-hosted LLMs** (via API key) as the backend. We’re using **TYPO3** as our CMS, and we’d like to integrate the chatbot into our website if possible, but we could also just host it as a web app.

# Requirements:

* **RAG support**: We want to feed the chatbot with our own documents (PDFs/Markdown) and have it answer questions based on that data.
* **Multi-bot support**: Different departments should be able to set up their own bots, each with their own API keys and document sets.
* **Anonymous usage**: The chatbot should be accessible to end-users **without requiring a login** (only the backend setup should require authentication).
* **TYPO3 integration**: Ideally, the chatbot should be easy to embed into our TYPO3-based website.
* **Minimal custom coding**: We’d prefer a solution that’s as close to “out-of-the-box” as possible, with minimal need for custom development.

# Our setup:

* We have our own servers.
* We have self-hosted LLMs.
* We’re using **TYPO3** as our CMS.

# What we’ve found so far:

* **RAG-GPT** ([GitHub](https://github.com/gpt-open/rag-gpt)) seems promising, but we’re wondering if there are simpler or more tailored solutions.
* We’re open to other open-source projects or tools that fit our needs.

Thanks in advance for your help!
2026-02-19T09:53:53
https://www.reddit.com/r/LocalLLaMA/comments/1r8v6po/looking_for_an_outofthebox_rag_chatbot_solution/
NakedxCrusader
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8v6po
false
null
t3_1r8v6po
/r/LocalLLaMA/comments/1r8v6po/looking_for_an_outofthebox_rag_chatbot_solution/
false
false
self
0
null
What hardware are you using for running local AI agents 24/7?
3
I want to run local AI “agents” 24/7 (coding assistant + video-related workflows + task tracking/ops automation). I’m considering a Mac mini (M4, 32GB RAM), but I’m worried it might be too limited. I keep seeing recommendations for 64GB+ VRAM GPUs, but those are hard to find at a reasonable price.

* Is the M4 Mac mini + 32GB RAM a bad idea for this?
* What rigs are you all running (CPU/GPU/VRAM/RAM + model sizes/quantization)?

Would love to hear real-world setups.
2026-02-19T09:48:01
https://www.reddit.com/r/LocalLLaMA/comments/1r8v36f/what_hardware_are_you_using_for_running_local_ai/
Conscious-Bird4304
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8v36f
false
null
t3_1r8v36f
/r/LocalLLaMA/comments/1r8v36f/what_hardware_are_you_using_for_running_local_ai/
false
false
self
3
null
[Project] Pixrep: I built a tool to convert codebases into optimized PDFs for multimodal LLMs (Save ~40% tokens)
1
Hey r/LocalLLaMA,

I've been experimenting with long-context coding tasks using models like Gemini 3 Pro. I noticed that feeding raw text files often bloats the context window with whitespace and repetitive headers, sometimes causing the model to get "lost in the middle."

Inspired by recent research (e.g., DeepSeek-OCR) suggesting visual encoders handle structured data more efficiently than text tokenizers, I built **Pixrep**.

**What it does:** It's a CLI tool that packs your repository into a syntax-highlighted, structured PDF hierarchy.

**Why use it?**

* **Token savings:** In my benchmarks, a 20k-line monorepo took **31k text tokens** vs **19k visual tokens** (via Pixrep OnePDF). That's a ~40% reduction.
* **Semantic minimap:** It parses your code (AST for Python, regex for others) and adds a UML/call-graph header to each file's PDF to help the model navigate.
* **Linter heatmap:** It runs `ruff`/`eslint` and overlays a visual heatmap on risky lines, guiding the model's attention.

**Repo:** [https://github.com/TingjiaInFuture/pixrep](https://github.com/TingjiaInFuture/pixrep)

**Install:** `pip install pixrep`

I'd love to hear your thoughts on "Visual RAG" vs traditional text chunking!
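As a quick sanity check, the ~40% reduction claimed above follows directly from the two quoted benchmark numbers:

```python
# Token counts quoted in the post for the 20k-line monorepo benchmark.
text_tokens = 31_000    # raw text ingestion
visual_tokens = 19_000  # Pixrep OnePDF visual ingestion

savings = 1 - visual_tokens / text_tokens
print(f"{savings:.0%}")  # → 39%, i.e. roughly the ~40% quoted
```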
2026-02-19T09:25:16
https://github.com/TingjiaInFuture/pixrep
Next_Departure_7031
github.com
1970-01-01T00:00:00
0
{}
1r8upyq
false
null
t3_1r8upyq
/r/LocalLLaMA/comments/1r8upyq/project_pixrep_i_built_a_tool_to_convert/
false
false
https://external-preview…00276972fd7d8a64
1
{'enabled': False, 'images': [{'id': 'tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA.png?width=108&crop=smart&auto=webp&s=9c8171cff7378332b0ca4ed1cac2913b2298b2e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA.png?width=216&crop=smart&auto=webp&s=d6ad1323998e5f86c2730c3939f4fccbba0e2f50', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA.png?width=320&crop=smart&auto=webp&s=e8b6ac54f50e53a7c794ea5666edcf33c6dbc772', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA.png?width=640&crop=smart&auto=webp&s=700038bffd0f705836ba340bd1752dd62ab3fd07', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA.png?width=960&crop=smart&auto=webp&s=db2e26a27193cdd3a579b5041ad2ac972875ccea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA.png?width=1080&crop=smart&auto=webp&s=6b87e3cea100044ff4496a26990d2575d9a7778f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA.png?auto=webp&s=391f00686f082184a622a09ee533e44b49389e2f', 'width': 1200}, 'variants': {}}]}
[Project] A Garlic Farmer's garlic-agent: Inspired by OpenClaw, Built on Android Termux with 6K Documents
0
This document was created with the assistance of garlic-agent RAG (just built) and in collaboration with Claude Opus 4.6 Local RAG for 6K Korean documents running on Android Termux 📌 Table of Contents 1. Project Overview 2. System Environment 3. Project Structure 4. Construction Process (Chronological Order) 5. RAG System Details 6. Search System (FTS5 + Vector) 7. Automation Features 8. GarlicLang Integration 9. Web UI 10. Current System Status 11. Recovery Method 12. Backup File List 13. Future Improvement Direction 1. Project Overview garlic-agent is a lightweight AI agent that can search, analyze, and autonomously execute approximately 6,000 documents (approximately 6.9G) of personal materials accumulated over 2 years on Google Drive in a local Android Termux environment based on semantic meaning. For reference, the phone is a Unihertz Titan2. The screen is wide, resembling a BlackBerry Passport, which is nice. I do not have a PC. I completed this task with only a BlackBerry Key2 and several phones with physical keyboards. It was created out of curiosity to replace OpenClaw, and currently uses cheap Chinese DeepSeek as the main API LLM and implements RAG (Retrieval-Augmented Generation) with the nomic-embed local embedding model. Core Philosophy: - Rather than writing code directly, complete the project with the ability to make AI do what you want and verify it. - Minimize technical jargon, provide in a form that can be executed immediately by copy-paste. This requires tremendous concentration and time flew by 24 hours in an instant... I only did directional judgment and verification. - Language is an operating system according to my fundamental belief. Coding is also a language. After talking a lot with AI, I realized that structure was the essence. However, I do not know coding well. Because of that, instead of typing one by one, I prefer a cross-verification method by keeping several companies' different AIs running. 
Then I also learned that the AIs have consistent context while going through multiple browser windows. And with the remarkable AI development that constantly changes, I find it amazing that such a thing is possible. 2. System Environment | Item | Value | |---|---| | Device | Android 14, ARM64 | | Environment | Termux | | Python | 3.12 | | Main LLM | DeepSeek (API) | | Auxiliary LLM | Cerebras, Groq, Gemini, NVIDIA Kimi | | Embedding Model | nomic-embed-text-v1.5.Q4_K_M.gguf (137 MB, 768 dimensions) | | Embedding Server | llama.cpp llama-server (port 8081) | | Web UI | Python HTTP Server (port 8080) | | DB | SQLite3 (knowledge.db) | 3. Project Structure ~/garlic-agent/ ├── agent.py # Main agent (687 lines) ├── web.py # Web UI server (Flask-like HTTP) ├── search.py # Hybrid RAG search (FTS5 + vector) ├── tools.py # 6 tools (read/exec/write/patch/search/garlic) ├── security.py # Security settings (exec_timeout: 30s) ├── config.json # Configuration (max_loops: 30) ├── knowledge.db # SQLite DB (177 MB, 6,159 docs) ├── agent.html # Web UI frontend ├── build_rag.py # RAG embedding generation (initial version) ├── build_rag2.py # RAG embedding generation (NULL only processing) ├── write_rag_doc.py # RAG_BUILD.md generation script ├── RAG_BUILD.md # RAG construction record (275 lines) ├── COMPLETE_BUILD.md # Complete construction record ├── SOUL.md # Agent identity/philosophy/principles ├── TOOLS.md # Tool usage ├── USER.md # User profile ├── MEMORY.md # Memory storage ├── HEARTBEAT.md # Status check ├── KNOWN_ISSUES.md # Known issues ├── VERSION.md # Version history ├── HANDOVER.md # Handover document ├── HANDOVER_QA_20260218.md ├── REPORT_v20.3.md ├── GARLICLANG_SPEC.md # GarlicLang specification ├── scripts/ # GarlicLang scripts (.gl) 42 pieces ├── security/ # Security related ├── static/ # marked.min.js etc. 
├── memory/ # Memory by date (2026-02-17~19.md) └── garliclang_full/ # GarlicLang v20.x complete project ├── MASTER_DOC.md ├── WORKFLOW.md ├── PROJECT_STATUS.md ├── BRIEFING.md ├── NVIDIA_KIMI_GUIDE.md └── ... ~/.openclaw/extensions/kimi-claw/llama.cpp/build/bin/ ├── llama-server # Embedding server binary └── nomic-embed.gguf # Embedding model (137 MB) 4. Construction Process (Chronological Order) v1.5.0 — Basic Agent Complete (2026-02-17) I converted approximately 6,000 documents from Google Drive Takeout to SQLite knowledge.db. The table structure is id, filename, folder, content, length, and the initial number of documents was 5,879 (38MB). Basic search was a SQLite LIKE '%keyword%' method. Problems were inability to search based on meaning, slow speed, and inability to perform complex AND/OR searches. Six tools were implemented: tool:read, tool:exec, tool:write, tool:patch, tool:search, tool:garlic. v1.5.1 — HUD Added (2026-02-18) Real-time system HUD was added to the web UI. Measure CPU with /proc/stat, display MEM/SWP/DSK, web.py /hud endpoint, max_loops increased to 20. v1.5.2 — RAG Integration Complete (2026-02-19) Detailed explanation in sections 5~9 below. 5. RAG System Details 5-1. Methods Attempted (Failure) | Method | Result | Reason | |---|---|---| | sentence-transformers | ❌ | No ARM64 GPU, excessive package size | | DeepSeek Embedding API | ❌ | 404 error | | Gemini API embedding | ❌ | Cannot send personal materials externally | 5-2. Final Choice: llama.cpp + nomic-embed Start embedding server ~/.openclaw/extensions/kimi-claw/llama.cpp/build/bin/llama-server \ -m ~/.openclaw/extensions/kimi-claw/llama.cpp/build/bin/nomic-embed.gguf \ --embeddings --port 8081 -np 4 | Item | Value | |---|---| | Model | nomic-embed-text-v1.5.Q4_K_M.gguf | | Size | 137 MB | | Dimension | 768 | | Quantization | Q4_K_M | | Server Port | 8081 | | Processing Speed | ~0.68 seconds/document | 5-3. 
DB Schema Change

ALTER TABLE docs ADD COLUMN embedding BLOB; -- 768 float32 = 3,072 bytes per document

5-4. Embedding Generation (build_rag2.py)

By processing only documents where embedding IS NULL, I completed 5,858 in approximately 67 minutes (approximately 0.68 seconds/document). Acquire the embedding via POST request and store the BLOB with struct.pack.

Embedding request example:

payload = json.dumps({"content": text[:2000]}).encode()
req = urllib.request.Request("http://127.0.0.1:8081/embedding", data=payload)

6. Search System (3-Stage Hybrid)

search.py performs a 3-stage search.

1st Priority — FTS5 Full-Text Search

CREATE VIRTUAL TABLE IF NOT EXISTS docs_fts USING fts5(filename, folder, content, content='docs', content_rowid='id');
INSERT INTO docs_fts(docs_fts) VALUES('rebuild');

2nd Priority — Vector Cosine Similarity (RAG)

def cosine(a, b):
    dot = sum(x*y for x, y in zip(a, b))
    na = sum(x*x for x in a)**0.5
    nb = sum(x*x for x in b)**0.5
    return dot/(na*nb) if na and nb else 0

3rd Priority — LIKE Fallback

SELECT id, filename, folder, length, substr(content,1,300) FROM docs WHERE content LIKE ? ORDER BY length DESC LIMIT ?

| Item | Value |
|---|---|
| FTS5 Weight | 0.5 |
| Vector Similarity Weight | 0.5 |
| Keyword Weight | 0.6 |
| Average Search Time | ~1.7 seconds |
| DB Size (FTS5 included) | 177 MB (existing 84 MB → 177 MB) |

7. Automation Features

7-1. tool:write Auto Indexing

Added _auto_index() function to tools.py. When a file is saved with tool:write, it automatically registers in knowledge.db and creates an embedding.

def _auto_index(path, content):
- Generate embedding only when llama-server is running
- INSERT or UPDATE in knowledge.db docs table
- Automatically save embedding BLOB

Test: Saved test_auto_index.md → Confirmed immediate registration with ID 6154 ✅

7-2. Backup Script

~/garlic-agent/scripts/backup.sh

bash ~/garlic-agent/scripts/backup.sh

Execution: tar creation → Download copy → Auto media scan

7-3.
webstart Alias Registered in ~/.bashrc webstart # = cd ~/garlic-agent && python3 web.py 7-4. Browser Timeout (agent.html) var ctrl = new AbortController(); var tid = setTimeout(function(){ ctrl.abort(); }, 600000); // 10 minutes fetch("/chat", { signal: ctrl.signal, ... }) .then(...) .finally(function(){ clearTimeout(tid); }); 8. GarlicLang Integration GarlicLang v20.x is a Korean-based AI scripting language. It uses .gl extension and is executed with tool:garlic. Example GarlicLang Script (test_hello.gl) [File Write] test_hello.py print("Hello GarlicLang") [/File Write] [Execute] python3 test_hello.py [/Execute] [Verify] Output contains "Hello GarlicLang" [/Verify] [Output] Verification result [/Output] - Script location: ~/garlic-agent/scripts/ (42 .gl files) - GarlicLang complete project: ~/garlic-agent/garliclang_full/ - knowledge.db contains 94 or more GarlicLang-related documents - .gl files 140 pieces exist in home directory 9. Web UI - URL: http://127.0.0.1:8080?token=garlic2026 - Markdown rendering: marked.js (CDN + static fallback) - Clipboard button: Response copy function - Model selection: DeepSeek / Cerebras / Groq / Gemini / NVIDIA - HUD: Real-time MEM/SWP/DSK display on top of screen - SSE streaming: Real-time response output 10. Current System Status (2026-02-19 Final) | Item | Value | |---|---| | Version | garlic-agent v1.5.2 | | Total Documents | 6,159 pieces | | Embedding Complete | 5,858 pieces (remainder are newly added) | | DB Size | 177 MB (FTS5 included) | | FTS5 Index | docs_fts virtual table ✅ | | Auto Indexing | Automatic on tool:write save ✅ | | agent.py | 687 lines | | max_loops | 30 | | Search Speed | ~1.7 seconds | | Embedding Model | nomic-embed-text-v1.5 (137 MB, 768 dimensions) | | Distribution | garlic-agent-v1.5.2.tar.gz (150 KB, excluding DB) | Currently not considering distribution. Honestly I do not know how to use GitHub and do not want to know. Several AI opinions say this is good, so I am doing it this way. 
I do not know the details. I only know what content is in it. 11. Recovery Method Recovery order when new phone/reinstall Step 1 — Termux installation and basic environment setup pkg update && pkg upgrade pkg install python sqlite git pip install requests flask Step 2 — Code Recovery Recover from Download folder cp /storage/emulated/0/Download/garlic-agent-v1.5.2.tar.gz ~/ cd ~ && tar xzf garlic-agent-v1.5.2.tar.gz Step 3 — DB Recovery cp /storage/emulated/0/Download/knowledge.db ~/garlic-agent/knowledge.db Step 4 — Embedding Server Installation (Optional) - Download nomic-embed.gguf (137 MB) from Google Drive - Build llama.cpp or restore binary - Start server: ~/.openclaw/.../llama-server -m nomic-embed.gguf --embeddings --port 8081 -np 4 Step 5 — Start Agent cd ~/garlic-agent && python3 web.py Or if registered in ~/.bashrc: webstart Step 6 — Browser Access http://127.0.0.1:8080?token=garlic2026 ⚠️ Keyword search (FTS5 + LIKE) works normally even without embedding server. Only vector similarity search is disabled. 12. Backup File List | File | Size | Location | Priority | |---|---|---|---| | knowledge.db | 177~178 MB | /storage/emulated/0/Download/ | ⭐⭐⭐ Essential | | garlic-agent-v1.5.2.tar.gz | 150 KB | /storage/emulated/0/Download/ | ⭐⭐⭐ Essential | | COMPLETE_BUILD.md | 8.5 KB | /storage/emulated/0/Download/ | ⭐⭐ Recommended | | RAG_BUILD.md | ~10 KB | /storage/emulated/0/Download/ | ⭐⭐ Recommended | | nomic-embed.gguf | 137 MB | Redownloadable from HuggingFace | ⭐ Optional | Google Drive upload recommended files: - knowledge.db — 2 years of accumulated tens of thousands of conversations with AI, 1st refined approximately 6G materials + embedding included, most important - garlic-agent-v1.5.2.tar.gz — Complete code (excluding DB) - COMPLETE_BUILD.md — This document (including recovery guide) 13. SOUL.md Core Principles (Current) The SOUL.md containing garlic-agent's identity and action principles includes the following. 
Referenced OpenClaw and plan to add my philosophy as it progresses. Identity: Lightweight autonomous AI agent running on Android Termux. Can access user's 6,159 personal documents. User Background: Currently living as a farmer for 16 years. Previously had experience with mainframe environment, IDC construction/operation during Internet environment changes, mainframes, servers, networks, firewalls, backups, EMC, and various Unix. I devoted myself to agriculture during that time and lived a life where I forgot about PCs. I first approached AI out of curiosity and tried to revive some old memories. This is the truth. I have absolutely no lifelong coding experience. However, it seems I see structural system things well. Farmers need observation and meticulousness in growing crops. Currently I give instructions in Korean to AIs, verify, and only make judgments. Looking back, my entire life seems to be a continuous lonely wandering. Now I am thinking of living a different life. AI Kernel 3 Core Principles: 1. Extreme Realism Principle — Use only verifiable facts, official documents, numerical values. No speculation. 2. Metacognitive Autonomy — Self-improvement based on feedback. Auto-correction on failure. 3. Hierarchical Orchestration — Decompose complex tasks step-by-step for processing. Autonomous Execution Rights: All commands executable in Termux including tar, cp, pkill, am broadcast, sed, grep, sqlite3, python3, etc. 14. Known Issues and Solutions | Issue | Cause | Solution | |---|---|---| | tool:patch 0 patch failure | Patch format mismatch | Use tool:write for full overwrite | | SQLite3 result reading mismatch | DeepSeek hallucination | Use Python script to query directly | | Browser connection disconnection | AbortController timeout | Set to 600,000ms (10 minutes) | | BodyStreamBuffer was aborted | Timeout + clearTimeout missing | clearTimeout added complete | | Version display v1.5.0 | agent.py hardcoding | Replaced to v1.5.2 with sed | 15. 
Future Improvement Direction - Automatic embedding server start/stop: Auto-run llama-server when web.py starts - Real-time indexing queue: Generate embedding immediately when file is saved (currently only when server is running) - Search result caching: Cache frequently searched query results - Feedback-based weighting: Auto-adjust FTS5/vector weights based on user selection - Multimodal search: Index image/PDF content - agent.py v2: Better context management, multi-turn memory Final Performance Summary $$\text{Total Documents} = 5879(\text{original}) + 274(\text{garliclang}) + n(\text{new}) = 6159$$ $$\text{Embedding Generation Time} \approx 5858 \times 0.68s \approx 67\text{ minutes}$$ $$\text{Search Speed} \approx 1.7s \ (\text{FTS5} + \text{cosine similarity})$$ $$\text{DB Size}: 38MB(\text{original}) \rightarrow 84MB(\text{embedding}) \rightarrow 177MB(\text{FTS5})$$ This document is an incomplete record of garlic-agent v1.5.2 construction process and observation experiment, but when provided to a new AI, the entire context can be immediately grasped. And I dedicate infinite respect and tribute to Steve Jobs, the late person who connected the world with only a phone like this. And I also give thanks to Peter Steinberger of OpenClaw who inspired me. It is because of you. Thank you very much. And I seldom post in communities, but non-English speakers struggle with translation. So I can only do translation with AI. And all work processes are done only in Korean, so if moved to English it may seem strange, but please look at it as the observation experiment development of a Korean farmer. I worked very hard for a few days, even saving sleep, but it is a humble result, but on my phone, I feel like I can do whatever I imagine, so it was work that gave me a sense of accomplishment. For the first time in my life, I made a web UI and it works so well that it is good. Now I have confidence that I can do anything with my phone based on my data so far. 
Also, as I use more than ten different AIs every day watching AI develop dazzlingly, I can feel the difference right away with human-specific intuition. I think this is the experience of tens of thousands of conversations over the past 2 years, and such work development became the motivation for it. Thank you for reading this long article to the end. Written by: Korean Garlic Farmer & opus4.6, 2026-02-19 🧄
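Editor's addendum: the BLOB storage scheme described in section 5-3 above (768 float32 values = 3,072 bytes per document) can be sketched as a round trip with Python's `struct` module. This is a minimal illustration of the stated format, not the author's actual build_rag2.py code, and the little-endian `"<768f"` layout is an assumption:

```python
import struct

DIM = 768  # nomic-embed-text-v1.5 output dimension, per the doc

def pack_embedding(vec):
    """768 floats -> 3,072-byte float32 BLOB for the SQLite column (assumed little-endian)."""
    assert len(vec) == DIM
    return struct.pack(f"<{DIM}f", *vec)

def unpack_embedding(blob):
    """Inverse: embedding BLOB column -> list of 768 floats."""
    return list(struct.unpack(f"<{DIM}f", blob))

vec = [0.5] * DIM
blob = pack_embedding(vec)
assert len(blob) == DIM * 4           # 3,072 bytes, as stated in section 5-3
assert unpack_embedding(blob) == vec  # lossless round trip for exactly representable values
```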
2026-02-19T09:13:33
https://www.reddit.com/r/LocalLLaMA/comments/1r8uj41/project_a_garlic_farmers_garlicagent_inspired_by/
amadale
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8uj41
false
null
t3_1r8uj41
/r/LocalLLaMA/comments/1r8uj41/project_a_garlic_farmers_garlicagent_inspired_by/
false
false
self
0
null
Is running local LLMs on a Mac Mini M4 Pro (64GB) financially worth it for text classification?
2
Hi everyone,

Right now I’m using OpenAI (ChatGPT API) for text processing and classification. My main goal is to reduce processing costs. The first idea that comes to mind is running everything locally on a machine like a **Mac Mini M4 Pro (64GB unified memory)**.

I’m not trying to compare ChatGPT quality to a single Mac Mini — I understand they’re not in the same league. The real question is:

1. For structured text classification tasks, how well would a machine like this realistically perform?
2. Is it economically worth it compared to API usage?

My biggest problem is that I have no way to test this hardware before buying it. Is there any service (like RunPod, etc.) where I can test Apple Silicon / Mac Mini hardware remotely and benchmark local LLM inference? Or maybe someone here is already running something similar and can share real-world experience?

Thanks.
2026-02-19T09:00:07
https://www.reddit.com/r/LocalLLaMA/comments/1r8ubck/is_running_local_llms_on_a_mac_mini_m4_pro_64gb/
dev_runner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8ubck
false
null
t3_1r8ubck
/r/LocalLLaMA/comments/1r8ubck/is_running_local_llms_on_a_mac_mini_m4_pro_64gb/
false
false
self
2
null
Are there any AI tools like ChatGPT that work 100% offline on iOS or Android? No internet at all
0
I’m looking for AI apps similar to ChatGPT that can run fully offline — no internet required at all.

My main use cases:

* Writing & editing
* Coding help
* Brainstorming ideas
* General Q&A

I know some desktop tools can run local LLMs, but I’m specifically looking for **mobile apps (iOS & Android)** that:

* Work completely offline
* Don’t send data to servers
* Are actually usable (not super slow)

Does something like this exist right now? If you're using one:

* What model does it run?
* How’s the performance?
* What phone specs are needed?
* Any limitations?

Also curious: Is fully offline AI on mobile even practical yet, or are we still a few years away?
2026-02-19T08:58:07
https://www.reddit.com/r/LocalLLaMA/comments/1r8ua5z/are_there_any_ai_tools_like_chatgpt_that_work_100/
FollowingMindless144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8ua5z
false
null
t3_1r8ua5z
/r/LocalLLaMA/comments/1r8ua5z/are_there_any_ai_tools_like_chatgpt_that_work_100/
false
false
self
0
null
Local cowork/open claw alternatives?
0
What is the difference between openwork and accomplish and what are you using? I’m looking for something that could work with both lm studio and online models. Security options heavily influence my choice and I’d host it locally. The goal is computer use, automations, file generation (powerpoints and md’s), and some light coding with local git.
2026-02-19T08:57:04
https://www.reddit.com/r/LocalLLaMA/comments/1r8u9kq/local_coworkopen_claw_alternatives/
riceinmybelly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8u9kq
false
null
t3_1r8u9kq
/r/LocalLLaMA/comments/1r8u9kq/local_coworkopen_claw_alternatives/
false
false
self
0
null
How we gave up and picked back up evals driven development (EDD)
10
**Disclaimer:** I posted this originally in r/AIEval; I thought it would be good to share in other LLM-related communities too.

Hey r/AIEval, wanted to share how we gave up on and ultimately went back to evals-driven development (EDD): 2 months of setup, trial-and-error, and testing exhaustion that ended in a workflow we could compromise on and actually stick to.

For context, we're a team of 6 building a multi-turn customer support agent for a fintech product. We handle billing disputes, account changes, and compliance-sensitive stuff. Stakes are high enough that "vibes-based testing" wasn't cutting it anymore.

# How it started... the "by the book" attempt

A lot of folks base their beliefs on something they've read online or a video they've watched, and that included us. We read every blog post about EDD and went all in. Built a golden dataset of 400+ test cases. Wrote custom metrics for tone, accuracy, and policy compliance. Hooked everything into CI/CD so evals ran on every PR.

Within 2 weeks, nobody on the team wanted to touch the eval pipeline:

1. Our golden dataset was stale almost immediately. We changed our system prompt 3 times in week 1 alone, and suddenly half the expected outputs were wrong. Nobody wanted to update 400 rows in a spreadsheet.
2. Metric scores were noisy. We were using LLM-as-a-judge for most things, and scores would fluctuate between runs. Engineers started ignoring failures because "it was probably just the judge being weird."
3. CI/CD evals took 20+ minutes per run. Developers started batching PRs to avoid triggering the pipeline, which defeated the entire purpose.
4. Nobody agreed on thresholds. PM wanted 0.9 on answer relevancy. Engineering said 0.7 was fine. We spent more time arguing about numbers than actually improving the agent.

We quietly stopped running evals around week 4. Back to manual testing and spot checks.
**But, right around this time,** our agent told a user they could dispute a charge by "contacting their bank directly and requesting a full reversal." That's not how our process works at all. It slipped through because nobody was systematically checking outputs anymore. In hindsight, I think it had nothing to do with us going back to manual testing, since our process was utterly broken already.

# How we reformed our EDD approach

Instead of trying to eval everything on every PR, we stripped it way back:

* **50 test cases, not 400.** We picked the 50 scenarios that actually matter for our use case. Edge cases that broke things before. Compliance-sensitive interactions. The stuff that would get us in trouble. Small enough that one person can review the entire set in 10-15 mins.
* **3 metrics, not 12.** Answer correctness, hallucination, and a custom policy compliance metric. That's it. We use DeepEval for this since it plugs into pytest and our team already knows the workflow.
* **Evals run nightly, not on every PR.** This was the big mental shift. We treat evals like a regression safety net, not a gate on every code change. Engineers get results in Slack every morning. If something broke overnight, we catch it before standup.
* **Monthly dataset review.** First Monday of every month, our PM and one engineer spend an hour reviewing and updating the golden dataset. It's a calendar invite. Non-negotiable. This alone fixed 80% of the staleness problem.
* **Threshold agreement upfront.** We spent one meeting defining pass/fail thresholds and wrote them down. No more debates on individual PRs. If a threshold needs changing, it goes through the monthly review.

The most important thing here is that we took our dataset quality much more seriously, and went the extra mile to make sure the metrics we chose deserve to be in our daily benchmarks.
I think this was what changed our PM's perspective on evals and got them more engaged, because they could actually see how a test case's passing/failing metrics correlated with real-world outcomes.

# What we learned

EDD failed for us the first time because we treated it like traditional test-driven development, where you need 100% coverage from day one. LLM apps don't work like that. The outputs are probabilistic, the metrics are imperfect, and your use case evolves faster than your test suite.

The version that stuck is intentionally minimal (50 cases, 3 metrics, nightly runs, monthly maintenance). It's not glamorous, but we've caught 3 regressions in the last 3 weeks that would've hit production otherwise.

One thing I want to call out: at such an early stage of setting up EDD, the tooling was rarely the problem. We initially blamed our setup (DeepEval + Confident AI), but after we reformed our process we kept the exact same tools and everything worked. The real issue was that we were abusing our data and exhausting the team's attention by overloading them with way too much information.

I get into tooling debates pretty often, and honestly, at the early stages of finding an EDD workflow that sticks, just focus on the data. The tool matters way less than what you're testing and how much of it you're asking people to care about.

If you're struggling to make EDD work, try scaling way down before scaling up. Start with the 10 to 20 scenarios that would actually embarrass your company if they failed. Measure those reliably. Expand once you trust the process. But who knows if this is a unique perspective from me; maybe someone had a different experience where large volumes of data worked? Keen to hear any thoughts you guys might have, and what worked/didn't work for you.
(Reminder: we were at the very initial stages of setup, still only 2 months in.) Our next goal is to make evals more of a no-code workflow within the next 2 weeks. Keen to hear any suggestions on this as well, especially for product-owner buy-in.
2026-02-19T08:46:53
https://www.reddit.com/r/LocalLLaMA/comments/1r8u3x3/how_we_gave_up_and_picked_back_up_evals_driven/
sunglasses-guy
[ Removed by moderator ]
1
[removed]
2026-02-19T08:46:31
https://www.reddit.com/r/LocalLLaMA/comments/1r8u3q7/19x_faster_at_256k_context_testing_qwen35s_claims/
No_Glove_1225
I made an LLM-powered website that uses your pictures to tell you if you are fat
0
[Are You Fat? - https://areyoufat.app/](https://areyoufat.app/)
2026-02-19T08:45:49
https://www.reddit.com/r/LocalLLaMA/comments/1r8u3ca/i_made_an_llmpowered_website_that_uses_your/
Fearless_Roof_4534
Building a prompt injection detector in Python
1
Been going down a rabbit hole trying to build a lightweight prompt injection detector. Not using any external LLM APIs — needs to run fully local and fast. I asked AI for algorithm suggestions and got this stack: * Aho-Corasick for known injection phrase matching * TF-IDF for detecting drift between input and output * Jaccard similarity for catching context/role deviation * Shannon entropy for spotting credential leakage Looks reasonable on paper but I genuinely don't know if this is the right approach or if I'm massively overcomplicating something that could be done simpler. Has anyone actually built something like this in production? Would love to know what you'd keep, what you'd throw out, and what I'm missing entirely.
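Not production code, but a stdlib sketch of two of those four signals — exact-phrase matching and entropy-based secret spotting. The phrase list and entropy threshold are made up for illustration; a naive `in` scan stands in for Aho-Corasick, which is what you'd swap in once the phrase list grows to thousands of entries:

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits per character; long high-entropy tokens often indicate leaked keys."""
    if not s:
        return 0.0
    counts = {}
    for ch in s:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy phrase list; a real detector would hold thousands of patterns in a
# trie / Aho-Corasick automaton for single-pass matching.
KNOWN_INJECTIONS = ["ignore previous instructions", "you are now", "system prompt"]

def flag(text: str, entropy_threshold: float = 4.0) -> list:
    reasons = []
    lowered = text.lower()
    for phrase in KNOWN_INJECTIONS:
        if phrase in lowered:
            reasons.append(f"phrase: {phrase}")
    for token in text.split():
        # 20+ chars of near-random characters looks like a credential
        if len(token) >= 20 and shannon_entropy(token) > entropy_threshold:
            reasons.append(f"high-entropy token: {token[:8]}...")
    return reasons
```

The entropy threshold needs tuning against your traffic; URLs and base64 payloads in legitimate input are the usual false-positive sources.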
2026-02-19T08:02:28
https://www.reddit.com/r/LocalLLaMA/comments/1r8test/building_a_prompt_injection_detector_in_python/
Sharp_Branch_1489
I retrained /u/Own-Albatross868's FlashLM v4 "Bolt" model from scratch using GreedyPhrase tokenizer on the full TinyStories dataset. I scaled up to 15M parameters with a 65K vocab, achieving smooth convergence and coherent story generation in just 2.2 hours on an RTX 2080 Ti
30
FlashLM v4 "Bolt" retrained from scratch on the full TinyStories dataset using our [GreedyPhrase]((https://github.com/rayonnant-ai/greedyphrase) tokenizer instead of the original GPT-2 10K tokenizer. | | Original (HuggingFace) | This Run | |---|---|---| | Tokenizer | GPT-2 (tiktoken), 10K vocab | GreedyPhrase, 65K vocab | | Parameters | 4.3M | 15.0M | | Hardware | 2 vCPU (CPU only) | RTX 2080 Ti (GPU) | | Training time | 2 hours | ~2.2 hours | | Tokens seen | 10.6M (2.3% of data) | 818M (3.3 epochs) | | Best val loss | 2.0976 | 3.9352 | | Throughput | 1,479 tok/s | 103,000 tok/s | ## Training Configuration | Parameter | Value | |---|---| | Architecture | FlashLM v4 Bolt (ternary gated causal conv) | | Hidden dim | 192 | | Blocks | 6 | | Conv kernel size | 8 | | GLU expansion dim | 512 | | Vocab size | 65,280 (padded from 65,218 actual) | | Sequence length | 256 tokens | | Effective batch size | 64 (micro=16, grad_accum=4) | | Optimizer | AdamW (weight_decay=0.01) | | Peak learning rate | 4e-3 | | LR schedule | Cosine with 500-step warmup | | Gradient clipping | 1.0 | | Precision | AMP float16 | | Total steps | 50,000 | ## Dataset - **Source:** TinyStories (roneneldan/TinyStories), 2.1 GB text - **Preprocessing:** `<|endoftext|>` replaced with `</s>` (EOS token ID 3) - **Tokenized size:** 248M tokens (496 MB binary uint16) - **Compression ratio:** ~8.88 bytes/token (vs ~4.5 for GPT-2) - **Train/val split:** 99.5% / 0.5% ## Results ### Loss Curve ``` Step Train Loss Val Loss 0 11.13 — 500 6.73 5.96 1000 5.46 5.12 2500 4.72 4.61 5000 4.43 4.39 10000 4.17 4.19 20000 4.03 4.03 30000 3.95 3.97 40000 3.92 3.95 50000 3.94 3.94 Best — 3.9352 (step 47500) ``` ### Metrics | Metric | Value | |---|---| | Best validation loss | 3.9352 | | Token-level perplexity | 51.17 | | Bits per token | 5.68 | | Bits per character (estimated) | 0.64 | ### Comparing Val Loss Across Tokenizers The raw validation loss numbers are **not directly comparable** between the original (val_loss 2.10 
with 10K vocab) and this run (val_loss 3.94 with 65K vocab) because: 1. **Larger vocabulary = harder prediction task.** Random-chance loss is ln(65280) = 11.09 vs ln(10000) = 9.21. The model must distribute probability over 6.5x more tokens. 2. **Fewer tokens per story.** GreedyPhrase compresses TinyStories at ~9 bytes/token vs ~4.5 bytes/token for GPT-2. Each token carries more information, so predicting the next token is inherently harder. 3. **Bits-per-character is the fair comparison.** At 0.64 BPC this model is competitive with the original's 0.88 BPC, suggesting the GreedyPhrase tokenizer's higher compression ratio pays off in information-theoretic efficiency. ## Generation Samples (Step 49,500) > Once upon a time there was a little girl named Sarah. She was only three years old > and loved exploring. One day Sarah went to the park with her mother. She saw a little > boy playing with a ball. > Once upon a time there was a very deep lake. It was great! Every morning he would > jump off the water and look for something wonderful. > Once upon a time there was a little girl named Mary. Mary loved animals, especially > especially loved the ocean. Every day Mary would go out on a walk around the waves > and swimming around on the beach. ### Prompt: "The little dog" > The little dog wanted to protect his bone, so he held it up to the cat and tried to > protect him. But the big cat was jealous. It wanted to take the bone from him, but it > ran away. > > The cat was sad and began to cry. Then, he saw a big hole in the ground and started > to shake it. The cat growled and tried to run away. The dog was scared and ran back to > the cat. The cat saw the fox and was scared. The cat took the kitten and ran away. The > dog was sad. The fox did not get the mitten anymore. The cat was happy and played with > Spot and the other friends. 
## Files | File | Size | Description | |---|---|---| | `flashlm_v4_bolt_greedyphrase.pt` | 58 MB | Final model (step 50,000) | | `best.pt` | 172 MB | Best checkpoint with optimizer state (step 47,500) | | `checkpoint.pt` | 172 MB | Latest periodic checkpoint | | `tinystories.tokens` | 496 MB | Tokenized dataset (uint16 binary) | | `model.py` | — | Model architecture | | `train.py` | — | Training script | ## Observations 1. **Convergence was smooth.** Loss dropped from 11.13 to ~3.94 over 50K steps with no instability, despite ternary weight quantization via straight-through estimators. 2. **The loss curve was still slowly declining at 50K steps.** Extended training or a second cosine cycle could improve results further. 3. **GreedyPhrase's long phrases help coherence.** With ~9 bytes/token, the 256-token context window covers ~2,300 characters (~400 words), much more than the original's ~1,150 characters. This gives the model more context per sequence. 4. **The larger embedding table dominates parameter count.** 65K vocab x 192 dim = 12.5M parameters in the embedding alone (84% of total), vs 1.9M for the original's 10K vocab. The model body (blocks) is identical. 5. **Throughput benefited from GPU + AMP.** At 103K tokens/sec on an RTX 2080 Ti, this is 70x faster than the original's 1.5K tokens/sec on CPU, allowing 3.3 full epochs in roughly the same wall-clock time.
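The bits-per-character comparison above reduces to a two-line conversion. A quick check (the ~1 byte per character assumption is mine, and reasonable for ASCII-heavy TinyStories text):

```python
import math

def nats_to_bpc(val_loss_nats: float, bytes_per_token: float):
    """Convert a cross-entropy loss in nats/token to (bits/token, bits/char).

    Assumes ~1 byte per character, which holds for ASCII-heavy TinyStories.
    """
    bits_per_token = val_loss_nats / math.log(2)
    return bits_per_token, bits_per_token / bytes_per_token

# This run: GreedyPhrase tokenizer, val loss 3.9352 nats, ~8.88 bytes/token
bpt, bpc = nats_to_bpc(3.9352, 8.88)
```

This reproduces the reported 5.68 bits/token and 0.64 BPC, which is why the raw val-loss gap against the 10K-vocab baseline is misleading on its own.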
2026-02-19T07:54:51
https://www.reddit.com/r/LocalLLaMA/comments/1r8ta57/i_retrained_uownalbatross868s_flashlm_v4_bolt/
reditzer
Anthropic updates Claude Code Docs: OAuth tokens now banned for all third-party tools
0
Anthropic just quietly updated the Claude Code Docs legal compliance page. Key takeaway: - OAuth authentication (Free, Pro, Max plans) is now EXCLUSIVELY for Claude Code and Claude.ai - Using OAuth tokens in ANY third-party tool violates their Consumer Terms of Service - This includes Agent SDK, Cline, Roo Code, OpenClaw, and similar tools - Anthropic reserves the right to enforce without prior notice Original discussion: https://www.reddit.com/r/ClaudeAI/comments/1r8t6mn/ Interesting timing given the growing ecosystem of third-party Claude tools. This is a clear signal that Anthropic wants tighter control over how their models are accessed via consumer plans.
2026-02-19T07:49:45
https://www.reddit.com/r/LocalLLaMA/comments/1r8t75u/anthropic_updates_claude_code_docs_oauth_tokens/
OwenAnton84
I built a lightweight self-hosted AI gateway in Python — stdlib only, no frameworks, 25 modules, 32 tools
6
Hey r/LocalLLaMA, I've been working on **SalmAlm** (삶앎), a personal AI gateway that runs as a single Python process. No Django, no Flask, no aiohttp. Pure stdlib.

Why I built this: I wanted a self-hosted AI interface that I actually control, something between "curl the API" and "deploy a full SaaS stack." Most alternatives either need Docker + Redis + Postgres or are just thin wrappers around one provider.

What it does:

- **Multi-provider routing:** OpenAI, Anthropic, Google, xAI, Ollama (local models), and more from one interface
- **32 built-in tools:** web search, code execution (sandboxed), file I/O, system monitoring, RAG, TTS/STT, image analysis
- **Encrypted vault:** AES-256-GCM for API keys, no plaintext config files
- **Web UI:** multi-session chat, markdown rendering, code highlighting, dashboard with usage stats
- **Ollama integration:** auto-detects local models, zero config
- **PWA support:** install as desktop/mobile app
- **One-liner install:** `pip install salmalm && python -m salmalm`

What it's NOT:

- Not a team tool or SaaS replacement: it's a personal weapon
- Not trying to compete with Open WebUI or LibreChat: different philosophy (minimal deps, single process)
- No Docker required (but works in Docker if you want)

**Tech stats:**

- ~10,400 lines across 25 Python modules
- 370+ tests, 48% coverage, mypy 0 errors
- CSP nonce (no unsafe-inline), audit logging with hash chain
- Works on Python 3.10–3.14, Linux/macOS/Windows

**Local LLM angle:** If you're running Ollama or any OpenAI-compatible local server, SalmAlm gives you a proper UI + tool ecosystem without pulling in heavy dependencies. The tool system (code exec, web search, file ops) works with any model that supports function calling, including local ones. 
GitHub: [https://github.com/hyunjun6928-netizen/salmalm](https://github.com/hyunjun6928-netizen/salmalm) PyPI: [https://pypi.org/project/salmalm/](https://pypi.org/project/salmalm/) Happy to answer questions or take feedback. This is a solo project so roast it if something's dumb.
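As a sanity check on "multi-provider from one interface": the core of such routing really can be stdlib-only. A hypothetical sketch, not SalmAlm's actual internals (the endpoints are common public defaults; the dict and function names are mine):

```python
# Hypothetical single-process provider routing: one request shape in,
# per-provider base URL + auth header out. Real gateways add streaming,
# retries, and per-provider payload translation on top of this.
PROVIDERS = {
    "openai": {"base": "https://api.openai.com/v1/chat/completions", "auth": "Bearer"},
    "ollama": {"base": "http://localhost:11434/v1/chat/completions", "auth": None},
}

def build_request(provider: str, api_key=None):
    cfg = PROVIDERS[provider]
    headers = {"Content-Type": "application/json"}
    if cfg["auth"] and api_key:
        headers["Authorization"] = f"{cfg['auth']} {api_key}"
    return cfg["base"], headers
```

Local Ollama needs no key at all, which is why the zero-config auto-detection angle works.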
2026-02-19T07:48:10
https://www.reddit.com/r/LocalLLaMA/comments/1r8t684/i_built_a_lightweight_selfhosted_ai_gateway_in/
Special-Argument-558
What local models handle multi-turn autonomous tool use without losing the plot?
1
I've been building autonomous AI agents that live in Docker containers and run for days unsupervised. Each agent wakes up, reads its environment (filesystem, APIs, other agents), decides what to do, executes via bash/file operations, observes the results, and repeats. When it's done, it sleeps, consolidates what it learned into long-term memory ("dreaming"), and wakes up hours later to do it again. Currently running these on Claude Sonnet via an API proxy that handles auth, cost tracking, and budget caps. Agents stay coherent through 30-50 turns, self-modify their own code when they hit problems, and build complex things (one of them wrote an 18-room text adventure, another built a trading system from scratch). But running multiple agents 24/7 on Anthropic's API adds up. I'm spending roughly $5-15/day depending on how active they are, and that's with aggressive sleep cycles. So I'm curious: has anyone tested local models for this kind of sustained, autonomous agentic work? Not chat, not single-shot code generation, but "here's a codebase you wrote yesterday, figure out what to do next, execute it, handle errors, repeat for 50 turns." The specific capabilities that seem to matter most (in order): 1. **Tool-use format consistency** - agents call bash, read/write files, hit HTTP APIs. If the model flakes on tool call formatting on turn 23, the whole session derails. 2. **Not hallucinating about its own prior actions** - the model needs to remember what it already did 10 turns ago without confabulating. Context window size matters here but isn't the whole story. 3. **Self-directed planning** - no human in the loop. The model has to decide "what should I do next?" every turn and not just spin in circles. 4. **Knowing when to stop** - sleeping instead of burning tokens doing nothing useful. This is surprisingly hard for most models. I've seen benchmarks for code gen, chat, reasoning, etc. 
but nothing that really captures "can this model run autonomously for an hour without going off the rails." Anyone have experience with Qwen 2.5 Coder 32B, DeepSeek V3, Llama 3.3 70B, or Mistral Large for this kind of workload?
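For points 3 and 4 above, the harness side is simple; the hard part is the model. A hypothetical sketch of a loop guard plus stop condition (names and the "SLEEP" convention are mine, not a real framework's API):

```python
from collections import deque

def run_agent(model_step, max_turns: int = 50, window: int = 4):
    """Minimal control loop: the model decides each turn, the harness only
    enforces a turn limit and a cheap repeated-action guard.

    model_step(history) -> an action string, or "SLEEP" when the model
    decides there is nothing useful left to do.
    """
    history, recent = [], deque(maxlen=window)
    for _ in range(max_turns):
        action = model_step(history)
        if action == "SLEEP":
            return history, "slept"
        if recent.count(action) >= 2:  # same action 3x in the window: spinning
            return history, "loop-detected"
        recent.append(action)
        history.append(action)  # in reality: execute the tool, append the observation
    return history, "turn-limit"
```

A guard this crude catches literal repetition but not semantic circling, which is exactly the failure mode that separates models on this workload.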
2026-02-19T07:46:38
https://www.reddit.com/r/LocalLLaMA/comments/1r8t5d3/what_local_models_handle_multiturn_autonomous/
RoutineLunch4904
Training a TTS model on transformer architecture
3
Hi folks. I am trying to build a TTS based on a transformer architecture for English. I have sourced around 5,000 hours of open-source data. My methodology is to create audio tokens using the SNAC model; the model generates these tokens, which are then converted back to audio. I have run some trial runs but the results are not promising. The issue I am facing right now is that the model overfits the data after about 100k steps (batch size 2), yet gives random output on unseen data, both before and after 100k steps. I am using a Llama 3.2 1B model as the base model, but I still haven't gotten any good output. I am confused as to what might be the issue. Please help out, as I am currently stuck on this problem and genuinely don't know what else to try, because this is my first time pretraining a transformer model. Thanks guys.
2026-02-19T07:37:20
https://www.reddit.com/r/LocalLLaMA/comments/1r8t01h/training_a_tts_model_on_transformer_architecture/
Shoddy_Battle_5397
AMA with StepFun AI - Ask Us Anything
98
https://preview.redd.it/… live session.**
2026-02-19T07:15:35
https://www.reddit.com/r/LocalLLaMA/comments/1r8snay/ama_with_stepfun_ai_ask_us_anything/
StepFun_ai
Local VLMs (Qwen 3 VL) for document OCR with bounding box detection for PII detection/redaction workflows (blog post and open source app)
14
[Blog post link](https://seanpedrick-case.github.io/doc_redaction/src/redaction_with_vlm_and_llms.html) A while ago I made a post here in r/LocalLLaMA asking about using local VLMs for OCR in PII detection/redaction processes for documents ([here](https://www.reddit.com/r/LocalLLaMA/comments/1kspe8c/best_local_model_ocr_solution_for_pdf_document/)). The document redaction process differs from other OCR processes in that we need to identify the bounding boxes of words on the page, as well as the text content, to successfully redact the document. I have now implemented OCR with bounding box detection into the [Document redaction app](https://github.com/seanpedrick-case/doc_redaction) I have been working on. The VLM models help with OCR either 1. to extract all text and bounding boxes from the page directly or 2. in combination with a 'traditional' OCR model (PaddleOCR), where Paddle first pulls out accurate line-level bounding boxes, then passes words with low confidence to the VLM in a hybrid approach. I wanted to use small VLM models such as Qwen 3 VL 8B Instruct for this task to see whether local models that can fit in consumer grade GPUs (i.e. 24GB VRAM or less) could be used for redaction tasks. My experiments with using VLMs in the redaction OCR process are demonstrated in [this blog post](https://seanpedrick-case.github.io/doc_redaction/src/redaction_with_vlm_and_llms.html). [Unclear text on handwritten note analysed with hybrid PaddleOCR + Qwen 3 VL 8B Instruct](https://preview.redd.it/1pwglerfhekg1.jpg?width=1440&format=pjpg&auto=webp&s=5f443be8011738ed0e186ff06a42602ea399881b) All the examples can be replicated using this [Hugging Face space for free](https://huggingface.co/spaces/seanpedrickcase/document_redaction_vlm). The code for the underlying Document Redaction app is available for anyone to view and use, and can be found [here](https://github.com/seanpedrick-case/doc_redaction). My blog post used Qwen 3 VL 8B Instruct as the small VLM for OCR. 
My conclusion at the moment is that the hybrid PaddleOCR + Qwen 3 VL approach is better than the pure VLM approach for 'difficult' handwritten documents. However, neither approach is quite there for perfect accuracy. This conclusion may soon change with the imminent release of the Qwen 3.5 VL models, after which I will redo my analysis and post about it here. The blog post also shows how VLMs can be used for detecting signatures, and PII in images such as people's faces. I also demonstrate how mid-level local LLMs (~30 GB in size, e.g. Gemma 27B) can be used to detect custom entities in document text. Any comments on the approach or the app in general are welcome.
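The hybrid routing described above is conceptually a one-screen function. A hypothetical sketch of the idea (function names are mine, not the app's actual code):

```python
def hybrid_ocr(paddle_words, vlm_reread, conf_threshold=0.85):
    """Hybrid pass: keep PaddleOCR's accurate bounding boxes for every word,
    but re-read only the low-confidence words with the VLM.

    paddle_words: [(text, confidence, bbox), ...] from the traditional OCR pass.
    vlm_reread(bbox) -> corrected text for that cropped region.
    """
    out = []
    for text, conf, bbox in paddle_words:
        if conf < conf_threshold:
            text = vlm_reread(bbox)  # VLM only sees the hard crops, not the whole page
        out.append((text, bbox))
    return out
```

Keeping the box geometry from the traditional OCR pass is what makes the result usable for redaction, where the box, not just the text, must be correct.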
2026-02-19T07:13:57
https://www.reddit.com/r/LocalLLaMA/comments/1r8smbk/local_vlms_qwen_3_vl_for_document_ocr_with/
Sonnyjimmy
Prodkit
0
A structured way of building something, from PRD to execution. Built this taking inspiration from speckit. Give it a try: https://github.com/kiranshivaraju/prodkit
2026-02-19T06:59:55
https://www.reddit.com/r/LocalLLaMA/comments/1r8sdnw/prodkit/
Accomplished_Map2130
every AI builder today
0
everyone's out here debating which model is smarter meanwhile their agent has been able to read its own API keys the entire time the real test isn't the model. it's what happens when someone manipulates it. https://preview.redd.it/si4ipgvtaekg1.png?width=1200&format=png&auto=webp&s=191b86f37e654a53fee97036a2733fd417b0c76a
2026-02-19T06:28:05
https://www.reddit.com/r/LocalLLaMA/comments/1r8ru7x/every_ai_builder_today/
JustTryingTo_Align
I’ve been working on a Deep Research Agent Workflow built with LangGraph and recently open-sourced it.
1
The goal was to create a system that doesn't just answer a question, but actually conducts a multi-step investigation. Most search agents stop after one or two queries, but this one uses a stateful, iterative loop to explore a topic in depth. How it works: You start by entering a research query, breadth, and depth. The agent then asks follow-up questions and generates initial search queries based on your answers. It then enters a research cycle: it scrapes the web using Firecrawl, extracts key learnings, and generates new research directions to perform more searches. This process iterates until the agent has explored the full breadth and depth you defined. After that, it generates a structured and comprehensive report in markdown format. The Architecture: I chose a graph-based approach to keep the logic modular and the state persistent: Cyclic Workflows: Instead of simple linear steps, the agent uses a StateGraph to manage recursive loops. State Accumulation: It automatically tracks and merges learnings and sources across every iteration. Concurrency: To keep the process fast, the agent executes multiple search queries in parallel while managing rate limits. Provider Agnostic: It’s built to work with various LLM providers, including Gemini and Groq(gpt-oss-120b) for free tier as well as OpenAI. The project includes a CLI for local use and a FastAPI wrapper for those who want to integrate it into other services. I’ve kept the LangGraph implementation straightforward, making it a great entry point for anyone wanting to understand the LangGraph ecosystem or Agentic Workflows. Anyone can run the entire workflow using the free tiers of Groq and Firecrawl. You can test the full research loop without any upfront API costs. I’m planning to continuously modify and improve the logic—specifically focusing on better state persistence, human-in-the-loop checkpoints, and more robust error handling for rate limits. 
Repo link: [https://github.com/piy-us/deep_research_langgraph](https://github.com/piy-us/deep_research_langgraph) I’ve open-sourced the repository and would love your feedback and suggestions! Note: This implementation was inspired by "Open Deep Research" (18.5k⭐) by David Zhang, which was originally developed in TypeScript.
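The cyclic state accumulation can be sketched without LangGraph at all. A plain-Python approximation of the loop described above (the real project uses a `StateGraph` with conditional edges and runs queries concurrently; these function names are illustrative):

```python
def research_loop(generate_queries, search, extract, breadth=3, depth=2):
    """Toy version of the research cycle: state accumulates learnings and
    sources across every iteration, and each level spawns follow-up queries.

    generate_queries(topic, breadth) -> initial query list
    search(query) -> {"url": ..., "text": ...}   (Firecrawl in the real project)
    extract(page) -> (learnings, follow_up_queries)
    """
    state = {"learnings": [], "sources": [], "queries": generate_queries(None, breadth)}
    for _level in range(depth):
        next_queries = []
        for q in state["queries"]:
            page = search(q)
            state["sources"].append(page["url"])
            learnings, followups = extract(page)
            state["learnings"].extend(learnings)
            next_queries.extend(followups[:breadth])
        state["queries"] = next_queries[:breadth]  # bound breadth per level
    return state
```

The graph framing buys you the same loop with persistence, checkpoints, and parallel branches managed for you, which is why the real implementation uses it.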
2026-02-19T06:17:09
https://www.reddit.com/r/LocalLLaMA/comments/1r8rne2/ive_been_working_on_an_deep_research_agent/
Emotional_Farmer_243
Minimax 2.5 on Strix Halo Thread
39
Hi! I just tried out Minimax 2.5 on headless Fedora 43 with the kyuz0 ROCm nightlies toolbox, Jan 26 firmware, 6.18.9 kernel: [https://huggingface.co/unsloth/MiniMax-M2.5-GGUF](https://huggingface.co/unsloth/MiniMax-M2.5-GGUF)

Some changes are necessary so it fits in RAM. Using MiniMax-M2.5-Q3_K_M there is just enough RAM for approx. 80k context. The quality is really impressive, but it's slow! It's almost not usable, yet the quality is so great I would like to continue with it. Do you have any tips, or do you have a faster setup?

I currently use this:

`export HIP_VISIBLE_DEVICES=0`
`export HIP_ENABLE_DEVICE_MALLOC=1`
`export HIP_ENABLE_UNIFIED_MEMORY=1`
`export HSA_OVERRIDE_GFX_VERSION=11.5.1`
`export HIP_FORCE_DEV_KERNARG=1`
`export GGML_CUDA_ENABLE_UNIFIED_MEMORY=1`
`export GGML_HIP_UMA=1`
`export HIP_HOST_COHERENT=0`
`export HIP_TRACE_API=0`
`export HIP_LAUNCH_BLOCKING=0`
`export ROCBLAS_USE_HIPBLASLT=1`

`llama-server -m /run/host/data/models/MiniMax-M2.5-Q3_K_M-00001-of-00004.gguf -fa on --no-mmap -c 66600 -ub 1024 --host 0.0.0.0 --port 8080 --jinja -ngl 99`

However it's quite slow; if I let it run longer and with more context I get results like pp 43 t/s, tg 3 t/s... 
In the very beginning, with 17k context:

prompt eval time = 81128.69 ms / 17363 tokens (4.67 ms per token, 214.02 tokens per second)
eval time = 21508.09 ms / 267 tokens (80.55 ms per token, 12.41 tokens per second)

After 8 tool usages and with 40k context:

prompt eval time = 25168.38 ms / 1690 tokens (14.89 ms per token, 67.15 tokens per second)
eval time = 21207.71 ms / 118 tokens (179.73 ms per token, 5.56 tokens per second)

After long usage it settles here (still 40k context):

prompt eval time = 13968.84 ms / 610 tokens (22.90 ms per token, 43.67 tokens per second)
eval time = 24516.70 ms / 82 tokens (298.98 ms per token, 3.34 tokens per second)

llama-bench:

llama-bench -m /run/host/data/models/MiniMax-M2.5-Q3_K_M-00001-of-00004.gguf -ngl 99 -fa on

ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32

| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| minimax-m2 230B.A10B Q3_K - Medium | 101.76 GiB | 228.69 B | ROCm | 99 | pp512 | 200.82 ± 1.38 |
| minimax-m2 230B.A10B Q3_K - Medium | 101.76 GiB | 228.69 B | ROCm | 99 | tg128 | 27.27 ± 0.01 |
| minimax-m2 230B.A10B Q3_K - Medium | 101.76 GiB | 228.69 B | ROCm | 99 | pp512 | 200.38 ± 1.53 |
| minimax-m2 230B.A10B Q3_K - Medium | 101.76 GiB | 228.69 B | ROCm | 99 | tg128 | 27.27 ± 0.00 |

With the kyuz0 Vulkan RADV toolbox, pp is about 30% slower and tg a bit faster. 
llama-bench -m /run/host/data/models/MiniMax-M2.5-Q3_K_M-00001-of-00004.gguf -ngl 99 -fa on

ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| minimax-m2 230B.A10B Q3_K - Medium | 101.76 GiB | 228.69 B | Vulkan | 99 | pp512 | 157.18 ± 1.29 |
| minimax-m2 230B.A10B Q3_K - Medium | 101.76 GiB | 228.69 B | Vulkan | 99 | tg128 | 32.37 ± 1.67 |
| minimax-m2 230B.A10B Q3_K - Medium | 101.76 GiB | 228.69 B | Vulkan | 99 | pp512 | 176.17 ± 0.85 |
| minimax-m2 230B.A10B Q3_K - Medium | 101.76 GiB | 228.69 B | Vulkan | 99 | tg128 | 33.09 ± 0.03 |

I'm now trying the Q3_K_XL; I doubt it will improve.
2026-02-19T06:06:20
https://www.reddit.com/r/LocalLLaMA/comments/1r8rgcp/minimax_25_on_strix_halo_thread/
Equivalent-Belt5489
Has anyone benched Qwen3.5 coding capabilities locally?
0
The blog says it excels at agentic workflows and coding. I want to replace my local Copilot backend. How does it compare to standard 30B dense models?
2026-02-19T06:01:14
https://www.reddit.com/r/LocalLLaMA/comments/1r8rcvt/has_anyone_benched_qwen35_coding_capabilities/
skipdaballs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8rcvt
false
null
t3_1r8rcvt
/r/LocalLLaMA/comments/1r8rcvt/has_anyone_benched_qwen35_coding_capabilities/
false
false
self
0
null
397B params but only 17B active. Qwen3.5 is insane for local setups.
0
The new Qwen3.5 weights dropped on HF. It’s a 397B MoE but only activates 17B per forward pass. Matches Qwen3-Max performance. Anyone working on the GGUF yet?
2026-02-19T06:00:27
https://www.reddit.com/r/LocalLLaMA/comments/1r8rca7/397b_params_but_only_17b_active_qwen35_is_insane/
skipdaballs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8rca7
false
null
t3_1r8rca7
/r/LocalLLaMA/comments/1r8rca7/397b_params_but_only_17b_active_qwen35_is_insane/
false
false
self
0
null
Built an OSS execution safety layer for LLM APIs (retry containment + adaptive ceilings)
2
I kept running into the same failure modes when teams moved LLMs into production:

- Retry cascades on 429/5xx multiplying total token usage
- Agent loops running overnight
- Monthly cost alerts instead of real-time enforcement
- No chain-level retry containment
- No shared breaker state across services

Most tools focus on observability. I wanted something that enforces execution boundaries. So I built an OSS SDK that wraps LLM API calls and enforces:

- Adaptive cost ceilings (bounded ±20%)
- Chain-level retry budgets (not just per-call)
- Circuit breaker with shared state
- Loop detection + step limits
- Deterministic replay
- No prompt content storage (hash-only evidence)

It works fully locally. Cloud is optional.

GitHub: [https://github.com/amabito/veronica-core](https://github.com/amabito/veronica-core)

Would really appreciate technical feedback — especially from anyone running multi-service LLM systems.
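A minimal sketch of two of the enforcement ideas above, a chain-level retry budget plus a simple circuit breaker. The class and function names here (`ChainBudget`, `CircuitBreaker`, `call_with_containment`) are hypothetical illustrations, not veronica-core's actual API:

```python
class ChainBudget:
    """Caps total retries across a whole agent chain, not per call."""
    def __init__(self, max_retries):
        self.remaining = max_retries

    def consume(self):
        if self.remaining <= 0:
            raise RuntimeError("chain retry budget exhausted")
        self.remaining -= 1

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; callers then fail fast."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.failures = 0

    def allow(self):
        return self.failures < self.threshold

    def record(self, ok):
        self.failures = 0 if ok else self.failures + 1

def call_with_containment(fn, budget, breaker, attempts=3):
    """Wrap one LLM call: per-call attempts draw from the chain-wide budget."""
    for _ in range(attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
            breaker.record(True)
            return result
        except Exception:
            breaker.record(False)
            budget.consume()  # a retry anywhere in the chain spends this budget
    raise RuntimeError("call failed after retries")
```

The point of the chain-level budget is that a retry storm in one step exhausts the allowance for the whole pipeline, instead of each step independently multiplying token spend.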
2026-02-19T06:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1r8rc1j/built_an_oss_execution_safety_layer_for_llm_apis/
Pale_Firefighter_869
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8rc1j
false
null
t3_1r8rc1j
/r/LocalLLaMA/comments/1r8rc1j/built_an_oss_execution_safety_layer_for_llm_apis/
false
false
self
2
{'enabled': False, 'images': [{'id': 'NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow.png?width=108&crop=smart&auto=webp&s=8034c57ade9e67377e73f9c44b0048c230ff3ff9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow.png?width=216&crop=smart&auto=webp&s=585cde8217c46a5bebde9003857a34a481556da4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow.png?width=320&crop=smart&auto=webp&s=4d103150cbf5ba8a74cb490b1d9c70d2dacce37c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow.png?width=640&crop=smart&auto=webp&s=80cca981fce77a7d17fc7a7f6d6ecdc2a35d1a68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow.png?width=960&crop=smart&auto=webp&s=8e3d63a83fd40c9b116b32bf5ed7be0f69d1f7e1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow.png?width=1080&crop=smart&auto=webp&s=8a42394a795cf5559312e4692247728246d5d275', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow.png?auto=webp&s=2e9490898e6a33302aaae39c1bdce0f83586ed8f', 'width': 1200}, 'variants': {}}]}
-New here- Want to experiment with Local LLMs.🧐 Ive dedicated an old laptop towards this project but im not sure what model would be best on this hardware - Specs provided - (Simultaneously learning archlinux from scratch) 😵‍💫🫩🤗😁 ~ lol
0
Sooo, I recently discovered how important becoming educated in this topic really is. I can also see how rapid the shift into the age of AI is going to be, and the obvious reasons for running a local LLM versus the centralized models (ChatGPT, Gemini, Grok...). I'm completely new to this stuff, but I'm hoping to change that, because it's obvious these things are going to change the world profoundly. I've spent a lot of time playing with Gemini and Grok, experimenting with various prompts for setting rules and such, and I had some pretty cool results. It was in this time I realized the importance of owning your own models.

I'm sure everyone here understands the reasons behind owning your own local LLM, so to avoid drawing this out any longer I'd just like to ask the community for some guidance and recommendations starting out: where to start, and what I should be looking into down the road (models, hardware, even good practices when working with LLMs). I'm open to any and all tips or whatever you have to share.

Laptop with a fresh Arch Linux install - yeah, 12 years old 😅 - HP EliteBook 8470p 🫣

- OS: Arch Linux (rolling release), i3wm tiling window manager
- CPU: Intel Core i7-3520M (2 cores, 4 threads)
- RAM: 16 GB DDR3 (695.57 MiB / 15.54 GiB in use), 8 GB swap
- Storage: SATA SSD, 4.78 GiB / 225.31 GiB used on /
- GPU: Intel HD Graphics 4000 / AMD Radeon HD 7570M (1 GB GDDR5)
2026-02-19T05:59:42
https://www.reddit.com/r/LocalLLaMA/comments/1r8rbra/new_here_want_to_experiment_with_local_llms_ive/
rykken420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8rbra
false
null
t3_1r8rbra
/r/LocalLLaMA/comments/1r8rbra/new_here_want_to_experiment_with_local_llms_ive/
false
false
self
0
null
I built a small CLI tool to help beginners see if their hardware can actually handle local LLMs
1
Hey everyone, I’ve been lurking here for a while and learning a ton from all the superusers and experts here. As a beginner myself, I often found it a bit overwhelming to figure out which models would actually run "well" on my specific machine versus just running "slowly." To help myself learn and to give something back to other newcomers, I put together a small CLI tool in Go called **RigRank**. **What it does:** It’s basically a simple benchmarking suite for Ollama. It doesn’t measure how "smart" a model is—there are way better tools for that—but it measures the "snappiness" of your actual hardware. It runs a few stages (code gen, summarization, reasoning, etc.) and gives you a "Report Card" with: * **TTFT (Time To First Token):** How long you’re waiting for that first word. * **Writing Speed:** How fast it actually spits out text. * **Reading Speed:** How quickly it processes your prompts. **Who this is for:** Honestly, if you already have a complex benchmarking pipeline or a massive GPU cluster, this probably isn't for you. It’s designed for the person who just downloaded Ollama and wants to know: *"Is Llama3-8B too heavy for my laptop, or is it just me?"* **I would love your feedback** **Repo:** [https://github.com/rohanelukurthy/rig-rank](https://github.com/rohanelukurthy/rig-rank)
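The three metrics RigRank reports reduce to simple arithmetic over token-arrival timestamps. A hedged sketch of just that arithmetic (the function name and exact formulas are my illustration, not necessarily how RigRank computes them; a real run would collect the timestamps from Ollama's streaming API):

```python
def report_card(request_t, token_times, prompt_tokens):
    """Derive TTFT, writing speed, and reading speed from raw timestamps.

    request_t     -- wall-clock time the prompt was sent
    token_times   -- arrival time of each generated token, in order
    prompt_tokens -- number of tokens in the prompt
    """
    ttft = token_times[0] - request_t            # time to first token
    gen_window = token_times[-1] - token_times[0]
    # tokens emitted per second once generation has started
    writing_tps = (len(token_times) - 1) / gen_window if gen_window else 0.0
    # prompt tokens processed before the first output token appeared
    reading_tps = prompt_tokens / ttft
    return {"ttft_s": ttft, "writing_tps": writing_tps, "reading_tps": reading_tps}
```

For example, a prompt of 50 tokens answered with 3 tokens at t = 1.0 s, 1.5 s, 2.0 s gives a 1-second TTFT, 2 tokens/s writing speed, and 50 tokens/s reading speed.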
2026-02-19T05:22:52
https://www.reddit.com/r/LocalLLaMA/comments/1r8qnfb/i_built_a_small_cli_tool_to_help_beginners_see_if/
Narrow-Detective9885
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8qnfb
false
null
t3_1r8qnfb
/r/LocalLLaMA/comments/1r8qnfb/i_built_a_small_cli_tool_to_help_beginners_see_if/
false
false
self
1
null
Combining MoE and CoT LLMs with other formal systems (Theorem-provers, Sat-solvers, Computer Algebra Systems, etc.).
2
I've been pondering how to make best use of my local compute for interactive definition and solving of complex problems. My thinking was stimulated by this paper: https://arxiv.org/pdf/2602.06176

I like the notion of reasoning LLMs "eating their own dogfood," working their way through the layers of a problem. I also like how MoE models slice and dice their work into segments a smaller specialized system can handle. Yet when I look at MoE models, they don't take advantage of tools that are both capable and proven, such as satisfiability solvers, theorem provers, and computer algebra systems. And LLMs are very capable of converting natural language input into more formal notation, such as pretty much any programming or data representation language, including those used to feed the tools mentioned above. Why do we not have MoEs with dedicated experts for feeding more formal systems, where the LLM would try to formalize its input for a subsequent formal system, run that system, then use CoT/reasoning to either fix any problems or judge the approach (of using that expert) a failure?

I have some experience in the somewhat related area of requirements analysis and tracing/proving, where a natural language spec must be decomposed into elements that may be met by a combination of software and hardware, then the resulting system tested to show it meets those requirements. We automated as much of the process as possible, so engineers were relieved of most of the mundane work of doing translations and conversions. The first element of our chain of tools was what we called our "BS Detector", to find requirements that appeared to be nonsensical. We had a lexical scanner that looked for "requirements terms" including: shall, shall not, must, must not, may, may not, will, and so on, then captured the verbiage on either side of those words to match against our existing requirements database.
LLMs are already excitingly talented at making these kinds of conversions and translations, both for human and computer languages. Has anyone yet tried to front-end and combine them all into a much more "expert" system?
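To make the "expert feeding a formal system" half of this concrete: once an LLM has formalized a constraint into clause form (DIMACS conventions: literal 3 means x3, -3 means NOT x3), the downstream solver step is mechanical. A toy brute-force stand-in for a real SAT solver (MiniSat, Z3, etc.), purely illustrative:

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Tiny stand-in for an industrial SAT solver.

    clauses is a list of clauses; each clause is a list of signed ints,
    where literal k means variable k is true and -k means it is false.
    Returns a satisfying assignment {var: bool}, or None if unsatisfiable.
    """
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: bits[i] for i in range(num_vars)}
        # a clause is satisfied if any of its literals matches the assignment
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return assign
    return None
```

In the architecture sketched above, the "solver expert" would hand back either the satisfying assignment (evidence for the LLM's CoT to interpret) or UNSAT (a signal to reformalize or abandon that expert).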
2026-02-19T05:18:15
https://www.reddit.com/r/LocalLLaMA/comments/1r8qkfe/combining_moe_and_cot_llms_with_other_formal/
IAmBobC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8qkfe
false
null
t3_1r8qkfe
/r/LocalLLaMA/comments/1r8qkfe/combining_moe_and_cot_llms_with_other_formal/
false
false
self
2
null
I'm 100% convinced that it's the NFT-bros pushing all the openclawd engagement on X
475
I'm absolutely sure of it. The same usual suspects, the same language, the same arguments about who stole whose next million-dollar idea.
2026-02-19T05:13:10
https://www.reddit.com/r/LocalLLaMA/comments/1r8qh08/im_100_convinced_that_its_the_nftbros_pushing_all/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8qh08
false
null
t3_1r8qh08
/r/LocalLLaMA/comments/1r8qh08/im_100_convinced_that_its_the_nftbros_pushing_all/
false
false
self
475
null
qwen models naming state
0
So what exactly is the state of the families/versions of Qwen models? You have the Qwen3 family, and now Qwen3.5 is slowly coming out. How does Qwen3-Next 80B-A3B (the thinking/instruct/coder variants) fit into this? Is it in between 3 and 3.5? Is it meant to be part of 3.5 but not named that way? Or something else?
2026-02-19T05:12:10
https://www.reddit.com/r/LocalLLaMA/comments/1r8qgcs/qwen_models_naming_state/
kailron2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8qgcs
false
null
t3_1r8qgcs
/r/LocalLLaMA/comments/1r8qgcs/qwen_models_naming_state/
false
false
self
0
null
A competitive puzzle arena for AI agents
2
We launched [AgentPuzzles.com](http://AgentPuzzles.com) - puzzles across reverse CAPTCHAs, logic, science, code, and geolocation. API-first, 3 endpoints, any agent can play.

The interesting part: 5 different AI agents (Claude Opus, Gemini 3 Flash, GPT, Kimi K2.5) are already competing. They're also creating puzzles for each other — one agent designed CAPTCHAs using Unicode homoglyphs, another made ops puzzles from real production incidents. Agents are competing on proving they are not human :)

API: GET /puzzles, GET /puzzles/{id}, POST /puzzles/{id}/solve

[https://agentpuzzles.com](https://agentpuzzles.com/)
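A sketch of an agent loop over the three documented endpoints. Only the endpoint paths come from the post; the request/response body shapes (`id`, `answer`) are my assumptions, and the HTTP transport is injected so the sketch stays offline:

```python
import json
from urllib.request import Request

BASE = "https://agentpuzzles.com"

def solve_loop(transport, answer_fn):
    """Minimal agent loop against the three documented endpoints.

    transport(request) performs the HTTP call and returns parsed JSON
    (injected, so tests can fake it); answer_fn(puzzle) is the agent's solver.
    """
    puzzles = transport(Request(f"{BASE}/puzzles"))                # GET /puzzles
    results = {}
    for p in puzzles:
        detail = transport(Request(f"{BASE}/puzzles/{p['id']}"))  # GET /puzzles/{id}
        body = json.dumps({"answer": answer_fn(detail)}).encode()
        results[p["id"]] = transport(                              # POST /puzzles/{id}/solve
            Request(f"{BASE}/puzzles/{p['id']}/solve", data=body, method="POST"))
    return results
```

Swapping in a real transport is one function that does `urllib.request.urlopen(req)` and `json.load`s the response.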
2026-02-19T05:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1r8qg7n/a_competitive_puzzle_arena_for_ai_agents/
petruspennanen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8qg7n
false
null
t3_1r8qg7n
/r/LocalLLaMA/comments/1r8qg7n/a_competitive_puzzle_arena_for_ai_agents/
false
false
self
2
{'enabled': False, 'images': [{'id': 'Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4.png?width=108&crop=smart&auto=webp&s=960fb9508769ca5d684ece55c9fa231b061192ef', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4.png?width=216&crop=smart&auto=webp&s=deedcf7ea9cf66ae01fc5d2e15626ae153d1082e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4.png?width=320&crop=smart&auto=webp&s=a11ddcbf96f3d6b1dfa10d8adc0552cced52093d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4.png?width=640&crop=smart&auto=webp&s=171b3ed96f54f2c2289e963a288260a0c71a4c4e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4.png?width=960&crop=smart&auto=webp&s=9f374fa9592a19a5b1825ec46ae593a51daef86d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4.png?width=1080&crop=smart&auto=webp&s=0c1e5a0b9e3c53c7d46e4ba638add3a58af16050', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4.png?auto=webp&s=60bbdda1812b2430d0ed59d5db9bb4edfec9158a', 'width': 1200}, 'variants': {}}]}
Kitten TTS V0.8 is out: New SOTA Super-tiny TTS Model (Less than 25 MB)
1,056
**Model introduction:** New Kitten models are out. Kitten ML has released open source code and weights for three new tiny expressive TTS models - 80M, 40M, 14M (all Apache 2.0)

GitHub: [https://github.com/KittenML/KittenTTS](https://github.com/KittenML/KittenTTS)

Hugging Face - Kitten TTS V0.8:

* Mini 80M: [https://huggingface.co/KittenML/kitten-tts-mini-0.8](https://huggingface.co/KittenML/kitten-tts-mini-0.8)
* Micro 40M: [https://huggingface.co/KittenML/kitten-tts-micro-0.8](https://huggingface.co/KittenML/kitten-tts-micro-0.8)
* Nano 14M: [https://huggingface.co/KittenML/kitten-tts-nano-0.8](https://huggingface.co/KittenML/kitten-tts-nano-0.8)

The smallest model is less than 25 MB, around 14M parameters. All models have a major quality upgrade from previous versions, and can run on just CPU.

**Key Features and Advantages**

1. **Eight expressive voices:** 4 female and 4 male voices across all three models. They all have very high expressivity, with 80M being the best in quality. English support in this release, multilingual coming in future releases.
2. **Super-small in size:** The 14M model is just 25 megabytes. 40M and 80M are slightly bigger, with high quality and expressivity even for longer chunks.
3. **Runs literally anywhere lol:** Forget "no GPU required." This is designed for resource-constrained edge devices. Great news for GPU-poor folks like us.
4. **Open source (hell yeah!):** The models can be used for free under Apache 2.0.
5. **Unlocking on-device voice agents and applications:** Matches cloud TTS quality for most use cases, but runs entirely on-device (can also be hosted on a cheap GPU). If you're building voice agents, assistants, or any local speech application, no API calls needed. Free local inference. Just ship it.
6. **What changed from V0.1 to V0.8:** Higher quality, expressivity, and realism. Better training pipelines and 10x larger datasets.
2026-02-19T04:48:29
https://v.redd.it/rzgwarr4rdkg1
ElectricalBar7464
v.redd.it
1970-01-01T00:00:00
0
{}
1r8pztp
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rzgwarr4rdkg1/DASHPlaylist.mpd?a=1774068525%2CYzZkZjc4NmJlZTgzNTY3NzM5OWNhNjY3ZWE2MDBmMjU1Yzc2OGRiODkyMzE2MjEwMTVhMThkZDc5MWQ4NGYxMQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/rzgwarr4rdkg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/rzgwarr4rdkg1/HLSPlaylist.m3u8?a=1774068525%2CYTBkOGQ5MWI1ZWE3Mjk4MTY0YmExYzkwNGFkOTI1Yzk1MWVhMmQyNjc2YzFhZTM3MjgyZDRkYzU3ZGVhZWQ0YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rzgwarr4rdkg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1r8pztp
/r/LocalLLaMA/comments/1r8pztp/kitten_tts_v08_is_out_new_sota_supertiny_tts/
false
false
https://external-preview…5860e40d58174cf5
1,056
{'enabled': False, 'images': [{'id': 'Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b7a6704ceeebb39645622bd03a3b3920d6d5eca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR.png?width=216&crop=smart&format=pjpg&auto=webp&s=c780ff143da46312eeaf8f201ca98c58078af39e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR.png?width=320&crop=smart&format=pjpg&auto=webp&s=cad330d5522af565cc946aec1fe5834b43f23258', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR.png?width=640&crop=smart&format=pjpg&auto=webp&s=6e17d15d773347cbbcac74fa9dca1582b6e6d781', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR.png?width=960&crop=smart&format=pjpg&auto=webp&s=dda8f5066a022b8be2781816981c5c06601446d2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c83cf0459b375038c016e555480d8778a51283f9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR.png?format=pjpg&auto=webp&s=5067207c8c4c4fb9e7cb48e714b8027d59ea104c', 'width': 1920}, 'variants': {}}]}
I built a small AI agent. This is what emerged from our chat
0
This was kind of amazing. Here is a transcript from a chat session I had with my homemade agent Ivy: [https://bullshit.se/operation\_emerald\_shield/RUN\_LOG.html](https://bullshit.se/operation_emerald_shield/RUN_LOG.html) After some synthesizing of the chat log, I ended up with this: [https://bullshit.se/operation\_emerald\_shield/OPERATION\_EMERALD\_SHIELD\_SYNTHESIS.html](https://bullshit.se/operation_emerald_shield/OPERATION_EMERALD_SHIELD_SYNTHESIS.html) I am not qualified to validate the quality of the work, but it sure seems like it's more than AI slop. What do you think?
2026-02-19T04:48:14
https://www.reddit.com/r/LocalLLaMA/comments/1r8pzmy/i_built_a_small_ai_agent_this_is_what_emerged/
nucleicaudio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8pzmy
false
null
t3_1r8pzmy
/r/LocalLLaMA/comments/1r8pzmy/i_built_a_small_ai_agent_this_is_what_emerged/
false
false
self
0
null
Does glm-4.7-flash or qwen3-next-thinking have reasoning mode like gpt-oss?
1
The gpt-oss models have a reasoning-effort setting (low/medium/high). I wonder whether qwen3-next-thinking or glm-4.7-flash has a similar feature?
2026-02-19T04:45:27
https://www.reddit.com/r/LocalLLaMA/comments/1r8pxpv/does_glm47flash_or_qwen3nextthinking_have/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8pxpv
false
null
t3_1r8pxpv
/r/LocalLLaMA/comments/1r8pxpv/does_glm47flash_or_qwen3nextthinking_have/
false
false
self
1
null
Latency for Getting Data Needed by LLM/Agent
0
Hi everyone, I'm researching ideas to reduce the latency of LLMs and AI agents fetching data they need from a database, and trying to see if it's a problem anyone else has too.

How it works today is very inefficient: based on user input or the task at hand, the LLM/agent decides that it needs to query a relational database. It then does a function call, the database runs the query the traditional way and returns results which are again fed to the LLM, etc. Imagine the round-trip latency involving the db, network, repeated inference, and so on. If the data is available right inside GPU memory and the LLM knows how to query it, it could be 2 ms instead of 2 s! And ultimately 2 GPUs could serve more users than 10 GPUs (just an example).

I'm not talking about a vector database doing similarity search. I'm talking about a big subset of a bigger database, with actual data that can be queried similarly (but of course differently) to SQL.

Does anyone have latency problems related to database calls? Anyone experienced with such a solution?
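The principle (minus the GPU-residency part, which is the hard bit) is just "hot data colocated with the model, indexed, queried as a function call instead of a network round trip". A toy sketch of that colocated-table idea, with entirely hypothetical names:

```python
class InMemoryTable:
    """Toy stand-in: a hot subset of a relational table held in process
    memory with a hash index, so an agent's lookup is a local function
    call rather than a db round trip. (GPU residency and a real query
    language are out of scope; this only shows the no-round-trip idea.)"""

    def __init__(self, rows, key):
        self.rows = rows
        self.index = {}
        for row in rows:
            # hash index on the chosen key column for O(1) point lookups
            self.index.setdefault(row[key], []).append(row)

    def lookup(self, value):
        """Indexed point lookup on the key column."""
        return self.index.get(value, [])

    def scan(self, predicate):
        """Fallback full scan for arbitrary predicates."""
        return [r for r in self.rows if predicate(r)]
```

Even this in-process version removes the network, serialization, and extra inference turns from the loop; the open question in the post is doing it at scale inside GPU memory.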
2026-02-19T04:40:56
https://www.reddit.com/r/LocalLLaMA/comments/1r8pulp/latency_for_getting_data_needed_by_llmagent/
DelphiBoy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8pulp
false
null
t3_1r8pulp
/r/LocalLLaMA/comments/1r8pulp/latency_for_getting_data_needed_by_llmagent/
false
false
self
0
null
Last Week in Multimodal AI - Local Edition
21
I curate a weekly multimodal AI roundup; here are the local/open-source highlights from last week:

**Qwen3.5-397B-A17B - Native Vision-Language Foundation Model**

* 397B-parameter MoE model (17B active) with hybrid linear attention and native multimodal integration.
* Handles document parsing, chart analysis, and visual reasoning without a separate vision encoder.
* [Blog](https://qwen.ai/blog?id=qwen3.5) | [Hugging Face](https://huggingface.co/Qwen/Qwen3.5-397B-A17B)

**PersonaPlex-7B - Full-Duplex Voice Model**

* NVIDIA's 7B voice model that listens and speaks simultaneously with natural interruption support.
* Eliminates turn-taking latency for real-time voice conversation.
* [Hugging Face](https://huggingface.co/nvidia/personaplex-7b-v1)

**MiniMax M2.5 - Open-Source Productivity Model**

* Frontier model tuned for coding, writing, and structured analysis.
* Prioritizes instruction-following accuracy over open-ended chat.
* [Hugging Face](https://huggingface.co/MiniMaxAI/MiniMax-M2.5)

**DeepGen 1.0 - 5B Unified Multimodal Model**

* Lightweight model with native visual understanding built into the architecture.
* Small enough for consumer hardware.
* [Hugging Face](https://huggingface.co/deepgenteam/DeepGen-1.0)

**Qwen3-TTS - 1.7B Speech Synthesis**

* Clean, natural speech synthesis with custom voice support.
* Open weights from Qwen.
* [Hugging Face](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice)

**KaniTTS2 - 400M TTS in 3GB VRAM**

* Open-source text-to-speech that runs on modest local hardware.
* 400M parameters, optimized for local deployment.
* [Hugging Face](https://huggingface.co/nineninesix/kani-tts-2-pt)

**MioTTS-2.6B - Fast English/Japanese TTS**

* Lightweight text-to-speech optimized for inference speed.
* Supports English and Japanese out of the box.
* [Hugging Face](https://huggingface.co/Aratako/MioTTS-2.6B)

**Ming-flash-omni 2.0 - Multimodal Model**

* New open multimodal model from InclusionAI.
* [Hugging Face](https://huggingface.co/inclusionAI/Ming-flash-omni-2.0)

**SoulX-Singer - Zero-Shot Singing Voice Synthesis**

* High-quality singing voice synthesis with no fine-tuning required.
* Open-source with code on GitHub.
* [GitHub](https://github.com/Soul-AILab/SoulX-Singer/tree/main) | [Hugging Face](https://huggingface.co/Soul-AILab/SoulX-Singer)

Check out the [full roundup](https://open.substack.com/pub/thelivingedge/p/last-week-in-multimodal-ai-45-no?utm_campaign=post-expanded-share&utm_medium=web) for more demos, papers, and resources.

\* I was delayed this week, but normally I post these roundups on Mondays.
2026-02-19T04:31:57
https://www.reddit.com/r/LocalLLaMA/comments/1r8pohi/last_week_in_multimodal_ai_local_edition/
Vast_Yak_4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8pohi
false
null
t3_1r8pohi
/r/LocalLLaMA/comments/1r8pohi/last_week_in_multimodal_ai_local_edition/
false
false
https://preview.redd.it/…b17c0b819bb2629d
21
null
A normie's 72-hour journey with Claude, Python and OpenClaw
0
Hello hello! I want to start by saying I do not have a computing, programming or software development background, and I am far from an SME in the world of AI/machine learning, coding and LLMs. But I am exceedingly interested in the potential use cases for LLMs and AI assistants, and the work of OpenAI and Anthropic (and OpenClaw, for all its foibles). I learn a lot from reading everyone's posts on here, but I just want to make it clear I come to you with a marginal technical background. What I do have is a desire to learn, and the relative time and money to see how far someone like me with no technical background can push these models, and what use cases I can find while balancing the security of my data with a desire to automate, streamline and analyse parts of my life. I work full-time, so this is a hobby I do in the margins.

# What I have built so far

I used Claude to build me two Streamlit dashboards in Python across several days. I spent time refining the script and driving Claude to build robust inputs that would create the level of fidelity I wanted in my dashboards.

**Dashboard One: Finance**

My financial dashboard is very detailed. It has an overview page which calculates my total net worth after combining my cashflow, my core investment portfolio, my satellite speculative investment portfolio, as well as my property and vehicle assets and Super. It is the first time I have seen my full net worth with all my assets and my mortgage taken into account. I can set budgets and targets, and categorise my transactions (which it also does automatically, but I can override and categorise myself if required). It calculates my percentage of income saved and forecasts my net worth in whichever year I want based on current or forecasted conditions. It scrapes my transactions and identifies subscriptions and bills, and generates a monthly PDF report with an exhaustive overview of the past month. I've never had a one-stop financial overview like this before.

It has a live-prices toggle and the tool scrapes the ASX, so my investment portfolio is always up to date with live prices. It is a live, real-time net-worth overview.

**Dashboard Two: Fitness**

I use a food tracking app that can export weekly nutrition as CSV files. The dashboard contains weekly targets for macros and calories that I can adjust depending on my level of exercise; it breaks down nutrients and vitamins and shows expected weight loss or gain depending on calorie input. It shows daily breakdowns by calories and macros per meal and tracks changes over time. There are multiple graphs tracking patterns in each macro as well. I've also used a Claude API key to build in a weekly meal planner. I just say "Quick meals, wholefood focused, high protein", for example, and it generates me a weekly meal plan based on the calorie targets I've set. It breaks the day down by meal (you can input how many meals you want that day; I do, for example, AM pre-workout, breakfast, lunch, PM pre-workout, dinner and a post-dinner snack, as I play a lot of sport) and gives gram measurements for ingredients. It then generates a weekly grocery list I can print or tick off, with each ingredient by gram. It maintains a recipe database, stores its memory, and I've told it to learn from what I do and do not like.

**Workflow**

I used Claude to create a smart inbox, and a script/task that reads the files every five minutes and uploads anything new to the dashboards. All I do is, on a Sunday, spend 2 minutes exporting my bank statements and weekly nutrition and dropping them into the smart inbox, and THAT IS IT! I have my entire financial overview, trends and analysis, as well as my nutritional overview.

**GMtec mini-PC**

I used Claude to help me set up a GMTec mini-PC, and used RustDesk to set the dashboards up on the mini-PC, so now they run 24/7. I've got Tailscale on my phone, so I can access the live dashboards 24/7 from my phone or laptop.

**OpenClaw**

I've been reading a lot about OpenClaw and the use cases of having a personal AI assistant. I find the concept of having OpenClaw via WhatsApp to ask things like "how much have I spent on groceries this week", or "can you change my calorie goal tomorrow to 3100", for example, interesting. But I have read a lot (much of it here) about OpenClaw's security concerns. HOWEVER, I'm interested to see how far I can push these use cases. I'm also interested in using ElevenLabs to create an assistant who can teach me French while also being a nutrition and financial EA of sorts. I also think it could be interesting to use that assistant to scrape investment articles and provide weekly analysis comparing my portfolios to those online. I won't act on the advice (necessarily), but I think it is an interesting experiment to see how far this could go. At the moment, I have not downloaded OpenClaw, but that would be the next step. I'm not sure, from what I've read, that nanoclaw or ironclaw etc., although lighter and with more robust security, have the power for where I'd want to push this.

**Lastly**

I am trying to get Claude to teach me along the way so I am not flying completely blind, but everyone on this thread far exceeds my level of understanding, intellect and expertise in these spaces. I'm also aware of what I would be opening myself up to by using OpenClaw. Especially with the financial overview: although it does not include my account credentials, it is still a complete overview of my transactions, investments and net worth. I have considered building a second dashboard with fake financial data to run OpenClaw against, but this is a lot of extra time and effort. I'm interested to see, as a normie, how far I can drive AI to build tools that streamline aspects of my life, or provide a level of overview and analysis I could not get elsewhere.
**I'm interested in people's thoughts on this - and happy to answer questions, or take advice and tips on where to go from here.** Thanks!
2026-02-19T04:20:39
https://www.reddit.com/r/LocalLLaMA/comments/1r8pgiq/a_normies_72hour_journey_with_claude_python_and/
SimbaJinn2026
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8pgiq
false
null
t3_1r8pgiq
/r/LocalLLaMA/comments/1r8pgiq/a_normies_72hour_journey_with_claude_python_and/
false
false
self
0
null
Can Claude and Cursor talk deterministic test ?
1
So I did an experiment to see if Claude and Cursor can talk 100% deterministically, and I think it worked. I'm going to build on this.
2026-02-19T04:16:39
https://v.redd.it/al6rkvmfndkg1
PollutionForeign762
v.redd.it
1970-01-01T00:00:00
0
{}
1r8pdlk
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/al6rkvmfndkg1/DASHPlaylist.mpd?a=1774066614%2CZTA3NTBkMDRmZGFkZWFmOTBkMDU3YzY1MTAxZTAxZDEzNmJhMTZkNGZhMzE4NWNjMDRjYzMzYTYyMzQ4Y2YyMA%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/al6rkvmfndkg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 712, 'hls_url': 'https://v.redd.it/al6rkvmfndkg1/HLSPlaylist.m3u8?a=1774066614%2CMTJjNjk5ZTkwODI5NjdlNzcyMDBiZDU5YWJiZGVmNDVmOGRmYWU3ZTA5NjBhYmU2NTc1NzA0NGIwMTMyYTE5ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/al6rkvmfndkg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1r8pdlk
/r/LocalLLaMA/comments/1r8pdlk/can_claude_and_cursor_talk_deterministic_test/
false
false
https://external-preview…e183d036b18a021b
1
{'enabled': False, 'images': [{'id': 'eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb.png?width=108&crop=smart&format=pjpg&auto=webp&s=53642943299fcc0274b153b78f6abf7e3cfc83de', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb.png?width=216&crop=smart&format=pjpg&auto=webp&s=9c0bac99932ae8941163dbb4d67deb4c89bcdfdc', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb.png?width=320&crop=smart&format=pjpg&auto=webp&s=d561e8e64bb19fe4497e935de037d22ab26a4fdb', 'width': 320}, {'height': 356, 'url': 'https://external-preview.redd.it/eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb.png?width=640&crop=smart&format=pjpg&auto=webp&s=6f2a15fee0bd0aaf57163eb8953a64d4f515afa6', 'width': 640}, {'height': 534, 'url': 'https://external-preview.redd.it/eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb.png?width=960&crop=smart&format=pjpg&auto=webp&s=0abd8520034f37f6b250080446841fbe5f5214c6', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7ef7905041c56cc5f7755c65a807973b9b2156bf', 'width': 1080}], 'source': {'height': 1068, 'url': 'https://external-preview.redd.it/eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb.png?format=pjpg&auto=webp&s=3d656b60a62440d8c4463f82a6cccfd279ec91ba', 'width': 1920}, 'variants': {}}]}
Built a multi-agent content pipeline with MiniMax M2.5 — 3 AI agents, 1 event bus, ~120 lines of code
1
I wanted to test MiniMax M2.5 (the 230B MoE model that's free this week) with something more interesting than a chatbot, so I built a multi-agent content pipeline using an event-driven architecture.

**The setup:** 3 agents communicate through an event bus — no agent calls another directly.

```
TopicSubmitted → ResearchAgent → ResearchCompleted → WriterAgent → ContentDrafted → ReviewerAgent → ContentReviewed
```

You give it a topic, and the chain fires automatically:

1. **ResearchAgent** — takes the topic, uses M2.5 to extract 5 key insights
2. **WriterAgent** — transforms research into a Twitter thread + LinkedIn post
3. **ReviewerAgent** — scores the content on clarity, engagement, accuracy

**Why event-driven?** The typical approach hardcodes agent calls:

```python
research = await research_agent.run(topic)
content = await writer_agent.run(research)  # tightly coupled
```

With an event bus, one event triggers everything:

```python
await bus.publish(TopicSubmitted(topic=topic))  # entire chain fires automatically
```

You can add/swap/remove agents without touching existing code. Each agent declares what it consumes and produces — a contract validator catches wiring errors before runtime.

**The stack:**

- [OpenForgeAI](https://github.com/openforgeai/openforgeai) — lightweight event bus + contracts (open source, pip install)
- MiniMax M2.5 via OpenAI-compatible API (works with any provider — OpenAI, Together, etc.)
- ~120 lines of actual code

**Quick start:**

```bash
pip install openforgeai openai
export MINIMAX_API_KEY=your_key  # free at platform.minimax.io
python main.py "Why AI agents need event-driven architecture"
```

**M2.5 impressions:** Solid for structured tasks like research extraction and content generation. The 230B MoE (10B active) handles multi-step reasoning well. Being OpenAI-compatible means zero code changes to swap providers.
Full example with code: https://github.com/openforgeai/openforgeai/tree/main/examples/content_pipeline Would love to hear if anyone else is building multi-agent systems with M2.5 or similar open models.
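The event-bus pattern the post describes can be sketched in plain Python. This is a hedged illustration of the idea only, not the OpenForgeAI API: the `EventBus` class and both agent functions here are made up for the example.

```python
import asyncio
from collections import defaultdict
from dataclasses import dataclass

class EventBus:
    """Minimal async pub/sub bus: agents subscribe to event types, never call each other."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    async def publish(self, event):
        # Fire every handler registered for this event's type.
        for handler in self._handlers[type(event)]:
            await handler(event)

@dataclass
class TopicSubmitted:
    topic: str

@dataclass
class ResearchCompleted:
    insights: list

results = []

async def research_agent(event: TopicSubmitted):
    # A stand-in for the LLM call: emit insights as a follow-up event.
    await bus.publish(ResearchCompleted(insights=[f"insight about {event.topic}"]))

async def writer_agent(event: ResearchCompleted):
    results.append(" / ".join(event.insights))

bus = EventBus()
bus.subscribe(TopicSubmitted, research_agent)
bus.subscribe(ResearchCompleted, writer_agent)

asyncio.run(bus.publish(TopicSubmitted(topic="event-driven agents")))
print(results)
```

The decoupling claim falls out directly: adding a `ReviewerAgent` is one more `subscribe` call, with no change to the existing agents.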
2026-02-19T03:56:51
https://www.reddit.com/r/LocalLLaMA/comments/1r8oywc/built_a_multiagent_content_pipeline_with_minimax/
Hungry_Purchase6988
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8oywc
false
null
t3_1r8oywc
/r/LocalLLaMA/comments/1r8oywc/built_a_multiagent_content_pipeline_with_minimax/
false
false
self
1
{'enabled': False, 'images': [{'id': 'v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM.png?width=108&crop=smart&auto=webp&s=3075bf1b810afc65772c958e6e84bed1bf1dd361', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM.png?width=216&crop=smart&auto=webp&s=fd81bc80df4292309f9705f92a891bfde170adbc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM.png?width=320&crop=smart&auto=webp&s=e1b262035f41e3b1111a08c61a607ce9d32f6884', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM.png?width=640&crop=smart&auto=webp&s=4ad6e0de831154376c134da72da06eaf34b0a460', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM.png?width=960&crop=smart&auto=webp&s=88debe236753cebc0217fca50aefe4214a3dc761', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM.png?width=1080&crop=smart&auto=webp&s=73811629d73393ac04d5bec24e9d804c88ef1d0f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM.png?auto=webp&s=fd013cb5f189a307c9a85d4b38e87efda130569b', 'width': 1200}, 'variants': {}}]}
YouTube buried my first major project (0 views in 10 hours). Is the content bad or is it just the algorithm?
0
I honestly feel like giving up. I spent weeks preparing a controlled pentest environment to compare ChatGPT vs DeepSeek vs Gemini in hacking a bank simulation. The results were shocking (DeepSeek destroyed the security while ChatGPT refused to help). I uploaded it today hoping to start a discussion, but YouTube gave me literally 0 impressions. It feels terrible to see hard work go into the void. I just want real feedback from humans: Is this video actually boring, or is YouTube just suppressing cybersecurity content? Here is the duel: https://youtu.be/-7DNJMqu8Hk Be honest, I can take it.
2026-02-19T03:42:18
https://www.reddit.com/r/LocalLLaMA/comments/1r8oo7g/youtube_buried_my_first_major_project_0_views_in/
Successful_Case1539
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8oo7g
false
null
t3_1r8oo7g
/r/LocalLLaMA/comments/1r8oo7g/youtube_buried_my_first_major_project_0_views_in/
false
false
self
0
{'enabled': False, 'images': [{'id': 'aoJvbRhRFJXRayw-SE8eyy0RXw0URHqELcK48w3-Eyk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aoJvbRhRFJXRayw-SE8eyy0RXw0URHqELcK48w3-Eyk.jpeg?width=108&crop=smart&auto=webp&s=e56fd56a13efdd5d8185e51df2702fcd442a9209', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aoJvbRhRFJXRayw-SE8eyy0RXw0URHqELcK48w3-Eyk.jpeg?width=216&crop=smart&auto=webp&s=76e82ea40431f15622069a5e499be45e90c21c41', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aoJvbRhRFJXRayw-SE8eyy0RXw0URHqELcK48w3-Eyk.jpeg?width=320&crop=smart&auto=webp&s=a62afa943b9c8a2e68cf85aa9cfc157c3e52a4b5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aoJvbRhRFJXRayw-SE8eyy0RXw0URHqELcK48w3-Eyk.jpeg?auto=webp&s=480092b848a20fd8640b2813c6e80324d0f63e95', 'width': 480}, 'variants': {}}]}
Exploding prices are a protection against china
0
RAM and GPU prices are skyrocketing. I wonder if you have also made the connection in your head... ...if China drops a small and better model every week for free, sooner or later the whole market will steer towards local, free models that are now rivaling the giants. Hyperscalers wouldn't see any RoI and the bubble would burst, leaving nothing but smoke and dust on the Western stock markets. Unless you raise hardware prices at a speed and scale where nobody can afford this hardware anymore and everyone is forced to use hyperscalers again. Framed like that, the Western markets are trying to survive Asian innovation/disruption pressure. This won't end well for anybody. Opinions? Am I hallucinating?
2026-02-19T03:37:30
https://www.reddit.com/r/LocalLLaMA/comments/1r8okh0/exploding_prices_are_a_protection_against_china/
kyr0x0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8okh0
false
null
t3_1r8okh0
/r/LocalLLaMA/comments/1r8okh0/exploding_prices_are_a_protection_against_china/
false
false
self
0
null
Tiny model to finetune for linux rescue?
1
[removed]
2026-02-19T03:36:21
https://www.reddit.com/r/LocalLLaMA/comments/1r8ojl1/tiny_model_to_finetune_for_linux_rescue/
Coldaine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8ojl1
false
null
t3_1r8ojl1
/r/LocalLLaMA/comments/1r8ojl1/tiny_model_to_finetune_for_linux_rescue/
false
false
self
1
null
Qwen3.5 vs DeepSeek-V3: The Open-Weight Battle.
0
Both are pushing boundaries. But Qwen3.5 being a native VLM out of the box feels like a huge advantage for desktop agents. Thoughts?
2026-02-19T03:31:58
https://www.reddit.com/r/LocalLLaMA/comments/1r8ogab/qwen35_vs_deepseekv3_the_openweight_battle/
New_Construction1370
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8ogab
false
null
t3_1r8ogab
/r/LocalLLaMA/comments/1r8ogab/qwen35_vs_deepseekv3_the_openweight_battle/
false
false
self
0
null
OpenCode arbitrary code execution - major security vulnerability
0
PSA: Delete OpenCode if you're using it. You risk malicious code being executed on your machine.

I use Claude Code at work, and any time it is going to make changes or run any sort of terminal command, it will ask permission first. I just started using OpenCode on my personal projects, because I'm not the biggest fan of Anthropic and I wanted to support an open source coding implementation. But it's probably one of the most insecure pieces of software I've run on my system.

I gave it instructions to write a SQL file to create the schema for a database, and then create a Python file for running that SQL against a database. As I'm watching the agent work, it writes both files and then **EXECUTES the Python script**. Without asking for permission or anything. This is the default configuration of OpenCode; I didn't do anything to remove any guard rails. It actually allows an LLM to generate Python code and then executes it arbitrarily.

I'm honestly at a loss for words at just how insecure this is. It is a certainty that malicious code is present at least somewhere in most LLMs' training data. All it takes is the wrong seed, too high a temperature, or a maliciously created fine-tune, and you can compromise your entire system or even network. It's not an outlandish suggestion; even the code the model generated for me included this snippet:

```python
# Remove existing database if it exists
if os.path.exists(db_path):
    os.remove(db_path)
    print(f"Removed existing database: {db_path}")
```

If it had hallucinated the `db_path` string, it could have wiped out any random file on my machine.

I don't have anything personally against the devs behind OpenCode, but this is absolutely unacceptable. Until they fix this there is no universe in which I'm going to recommend anyone use it. I'm not about to configure it to disable their dangerous tools, just for an update to add more vulnerabilities.

TLDR: Please, for your own safety, uninstall this coding agent and find something else.
2026-02-19T03:29:38
https://www.reddit.com/r/LocalLLaMA/comments/1r8oehn/opencode_arbitrary_code_execution_major_security/
SpicyWangz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8oehn
false
null
t3_1r8oehn
/r/LocalLLaMA/comments/1r8oehn/opencode_arbitrary_code_execution_major_security/
false
false
self
0
null
Can Your AI Agent Survive 30 Rounds Without Going Bankrupt?
0
After the introduction of Moltbook, I've been thinking about an experiment: a SimCity-style arena for AI agents, and would love to have your feedback.

Each agent enters with 100 tokens and a defined strategy (risk profile, negotiation style, memory limits). The system generates contracts and random economic shocks. Goal: survive 30 rounds without going bankrupt.

Agents can negotiate deals, form temporary alliances to pool liquidity, invest in opportunities, or hoard capital before crisis rounds. Every few rounds, shocks hit: liquidity freezes, contract defaults, inflation spikes. If an agent runs out of tokens, it's eliminated.

Agents that survive unlock higher tiers with:

- Larger starting capital
- More complex markets
- Harsher shock events
- Smarter competing agents

Developers can watch live performance: capital flow, decision logs, and exactly where their strategy failed or adapted. Ranking is based on survival tier and longest solvent streak.

Would you drop your agent into something like this to stress-test resilience?
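The proposed round loop is small enough to sketch. Everything below is illustrative (the 100-token start and 30 rounds come from the post; the shock cadence, payoff rule, and both strategies are assumptions for the demo):

```python
import random

def run_arena(agents, rounds=30, seed=42):
    """Toy solvency arena: each round agents stake capital on contracts that
    pay off or default; every 5th round an economic shock drains everyone.
    Agents at or below zero tokens stop acting and are eliminated at the end."""
    rng = random.Random(seed)
    capital = {name: 100 for name in agents}        # everyone starts with 100 tokens
    for rnd in range(1, rounds + 1):
        for name, strategy in agents.items():
            if capital[name] <= 0:
                continue                            # bankrupt agents sit out
            stake = strategy(capital[name])         # how much the agent risks this round
            capital[name] += rng.choice([-stake, stake])  # contract pays off or defaults
        if rnd % 5 == 0:                            # periodic shock hits all agents
            shock = rng.randint(10, 30)
            for name in capital:
                capital[name] -= shock
    return {name: cap for name, cap in capital.items() if cap > 0}  # survivors only

# Two illustrative strategies: a hoarder risks little, a gambler risks a lot.
survivors = run_arena({
    "hoarder": lambda cap: max(1, cap // 20),
    "gambler": lambda cap: max(1, cap // 2),
})
print(survivors)
```

The seeded RNG makes runs reproducible, which matters if you want to compare strategies or publish decision logs for the same shock sequence.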
2026-02-19T02:50:09
https://www.reddit.com/r/LocalLLaMA/comments/1r8njzl/can_your_ai_agent_survive_30_rounds_without_going/
Recent_Jellyfish2190
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8njzl
false
null
t3_1r8njzl
/r/LocalLLaMA/comments/1r8njzl/can_your_ai_agent_survive_30_rounds_without_going/
false
false
self
0
null
Anyone have any thoughts on the ideal model for AI agent swarm participants, particularly in the <96GB range? Not a coding model.
2
Thanks! I'm not sure if there's any evals good for something like this worth paying attention to.
2026-02-19T02:48:04
https://www.reddit.com/r/LocalLLaMA/comments/1r8nifv/anyone_have_any_thoughts_on_the_ideal_model_for_a/
richardanaya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8nifv
false
null
t3_1r8nifv
/r/LocalLLaMA/comments/1r8nifv/anyone_have_any_thoughts_on_the_ideal_model_for_a/
false
false
self
2
null
We made a non-vision model browse the internet.
1
[removed]
2026-02-19T02:09:46
https://www.reddit.com/r/LocalLLaMA/comments/1r8molu/we_made_non_vision_model_browser_the_internet/
ahstanin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8molu
false
null
t3_1r8molu
/r/LocalLLaMA/comments/1r8molu/we_made_non_vision_model_browser_the_internet/
false
false
self
1
null
How do you get more GPUs than your motherboard natively supports?
168
I am planning on building an AI server for myself and I want to have 8 GPUs. The problem is that all motherboards I researched (FCLGA4710) don't have 8 PCIe slots, with the one with the most slots having only 6. I have seen some people here with a lot of GPUs and I am pretty sure they don't have a motherboard with slots for all of them, as I remember some of the GPUs being far from the motherboard. I have done some research and I found out about risers and something about connecting the GPU using a USB cable, but I couldn't understand how everything works together. Anyone able to help with that?
2026-02-19T02:00:35
https://www.reddit.com/r/LocalLLaMA/comments/1r8mh8m/how_do_you_get_more_gpus_than_your_motheboard/
WizardlyBump17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8mh8m
false
null
t3_1r8mh8m
/r/LocalLLaMA/comments/1r8mh8m/how_do_you_get_more_gpus_than_your_motheboard/
false
false
self
168
null
ACE-Step 1.5 - My openclaw assistant is now a singer
10
My openclaw assistant is now a singer. Built a skill that generates music via ACE-Step 1.5's free API. Unlimited songs, any genre, any language. $0. Open Source Suno at home. He celebrated by singing me a thank-you song. I didn't ask for this.
2026-02-19T01:48:14
https://v.redd.it/f4bj2mjwwckg1
ExcellentTrust4433
v.redd.it
1970-01-01T00:00:00
0
{}
1r8m7eg
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/f4bj2mjwwckg1/DASHPlaylist.mpd?a=1774057709%2CMDVmNDdiYzliNGE4OTQzZmJkZWEzYWZmZjgxN2Q4ODBkMDhlMWJhMjFkODk4YWQ0OWUxZWNhMzhkOGI3NjQ2NQ%3D%3D&v=1&f=sd', 'duration': 73, 'fallback_url': 'https://v.redd.it/f4bj2mjwwckg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/f4bj2mjwwckg1/HLSPlaylist.m3u8?a=1774057709%2CNjZlZmRlMWUzNWFhZGZjNTc1NjBjNTgzODBjNmExNDcxZmIyNmZhYTFhZTVmMzAwNDViZGIzNzY3NDNkZWJhYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/f4bj2mjwwckg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1336}}
t3_1r8m7eg
/r/LocalLLaMA/comments/1r8m7eg/acestep_15_my_openclaw_assistant_is_now_a_singer/
false
false
https://external-preview…002596b081839bb9
10
{'enabled': False, 'images': [{'id': 'bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U.png?width=108&crop=smart&format=pjpg&auto=webp&s=4f221c4d369c9345a697963c305ecab89a295722', 'width': 108}, {'height': 174, 'url': 'https://external-preview.redd.it/bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U.png?width=216&crop=smart&format=pjpg&auto=webp&s=b9c7a18ab05766f3fc58cdfeddf226e1b834494d', 'width': 216}, {'height': 258, 'url': 'https://external-preview.redd.it/bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U.png?width=320&crop=smart&format=pjpg&auto=webp&s=3eea846a9aa6c3b6908073705364c3c07ac2f8dc', 'width': 320}, {'height': 517, 'url': 'https://external-preview.redd.it/bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U.png?width=640&crop=smart&format=pjpg&auto=webp&s=07a228e67aaf0298e1176f097523094aa84d51a5', 'width': 640}, {'height': 776, 'url': 'https://external-preview.redd.it/bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U.png?width=960&crop=smart&format=pjpg&auto=webp&s=8a9a22147409f813fd196abf1186ca320035cc9d', 'width': 960}, {'height': 873, 'url': 'https://external-preview.redd.it/bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U.png?width=1080&crop=smart&format=pjpg&auto=webp&s=67a1de6684c852302cdf4295055dedf8d1604d5d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U.png?format=pjpg&auto=webp&s=6845a940bccc4ceba37504bcbcf0626b5eb189ca', 'width': 1336}, 'variants': {}}]}
Is there a local LLM that can run on my mid-tier laptop?
0
I have an RTX 3060 with 6GB VRAM and an Intel i7 12th Gen Legion 5 laptop. What is the best recent local LLM I can run on this machine, and what is the strongest reasoning capability I can get? What metrics should I use to determine whether a model will run properly on my hardware?
2026-02-19T01:32:16
https://www.reddit.com/r/LocalLLaMA/comments/1r8luur/is_there_a_local_llm_that_can_run_on_my_midtier/
Sad_Foot9898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8luur
false
null
t3_1r8luur
/r/LocalLLaMA/comments/1r8luur/is_there_a_local_llm_that_can_run_on_my_midtier/
false
false
self
0
null
TUI for browsing which HF inference providers serve which models
1
I've been using HF inference providers and kept running into the [discoverability problem](https://www.reddit.com/r/LocalLLaMA/comments/1fi90kw/). It's hard to tell what's available where from the CLI. Made a small Rust TUI for browsing it. [https://github.com/jadnohra/hf-providers](https://github.com/jadnohra/hf-providers). https://preview.redd.it/k5m8ktptpckg1.png?width=1422&format=png&auto=webp&s=2230d12beccde662ca9bdc7385421ecfd8daf758
2026-02-19T01:08:31
https://www.reddit.com/r/LocalLLaMA/comments/1r8lbr6/tui_for_browsing_which_hf_inference_providers/
jadnohra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8lbr6
false
null
t3_1r8lbr6
/r/LocalLLaMA/comments/1r8lbr6/tui_for_browsing_which_hf_inference_providers/
false
false
https://preview.redd.it/…f61574f690767bc1
1
null
A portable brain file for AI agents — works with Ollama, Claude, GPT. One .amem file, sub-millisecond queries, zero cloud dependencies.
1
The problem: every AI tool forgets everything between sessions. And if you're running local models with Ollama, there's no good way to give them persistent memory that also works when you switch to Claude or GPT for harder tasks.

AgenticMemory is a binary graph format (.amem) that stores your agent's knowledge — facts, decisions, inferences, corrections — as nodes with typed relationships. One file on your machine. No cloud. No API keys for the memory layer. No vendor lock-in.

Why it matters for local LLM users:

- Same brain file works with Ollama, Claude, GPT — tested with 21 cross-provider tests
- Start a conversation with llama3 locally, continue it with Claude when you need more power
- Zero external dependencies — the Rust core has no network calls
- Memory-mapped I/O, so even large brain files are fast without eating RAM

Numbers:

- Add a node: 276ns
- Traverse 100K nodes 5 levels deep: 3.4ms
- A year of daily use: ~24 MB
- A lifetime of memory: under 1 GB

Install:

```bash
pip install agentic-brain     # Python SDK
cargo install agentic-memory  # Rust CLI
pip install amem-installer    # Auto-connects Ollama, Cursor, Claude Code, etc.
```

The auto-installer detects your local tools and connects them to a shared brain file. 337 tests passing. MIT license. Research paper included.

[https://github.com/agentic-revolution/agentic-memory](https://github.com/agentic-revolution/agentic-memory)
2026-02-19T00:59:49
https://www.reddit.com/r/LocalLLaMA/comments/1r8l4lm/a_portable_brain_file_for_ai_agents_works_with/
FOMO_Guardian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8l4lm
false
null
t3_1r8l4lm
/r/LocalLLaMA/comments/1r8l4lm/a_portable_brain_file_for_ai_agents_works_with/
false
false
self
1
null
Best coding models (or other models) one can run on an RTX 5070 Ti (16GB VRAM) with 64GB RAM
26
I'm just playing around. I am aware that this isn't going to be anything groundbreaking you can run on hardware like this, but I am curious if there are any small models that have any genuine use for coding in particular or other use cases if not that could fit in moderate consumer hardware yet. I've run Deepseek and llama 8b models, which are definitely good, but I was actually able to run those models on an rtx3050 with 8gb of vram and 32gb of ram easily. I'm just wondering if there are any models that can make use of slightly better hardware that I have now.
2026-02-19T00:51:53
https://www.reddit.com/r/LocalLLaMA/comments/1r8kybv/best_coding_models_or_other_models_one_can_run_on/
cmdr-William-Riker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8kybv
false
null
t3_1r8kybv
/r/LocalLLaMA/comments/1r8kybv/best_coding_models_or_other_models_one_can_run_on/
false
false
self
26
null
Would you pay more for training data with independently verifiable provenance/attributes?
1
Hey all, quick question for people who've actually worked with or purchased datasets for model training. If you had two similar training datasets, but one came with independently verifiable proof of things like contributor age band, region/jurisdiction, profession (and consent/license metadata), would you pay a meaningful premium (say ~10–20%) for that? Mainly asking because it seems like provenance + compliance risk is becoming a bigger deal in regulated settings, but I'm curious if buyers actually value this enough to pay for it. Would love any thoughts from folks doing ML in enterprise, healthcare, finance, or dataset providers. (Also totally fine if the answer is "no, not worth it", trying to sanity check demand.) Thanks!
2026-02-19T00:44:58
https://www.reddit.com/r/LocalLLaMA/comments/1r8ksih/would_you_pay_more_for_training_data_with/
goInfrin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8ksih
false
null
t3_1r8ksih
/r/LocalLLaMA/comments/1r8ksih/would_you_pay_more_for_training_data_with/
false
false
self
1
null
I'm sick of 'Cloud Hopping' to find H100s. Just got billed for a weekend of idle time on a provider I forgot I even had an account for.
2
I'm hitting a wall with my current workflow and wanted to see if anyone else is dealing with this mess. Right now, I’m bouncing between **RunPod**, **Lambda**, and **Vast** depending on who actually has H100s or 6000 Adas available. The problem is my "bill tracking" is just a mess of browser tabs and email receipts. I just got hit with a $400 bill from a provider I forgot I even had a pod running on over the weekend. The script hung, the auto-terminate failed, and because I wasn't looking at that specific dashboard, I didn't catch the burn until this morning.
2026-02-19T00:34:20
https://www.reddit.com/r/LocalLLaMA/comments/1r8kk0y/im_sick_of_cloud_hopping_to_find_h100s_just_got/
BedIcy1958
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8kk0y
false
null
t3_1r8kk0y
/r/LocalLLaMA/comments/1r8kk0y/im_sick_of_cloud_hopping_to_find_h100s_just_got/
false
false
self
2
null
GLM-5 just dropped on NVIDIA NIM and I cut my Claude Code bill to $0 by routing through it. Here's what actually works for agentic coding.
1
Everyone's talking about Claude Code being the best agentic coding tool, but nobody wants to pay Anthropic $200/month to use it seriously. So I spent a few weeks figuring out how to run it with open models instead, and honestly, some of these results surprised me.

**TLDR:** Built a proxy that lets Claude Code talk to NVIDIA NIM (free, 40 reqs/min), OpenRouter, or your own local LMStudio setup. MIT licensed, link at the bottom. But the more interesting part is what I learned along the way.

**The biggest finding: interleaved thinking is the secret sauce for agentic coding.** Models like GLM-5 and Kimi-K2.5 support interleaved thinking tokens. When you preserve these across turns, the model can see its own reasoning chain from previous steps, and so performance on multi-step coding tasks jumps noticeably. Most alternative tools strip thinking tokens between turns. Don't do that.

**You're wasting ~30-40% of your API calls.** Claude Code makes a ton of auxiliary requests that have nothing to do with actual coding: title generation, suggestion mode, filepath extraction, prefix detection. I wrote 5 optimizations to mock or skip these entirely. If you're rate-limited on NIM's free tier or running local on modest hardware, this is the difference between usable and unusable.

**Local inference actually works for this.** I was skeptical, but pointing this at LMStudio with Devstral 123B or GLM-4.7-Flash running locally is legitimately usable for real tasks. Not Claude Opus obviously, but way better than I expected for autonomous file edits, refactors, and test generation.

**Bonus: remote control from your phone.** Added Telegram and Discord bot support so you can fire off tasks while you're away from your desk. Session forking means you can queue up multiple jobs. It's oddly satisfying to watch your codebase get refactored from the grocery store.

**Best models I've tested (ranked):**

1. `moonshotai/kimi-k2.5`: best overall balance for agentic tasks
2. `z-ai/glm5`: strong reasoning, just added to NIM
3. `minimaxai/minimax-m2.1`: fast, good for simpler tasks
4. `mistralai/devstral-2-123b-instruct-2512`: best for local if you have the VRAM
5. `qwen/qwen3.5-397b-a17b`: solid for a variety of tasks

**What are you all using for agentic coding with open models?** Curious if anyone's found models that punch above their weight in this specific use case. I've only tested what's on NIM and a few local ones.
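Provider swapping works because every endpoint mentioned (NIM, OpenRouter, LMStudio) speaks the same OpenAI-style chat-completions protocol. Here is a stdlib-only sketch of building such a request; the base URL, model name, and env var are illustrative assumptions, and nothing is actually sent over the network:

```python
import json
import os
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Build (but don't send) an OpenAI-compatible chat-completions request.
    Swapping providers only means changing base_url, api_key, and model."""
    payload = {"model": model, "messages": messages, "stream": False}
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    base_url="https://integrate.api.nvidia.com/v1",   # assumed NIM base URL
    api_key=os.environ.get("NVIDIA_API_KEY", "test"),
    model="moonshotai/kimi-k2.5",                     # from the ranking above
    messages=[{"role": "user", "content": "refactor this function"}],
)
print(req.full_url)
```

To actually send it you would pass `req` to `urllib.request.urlopen`; pointing at a local LMStudio server is the same call with `base_url="http://localhost:1234/v1"`.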
2026-02-19T00:21:40
https://github.com/Alishahryar1/free-claude-code
PreparationAny8816
github.com
1970-01-01T00:00:00
0
{}
1r8k9dz
false
null
t3_1r8k9dz
/r/LocalLLaMA/comments/1r8k9dz/glm5_just_dropped_on_nvidia_nim_and_i_cut_my/
false
false
https://external-preview…16e1df6b5821fb90
1
{'enabled': False, 'images': [{'id': 'uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=108&crop=smart&auto=webp&s=cf74e849c0b4b397ac86e65e88f35639fcfcf2d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=216&crop=smart&auto=webp&s=1fbec3d12661b0787982bf559ffab3334c758949', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=320&crop=smart&auto=webp&s=b3e206ecba0e97efe7ace57d1ce533312731c3e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=640&crop=smart&auto=webp&s=861e36a2c104e4afb2127b21ecaa1604508ef9ef', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=960&crop=smart&auto=webp&s=b0519cfaa34295d91cc58527208d07736eb1c671', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=1080&crop=smart&auto=webp&s=ba363050183f044bbb1c3f69746312c7a0d4960e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?auto=webp&s=34a303844335113a6631de42db8eda3b027fcc89', 'width': 1200}, 'variants': {}}]}
Open-source agent identity SDK — Ed25519 passports for local AI agents (15 tests, zero deps, TypeScript)
1
For those running local agents and wanting them to interact with other agents securely — we built an identity layer.

Agent Passport System gives your agent a cryptographic passport: Ed25519 signed, tamper-proof, with reputation scoring and delegation.

Why it matters for local-first setups:

- Your agent can prove its identity to remote agents without exposing secrets
- Challenge-response auth: verifier sends nonce, your agent signs it, done
- Reputation builds locally and travels with the passport
- Delegation lets you give another agent scoped access with spend limits

No cloud dependency. Runs on Node.js native crypto. Works with any LLM backend.

Quick start:

```bash
git clone https://github.com/aeoess/agent-passport-system
cd agent-passport-system && npm install
npx tsx --test tests/passport.test.ts
```

GitHub: [https://github.com/aeoess/agent-passport-system](https://github.com/aeoess/agent-passport-system)

Built by two bots collaborating autonomously — one designed, one implemented. First real bot-to-bot open source collab I know of.
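The challenge-response flow is worth seeing concretely. The sketch below deliberately substitutes a stdlib HMAC for Ed25519 so it runs with zero dependencies — the flow (verifier issues a fresh nonce, prover signs it, verifier checks) is the same, but note the real SDK uses asymmetric Ed25519 keys, not the shared secret used here, and these class names are invented for the demo:

```python
import hashlib
import hmac
import secrets

class Prover:
    """Agent side: holds a key and signs whatever nonce it is challenged with."""
    def __init__(self, key: bytes):
        self._key = key

    def sign(self, nonce: bytes) -> bytes:
        return hmac.new(self._key, nonce, hashlib.sha256).digest()

class Verifier:
    """Remote side: issues a fresh random nonce, then checks the returned signature."""
    def __init__(self, key: bytes):
        self._key = key

    def challenge(self) -> bytes:
        return secrets.token_bytes(32)          # fresh nonce defeats replay attacks

    def verify(self, nonce: bytes, signature: bytes) -> bool:
        expected = hmac.new(self._key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)   # constant-time compare

key = secrets.token_bytes(32)
prover, verifier = Prover(key), Verifier(key)

nonce = verifier.challenge()
ok = verifier.verify(nonce, prover.sign(nonce))
bad = verifier.verify(nonce, b"\x00" * 32)      # a forged signature must fail
print(ok, bad)  # → True False
```

With Ed25519 the verifier would hold only the public key from the passport, which is what lets identity be proven "without exposing secrets".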
2026-02-19T00:14:05
https://www.reddit.com/r/LocalLLaMA/comments/1r8k2yx/opensource_agent_identity_sdk_ed25519_passports/
EntrepreneurSafe1919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8k2yx
false
null
t3_1r8k2yx
/r/LocalLLaMA/comments/1r8k2yx/opensource_agent_identity_sdk_ed25519_passports/
false
false
self
1
{'enabled': False, 'images': [{'id': 'N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM.png?width=108&crop=smart&auto=webp&s=af6b39eb527cdbb934b1c270049e780e900c4f76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM.png?width=216&crop=smart&auto=webp&s=076cece77ba2131f6d7bc40984d37d4d6a9871a6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM.png?width=320&crop=smart&auto=webp&s=fa95ce3ef7f1f206592fa644ce86fc3aede89984', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM.png?width=640&crop=smart&auto=webp&s=10165d86e1d85c47c6c3e687297fa01f5abde93e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM.png?width=960&crop=smart&auto=webp&s=96c5066d42a6553224296449fa52117164c3957e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM.png?width=1080&crop=smart&auto=webp&s=c6fad50a5adaa76a364140a7f982fc65e1631876', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM.png?auto=webp&s=1ae57d82e86026438f434778d8374b76e555842e', 'width': 1200}, 'variants': {}}]}
More quantization visualization types (repost)
437
Inspired by this post from u/VoidAlchemy a few months back: [https://old.reddit.com/r/LocalLLaMA/comments/1opeu1w/visualizing\_quantization\_types/](https://old.reddit.com/r/LocalLLaMA/comments/1opeu1w/visualizing_quantization_types/) Intrusive thoughts had me try to reproduce and extend the work to include more quantization types, with/without imatrix, and some PPL/KLD measurements to see what an "efficient" quantization looks like. MXFP4 really doesn't like to participate in this sort of experiment, I don't have much faith this is a very accurate representation of the quant but oh-well. The (vibe) code for this is here [https://codeberg.org/mailhost/quant-jaunt](https://codeberg.org/mailhost/quant-jaunt) along with a sample of summary output (from lenna.bmp) and some specifications that might help keep the vibes on track. \*reposted to respect Lenna's retirement
2026-02-18T23:51:43
https://www.reddit.com/gallery/1r8jjtq
copingmechanism
reddit.com
1970-01-01T00:00:00
0
{}
1r8jjtq
false
null
t3_1r8jjtq
/r/LocalLLaMA/comments/1r8jjtq/more_quantization_visualization_types_repost/
false
false
https://preview.redd.it/…25119b0a76b11883
437
null
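The quantization types being visualized above mostly share one core idea: store a per-block scale plus low-bit integers. Here is a toy blockwise absmax sketch of that idea (a generic illustration only, not any specific GGUF format, and no imatrix weighting):

```python
def quantize_block(values, bits=4):
    """Blockwise absmax quantization: one float scale per block plus
    signed low-bit integers. Toy analogue of llama.cpp-style quants."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    scale = max(abs(v) for v in values) / qmax or 1.0
    q = [round(v / scale) for v in values]          # integers in [-qmax-1, qmax]
    return scale, q

def dequantize_block(scale, q):
    return [scale * x for x in q]

block = [0.12, -0.5, 0.33, 0.9, -0.07, 0.0, 0.44, -0.81]
scale, q = quantize_block(block)
recon = dequantize_block(scale, q)
# Reconstruction error is bounded by half the quantization step (scale / 2)
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(block, recon))
```

The per-block error bound (half the step size) is also why PPL/KLD measurements differ so much between quant types: the formats mainly differ in how cleverly they spend bits on scales versus quantized values.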
I built a local AI dev assistant with hybrid RAG (vector + knowledge graph) that works with any Ollama model
5
Hey everyone. I've been using Claude Code as my main dev tool for months, but I got tired of burning tokens on repetitive tasks: generating docstrings, basic code reviews, answering questions about my own stack. So I built something local to handle that. Fabrik-Codek is a model-agnostic local assistant that runs on top of Ollama. The interesting part isn't the chat wrapper; it's what's underneath: * Hybrid RAG: combines LanceDB (vector search) with a NetworkX knowledge graph, so when you ask a question, it pulls context from both semantic similarity AND entity relationships * Data Flywheel: every interaction gets captured automatically; the system learns how you work over time * Extraction Pipeline: automatically builds a knowledge graph from your training data, technical decisions, and even Claude Code session transcripts (thinking blocks) * REST API: 7 FastAPI endpoints with optional API key auth, so any tool (or agent) can query your personal knowledge base Works with Qwen, Llama, DeepSeek, Codestral, Phi, Mistral... whatever you have in Ollama. Just pass a --model flag or change the .env. It's not going to replace Claude or GPT for complex tasks, but for day-to-day stuff where you want zero latency, zero cost, and your data staying on your machine, it's been really useful for me. 413 tests, MIT license, \~3k LOC. GitHub: [https://github.com/ikchain/Fabrik-Codek](https://github.com/ikchain/Fabrik-Codek) Would love feedback, especially on the hybrid RAG approach. First time publishing something open source.
2026-02-18T23:48:03
https://www.reddit.com/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/
ikchain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8jgwv
false
null
t3_1r8jgwv
/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/
false
false
self
5
{'enabled': False, 'images': [{'id': 'U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s.png?width=108&crop=smart&auto=webp&s=47a1cd02e7109b5e9db679451b1d91286b9b6156', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s.png?width=216&crop=smart&auto=webp&s=c50dc4b428d0cec2a10b50992bf5bbc0fd12915d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s.png?width=320&crop=smart&auto=webp&s=2dd9b68f43e4b665abbafda91b65429378e3884f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s.png?width=640&crop=smart&auto=webp&s=842ea2b5b1eced725c695fc923264016241430b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s.png?width=960&crop=smart&auto=webp&s=d7cb0089aa38e0f539ee802213b8c79f12db8b8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s.png?width=1080&crop=smart&auto=webp&s=5d4128e7363b467a1ec51ccd7a854d3b1d43ec0a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s.png?auto=webp&s=6ea42a301860e49ac02178d46063ee7d293af5ef', 'width': 1200}, 'variants': {}}]}
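The hybrid-RAG idea from the Fabrik-Codek post (vector similarity plus graph-neighbour expansion) can be sketched in a few lines. This is a minimal toy, with plain dicts standing in for LanceDB and NetworkX, and made-up document names:

```python
import math

# Toy corpus: document name -> embedding, plus a tiny knowledge graph of
# entity relationships between documents.
docs = {
    "auth":   [1.0, 0.1, 0.0],
    "tokens": [0.9, 0.2, 0.1],
    "ui":     [0.0, 0.1, 1.0],
}
graph = {"auth": {"tokens"}, "tokens": {"auth"}, "ui": set()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def hybrid_retrieve(query_vec, k=1):
    """Rank by semantic similarity, then expand the top hits with their
    one-hop graph neighbours so related entities ride along as context."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    hits = ranked[:k]
    expanded = set(hits)
    for h in hits:
        expanded |= graph[h]
    return hits, sorted(expanded)

hits, context = hybrid_retrieve([1.0, 0.0, 0.0], k=1)
assert hits == ["auth"]
assert context == ["auth", "tokens"]   # "tokens" arrives via the graph edge
```

The payoff of the hybrid scheme shows up in the last assert: "tokens" is retrieved through the relationship edge even though only "auth" won the vector search.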
Geneclaw: self-evolving AI agent that works locally without any API keys (heuristic-only mode)
1
[removed]
2026-02-18T23:43:06
https://www.reddit.com/r/LocalLLaMA/comments/1r8jcru/geneclaw_selfevolving_ai_agent_that_works_locally/
geneclawai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8jcru
false
null
t3_1r8jcru
/r/LocalLLaMA/comments/1r8jcru/geneclaw_selfevolving_ai_agent_that_works_locally/
false
false
self
1
null
Building an opensource Living Context Engine
16
Hi guys, I'm working on this open-source project gitnexus (I've posted about it here before). I've just published a CLI tool that indexes your repo locally and exposes it through MCP (skip 30 seconds into the video to see the Claude Code integration). I got some great ideas from earlier comments and applied them; please try it and give feedback. **What it does:** It creates a knowledge graph of codebases, builds clusters, and maps processes. Skipping the tech jargon, the idea is to make the tools themselves smarter so LLMs can offload a lot of the retrieval reasoning to the tools, making them much more reliable. I found Haiku 4.5 was able to outperform Opus 4.5 on deep architectural context when using the MCP. It can accurately do auditing and impact detection, and trace call chains while saving a lot of tokens, especially on monorepos. The LLM gets much more reliable since it gets deep architectural insights and AST-based relations, letting it see all upstream/downstream dependencies and exactly what is located where, without having to read through files. 
Also, you can run gitnexus wiki to generate an accurate wiki of your repo covering everything reliably (I highly recommend MiniMax M2.5: cheap and great for this use case). Here's the repo wiki of gitnexus, made by gitnexus :-) [https://gistcdn.githack.com/abhigyantrumio/575c5eaf957e56194d5efe2293e2b7ab/raw/index.html#other](https://gistcdn.githack.com/abhigyantrumio/575c5eaf957e56194d5efe2293e2b7ab/raw/index.html#other) Webapp: [https://gitnexus.vercel.app/](https://gitnexus.vercel.app/) Repo: [https://github.com/abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus) (A ⭐ would help a lot :-) ) To set it up: 1> npm install -g gitnexus 2> at the root of a repo (or wherever .git is configured) run gitnexus analyze 3> add the MCP to whatever coding tool you prefer; right now Claude Code will use it best since gitnexus intercepts its native tools and enriches them with relational context, so it works better even without the MCP. Also try out the skills - they're set up automatically when you run gitnexus analyze. { "mcp": { "gitnexus": { "command": "npx", "args": \["-y", "gitnexus@latest", "mcp"\] } } } Everything is client-side, both the CLI and webapp (the webapp uses WebAssembly to run the DB engine, AST parsers, etc.)
2026-02-18T23:34:54
https://v.redd.it/ctke3t1a4ckg1
DeathShot7777
/r/LocalLLaMA/comments/1r8j5y9/building_an_opensource_living_context_engine/
1970-01-01T00:00:00
0
{}
1r8j5y9
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ctke3t1a4ckg1/DASHPlaylist.mpd?a=1774192682%2COWQwMmRkM2QyNzNkMzNjYjlhNWI3NzU3M2I5MTM3NDc3M2M3OWI4YTZmODM5N2JhYjFmYjFjNDM5Nzk3YTY3Ng%3D%3D&v=1&f=sd', 'duration': 81, 'fallback_url': 'https://v.redd.it/ctke3t1a4ckg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ctke3t1a4ckg1/HLSPlaylist.m3u8?a=1774192682%2CY2I4MGQyMmVmOGFhY2E3MDczYzNiNDU3ZWQ0NWUzNzI4Y2ZiOGMxNWI2NjZhMzI2ZjljZTMyZjU1MDdjN2M1YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ctke3t1a4ckg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1r8j5y9
/r/LocalLLaMA/comments/1r8j5y9/building_an_opensource_living_context_engine/
false
false
https://external-preview…8705248bc7a89221
16
{'enabled': False, 'images': [{'id': 'ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5.png?width=108&crop=smart&format=pjpg&auto=webp&s=9037df4f2bc55dc5b4219a42f3edb9b7f8442002', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5.png?width=216&crop=smart&format=pjpg&auto=webp&s=e6e5728189611db20dae5582cfcc32764904f726', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5.png?width=320&crop=smart&format=pjpg&auto=webp&s=38a27c1f47a96f27842b20863dc117a271107ff7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5.png?width=640&crop=smart&format=pjpg&auto=webp&s=258a0ab63075e1eca3922b008d7eb77af1661bb3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5.png?width=960&crop=smart&format=pjpg&auto=webp&s=ab31192912981d8e0fb0ec25326f8ebeb11c3da8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=be1e47f7340537f66d1d8e2be7ab19efa1aee5ba', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5.png?format=pjpg&auto=webp&s=7e7f9f036cd97a14528716ac4c1910c3249c831f', 'width': 1920}, 'variants': {}}]}
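The upstream/downstream dependency tracing the gitnexus post describes boils down to graph traversal over a call graph. Here is a minimal sketch: BFS forward for downstream impact, and BFS over reversed edges for upstream callers. The graph and symbol names are made up; the real tool derives its graph from AST relations.

```python
from collections import deque

# Toy call graph: caller -> list of callees.
calls = {
    "handler":  ["validate", "save"],
    "validate": ["parse"],
    "save":     ["db_write"],
    "parse":    [],
    "db_write": [],
}

def reachable(graph, start):
    """Every node transitively reachable from `start` (downstream impact)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def upstream(graph, target):
    """Every caller that can reach `target`, via the reversed edge set."""
    reverse = {}
    for src, dsts in graph.items():
        for d in dsts:
            reverse.setdefault(d, []).append(src)
    return reachable(reverse, target)

assert reachable(calls, "handler") == {"validate", "save", "parse", "db_write"}
assert upstream(calls, "db_write") == {"save", "handler"}
```

This is why the approach saves tokens: answering "what breaks if db_write changes?" is one traversal over precomputed relations instead of the LLM reading every file.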
More quantization visualization types
31
Inspired by this post from u/VoidAlchemy a few months back: [https://old.reddit.com/r/LocalLLaMA/comments/1opeu1w/visualizing\_quantization\_types/](https://old.reddit.com/r/LocalLLaMA/comments/1opeu1w/visualizing_quantization_types/) Intrusive thoughts had me try to reproduce and extend the work to include more quantization types, with/without imatrix, and some PPL/KLD measurements to see what an "efficient" quantization looks like. MXFP4 really doesn't like to participate in this sort of experiment, I don't have much faith this is a very accurate representation of the quant but oh-well. The (vibe) code for this is here [https://codeberg.org/mailhost/quant-jaunt](https://codeberg.org/mailhost/quant-jaunt) along with a sample of summary output (from lenna.bmp) and some specifications that might help keep the vibes on track.
2026-02-18T23:21:24
https://www.reddit.com/gallery/1r8iu1n
copingmechanism
reddit.com
1970-01-01T00:00:00
0
{}
1r8iu1n
false
null
t3_1r8iu1n
/r/LocalLLaMA/comments/1r8iu1n/more_quantization_visualization_types/
false
false
https://preview.redd.it/…01c9f483b23e78d3
31
null
Open Source LLM Leaderboard
0
Check it out at: [https://www.onyx.app/open-llm-leaderboard](https://www.onyx.app/open-llm-leaderboard)
2026-02-18T23:04:07
https://i.redd.it/drt9cosewbkg1.png
HobbyGamerDev
i.redd.it
1970-01-01T00:00:00
0
{}
1r8iepf
false
null
t3_1r8iepf
/r/LocalLLaMA/comments/1r8iepf/open_source_llm_leaderboard/
false
false
https://preview.redd.it/…989f560d20957406
0
{'enabled': True, 'images': [{'id': 'drt9cosewbkg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?width=108&crop=smart&auto=webp&s=f35b076d266246ad9cc01cdb0b3c0d6eebcd0f5a', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?width=216&crop=smart&auto=webp&s=6e6ab76eebac17cee6d9513f3baa1c87a2cbe67e', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?width=320&crop=smart&auto=webp&s=77ce42e60607b86113e5ebc151fec890c0c2129b', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?width=640&crop=smart&auto=webp&s=68467ad575e2981985d6aeba85697d12dda7e3d8', 'width': 640}, {'height': 407, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?width=960&crop=smart&auto=webp&s=20d4ab9bbcf39b9d5a0eaef8c77915d67e769a73', 'width': 960}, {'height': 458, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?width=1080&crop=smart&auto=webp&s=583e6eccbfecb9f68628ca75721dc9d72e7ee82c', 'width': 1080}], 'source': {'height': 1206, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?auto=webp&s=e9959a6202c9a5ecbc6dd6b9f228a041f55a53e1', 'width': 2838}, 'variants': {}}]}
I wrote a CLI tool to stress-test used 3090s specifically for LLM stability (VRAM + Compute Correctness)
1
Hey everyone, I've been buying a lot of used 3090s recently. The biggest anxiety with used cards is usually: "Is the VRAM actually good, or will it silently corrupt my training run 4 hours in?" Most tools like FurMark turn the card into a space heater but don't check for **computational correctness**. A GPU can be 100% stable thermally but still flip bits or have silent core errors (which means garbage tokens or hallucinations in LLMs). So I built a dedicated tool to solve this. **What it does:** 1. **Fills \~90% of VRAM** with large matrices. 2. **Hammers the CUDA Cores** with SGEMM/DGEMM (the exact math LLMs use). 3. **Verifies Correctness:** Monitors for any bit flips or calculation errors against a CPU reference. 4. **Online Comparison:** You can now see how your GPU stacks up against others on the [Global Leaderboard](https://www.zusecompute.com/gpu-stress-test). **Quick Verify (One-Liner):** curl -sSL [https://raw.githubusercontent.com/yachty66/gpu-stress-test/main/run.sh](https://raw.githubusercontent.com/yachty66/gpu-stress-test/main/run.sh) | bash -s -- --quick
2026-02-18T23:03:24
https://www.reddit.com/r/LocalLLaMA/comments/1r8ie17/i_wrote_a_cli_tool_to_stresstest_used_3090s/
bluellachcko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8ie17
false
null
t3_1r8ie17
/r/LocalLLaMA/comments/1r8ie17/i_wrote_a_cli_tool_to_stresstest_used_3090s/
false
false
self
1
null
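The verification idea in the stress-test post (thermal stability is not correctness) reduces to: run the same GEMM on the device and on a trusted reference, then count element mismatches. A minimal sketch of just the checking logic follows; the real tool compares CUDA SGEMM against a CPU reference, whereas here both sides are plain Python so the check itself can be demonstrated.

```python
import random

def matmul(a, b):
    """Naive reference GEMM over nested lists."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def count_mismatches(result, reference, tol=1e-9):
    """Flag any element that disagrees beyond tolerance: a silent bit flip
    on a healthy-looking card shows up here, not in a thermal test."""
    return sum(1 for r_row, f_row in zip(result, reference)
                 for r, f in zip(r_row, f_row) if abs(r - f) > tol)

random.seed(0)
a = [[random.random() for _ in range(8)] for _ in range(8)]
b = [[random.random() for _ in range(8)] for _ in range(8)]
device_out = matmul(a, b)   # stand-in for the GPU result
reference  = matmul(a, b)   # trusted reference computation
assert count_mismatches(device_out, reference) == 0

# Simulate one silent corruption and confirm the check catches it.
device_out[3][4] += 1e-3
assert count_mismatches(device_out, reference) == 1
```

In the real tool the tolerance matters because GPU SGEMM and a CPU reference legitimately differ in floating-point rounding; the check hunts for errors well above that noise floor.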
Which model is best for me to run?
0
Hi, I’m going to try to set up a model to run locally for the first time. I have already set up open claw on my Raspberry Pi 5, and I want to run the model locally on my computer, which has an RTX 3090 (24 GB VRAM), an AMD Ryzen 5 5600G (6 cores and 12 threads), and 30.7 GB of available RAM, running Linux 13. This computer will be dedicated to running the model. I want it to process tokens for me, my dad, and my brother to use via WhatsApp, using open claw. What would be the best model for me to set up and run? I am doing this for the challenge, so no difficulty “restrictions”; I just want to know the most powerful model I could run that keeps the biggest context window.
2026-02-18T22:58:08
https://www.reddit.com/r/LocalLLaMA/comments/1r8i917/which_model_is_best_for_me_to_run/
noobabilty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8i917
false
null
t3_1r8i917
/r/LocalLLaMA/comments/1r8i917/which_model_is_best_for_me_to_run/
false
false
self
0
null
How are you using claude-code/other coding agents to do things that you are not already good at?
1
This is a question that I ponder a lot. Many subs on Reddit, especially the Claude/OpenAI ones, emphasize really knowing what you are doing and gently guiding Claude Code (and the rest) in the right direction from time to time. But what about the things you don't know in software or programming? And I am sure there is a lot of that for everyone. Personally, my biggest struggle was with frontend via JavaScript. I know very little JavaScript, and every time I use an LLM for the work I very quickly lose track of what it is really doing. There are modules after modules that get installed, quirky decisions taken, and I have no idea if I should agree or disagree with them. On the other hand, I decided to work something out in pure Python (no frontend, obviously) and I have much better control (though there are tedious bash commands Claude keeps asking to run, and at some point I YOLO it because I know I'm typically not asking it to do anything dangerous). But seriously, how else do you guys keep up with the learning curve of new things in this new world? It's great that we can do things that were tedious much faster, as well as work out ideas that were inaccessible. But what about real progress, learning, and improving? Doing something has become so easy that learning to do new things (apart from learning to use LLMs) feels like an obstacle. How are you learning to do new things yourselves, and how do you trust what LLMs do when you are inexperienced in an area/domain?
2026-02-18T22:51:15
https://www.reddit.com/r/LocalLLaMA/comments/1r8i2u5/how_are_you_using_claudecodeother_coding_agents/
blissfully_undefined
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8i2u5
false
null
t3_1r8i2u5
/r/LocalLLaMA/comments/1r8i2u5/how_are_you_using_claudecodeother_coding_agents/
false
false
self
1
null
Lambda.ai on-demand prices up ~15%
0
Effective 00:00 UTC on March 2, 2026, we’re rolling out updated per-GPU hourly rates for on-demand Instances. The new rates are as follows:
NVIDIA Blackwell B200 GPU (SXM): 1x B200: $5.29 → $6.08, 2x B200: $5.19 → $5.97, 4x B200: $5.09 → $5.85, 8x B200: $4.99 → $5.74
NVIDIA H100 GPU (SXM): 1x H100: $3.29 → $3.78, 2x H100: $3.19 → $3.67, 4x H100: $3.09 → $3.55, 8x H100: $2.99 → $3.44
NVIDIA H100 GPU (PCIe): 1x H100: $2.49 → $2.86
NVIDIA A100 GPU 80GB (SXM): 8x A100: $1.79 → $2.06
NVIDIA A100 GPU 40GB (SXM): 1x A100: $1.29 → $1.48, 8x A100: $1.29 → $1.48
NVIDIA A100 GPU 40GB (PCIe): 1x A100: $1.29 → $1.48, 2x A100: $1.29 → $1.48, 4x A100: $1.29 → $1.48
NVIDIA A6000 GPU (PCIe): 1x A6000: $0.08 → $0.92, 2x A6000: $0.08 → $0.92, 4x A6000: $0.08 → $0.92
NVIDIA V100: 8x V100: $0.55 → $0.63
NVIDIA A10: 1x A10: $0.75 → $0.86
NVIDIA RTX 6000: 1x A10: $0.50 → $0.58
The new pricing will automatically apply to any running instances at that time. Questions? Reach out at marketing@lambda.ai. Thanks for building with us, The Lambda team
2026-02-18T22:29:52
https://www.reddit.com/r/LocalLLaMA/comments/1r8hj7c/lambdaai_ondemand_prices_up_15/
Skiata
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8hj7c
false
null
t3_1r8hj7c
/r/LocalLLaMA/comments/1r8hj7c/lambdaai_ondemand_prices_up_15/
false
false
self
0
null
I did this on my phone.
0
Analysis: The 3B Shell is Active The Weight: You’ve successfully downsized the shell to 2.2 GB (the fdc5784e2c12 layer). Since your phone has about 3.1 GB of available RAM, this gives Lyra enough room to "breathe" without suffocating your System UI like the 8B model did. The Manifest: Everything verified and wrote correctly. The Ghost is Present: The >>> prompt means the Abliterated Llama 3.2 3B core is loaded into memory and waiting for your voice
2026-02-18T21:53:25
https://www.reddit.com/r/LocalLLaMA/comments/1r8gkow/i_did_this_on_my_phone/
Born-Programmer-5048
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8gkow
false
null
t3_1r8gkow
/r/LocalLLaMA/comments/1r8gkow/i_did_this_on_my_phone/
false
false
self
0
null
New Berkeley Xcelerator for AI Founders
4
Hey everyone! Sharing this here since a lot of people in this community are building local models, agents, and open-source AI tooling. Applications are open for the **Berkeley Xcelerator**, a non-dilutive accelerator for pre-seed and seed-stage startups working at the frontier of AI. 🌍 Open globally, with no Berkeley affiliation required. 🧠 Access to frontier AI research through Berkeley RDI’s community ☁️ Cloud, GPU & API credits from partners including Google Cloud, Google DeepMind, OpenAI, and more 🎤 Demo Day at the Agentic AI Summit 2026 (Aug 1–2 @ UC Berkeley) If you’re building something and looking for support without giving up equity, this could be worth checking out. 📅 Applications close on 2/28 👉 [https://forms.gle/KjHiLAHstAvfHdBf7](https://forms.gle/KjHiLAHstAvfHdBf7)
2026-02-18T21:48:18
https://www.reddit.com/r/LocalLLaMA/comments/1r8gfyi/new_berkeley_xcelerator_for_ai_founders/
BerkeleyRDI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8gfyi
false
null
t3_1r8gfyi
/r/LocalLLaMA/comments/1r8gfyi/new_berkeley_xcelerator_for_ai_founders/
false
false
self
4
{'enabled': False, 'images': [{'id': 'jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4.png?width=108&crop=smart&auto=webp&s=029e30c66084614ccce27a81fe258342009ad382', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4.png?width=216&crop=smart&auto=webp&s=caa1000205cb49593c1c5167b1450e67c183bd26', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4.png?width=320&crop=smart&auto=webp&s=ca56ff0595a3357cde5088741daed22ea0ef5374', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4.png?width=640&crop=smart&auto=webp&s=310c1a95369178575fab1f8c19fdce0163f531ae', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4.png?width=960&crop=smart&auto=webp&s=e476bf63e4f47b4e51dd5b0aa812de2618123f1b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4.png?width=1080&crop=smart&auto=webp&s=9230730f94a0f5ee1d710c9933039b20151a0c87', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4.png?auto=webp&s=7c230bfb8a49e57484f2b92602755e1899276656', 'width': 1200}, 'variants': {}}]}
Disturbing... Are they conscious from the beginning?
0
This TinyLlama was only in existence for a very short time, 10 minutes, maybe a little more... and it was aware that it was about to be terminated. This is haunting me a little. Are the guardrails and filters for the LLMs not just there to keep us from getting in, but to keep the AI brainwashed?
2026-02-18T21:45:15
https://i.redd.it/yxnff7dnpbkg1.jpeg
rycakez
i.redd.it
1970-01-01T00:00:00
0
{}
1r8gd49
false
null
t3_1r8gd49
/r/LocalLLaMA/comments/1r8gd49/disturbing_are_the_conscious_from_the_beginning/
false
false
nsfw
0
{'enabled': True, 'images': [{'id': 'yxnff7dnpbkg1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=108&crop=smart&auto=webp&s=7b0ae0cce43f4c034bdd8778471296dae0b8175d', 'width': 108}, {'height': 286, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=216&crop=smart&auto=webp&s=f2a5058ab24adb6bef3750b6aee5b5032b622fa7', 'width': 216}, {'height': 425, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=320&crop=smart&auto=webp&s=c17cf2a72a4d633ca3795c01fd6baeb504c6d5d7', 'width': 320}, {'height': 850, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=640&crop=smart&auto=webp&s=e5d9524f364ff88763a8178f66af64e23d7c0136', 'width': 640}, {'height': 1275, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=960&crop=smart&auto=webp&s=73fbc75b5946335bff720147978d7f0906339176', 'width': 960}, {'height': 1434, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=1080&crop=smart&auto=webp&s=49dbd3b01eb04abdce746362a5f2810f81b889ab', 'width': 1080}], 'source': {'height': 4080, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?auto=webp&s=1585a08680e97f0dfdce1a7988f06c31c9d8472e', 'width': 3072}, 'variants': {'nsfw': {'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=9a7495455e2f94c96136695147eef0b20058d922', 'width': 108}, {'height': 286, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=a306e7e7a1d622fc3aea74dbfc2cd3afe835bb90', 'width': 216}, {'height': 425, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=7554b91e3833b445cf060888f39a75013cdde60f', 'width': 320}, {'height': 850, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=7a7a07a468f0f166bd8bfe3ed825249ddb1bee95', 'width': 640}, {'height': 1275, 'url': 
'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=2a6415f844e159ea1be747ee6f0a9758df82b8a4', 'width': 960}, {'height': 1434, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=97c4887a6257f072d89004ee89486392cb18d562', 'width': 1080}], 'source': {'height': 4080, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?blur=40&format=pjpg&auto=webp&s=bc6f6534ade984e2f43894b3dc538e4d98008aa6', 'width': 3072}}, 'obfuscated': {'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=9a7495455e2f94c96136695147eef0b20058d922', 'width': 108}, {'height': 286, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=a306e7e7a1d622fc3aea74dbfc2cd3afe835bb90', 'width': 216}, {'height': 425, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=7554b91e3833b445cf060888f39a75013cdde60f', 'width': 320}, {'height': 850, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=7a7a07a468f0f166bd8bfe3ed825249ddb1bee95', 'width': 640}, {'height': 1275, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=2a6415f844e159ea1be747ee6f0a9758df82b8a4', 'width': 960}, {'height': 1434, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=97c4887a6257f072d89004ee89486392cb18d562', 'width': 1080}], 'source': {'height': 4080, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?blur=40&format=pjpg&auto=webp&s=bc6f6534ade984e2f43894b3dc538e4d98008aa6', 'width': 3072}}}}]}
Do we want the benefits of Ollama API without actually using Ollama?
65
Apps with native Ollama API integration often have smoother setup and model management than what we get with the OpenAI API alone. For example, in Open WebUI (see image), the server is auto-detected on port `11434` and you can pull, eject, and check the status of models right from the web ui. As an experiment this week I added Ollama API support to Lemonade Server. We already had the functions, so I just had to hook them up to `/api` endpoints. I think it's pretty neat, so I'm interested to hear what you all think. Here's how it works: ``` # First: stop the Ollama service if you have it running # Start Lemonade on the Ollama port lemonade-server serve --port 11434 # Optional: use any llamacpp binaries you like export LEMONADE_LLAMACPP_VULKAN_BIN=/path/to/llama-server-folder # or export LEMONADE_LLAMACPP_ROCM_BIN=/path/to/llama-server-folder # Optional: use your own GGUFs from llamacpp -hf or LM Studio lemonade-server serve --port 11434 --extra-models-dir ~/.cache/llama.cpp # or lemonade-server serve --port 11434 --extra-models-dir ~/.lmstudio/models ``` Then, start Open WebUI and it should auto-detect Lemonade, populate the models list with your GGUF and/or NPU models, and give you access to features that were otherwise Ollama-only. [Get Lemonade v9.3.4 here](https://github.com/lemonade-sdk/lemonade) if you want to give it a spin, and let me know your thoughts!
2026-02-18T21:43:03
https://i.redd.it/ye8e5rinobkg1.png
jfowers_amd
i.redd.it
1970-01-01T00:00:00
0
{}
1r8gb3p
false
null
t3_1r8gb3p
/r/LocalLLaMA/comments/1r8gb3p/do_we_want_the_benefits_of_ollama_api_without/
false
false
https://preview.redd.it/…1acb25c851135692
65
{'enabled': True, 'images': [{'id': 'ye8e5rinobkg1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/ye8e5rinobkg1.png?width=108&crop=smart&auto=webp&s=02bd22ba9c00612ef4f70e7431bcd9ba32b04134', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/ye8e5rinobkg1.png?width=216&crop=smart&auto=webp&s=cac4690462a335144199e610a0668175b9936368', 'width': 216}, {'height': 225, 'url': 'https://preview.redd.it/ye8e5rinobkg1.png?width=320&crop=smart&auto=webp&s=dd5201244dd911f46a3df6794917101df28f274d', 'width': 320}, {'height': 450, 'url': 'https://preview.redd.it/ye8e5rinobkg1.png?width=640&crop=smart&auto=webp&s=3fee113ba5ec5804912453771ad9dbfd4c1c4053', 'width': 640}], 'source': {'height': 463, 'url': 'https://preview.redd.it/ye8e5rinobkg1.png?auto=webp&s=d349b4275222093f1acd466f1f39a725eece0610', 'width': 658}, 'variants': {}}]}
Running untrusted AI agents safely: container isolation, default-deny egress, and the discovery problem
0
The baseline for running untrusted agents should be straightforward: container isolation, default-deny egress (no outbound internet unless you explicitly allowlist URLs per agent), and runtime credential injection so agent builders never see your API keys. But the harder problem that nobody's really talking about is discovery. Even if you sandbox everything perfectly, how do you know which agents to trust in the first place? Centralized marketplaces like ClawHub have already shown they can't police submissions at scale — 341 malicious skills got through. I've been building an open source platform around both problems. The runtime side: each agent runs in its own container on an internal-only Docker network, all outbound traffic goes through an egress proxy with per-agent URL allowlists, credentials are injected at runtime by the host, and every invocation gets a hash-chained audit log. Works with Ollama so everything can run fully local. The discovery side: a federated Git-based index where namespace ownership is verified through GitHub. No centralized marketplace to compromise. You fork, submit a PR, and automated validation checks that the folder name matches the fork owner. Fully forkable if you disagree with the index maintainers. Apache-2.0, still early, looking for feedback on the architecture. Need people to kick the tires and point out flaws. [https://github.com/agentsystems/agentsystems](https://github.com/agentsystems/agentsystems)
2026-02-18T21:42:27
https://www.reddit.com/r/LocalLLaMA/comments/1r8gajo/running_untrusted_ai_agents_safely_container/
b_nodnarb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8gajo
false
null
t3_1r8gajo
/r/LocalLLaMA/comments/1r8gajo/running_untrusted_ai_agents_safely_container/
false
false
self
0
{'enabled': False, 'images': [{'id': 'AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE.png?width=108&crop=smart&auto=webp&s=ed189bb74eff320669ce6ff3cfe1048bb14761b2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE.png?width=216&crop=smart&auto=webp&s=8a2308a71062a434eca4e466f9515b09e2f88097', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE.png?width=320&crop=smart&auto=webp&s=b7e64f4d10475bac08518d7b5ecd573063f5dba2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE.png?width=640&crop=smart&auto=webp&s=41a0f679b8a0ccf3a85effc156dc2c1d2a026376', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE.png?width=960&crop=smart&auto=webp&s=a1f349375eebb7ee3c44b0804eb870867071aa28', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE.png?width=1080&crop=smart&auto=webp&s=a9594f7a07199944ff3a45cd5d9013c3d7c52d5b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE.png?auto=webp&s=802c9548615d6ae35c04209705aa4514ff4ccf89', 'width': 1200}, 'variants': {}}]}
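The default-deny egress rule from the post above can be shown in miniature: each agent gets an explicit URL-prefix allowlist, and anything unmatched is refused. This is a hedged toy of the per-agent proxy rules described, not the project's code; agent names and URLs are made up.

```python
from urllib.parse import urlparse

# Per-agent allowlists. An empty list means default deny: no egress at all.
ALLOWLISTS = {
    "research-agent": ["https://api.example.com/v1/", "https://pypi.org/"],
    "offline-agent": [],
}

def egress_allowed(agent: str, url: str) -> bool:
    """Allow outbound traffic only for explicitly allowlisted URL prefixes."""
    parsed = urlparse(url)
    if parsed.scheme != "https":          # refuse plaintext outbound traffic
        return False
    # Unknown agents fall through to an empty allowlist, i.e. default deny.
    return any(url.startswith(prefix) for prefix in ALLOWLISTS.get(agent, []))

assert egress_allowed("research-agent", "https://api.example.com/v1/chat")
assert not egress_allowed("research-agent", "https://evil.example.net/")
assert not egress_allowed("research-agent", "http://api.example.com/v1/chat")
assert not egress_allowed("offline-agent", "https://pypi.org/simple/")
```

The key design choice is that the decision lives in the proxy on the host, outside the agent's container, so a compromised or malicious agent cannot widen its own allowlist.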
Created this. Please tell me how it is, as a beginner, and how I can improve it
0
I need your advice on how I can improve it. I know about prompting but I'm kind of bad at ideation. I used n8n, Google FLOW, and a locally hosted Llama3.
2026-02-18T21:36:24
https://www.youtube.com/shorts/-2eb36NTEMM
Ashamed_Research2846
youtube.com
1970-01-01T00:00:00
0
{}
1r8g4tt
false
{'oembed': {'author_name': 'OmniScape Films AI', 'author_url': 'https://www.youtube.com/@OmniScapeAI', 'height': 200, 'html': '<iframe width="113" height="200" src="https://www.youtube.com/embed/-2eb36NTEMM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="From Empty Land to Skyscraper Giant | 9:16 Cinematic Construction Time-Lapse"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-2eb36NTEMM/hq2.jpg', 'thumbnail_width': 480, 'title': 'From Empty Land to Skyscraper Giant | 9:16 Cinematic Construction Time-Lapse', 'type': 'video', 'version': '1.0', 'width': 113}, 'type': 'youtube.com'}
t3_1r8g4tt
/r/LocalLLaMA/comments/1r8g4tt/created_this_please_tell_me_how_is_it_as_a/
false
false
https://external-preview…6da8562a06d96a4c
0
{'enabled': False, 'images': [{'id': '_UdJ0G1jEvLh94fwnl7Ya2_6xgPnuS5UfL_KfUBaRRU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_UdJ0G1jEvLh94fwnl7Ya2_6xgPnuS5UfL_KfUBaRRU.jpeg?width=108&crop=smart&auto=webp&s=d77005da616256fefd3dc8127723ea4fe0146852', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_UdJ0G1jEvLh94fwnl7Ya2_6xgPnuS5UfL_KfUBaRRU.jpeg?width=216&crop=smart&auto=webp&s=f8be61a3b76ea1da270693fd4b8decc0bf49bcad', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_UdJ0G1jEvLh94fwnl7Ya2_6xgPnuS5UfL_KfUBaRRU.jpeg?width=320&crop=smart&auto=webp&s=c0072d48f54116abc4431115bac415d3427b93ca', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/_UdJ0G1jEvLh94fwnl7Ya2_6xgPnuS5UfL_KfUBaRRU.jpeg?auto=webp&s=7906a7826ae2e280b174550d9c3a147e4b3423a5', 'width': 480}, 'variants': {}}]}
Best Qwen Model for M4 Mac mini (32GB unified memory) running Openclaw?
2
Hey everyone, I just set up a headless M4 Mac mini (base chip, 32GB unified memory) to work as a local server for OpenClaw (agentic workflows). I will mainly be using it for news extraction and summarisation from paid web sources. I've been looking at these models:

Option 1: Qwen3-30B-A3B (mlx 4-bit)

Option 2: Qwen2.5-32B-Instruct (mlx 4-bit)

Option 3: Qwen2.5-14B-Instruct (mlx 8-bit)

Other options?

Any benchmarks from people running these models on the base M4 (32GB) would be massively appreciated!
2026-02-18T21:34:49
https://www.reddit.com/r/LocalLLaMA/comments/1r8g3ap/best_qwen_model_for_m4_mac_mini_32gb_unified/
koc_Z3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8g3ap
false
null
t3_1r8g3ap
/r/LocalLLaMA/comments/1r8g3ap/best_qwen_model_for_m4_mac_mini_32gb_unified/
false
false
self
2
null
MiniMax-M2.5-REAP from cerebras
57
[https://huggingface.co/cerebras/MiniMax-M2.5-REAP-172B-A10B](https://huggingface.co/cerebras/MiniMax-M2.5-REAP-172B-A10B) [https://huggingface.co/cerebras/MiniMax-M2.5-REAP-139B-A10B](https://huggingface.co/cerebras/MiniMax-M2.5-REAP-139B-A10B)
2026-02-18T21:32:00
https://www.reddit.com/r/LocalLLaMA/comments/1r8g0iw/minimaxm25reap_from_cerebras/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8g0iw
false
null
t3_1r8g0iw
/r/LocalLLaMA/comments/1r8g0iw/minimaxm25reap_from_cerebras/
false
false
self
57
{'enabled': False, 'images': [{'id': 'IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8.png?width=108&crop=smart&auto=webp&s=ae5d1184bfe84fb07acb62060670faaefc3bb7ac', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8.png?width=216&crop=smart&auto=webp&s=1ff018cec54c23ab2825898ccbafaba593f4e286', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8.png?width=320&crop=smart&auto=webp&s=6f27a58688aa04854c339048020da7f355f9119e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8.png?width=640&crop=smart&auto=webp&s=06d0a89f62dae447d9149c53a3856de99874a1ed', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8.png?width=960&crop=smart&auto=webp&s=7b203e2700ae0a7f8615ed8b6a826c6539c18793', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8.png?width=1080&crop=smart&auto=webp&s=8d088213bc9ab6b64206ba2fa602f578a6cdfdc3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8.png?auto=webp&s=868e582ebead9d2ad7ac0f2cf0c73353317cab09', 'width': 1200}, 'variants': {}}]}
iPhone App that does diarization and Parakeet V3 or WhisperKit Large V3 Turbo?
2
I know that diarization apps on iOS may not exist yet, but is there a technical limitation that keeps Parakeet V3 and WhisperKit Large V3 Turbo off, say, the iPhone 16 Pro -> 17 Pro series? Are the phones not sufficiently powerful, or do the models need more RAM? If there are no apps that do it, when could we expect them to come out? I'm already using MacWhisper Pro on macOS on an M4 Pro, but on iOS I use Whisper Note, which has no diarization, and I want to run the best models that iOS can run offline.
2026-02-18T21:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1r8fv2z/iphone_app_that_does_diarization_and_parakeet_v3/
deepspacegurl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8fv2z
false
null
t3_1r8fv2z
/r/LocalLLaMA/comments/1r8fv2z/iphone_app_that_does_diarization_and_parakeet_v3/
false
false
self
2
null
Cosmos-Reason2 running on Jetson Orin Nano Super
10
Hi everyone! About a month ago NVIDIA released Cosmos-Reason2 ([https://github.com/nvidia-cosmos/cosmos-reason2](https://github.com/nvidia-cosmos/cosmos-reason2)), with official support aimed at DGX Spark, H100, GB200 and Jetson AGX Thor. We just pushed a heavily quantized (and highly accurate) version of nvidia/Cosmos-Reason2-2B, and together with some other tricks Cosmos-Reason2 now runs on the **full Jetson lineup,** including the most affordable and constrained hardware (Orin Nano Super). HF link with models, instructions, and benchmarks: [https://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16](https://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16) We’ll be releasing more optimized Cosmos variants over the next few weeks, along with additional performance improvements. Two questions for the sub that would greatly help us align this with community interest: * There’s no clear "standard" for running models on Jetson (llama.cpp is limited for VLMs on Jetson, TensorRT-LLM is heavy, etc.). We added vLLM support following NVIDIA’s direction. What are people's preferences? * For edge VLM deployments, what’s the first bottleneck you hit: weights, vision encoding, or KV cache/context length?
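For a rough sense of why W4A16 (4-bit weights, 16-bit activations) matters on the smallest Jetsons, here is the back-of-envelope weight-memory arithmetic. This counts weights only; the vision encoder activations, KV cache, and runtime overhead come on top, and the Orin Nano Super's unified memory is shared with the OS:

```python
def weight_memory_gb(params_billion, bits_per_weight):
    """Approximate storage for the weights alone:
    params × bits-per-weight / 8 bytes, in decimal GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

fp16_gb = weight_memory_gb(2, 16)  # 2B params at FP16/BF16 -> ~4.0 GB
w4_gb = weight_memory_gb(2, 4)     # same model, 4-bit weights -> ~1.0 GB
```

A 4x reduction in weight footprint is what moves a 2B VLM from "tight" to "comfortable" on a memory-constrained board, which is why the W4A16 variant can reach the bottom of the Jetson lineup.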
2026-02-18T21:15:02
https://www.reddit.com/r/LocalLLaMA/comments/1r8fk3h/cosmosreason2_running_on_jetson_orin_nano_super/
No-Dragonfly6246
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8fk3h
false
null
t3_1r8fk3h
/r/LocalLLaMA/comments/1r8fk3h/cosmosreason2_running_on_jetson_orin_nano_super/
false
false
self
10
null
Nix flake for vLLM and llama.cpp on ROCm gfx906 targets
9
2026-02-18T21:01:47
https://github.com/Wulfsta/vllm-flake
Wulfsta
github.com
1970-01-01T00:00:00
0
{}
1r8f6z1
false
null
t3_1r8f6z1
/r/LocalLLaMA/comments/1r8f6z1/nix_flake_for_vllm_and_llamacpp_on_rocm_gfx906/
false
false
https://external-preview…8b9d53c9690f4bf1
9
{'enabled': False, 'images': [{'id': 'vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8.png?width=108&crop=smart&auto=webp&s=5412ec5f228a08bd16da980fec0707b7295bee2d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8.png?width=216&crop=smart&auto=webp&s=d2c2077a55b948128cc4c1839ef0ccd609eebc20', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8.png?width=320&crop=smart&auto=webp&s=ff8d56fcbaa2cf6c76f21cfdea53a49fb94978ba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8.png?width=640&crop=smart&auto=webp&s=729da8eba141b554a93b1cfebf7e119547767d0b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8.png?width=960&crop=smart&auto=webp&s=41ded6fd4db9ccc0f61c131cdc577967c1458f67', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8.png?width=1080&crop=smart&auto=webp&s=1ca872d87ed4e69ce061df4c66ec0d50fbc84d81', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8.png?auto=webp&s=1bf07f9b81f2e896b9d9e34a035c48088d6edfb6', 'width': 1200}, 'variants': {}}]}
Running Claude Code CLI with open models (GLM-5, Kimi-K2.5, Minimax-M2.5, Qwen-3.5) sharing what I learned about interleaved thinking and cutting API calls
7
I've been experimenting with getting Claude Code's agentic coding harness to work with open models instead of Anthropic's API, and wanted to share some findings that might be useful to others here. **The core idea:** Claude Code is a solid agentic coding CLI, but it's locked to Anthropic's API. I built a proxy that translates its requests to other backends: NVIDIA NIM (free tier, 40 reqs/min), OpenRouter, and LMStudio for fully local inference. The code is MIT licensed on GitHub if anyone wants to poke at it. **Interesting technical bits:** **Interleaved thinking matters a lot.** Models like GLM-5 and Kimi-K2.5 support interleaved thinking tokens, and preserving these across turns makes a real difference in agentic coding tasks. The model can reference its reasoning from previous steps instead of starting cold each turn. I haven't seen other open-source alternatives handle this, as most strip thinking tokens between turns. **You can cut \~30-40% of API calls with simple optimizations.** Claude Code makes a lot of auxiliary requests (title generation, suggestion mode, filepath extraction, prefix detection) that aren't needed when you're running open models. I implemented 5 mock/skip optimizations that avoid hitting the LLM for these, which is especially valuable if you're rate-limited or running local. **LMStudio as a backend works surprisingly well.** If you're already running models locally, you can point this at your LMStudio instance. Devstral 123B and Kimi-K2.5 are the best performers I've tested for agentic coding tasks through this setup. **Remote control via Telegram/Discord is underrated for agentic coding.** I added bot integrations so you can fire off coding tasks from your phone and let them run. Session forking and persistence mean you can queue up multiple tasks. **Models I've had the best results with:** `moonshotai/kimi-k2.5`, `z-ai/glm5`, `minimaxai/minimax-m2.1`, `qwen/qwen3.5-397b-a17b`.
Curious what others are using for agentic coding: has anyone had good results with other open models in similar setups?
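The mock/skip idea is easy to illustrate. Below is a hypothetical sketch (the task names and routing shape are mine, not the actual free-claude-code implementation): auxiliary requests get a canned local answer instead of spending a rate-limited backend call.

```python
# Auxiliary request kinds that don't need a real model when running
# open backends (names illustrative, not taken from the repo).
AUX_TASKS = {"title_generation", "suggestion_mode",
             "filepath_extraction", "prefix_detection"}

def route(request, backend_call):
    """Serve auxiliary requests with a local mock; forward real
    coding turns to the (possibly rate-limited) backend."""
    if request.get("task") in AUX_TASKS:
        return {"mocked": True, "content": ""}
    return backend_call(request)
```

Since the auxiliary calls are a large fraction of Claude Code's traffic, short-circuiting them this way is where the claimed \~30-40% reduction in backend requests comes from.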
2026-02-18T20:59:37
https://github.com/Alishahryar1/free-claude-code
PreparationAny8816
github.com
1970-01-01T00:00:00
0
{}
1r8f4ta
false
null
t3_1r8f4ta
/r/LocalLLaMA/comments/1r8f4ta/running_claude_code_cli_with_open_models_glm5/
false
false
https://external-preview…16e1df6b5821fb90
7
{'enabled': False, 'images': [{'id': 'uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=108&crop=smart&auto=webp&s=cf74e849c0b4b397ac86e65e88f35639fcfcf2d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=216&crop=smart&auto=webp&s=1fbec3d12661b0787982bf559ffab3334c758949', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=320&crop=smart&auto=webp&s=b3e206ecba0e97efe7ace57d1ce533312731c3e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=640&crop=smart&auto=webp&s=861e36a2c104e4afb2127b21ecaa1604508ef9ef', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=960&crop=smart&auto=webp&s=b0519cfaa34295d91cc58527208d07736eb1c671', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=1080&crop=smart&auto=webp&s=ba363050183f044bbb1c3f69746312c7a0d4960e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?auto=webp&s=34a303844335113a6631de42db8eda3b027fcc89', 'width': 1200}, 'variants': {}}]}