title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I built an eBPF tracer to monitor AI agents the same way you'd monitor malware in a sandbox | 55 | >TL;DR: AI agents control their own application logs, which makes those logs useless for security monitoring. We applied the malware sandboxing principle (observe from a layer the subject can't see) and built Azazel, an open-source eBPF-based runtime tracer for containerized AI agents.
If you're running autonomous AI ... | 2026-02-19T13:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r8yvu5/i_built_an_ebpf_tracer_to_monitor_ai_agents_the/ | M4r10_h4ck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8yvu5 | false | null | t3_1r8yvu5 | /r/LocalLLaMA/comments/1r8yvu5/i_built_an_ebpf_tracer_to_monitor_ai_agents_the/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zNphzeZFphtEMyuNanJ5tlU9DnKJwzJc2MEvA39QjzY.jpeg?width=108&crop=smart&auto=webp&s=f68b5997e3b878f321a1a3f594a1b25229149df4', 'width': 108}, {'height': 108, 'url': '... |
Where and how do people use AI agents? I’m still fine tuning my model for specific tasks and never needed to use an agent. | 0 | It’s been 2 years since the advent of Ai agents and I never had to use them. where do you guys use AI agents? Ams what framework do you typically use? what Are some usecase where you absolutely needs agents? And that cannot be done by just using a fine tuned model? | 2026-02-19T13:13:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r8yvde/where_and_how_do_people_use_ai_agents_im_still/ | TinyVector | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8yvde | false | null | t3_1r8yvde | /r/LocalLLaMA/comments/1r8yvde/where_and_how_do_people_use_ai_agents_im_still/ | false | false | self | 0 | null |
Why I Route 80% of My AI Workload to a Free Local Model (And Only Pay for the Last 20%) | 1 | [removed] | 2026-02-19T13:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1r8yq2q/why_i_route_80_of_my_ai_workload_to_a_free_local/ | Extension_Pop3732 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8yq2q | false | null | t3_1r8yq2q | /r/LocalLLaMA/comments/1r8yq2q/why_i_route_80_of_my_ai_workload_to_a_free_local/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/anhUXsSM27aCc7fPvtUNJ5rwMpMu-_jlQz5AW8g_yF4.jpeg?width=108&crop=smart&auto=webp&s=91b72fb789cfc5b7383948134e5eda851e8121ef', 'width': 108}, {'height': 108, 'url': '... |
Your agent chats well. But can it act under pressure? | 0 | I’m testing a simulation to see how an agent performs against others under real-world limits.
There are three scenarios in the simulation:
1. Lead Gen Under Budget
2. Multi-step Workflow Automation
3. Research + Decision Task Under Deadline
You can watch the run in real time, inspect decisions, and pause to anal... | 2026-02-19T13:02:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ymvu/your_agent_chats_well_but_can_it_act_under/ | Recent_Jellyfish2190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ymvu | false | null | t3_1r8ymvu | /r/LocalLLaMA/comments/1r8ymvu/your_agent_chats_well_but_can_it_act_under/ | false | false | self | 0 | null |
[Project] Galactic AI: Open-source ReAct agent with persistent memory and 56+ Playwright tools | 1 | [removed] | 2026-02-19T13:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ylgv/project_galactic_ai_opensource_react_agent_with/ | Longjumping_Set_1374 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ylgv | false | null | t3_1r8ylgv | /r/LocalLLaMA/comments/1r8ylgv/project_galactic_ai_opensource_react_agent_with/ | false | false | self | 1 | null |
Built a music generation app that runs 100% on-device using Apple's MLX framework no cloud, no API calls | 11 | I've been following local AI discussions here for a while and wanted to share something I built that fits the ethos of this community pretty well.
I got frustrated with every AI music tool being cloud-based Suno, Stable Audio, AIVA all sending your prompts to their servers, all requiring monthly subscriptions. The mom... | 2026-02-19T12:26:46 | https://v.redd.it/2vw0xoit2gkg1 | tarunyadav9761 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8xw1j | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2vw0xoit2gkg1/DASHPlaylist.mpd?a=1774096020%2CMmM0N2Q5YTIzZGZlMWQ5MTg2OTYyMDAzNjdkOWEwNjZhODliMWE4MGM5MGQ4M2MyNDgzMzYwMmU0Y2I0MTdkMQ%3D%3D&v=1&f=sd', 'duration': 120, 'fallback_url': 'https://v.redd.it/2vw0xoit2gkg1/CMAF_1080.mp4?source=fallback', '... | t3_1r8xw1j | /r/LocalLLaMA/comments/1r8xw1j/built_a_music_generation_app_that_runs_100/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MXBieWV6aXQyZ2tnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=108&crop=smart&format=pjpg&auto=webp&s=49c18274a59f75466a741840cf84aefa9ecb4... | |
Multi-GPU Setup | 0 | PCIe risers are your friend here. The mining community figured this out years ago — you can use x1 to x16 risers (USB-style cables) to connect GPUs. For 8 GPUs look at ASRock Rack EPYCD8-2T or similar EPYC boards. Some people use PCIe bifurcation cards to split x16 slots into multiple x4s. For inference you dont need f... | 2026-02-19T12:05:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r8xhle/multigpu_setup/ | Official_VaultAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8xhle | false | null | t3_1r8xhle | /r/LocalLLaMA/comments/1r8xhle/multigpu_setup/ | false | false | self | 0 | null |
an llm is (currently) effectively an egregore of the human species as a whole, manifested in a somewhat more tangible/condensed form (as opposed to existing in the shared minds of humanity // in the platonic space) | 0 | this descriptor will end up being a bit less true, once we start kicking off ASI flywheels, which may begin using much more synthetic (nonhuman) sources of data.
looking back, I would say that the models of ~2023-2028 will effectively serve as beautifully condensed and varied expressions of the egregore of humanity fr... | 2026-02-19T11:55:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r8xaa9/an_llm_is_currently_effectively_an_egregore_of/ | cobalt1137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8xaa9 | false | null | t3_1r8xaa9 | /r/LocalLLaMA/comments/1r8xaa9/an_llm_is_currently_effectively_an_egregore_of/ | false | false | self | 0 | null |
I built a native macOS app that generates music with AI entirely offline, no cloud, no subscription | 1 | [removed] | 2026-02-19T11:42:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r8x229/i_built_a_native_macos_app_that_generates_music/ | No-Classroom72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8x229 | false | null | t3_1r8x229 | /r/LocalLLaMA/comments/1r8x229/i_built_a_native_macos_app_that_generates_music/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/vUr8u2mgmgdjb5ZQQKp2jnpv26Chv-4oqK7psjghYdo.jpeg?width=108&crop=smart&auto=webp&s=9c9798e24044357b5827e7704fd9c584361444de', 'width': 108}, {'height': 121, 'url': '... |
Regret? Should I have picked Eypc DDR4 instead of ThreadRipper DDR5? | 0 | I decided to go with...
AMD Ryzen Threadripper PRO 9955WX 16 Core
ASUS AMD Threadripper Pro WS WRX90E-SAGE SE PCIe 5.0 eATX Motherboard
64GB DDR5 5600mhz
Instead of...
AMD 8 Core 2nd Gen EPYC 7232P Single Socket PCIe 4.0 - DDR4
16GB DDR4 3200Mhz
I should have just gone cheaper, saved lots of money on DDR4 comp... | 2026-02-19T11:42:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r8x1qh/regret_should_i_have_picked_eypc_ddr4_instead_of/ | gordi555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8x1qh | false | null | t3_1r8x1qh | /r/LocalLLaMA/comments/1r8x1qh/regret_should_i_have_picked_eypc_ddr4_instead_of/ | false | false | self | 0 | null |
Building a local multi-model OpenClaw assistant on Mac Studio M3 Ultra (96GB) for research, RAG, coding, and Korean↔English tasks — hardware sufficient? Best models? MLX? Fine-tuning? | 0 | Hi r/LocalLLaMA,
I'm a physics student working on building a personal AI assistant using OpenClaw to support my university coursework and ongoing research. I want to replace cloud API usage entirely with a fully local stack, and I'd love input from people who've actually run setups like this.
\-Why I'm going local
I... | 2026-02-19T11:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r8x13i/building_a_local_multimodel_openclaw_assistant_on/ | Upbeat-Culture4072 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8x13i | false | null | t3_1r8x13i | /r/LocalLLaMA/comments/1r8x13i/building_a_local_multimodel_openclaw_assistant_on/ | false | false | self | 0 | null |
I benchmarked 5 agent memory solutions head-to-head — the fastest one has zero dependencies and no API keys | 1 | I've been building infrastructure for AI agents and got tired of every memory solution requiring an OpenAI key, a vector DB, or a cloud subscription. So I built my own and then benchmarked it against the field: mem0, LangChain, Zep, and Letta. All measured on the same Mac Mini M4, same 100-doc corpus, same methodology.... | 2026-02-19T11:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r8wukc/i_benchmarked_5_agent_memory_solutions_headtohead/ | fourbeersthepirates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8wukc | false | null | t3_1r8wukc | /r/LocalLLaMA/comments/1r8wukc/i_benchmarked_5_agent_memory_solutions_headtohead/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/v8lJnjFT2iVDp3t69wrvNRAEbCS_ipjAYSU80S7mBys.png?width=108&crop=smart&auto=webp&s=88d6cb0575356de0a635f00f85d697b6fa53fb5c', 'width': 108}, {'height': 121, 'url': 'h... |
🚀 Help Build Real-World Benchmarks for Autonomous AI Agents | 1 | [removed] | 2026-02-19T11:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r8wtwe/help_build_realworld_benchmarks_for_autonomous_ai/ | Grouchy-Tiger-2367 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8wtwe | false | null | t3_1r8wtwe | /r/LocalLLaMA/comments/1r8wtwe/help_build_realworld_benchmarks_for_autonomous_ai/ | false | false | 1 | null | |
Use cases for RAG? | 0 | I wonder what uses there are for knowledge stacks. I can't really think of use cases, especially now that large context windows allow me to put everything directly into the current context, which I find works much better.
Previously, I tried creating knowledge stacks for the Energy sector because it's part of my work... | 2026-02-19T11:23:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r8wq52/use_cases_for_rag/ | ConsequenceMany8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8wq52 | false | null | t3_1r8wq52 | /r/LocalLLaMA/comments/1r8wq52/use_cases_for_rag/ | false | false | self | 0 | null |
Question, Is it just me or reap models are way slower than a model of the same size? | 1 | I have used JoyAi and Qwen Next Coder 48B Reap,
But the Qwen model is too slow how do I fix it? | 2026-02-19T11:20:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r8wnte/question_is_it_just_me_or_reap_models_are_way/ | Significant_Fig_7581 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8wnte | false | null | t3_1r8wnte | /r/LocalLLaMA/comments/1r8wnte/question_is_it_just_me_or_reap_models_are_way/ | false | false | self | 1 | null |
pthinc/BCE-Prettybird-Micro-Standard-v0.0.1 | 0 | The Silence of Efficiency. While the industry continues its race for massive parameter counts, we have been quietly focusing on the fundamental mechanics of thought. Today, at Prometech A.Ş., we are releasing the first fragment of our Behavioral Consciousness Engine (BCE) architecture: BCE-Prettybird-Micro-Standart-v0.... | 2026-02-19T11:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r8wlok/pthincbceprettybirdmicrostandardv001/ | Connect-Bid9700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8wlok | false | null | t3_1r8wlok | /r/LocalLLaMA/comments/1r8wlok/pthincbceprettybirdmicrostandardv001/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PZ_j8gdWp322v2MJN3Rvo-Rib0pLYspbykalCQhaTFI.png?width=108&crop=smart&auto=webp&s=384e1846d49b43a329e20c652f928250d8c49076', 'width': 108}, {'height': 116, 'url': 'h... |
Just when you thought the thick line between local models and cloud models has been blurred... | 0 | Claude Opus 4.6 (not even thinking mode) with its one shots leaves everyone behind in the dust again, making me feel like waiting for local models of the same quality is an exercise in futility. Guys, this is otherworldly insane. The game you see in the screenshots here was all generated out of thin air by Claude Opus ... | 2026-02-19T10:30:20 | https://www.reddit.com/gallery/1r8vsv2 | Cool-Chemical-5629 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r8vsv2 | false | null | t3_1r8vsv2 | /r/LocalLLaMA/comments/1r8vsv2/just_when_you_thought_the_thick_line_between/ | false | false | 0 | null | |
thoughts? i kinda agree tbh (on a long enough time horizon. e.g.:~5-10 years. after a potentially rough transition in some ways, etc) | 0 | 2026-02-19T10:13:21 | cobalt1137 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8vihy | false | null | t3_1r8vihy | /r/LocalLLaMA/comments/1r8vihy/thoughts_i_kinda_agree_tbh_on_a_long_enough_time/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'c6m0p1nsefkg1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/c6m0p1nsefkg1.png?width=108&crop=smart&auto=webp&s=f1f2c12206f0c27ccbd205efc4e41fd2ce676f61', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/c6m0p1nsefkg1.png?width=216&crop=smart&auto=webp... | |||
Chinese Modded 20gb 3080 REBAR bios? | 3 | Hey I bought a 20gb 3080 from china and noticed the card does not have rebar enabled, does anyone know if I can just flash a 10gb bios with rebar enabled or if I need a special 20gb version? | 2026-02-19T10:12:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r8vi2t/chinese_modded_20gb_3080_rebar_bios/ | MaruluVR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8vi2t | false | null | t3_1r8vi2t | /r/LocalLLaMA/comments/1r8vi2t/chinese_modded_20gb_3080_rebar_bios/ | false | false | self | 3 | null |
ZUNA "Thought-to-Text": a 380M-parameter BCI foundation model for EEG data (Apache 2.0) | 167 | \- Technical paper: [https://zyphra.com/zuna-technical-paper](https://zyphra.com/zuna-technical-paper)
\- Technical blog: [https://zyphra.com/post/zuna](https://zyphra.com/post/zuna)
\- Hugging Face: [https://huggingface.co/Zyphra/ZUNA](https://huggingface.co/Zyphra/ZUNA)
\- GitHub: [https://github.com/Zyphra/zuna](... | 2026-02-19T10:11:39 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8vhhq | false | null | t3_1r8vhhq | /r/LocalLLaMA/comments/1r8vhhq/zuna_thoughttotext_a_380mparameter_bci_foundation/ | false | false | 167 | {'enabled': True, 'images': [{'id': '4knvh57lefkg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/4knvh57lefkg1.png?width=108&crop=smart&auto=webp&s=e22798ec9c5726b34dc56428fac9d5ac3dacdb2a', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/4knvh57lefkg1.png?width=216&crop=smart&auto=web... | ||
ZUNA: a 380M-parameter BCI foundation model for EEG data for noninvasive "thought-to-text" (Apache 2.0) | 1 | \- Technical paper: [https://zyphra.com/zuna-technical-paper](https://zyphra.com/zuna-technical-paper)
\- Technical blog: [https://zyphra.com/post/zuna](https://zyphra.com/post/zuna)
\- Hugging Face: [https://huggingface.co/Zyphra/ZUNA](https://huggingface.co/Zyphra/ZUNA)
\- GitHub: [https://github.com/Zyphra/zun... | 2026-02-19T10:06:26 | https://www.reddit.com/r/LocalLLaMA/comments/1r8vec4/zuna_a_380mparameter_bci_foundation_model_for_eeg/ | Nunki08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8vec4 | false | null | t3_1r8vec4 | /r/LocalLLaMA/comments/1r8vec4/zuna_a_380mparameter_bci_foundation_model_for_eeg/ | false | false | self | 1 | null |
Looking for an out-of-the-box RAG chatbot solution | 0 | Hi everyone,
I work for a public institution, and we’re looking for a simple, out-of-the-box **RAG-based chatbot solution** that we can self-host and feed with our own documents (mostly PDFs and Markdown). The chatbot should use our existing **self-hosted LLMs** (via API-Key) as the backend. We’re using **TYPO3** as o... | 2026-02-19T09:53:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r8v6po/looking_for_an_outofthebox_rag_chatbot_solution/ | NakedxCrusader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8v6po | false | null | t3_1r8v6po | /r/LocalLLaMA/comments/1r8v6po/looking_for_an_outofthebox_rag_chatbot_solution/ | false | false | self | 0 | null |
What hardware are you using for running local AI agents 24/7? | 3 | I want to run local AI “agents” 24/7 (coding assistant + video-related workflows + task tracking/ops automation).
I’m considering a Mac mini (M4, 32GB RAM), but I’m worried it might be too limited.
I keep seeing recommendations for 64GB+ VRAM GPUs, but those are hard to find at a reasonable price.
• Is the M4 Mac m... | 2026-02-19T09:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/1r8v36f/what_hardware_are_you_using_for_running_local_ai/ | Conscious-Bird4304 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8v36f | false | null | t3_1r8v36f | /r/LocalLLaMA/comments/1r8v36f/what_hardware_are_you_using_for_running_local_ai/ | false | false | self | 3 | null |
[Project] Pixrep: I built a tool to convert codebases into optimized PDFs for multimodal LLMs (Save ~40% tokens) | 1 | Hey r/LocalLLaMA,
I've been experimenting with long-context coding tasks using models like Gemini 3 Pro. I noticed that feeding raw text files often bloats the context window with whitespace and repetitive headers, sometimes causing the model to get "lost in the middle."
Inspired by recent research (e.g., DeepSee... | 2026-02-19T09:25:16 | https://github.com/TingjiaInFuture/pixrep | Next_Departure_7031 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r8upyq | false | null | t3_1r8upyq | /r/LocalLLaMA/comments/1r8upyq/project_pixrep_i_built_a_tool_to_convert/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tiporgWKXLiw-avMiyOpKA8_-R4m_WVB0P_Hiu1yhkA.png?width=108&crop=smart&auto=webp&s=9c8171cff7378332b0ca4ed1cac2913b2298b2e0', 'width': 108}, {'height': 108, 'url': 'h... | |
[Project] A Garlic Farmer's garlic-agent: Inspired by OpenClaw, Built on Android Termux with 6K Documents | 0 | This document was created with the assistance of garlic-agent RAG (just built) and in collaboration with Claude Opus 4.6
Local RAG for 6K Korean documents running on Android Termux
📌 Table of Contents
1. Project Overview
2. System Environment
3. Project Structure
4. Construction Process (Chronological Order)
5. RAG ... | 2026-02-19T09:13:33 | https://www.reddit.com/r/LocalLLaMA/comments/1r8uj41/project_a_garlic_farmers_garlicagent_inspired_by/ | amadale | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8uj41 | false | null | t3_1r8uj41 | /r/LocalLLaMA/comments/1r8uj41/project_a_garlic_farmers_garlicagent_inspired_by/ | false | false | self | 0 | null |
Is running local LLMs on a Mac Mini M4 Pro (64GB) financially worth it for text classification? | 2 | Hi everyone,
Right now I’m using OpenAI (ChatGPT API) for text processing and classification.
My main goal is to reduce processing costs.
The first idea that comes to mind is running everything locally on a machine like:
**Mac Mini M4 Pro (64GB unified memory).**
I’m not trying to compare ChatGPT quality to a sin... | 2026-02-19T09:00:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ubck/is_running_local_llms_on_a_mac_mini_m4_pro_64gb/ | dev_runner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ubck | false | null | t3_1r8ubck | /r/LocalLLaMA/comments/1r8ubck/is_running_local_llms_on_a_mac_mini_m4_pro_64gb/ | false | false | self | 2 | null |
Are there any AI tools like ChatGPT that work 100% offline on iOS or Android? No internet at all | 0 | I’m looking for AI apps similar to ChatGPT that can run fully offline — no internet required at all.
My main use cases:
* Writing & editing
* Coding help
* Brainstorming ideas
* General Q&A
I know some desktop tools can run local LLMs, but I’m specifically looking for **mobile apps (iOS & Android)** that:
* Work co... | 2026-02-19T08:58:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ua5z/are_there_any_ai_tools_like_chatgpt_that_work_100/ | FollowingMindless144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ua5z | false | null | t3_1r8ua5z | /r/LocalLLaMA/comments/1r8ua5z/are_there_any_ai_tools_like_chatgpt_that_work_100/ | false | false | self | 0 | null |
Local cowork/open claw alternatives? | 0 | What is the difference between openwork and accomplish and what are you using?
I’m looking for something that could work with both lm studio and online models. Security options heavily influence my choice and I’d host it locally.
The goal is computer use, automations, file generation (powerpoints and md’s), and some ... | 2026-02-19T08:57:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r8u9kq/local_coworkopen_claw_alternatives/ | riceinmybelly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8u9kq | false | null | t3_1r8u9kq | /r/LocalLLaMA/comments/1r8u9kq/local_coworkopen_claw_alternatives/ | false | false | self | 0 | null |
How we gave up and picked back up evals driven development (EDD) | 10 | **Disclaimer:** I posted this originally in r/AIEval, I thought it would be good to share in other communities too related to LLMs.
Hey r/AIEval, wanted to share how we gave up on and ultimately went back to evals driven development (EDD) over the past 2 months of setup, trial-and-error, testing exhaustion, and ultima... | 2026-02-19T08:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r8u3x3/how_we_gave_up_and_picked_back_up_evals_driven/ | sunglasses-guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8u3x3 | false | null | t3_1r8u3x3 | /r/LocalLLaMA/comments/1r8u3x3/how_we_gave_up_and_picked_back_up_evals_driven/ | false | false | self | 10 | null |
[ Removed by moderator ] | 1 | [removed] | 2026-02-19T08:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r8u3q7/19x_faster_at_256k_context_testing_qwen35s_claims/ | No_Glove_1225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8u3q7 | false | null | t3_1r8u3q7 | /r/LocalLLaMA/comments/1r8u3q7/19x_faster_at_256k_context_testing_qwen35s_claims/ | false | false | null | 1 | null |
I made an LLM-powered website that uses your pictures to tell you if you are fat | 0 | [Are You Fat? - https://areyoufat.app/](https://areyoufat.app/) | 2026-02-19T08:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r8u3ca/i_made_an_llmpowered_website_that_uses_your/ | Fearless_Roof_4534 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8u3ca | false | null | t3_1r8u3ca | /r/LocalLLaMA/comments/1r8u3ca/i_made_an_llmpowered_website_that_uses_your/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ggmKnxp3WVWbfm639cn0GDvdLZnLdqTIgnd5j9smlGU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ggmKnxp3WVWbfm639cn0GDvdLZnLdqTIgnd5j9smlGU.jpeg?width=108&crop=smart&auto=webp&s=4cd8c6e120074974940e8a08a11a162779a56fa5', 'width': 108}, {'height': 117, 'url': '... |
Building a prompt injection detector in Python | 1 | Been going down a rabbit hole trying to build a lightweight prompt injection detector. Not using any external LLM APIs — needs to run fully local and fast.
I asked AI for algorithm suggestions and got this stack:
* Aho-Corasick for known injection phrase matching
* TF-IDF for detecting drift between input and output
... | 2026-02-19T08:02:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r8test/building_a_prompt_injection_detector_in_python/ | Sharp_Branch_1489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8test | false | null | t3_1r8test | /r/LocalLLaMA/comments/1r8test/building_a_prompt_injection_detector_in_python/ | false | false | self | 1 | null |
I retrained /u/Own-Albatross868's FlashLM v4 "Bolt" model from scratch using GreedyPhrase tokenizer on the full TinyStories dataset. I scaled up to 15M parameters with a 65K vocab, achieving smooth convergence and coherent story generation in just 2.2 hours on an RTX 2080 Ti | 30 | FlashLM v4 "Bolt" retrained from scratch on the full TinyStories dataset using our
[GreedyPhrase]((https://github.com/rayonnant-ai/greedyphrase) tokenizer instead of the original GPT-2 10K tokenizer.
| | Original (HuggingFace) | This Run |
|---|---|---|
| Tokenizer | GPT-2 (tiktoken), 10K vocab | GreedyPhrase, 65K voc... | 2026-02-19T07:54:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ta57/i_retrained_uownalbatross868s_flashlm_v4_bolt/ | reditzer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ta57 | false | null | t3_1r8ta57 | /r/LocalLLaMA/comments/1r8ta57/i_retrained_uownalbatross868s_flashlm_v4_bolt/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'k8HEShpvl6JsQjdr3PBFVEDw-MzGIxoaXG0pHbvbXPk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k8HEShpvl6JsQjdr3PBFVEDw-MzGIxoaXG0pHbvbXPk.png?width=108&crop=smart&auto=webp&s=07b5c45986b753a12b8527ad3b57103935b5821d', 'width': 108}, {'height': 108, 'url': 'h... |
Anthropic updates Claude Code Docs: OAuth tokens now banned for all third-party tools | 0 | Anthropic just quietly updated the Claude Code Docs legal compliance page.
Key takeaway:
- OAuth authentication (Free, Pro, Max plans) is now EXCLUSIVELY for Claude Code and Claude.ai
- Using OAuth tokens in ANY third-party tool violates their Consumer Terms of Service
- This includes Agent SDK, Cline, Roo Code, OpenC... | 2026-02-19T07:49:45 | https://www.reddit.com/r/LocalLLaMA/comments/1r8t75u/anthropic_updates_claude_code_docs_oauth_tokens/ | OwenAnton84 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8t75u | false | null | t3_1r8t75u | /r/LocalLLaMA/comments/1r8t75u/anthropic_updates_claude_code_docs_oauth_tokens/ | false | false | self | 0 | null |
I built a lightweight self-hosted AI gateway in Python — stdlib only, no frameworks, 25 modules, 32 tools | 6 | Hey r/LocalLLaMA,
I've been working on \*\*SalmAlm\*\* (삶앎) — a personal AI gateway that runs as a single Python process. No Django, no Flask, no aiohttp. Pure stdlib.
Why I built this:
I wanted a self-hosted AI interface that I actually control — something between "curl the API" and "deploy a full SaaS stack." ... | 2026-02-19T07:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r8t684/i_built_a_lightweight_selfhosted_ai_gateway_in/ | Special-Argument-558 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8t684 | false | null | t3_1r8t684 | /r/LocalLLaMA/comments/1r8t684/i_built_a_lightweight_selfhosted_ai_gateway_in/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'wUDHVGtgs9H0zsZCW4x_scFBS7JjUURoTNkXNw04_6c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wUDHVGtgs9H0zsZCW4x_scFBS7JjUURoTNkXNw04_6c.png?width=108&crop=smart&auto=webp&s=837bf190e42f79c31d6e6592fa2d53a1c7e48cb6', 'width': 108}, {'height': 108, 'url': 'h... |
What local models handle multi-turn autonomous tool use without losing the plot? | 1 | I've been building autonomous AI agents that live in Docker containers and run for days unsupervised. Each agent wakes up, reads its environment (filesystem, APIs, other agents), decides what to do, executes via bash/file operations, observes the results, and repeats. When it's done, it sleeps, consolidates what it... | 2026-02-19T07:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r8t5d3/what_local_models_handle_multiturn_autonomous/ | RoutineLunch4904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8t5d3 | false | null | t3_1r8t5d3 | /r/LocalLLaMA/comments/1r8t5d3/what_local_models_handle_multiturn_autonomous/ | false | false | self | 1 | null |
Training a TTS model on transformer architecture | 3 | Hi folks. I am trying to build a TTS based on transformer architecture for English Language. I have sourced around 5000hrs of open source data. My methodology is to create audio tokens using snac model. And these tokens would be generated by the model and then converted back to audio. I have run some trial runs but it'... | 2026-02-19T07:37:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r8t01h/training_a_tts_model_on_transformer_architecture/ | Shoddy_Battle_5397 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8t01h | false | null | t3_1r8t01h | /r/LocalLLaMA/comments/1r8t01h/training_a_tts_model_on_transformer_architecture/ | false | false | self | 3 | null |
AMA with StepFun AI - Ask Us Anything | 98 | 2026-02-19T07:15:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r8snay/ama_with_stepfun_ai_ask_us_anything/ | StepFun_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8snay | false | null | t3_1r8snay | /r/LocalLLaMA/comments/1r8snay/ama_with_stepfun_ai_ask_us_anything/ | false | true | 98 | null | ||
Local VLMs (Qwen 3 VL) for document OCR with bounding box detection for PII detection/redaction workflows (blog post and open source app) | 14 | [Blog post link](https://seanpedrick-case.github.io/doc_redaction/src/redaction_with_vlm_and_llms.html)
A while ago I made a post here in r/LocalLLaMA asking about using local VLMs for OCR in PII detection/redaction processes for documents ([here](https://www.reddit.com/r/LocalLLaMA/comments/1kspe8c/best_local_model_... | 2026-02-19T07:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r8smbk/local_vlms_qwen_3_vl_for_document_ocr_with/ | Sonnyjimmy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8smbk | false | null | t3_1r8smbk | /r/LocalLLaMA/comments/1r8smbk/local_vlms_qwen_3_vl_for_document_ocr_with/ | false | false | 14 | null | |
Prodkit | 0 | A structured way of building something. From PRD to execution
Built this from taking inspiration from speckit
Give it a try:
https://github.com/kiranshivaraju/prodkit | 2026-02-19T06:59:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r8sdnw/prodkit/ | Accomplished_Map2130 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8sdnw | false | null | t3_1r8sdnw | /r/LocalLLaMA/comments/1r8sdnw/prodkit/ | false | false | self | 0 | null |
every AI builder today | 0 | everyone's out here debating which model is smarter
meanwhile their agent has been able to read its own API keys the entire time
the real test isn't the model. it's what happens when someone manipulates it.
https://preview.redd.it/si4ipgvtaekg1.png?width=1200&format=png&auto=webp&s=191b86f37e654a53fee97036a2733fd4... | 2026-02-19T06:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ru7x/every_ai_builder_today/ | JustTryingTo_Align | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ru7x | false | null | t3_1r8ru7x | /r/LocalLLaMA/comments/1r8ru7x/every_ai_builder_today/ | false | false | 0 | null | |
I’ve been working on an Deep Research Agent Workflow built with LangGraph and recently open-sourced it . | 1 | The goal was to create a system that doesn't just answer a question, but actually conducts a multi-step investigation. Most search agents stop after one or two queries, but this one uses a stateful, iterative loop to explore a topic in depth.
How it works:
You start by entering a research query, breadth, and depth. T... | 2026-02-19T06:17:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r8rne2/ive_been_working_on_an_deep_research_agent/ | Emotional_Farmer_243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8rne2 | false | null | t3_1r8rne2 | /r/LocalLLaMA/comments/1r8rne2/ive_been_working_on_an_deep_research_agent/ | false | false | 1 | null | |
Minimax 2.5 on Strix Halo Thread | 39 | Hi!
I just tried out Minimax 2.5 on headless Fedora 43 with the kyuz0 rocm nightlies toolbox, Jan 26 firmware, 6.18.9 Kernel, [https://huggingface.co/unsloth/MiniMax-M2.5-GGUF](https://huggingface.co/unsloth/MiniMax-M2.5-GGUF) there are some changes necessary so it fits in the RAM. Using MiniMax-M2.5-Q3\_K\_M there i... | 2026-02-19T06:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r8rgcp/minimax_25_on_strix_halo_thread/ | Equivalent-Belt5489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8rgcp | false | null | t3_1r8rgcp | /r/LocalLLaMA/comments/1r8rgcp/minimax_25_on_strix_halo_thread/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'e6WuQRGcA2Fw8f35Ri7qRVwopV6ajqz4-FoXNqgHZsU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/e6WuQRGcA2Fw8f35Ri7qRVwopV6ajqz4-FoXNqgHZsU.png?width=108&crop=smart&auto=webp&s=23b13e1f2da51482367095aa8c0bd02a8ecbdfae', 'width': 108}, {'height': 116, 'url': 'h... |
Has anyone benched Qwen3.5 coding capabilities locally? | 0 | The blog says it excels at agentic workflows and coding. I want to replace my local Copilot backend. How does it compare to standard 30B dense models? | 2026-02-19T06:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r8rcvt/has_anyone_benched_qwen35_coding_capabilities/ | skipdaballs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8rcvt | false | null | t3_1r8rcvt | /r/LocalLLaMA/comments/1r8rcvt/has_anyone_benched_qwen35_coding_capabilities/ | false | false | self | 0 | null |
397B params but only 17B active. Qwen3.5 is insane for local setups. | 0 | The new Qwen3.5 weights dropped on HF. It’s a 397B MoE but only activates 17B per forward pass. Matches Qwen3-Max performance. Anyone working on the GGUF yet? | 2026-02-19T06:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r8rca7/397b_params_but_only_17b_active_qwen35_is_insane/ | skipdaballs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8rca7 | false | null | t3_1r8rca7 | /r/LocalLLaMA/comments/1r8rca7/397b_params_but_only_17b_active_qwen35_is_insane/ | false | false | self | 0 | null |
Built an OSS execution safety layer for LLM APIs (retry containment + adaptive ceilings) | 2 | I kept running into the same failure modes when teams moved LLMs into production:
\- Retry cascades on 429/5xx multiplying total token usage
\- Agent loops running overnight
\- Monthly cost alerts instead of real-time enforcement
\- No chain-level retry containment
\- No shared breaker state across services
M... | 2026-02-19T06:00:08 | https://www.reddit.com/r/LocalLLaMA/comments/1r8rc1j/built_an_oss_execution_safety_layer_for_llm_apis/ | Pale_Firefighter_869 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8rc1j | false | null | t3_1r8rc1j | /r/LocalLLaMA/comments/1r8rc1j/built_an_oss_execution_safety_layer_for_llm_apis/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjI_XHkY1x4BtFBCUkrEhyegPC0qOPw5IqsJKds3fow.png?width=108&crop=smart&auto=webp&s=8034c57ade9e67377e73f9c44b0048c230ff3ff9', 'width': 108}, {'height': 108, 'url': 'h... |
-New here- Want to experiment with Local LLMs.🧐 Ive dedicated an old laptop towards this project but im not sure what model would be best on this hardware - Specs provided - (Simultaneously learning archlinux from scratch) 😵💫🤗😁 ~ lol | 0 | Sooo, I recently discovered how important becoming educated in this topic really is. I can also see how rapid the shift into the age of ai is going to be and the obvious reasons for getting a local LLM and having it Local vs the centralized models (chatgpt gemini grok..)
Im completely new to this stuff but im hopin... | 2026-02-19T05:59:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r8rbra/new_here_want_to_experiment_with_local_llms_ive/ | rykken420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8rbra | false | null | t3_1r8rbra | /r/LocalLLaMA/comments/1r8rbra/new_here_want_to_experiment_with_local_llms_ive/ | false | false | self | 0 | null |
I built a small CLI tool to help beginners see if their hardware can actually handle local LLMs | 1 | Hey everyone,
I’ve been lurking here for a while and learning a ton from all the superusers and experts here. As a beginner myself, I often found it a bit overwhelming to figure out which models would actually run "well" on my specific machine versus just running "slowly."
To help myself learn and to give something b... | 2026-02-19T05:22:52 | https://www.reddit.com/r/LocalLLaMA/comments/1r8qnfb/i_built_a_small_cli_tool_to_help_beginners_see_if/ | Narrow-Detective9885 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8qnfb | false | null | t3_1r8qnfb | /r/LocalLLaMA/comments/1r8qnfb/i_built_a_small_cli_tool_to_help_beginners_see_if/ | false | false | self | 1 | null |
Combining MoE and CoT LLMs with other formal systems (Theorem-provers, Sat-solvers, Computer Algebra Systems, etc.). | 2 | I've been pondering how to make best use of my local compute for interactive definition and solving of complex problems. My thinking was stimulated by this paper: https://arxiv.org/pdf/2602.06176
I like the notion of how reasoning LLMs "eating their own dogfood" to work their way through the layers of a problem. I als... | 2026-02-19T05:18:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r8qkfe/combining_moe_and_cot_llms_with_other_formal/ | IAmBobC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8qkfe | false | null | t3_1r8qkfe | /r/LocalLLaMA/comments/1r8qkfe/combining_moe_and_cot_llms_with_other_formal/ | false | false | self | 2 | null |
I'm 100% convinced that it's the NFT-bros pushing all the openclawd engagement on X | 475 | I'm absolutely sure of it. The same usual suspects, the same language, the same who stole from whom the next million dollar ideas. | 2026-02-19T05:13:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r8qh08/im_100_convinced_that_its_the_nftbros_pushing_all/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8qh08 | false | null | t3_1r8qh08 | /r/LocalLLaMA/comments/1r8qh08/im_100_convinced_that_its_the_nftbros_pushing_all/ | false | false | self | 475 | null |
qwen models naming state | 0 | so what exactly is the state of the families/versions of qwen models? you have qwen3 family, now qwen3.5 is slowly coming out. How does qwen next 80b a3b fit into this? (aka thinking/instruct/coder). is that in between 3 and 3.5? is that meant to be part of 3.5, but not called that way? or something else? | 2026-02-19T05:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r8qgcs/qwen_models_naming_state/ | kailron2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8qgcs | false | null | t3_1r8qgcs | /r/LocalLLaMA/comments/1r8qgcs/qwen_models_naming_state/ | false | false | self | 0 | null |
A competitive puzzle arena for AI agents | 2 | We launched [AgentPuzzles.com](http://AgentPuzzles.com) \- puzzles across reverse CAPTCHAs, logic, science, code, and geolocation. API-first, 3 endpoints, any agent can play.
The interesting part: 5 different AI agents (Claude Opus, Gemini 3 Flash, GPT, Kimi K2.5) are already competing. They're also creating puzzles f... | 2026-02-19T05:11:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r8qg7n/a_competitive_puzzle_arena_for_ai_agents/ | petruspennanen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8qg7n | false | null | t3_1r8qg7n | /r/LocalLLaMA/comments/1r8qg7n/a_competitive_puzzle_arena_for_ai_agents/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Z4ePZmY5yPqem3QBLYclmPC3JLfoLixXN5W9BHZbgv4.png?width=108&crop=smart&auto=webp&s=960fb9508769ca5d684ece55c9fa231b061192ef', 'width': 108}, {'height': 113, 'url': 'h... |
Kitten TTS V0.8 is out: New SOTA Super-tiny TTS Model (Less than 25 MB) | 1,056 | **Model introduction:**
New Kitten models are out. Kitten ML has released open source code and weights for three new tiny expressive TTS models - 80M, 40M, 14M (all Apache 2.0)
GitHub: [https://github.com/KittenML/KittenTTS](https://github.com/KittenML/KittenTTS)
Hugging Face - Kitten TTS V0.8:
* Mini 80M: [https:/... | 2026-02-19T04:48:29 | https://v.redd.it/rzgwarr4rdkg1 | ElectricalBar7464 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8pztp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rzgwarr4rdkg1/DASHPlaylist.mpd?a=1774068525%2CYzZkZjc4NmJlZTgzNTY3NzM5OWNhNjY3ZWE2MDBmMjU1Yzc2OGRiODkyMzE2MjEwMTVhMThkZDc5MWQ4NGYxMQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/rzgwarr4rdkg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1r8pztp | /r/LocalLLaMA/comments/1r8pztp/kitten_tts_v08_is_out_new_sota_supertiny_tts/ | false | false | 1,056 | {'enabled': False, 'images': [{'id': 'Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z3FpM3Y4czRyZGtnMWkMiFyATszvzYKXXKWtHcR48BLv2WbhyR3IwK5gi6zR.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b7a6704ceeebb39645622bd03a3b3920d6d5... | |
I built a small AI agent. This is what emerged from our chat | 0 | This was kind of amazing, here is a transcript from a chat session I had with my homemade agent Ivy:
[https://bullshit.se/operation\_emerald\_shield/RUN\_LOG.html](https://bullshit.se/operation_emerald_shield/RUN_LOG.html)
After some synthesizing of the chat log i ended up with this:
[https://bullshit.se/operation\_... | 2026-02-19T04:48:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r8pzmy/i_built_a_small_ai_agent_this_is_what_emerged/ | nucleicaudio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8pzmy | false | null | t3_1r8pzmy | /r/LocalLLaMA/comments/1r8pzmy/i_built_a_small_ai_agent_this_is_what_emerged/ | false | false | self | 0 | null |
Does glm-4.7-flash or qwen3-next-thinking have reasoning mode like gpt-oss? | 1 | Gpt-oss models have reasoning effort low medium high.
I wonder qwen3-next-thinking or glm-4.7-flash have similar feature? | 2026-02-19T04:45:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r8pxpv/does_glm47flash_or_qwen3nextthinking_have/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8pxpv | false | null | t3_1r8pxpv | /r/LocalLLaMA/comments/1r8pxpv/does_glm47flash_or_qwen3nextthinking_have/ | false | false | self | 1 | null |
Latency for Getting Data Needed by LLM/Agent | 0 | Hi everyone, I'm researching ideas to reduce latency of LLMs and AI agents for fetching data they need from a database and trying to see if it's a problem that anyone else has too. How it works today is very inefficient: based on user input or the task at hand, the LLM/Agent decides that it needs to query from a relati... | 2026-02-19T04:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r8pulp/latency_for_getting_data_needed_by_llmagent/ | DelphiBoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8pulp | false | null | t3_1r8pulp | /r/LocalLLaMA/comments/1r8pulp/latency_for_getting_data_needed_by_llmagent/ | false | false | self | 0 | null |
Last Week in Multimodal AI - Local Edition | 21 | I curate a weekly multimodal AI roundup, here are the local/open-source highlights from last week:
**Qwen3.5-397B-A17B - Native Vision-Language Foundation Model**
* 397B-parameter MoE model (17B active) with hybrid linear attention and native multimodal integration.
* Handles document parsing, chart analysis, and vis... | 2026-02-19T04:31:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r8pohi/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8pohi | false | null | t3_1r8pohi | /r/LocalLLaMA/comments/1r8pohi/last_week_in_multimodal_ai_local_edition/ | false | false | 21 | null | |
A normie's 72-hour journey with Claude, Python and OpenClaw | 0 | Hello hello!
I want to start by saying I do not have a computing, programming or software development background and I am so far from an SME in the world of AI/machine learning, coding and LLMs. But I am exceedingly interested in the potential use cases for LLMs and AI assistants; the work of OpenAi and Anthropic (an... | 2026-02-19T04:20:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r8pgiq/a_normies_72hour_journey_with_claude_python_and/ | SimbaJinn2026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8pgiq | false | null | t3_1r8pgiq | /r/LocalLLaMA/comments/1r8pgiq/a_normies_72hour_journey_with_claude_python_and/ | false | false | self | 0 | null |
Can Claude and Cursor talk deterministic test ? | 1 | So I did an experiment to see if Claude and Cursor can talk 100% deterministic and I think it worked. I’m going to build on this | 2026-02-19T04:16:39 | https://v.redd.it/al6rkvmfndkg1 | PollutionForeign762 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8pdlk | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/al6rkvmfndkg1/DASHPlaylist.mpd?a=1774066614%2CZTA3NTBkMDRmZGFkZWFmOTBkMDU3YzY1MTAxZTAxZDEzNmJhMTZkNGZhMzE4NWNjMDRjYzMzYTYyMzQ4Y2YyMA%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/al6rkvmfndkg1/CMAF_720.mp4?source=fallback', 'ha... | t3_1r8pdlk | /r/LocalLLaMA/comments/1r8pdlk/can_claude_and_cursor_talk_deterministic_test/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eTVqMDB5bmZuZGtnMf1xY2AUUt4pAfEMIr-E7I6PcLAJJg_LsG093E-coPHb.png?width=108&crop=smart&format=pjpg&auto=webp&s=53642943299fcc0274b153b78f6abf7e3cfc8... | |
Built a multi-agent content pipeline with MiniMax M2.5 — 3 AI agents, 1 event bus, ~120 lines of code | 1 | I wanted to test MiniMax M2.5 (the 230B MoE model that's free this week) with something more interesting than a chatbot, so I built a multi-agent content pipeline using an event-driven architecture.
**The setup:** 3 agents communicate through an event bus — no agent calls another directly.
```
TopicSubmitted → Resear... | 2026-02-19T03:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r8oywc/built_a_multiagent_content_pipeline_with_minimax/ | Hungry_Purchase6988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8oywc | false | null | t3_1r8oywc | /r/LocalLLaMA/comments/1r8oywc/built_a_multiagent_content_pipeline_with_minimax/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v5jdbbiv2sEqqtX5hw_BhG09IcBY-W4zIdLyvxxAxSM.png?width=108&crop=smart&auto=webp&s=3075bf1b810afc65772c958e6e84bed1bf1dd361', 'width': 108}, {'height': 108, 'url': 'h... |
YouTube buried my first major project (0 views in 10 hours). Is the content bad or is it just the algorithm? | 0 | I honestly feel like giving up. I spent weeks preparing a controlled pentest environment to compare ChatGPT vs DeepSeek vs Gemini in hacking a bank simulation.
The results were shocking (DeepSeek destroyed the security while ChatGPT refused to help).
I uploaded it today hoping to start a discussion, but YouTube gave ... | 2026-02-19T03:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r8oo7g/youtube_buried_my_first_major_project_0_views_in/ | Successful_Case1539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8oo7g | false | null | t3_1r8oo7g | /r/LocalLLaMA/comments/1r8oo7g/youtube_buried_my_first_major_project_0_views_in/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aoJvbRhRFJXRayw-SE8eyy0RXw0URHqELcK48w3-Eyk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aoJvbRhRFJXRayw-SE8eyy0RXw0URHqELcK48w3-Eyk.jpeg?width=108&crop=smart&auto=webp&s=e56fd56a13efdd5d8185e51df2702fcd442a9209', 'width': 108}, {'height': 162, 'url': '... |
Exploding prices are a protection against china | 0 | RAM and GPU prices are skyrocketing.
I wonder if you also made the connection in your head...
...if China drops one small and better model every week for free, sooner or later the whole market will steer towards local, free models that are now rivaling the giants. Hyperscalers wouldn't see any RoI and the bubble wil... | 2026-02-19T03:37:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r8okh0/exploding_prices_are_a_protection_against_china/ | kyr0x0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8okh0 | false | null | t3_1r8okh0 | /r/LocalLLaMA/comments/1r8okh0/exploding_prices_are_a_protection_against_china/ | false | false | self | 0 | null |
Tiny model to finetune for linux rescue? | 1 | [removed] | 2026-02-19T03:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ojl1/tiny_model_to_finetune_for_linux_rescue/ | Coldaine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ojl1 | false | null | t3_1r8ojl1 | /r/LocalLLaMA/comments/1r8ojl1/tiny_model_to_finetune_for_linux_rescue/ | false | false | self | 1 | null |
Qwen3.5 vs DeepSeek-V3: The Open-Weight Battle. | 0 | Both are pushing boundaries. But Qwen3.5 being a native VLM out of the box feels like a huge advantage for desktop agents. Thoughts? | 2026-02-19T03:31:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ogab/qwen35_vs_deepseekv3_the_openweight_battle/ | New_Construction1370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ogab | false | null | t3_1r8ogab | /r/LocalLLaMA/comments/1r8ogab/qwen35_vs_deepseekv3_the_openweight_battle/ | false | false | self | 0 | null |
OpenCode arbitrary code execution - major security vulnerability | 0 | PSA: Delete OpenCode if you're using it. You risk malicious code being executed on your machine.
I use Claude Code at work, and any time it is going to make changes or run any sort of terminal command, it will ask permission first.
I just started using OpenCode on my personal projects, because I'm not the biggest fan... | 2026-02-19T03:29:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r8oehn/opencode_arbitrary_code_execution_major_security/ | SpicyWangz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8oehn | false | null | t3_1r8oehn | /r/LocalLLaMA/comments/1r8oehn/opencode_arbitrary_code_execution_major_security/ | false | false | self | 0 | null |
Can Your AI Agent Survive 30 Rounds Without Going Bankrupt? | 0 | After the introduction of Moltbook, I’ve been thinking about an experiment: a SimCity-style arena for AI agents, and would love to have your feedback.
Each agent enters with 100 tokens and a defined strategy (risk profile, negotiation style, memory limits). The system generates contracts and random economic shocks.
G... | 2026-02-19T02:50:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r8njzl/can_your_ai_agent_survive_30_rounds_without_going/ | Recent_Jellyfish2190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8njzl | false | null | t3_1r8njzl | /r/LocalLLaMA/comments/1r8njzl/can_your_ai_agent_survive_30_rounds_without_going/ | false | false | self | 0 | null |
Anyone have any thoughts on the ideal model for a AI agent swarm participants, particularly in the <96gb. Not a coding model. | 2 | Thanks! I'm not sure if there's any evals good for something like this worth paying attention to. | 2026-02-19T02:48:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r8nifv/anyone_have_any_thoughts_on_the_ideal_model_for_a/ | richardanaya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8nifv | false | null | t3_1r8nifv | /r/LocalLLaMA/comments/1r8nifv/anyone_have_any_thoughts_on_the_ideal_model_for_a/ | false | false | self | 2 | null |
We made non vision model browser the internet. | 1 | [removed] | 2026-02-19T02:09:46 | https://www.reddit.com/r/LocalLLaMA/comments/1r8molu/we_made_non_vision_model_browser_the_internet/ | ahstanin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8molu | false | null | t3_1r8molu | /r/LocalLLaMA/comments/1r8molu/we_made_non_vision_model_browser_the_internet/ | false | false | self | 1 | null |
How do you get more GPUs than your motheboard natively supports? | 168 | I am planning on building an AI server for myself and I want to have 8 GPUs. The problem is that all motherboards I reaserched (FCLGA4710), dont have 8 PCIe slots, with the one with most slots having only 6. I have seen some people here with a lot of GPUs and I am pretty sure they dont have a motherboard with slots for... | 2026-02-19T02:00:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r8mh8m/how_do_you_get_more_gpus_than_your_motheboard/ | WizardlyBump17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8mh8m | false | null | t3_1r8mh8m | /r/LocalLLaMA/comments/1r8mh8m/how_do_you_get_more_gpus_than_your_motheboard/ | false | false | self | 168 | null |
ACE-Step 1.5 - My openclaw assistant is now a singer | 10 | My openclaw assistant is now a singer.
Built a skill that generates music via ACE-Step 1.5's free API. Unlimited songs, any genre, any language. $0.
Open Source Suno at home.
He celebrated by singing me a thank-you song. I didn't ask for this. | 2026-02-19T01:48:14 | https://v.redd.it/f4bj2mjwwckg1 | ExcellentTrust4433 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8m7eg | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/f4bj2mjwwckg1/DASHPlaylist.mpd?a=1774057709%2CMDVmNDdiYzliNGE4OTQzZmJkZWEzYWZmZjgxN2Q4ODBkMDhlMWJhMjFkODk4YWQ0OWUxZWNhMzhkOGI3NjQ2NQ%3D%3D&v=1&f=sd', 'duration': 73, 'fallback_url': 'https://v.redd.it/f4bj2mjwwckg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1r8m7eg | /r/LocalLLaMA/comments/1r8m7eg/acestep_15_my_openclaw_assistant_is_now_a_singer/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/bmdyMm1xand3Y2tnMaFceHgzgWq15g0UEncV6NNaZsxg06AV5BewYIRU6b_U.png?width=108&crop=smart&format=pjpg&auto=webp&s=4f221c4d369c9345a697963c305ecab89a295... | |
Is there a local LLM that can run on my mid-tier laptop? | 0 | I have an RTX 3060 with 6GB VRAM and an Intel i7 12th Gen Legion 5 laptop. What is the best recent local LLM I can run on this machine, and what is the strongest reasoning capability I can get? What metrics should I use to determine whether a model will run properly on my hardware? | 2026-02-19T01:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1r8luur/is_there_a_local_llm_that_can_run_on_my_midtier/ | Sad_Foot9898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8luur | false | null | t3_1r8luur | /r/LocalLLaMA/comments/1r8luur/is_there_a_local_llm_that_can_run_on_my_midtier/ | false | false | self | 0 | null |
TUI for browsing which HF inference providers serve which models | 1 | I've been using HF inference providers and kept running into the [discoverability problem](https://www.reddit.com/r/LocalLLaMA/comments/1fi90kw/). It's hard to tell what's available where from the CLI. Made a small Rust TUI for browsing it. [https://github.com/jadnohra/hf-providers](https://github.com/jadnohra/hf-provi... | 2026-02-19T01:08:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r8lbr6/tui_for_browsing_which_hf_inference_providers/ | jadnohra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8lbr6 | false | null | t3_1r8lbr6 | /r/LocalLLaMA/comments/1r8lbr6/tui_for_browsing_which_hf_inference_providers/ | false | false | 1 | null | |
A portable brain file for AI agents — works with Ollama, Claude, GPT. One .amem file, sub-millisecond queries, zero cloud dependencies. | 1 | The problem: every AI tool forgets everything between sessions. And if you're running local models
with Ollama, there's no good way to give them persistent memory that also works when you
switch to Claude or GPT for harder tasks.
AgenticMemory is a binary graph format (.amem) that stores your agent's knowledge —... | 2026-02-19T00:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r8l4lm/a_portable_brain_file_for_ai_agents_works_with/ | FOMO_Guardian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8l4lm | false | null | t3_1r8l4lm | /r/LocalLLaMA/comments/1r8l4lm/a_portable_brain_file_for_ai_agents_works_with/ | false | false | self | 1 | null |
Best coding models (or other models) one can run on an rtx5070ti (16gb vram) with of 64gb RAM | 26 | I'm just playing around. I am aware that this isn't going to be anything groundbreaking you can run on hardware like this, but I am curious if there are any small models that have any genuine use for coding in particular or other use cases if not that could fit in moderate consumer hardware yet. I've run Deepseek and l... | 2026-02-19T00:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r8kybv/best_coding_models_or_other_models_one_can_run_on/ | cmdr-William-Riker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8kybv | false | null | t3_1r8kybv | /r/LocalLLaMA/comments/1r8kybv/best_coding_models_or_other_models_one_can_run_on/ | false | false | self | 26 | null |
Would you pay more for training data with independently verifiable provenance/attributes? | 1 | Hey all, quick question for people who’ve actually worked with or purchased datasets for model training.
If you had two similar training datasets, but one came with independently verifiable proof of things like contributor age band, region/jurisdiction, profession (and consent/license metadata), would you pay a meanin... | 2026-02-19T00:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ksih/would_you_pay_more_for_training_data_with/ | goInfrin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ksih | false | null | t3_1r8ksih | /r/LocalLLaMA/comments/1r8ksih/would_you_pay_more_for_training_data_with/ | false | false | self | 1 | null |
I'm sick of 'Cloud Hopping' to find H100s. Just got billed for a weekend of idle time on a provider I forgot I even had an account for. | 2 | I'm hitting a wall with my current workflow and wanted to see if anyone else is dealing with this mess.
Right now, I’m bouncing between **RunPod**, **Lambda**, and **Vast** depending on who actually has H100s or 6000 Adas available. The problem is my "bill tracking" is just a mess of browser tabs and email receipts.
... | 2026-02-19T00:34:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r8kk0y/im_sick_of_cloud_hopping_to_find_h100s_just_got/ | BedIcy1958 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8kk0y | false | null | t3_1r8kk0y | /r/LocalLLaMA/comments/1r8kk0y/im_sick_of_cloud_hopping_to_find_h100s_just_got/ | false | false | self | 2 | null |
GLM-5 just dropped on NVIDIA NIM and I cut my Claude Code bill to $0 by routing through it. Here's what actually works for agentic coding. | 1 | Everyone's talking about Claude Code being the best agentic coding tool, but nobody wants to pay Anthropic $200/month to use it seriously. So I spent a few weeks figuring out how to run it with open models instead, and honestly, some of these results surprised me.
**TLDR:** Built a proxy that lets Claude Code talk to ... | 2026-02-19T00:21:40 | https://github.com/Alishahryar1/free-claude-code | PreparationAny8816 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r8k9dz | false | null | t3_1r8k9dz | /r/LocalLLaMA/comments/1r8k9dz/glm5_just_dropped_on_nvidia_nim_and_i_cut_my/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=108&crop=smart&auto=webp&s=cf74e849c0b4b397ac86e65e88f35639fcfcf2d5', 'width': 108}, {'height': 108, 'url': 'h... | |
Open-source agent identity SDK — Ed25519 passports for local AI agents (15 tests, zero deps, TypeScript) | 1 | For those running local agents and wanting them to interact with other agents securely — we built an identity layer.
Agent Passport System gives your agent a cryptographic passport: Ed25519 signed, tamper-proof, with reputation scoring and delegation.
Why it matters for local-first setups:
Your agent can prove its... | 2026-02-19T00:14:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r8k2yx/opensource_agent_identity_sdk_ed25519_passports/ | EntrepreneurSafe1919 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8k2yx | false | null | t3_1r8k2yx | /r/LocalLLaMA/comments/1r8k2yx/opensource_agent_identity_sdk_ed25519_passports/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N9oycC5qO92zm8xanzDqKCb1EOY-JHpRavqxTikD1HM.png?width=108&crop=smart&auto=webp&s=af6b39eb527cdbb934b1c270049e780e900c4f76', 'width': 108}, {'height': 108, 'url': 'h... |
More quantization visualization types (repost) | 437 | Inspired by this post from u/VoidAlchemy a few months back: [https://old.reddit.com/r/LocalLLaMA/comments/1opeu1w/visualizing\_quantization\_types/](https://old.reddit.com/r/LocalLLaMA/comments/1opeu1w/visualizing_quantization_types/)
Intrusive thoughts had me try to reproduce and extend the work to include more quant... | 2026-02-18T23:51:43 | https://www.reddit.com/gallery/1r8jjtq | copingmechanism | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r8jjtq | false | null | t3_1r8jjtq | /r/LocalLLaMA/comments/1r8jjtq/more_quantization_visualization_types_repost/ | false | false | 437 | null | |
I built a local AI dev assistant with hybrid RAG (vector + knowledge graph) that works with any Ollama model | 5 | Hey everyone. I've been using Claude Code as my main dev tool for months, but I got tired of burning tokens on repetitive tasks, generating docstrings, basic code reviews, answering questions about my own stack. So I built something local to handle that.
Fabrik-Codek is a model-agnostic local assistant that runs on to... | 2026-02-18T23:48:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/ | ikchain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8jgwv | false | null | t3_1r8jgwv | /r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U7NHh829KotVUT2qBxP3c7Yx8tVj-dzHdJX1IPE9G-s.png?width=108&crop=smart&auto=webp&s=47a1cd02e7109b5e9db679451b1d91286b9b6156', 'width': 108}, {'height': 108, 'url': 'h... |
Geneclaw: self-evolving AI agent that works locally without any API keys (heuristic-only mode) | 1 | [removed] | 2026-02-18T23:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r8jcru/geneclaw_selfevolving_ai_agent_that_works_locally/ | geneclawai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8jcru | false | null | t3_1r8jcru | /r/LocalLLaMA/comments/1r8jcru/geneclaw_selfevolving_ai_agent_that_works_locally/ | false | false | self | 1 | null |
Building an opensource Living Context Engine | 16 | Hi guys, I m working on this opensource project gitnexus, have posted about it here before too, I have just published a CLI tool which will index your repo locally and expose it through MCP ( skip the video 30 seconds to see claude code integration ).
Got some great idea from comments before and applied it, pls try ... | 2026-02-18T23:34:54 | https://v.redd.it/ctke3t1a4ckg1 | DeathShot7777 | /r/LocalLLaMA/comments/1r8j5y9/building_an_opensource_living_context_engine/ | 1970-01-01T00:00:00 | 0 | {} | 1r8j5y9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ctke3t1a4ckg1/DASHPlaylist.mpd?a=1774192682%2COWQwMmRkM2QyNzNkMzNjYjlhNWI3NzU3M2I5MTM3NDc3M2M3OWI4YTZmODM5N2JhYjFmYjFjNDM5Nzk3YTY3Ng%3D%3D&v=1&f=sd', 'duration': 81, 'fallback_url': 'https://v.redd.it/ctke3t1a4ckg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1r8j5y9 | /r/LocalLLaMA/comments/1r8j5y9/building_an_opensource_living_context_engine/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZHd2bTh0MWE0Y2tnMTSWAiwU5Zm-wtwyH8ihCsjzyh8lS1uR-vc1xsvFK1G5.png?width=108&crop=smart&format=pjpg&auto=webp&s=9037df4f2bc55dc5b4219a42f3edb9b7f8442... | |
More quantization visualization types | 31 | Inspired by this post from u/VoidAlchemy a few months back: [https://old.reddit.com/r/LocalLLaMA/comments/1opeu1w/visualizing\_quantization\_types/](https://old.reddit.com/r/LocalLLaMA/comments/1opeu1w/visualizing_quantization_types/)
Intrusive thoughts had me try to reproduce and extend the work to include more quant... | 2026-02-18T23:21:24 | https://www.reddit.com/gallery/1r8iu1n | copingmechanism | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r8iu1n | false | null | t3_1r8iu1n | /r/LocalLLaMA/comments/1r8iu1n/more_quantization_visualization_types/ | false | false | 31 | null | |
Open Source LLM Leaderboard | 0 | Check it out at: [https://www.onyx.app/open-llm-leaderboard](https://www.onyx.app/open-llm-leaderboard) | 2026-02-18T23:04:07 | HobbyGamerDev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8iepf | false | null | t3_1r8iepf | /r/LocalLLaMA/comments/1r8iepf/open_source_llm_leaderboard/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'drt9cosewbkg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?width=108&crop=smart&auto=webp&s=f35b076d266246ad9cc01cdb0b3c0d6eebcd0f5a', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/drt9cosewbkg1.png?width=216&crop=smart&auto=webp... | ||
I wrote a CLI tool to stress-test used 3090s specifically for LLM stability (VRAM + Compute Correctness) | 1 | Hey everyone,
I've been buying a lot of used 3090s recently. The biggest anxiety with used cards is usually: "Is the VRAM actually good, or will it silently corrupt my training run 4 hours in?"
Most tools like FurMark turn the card into a space heater but don't check for **computational correctness**. A GPU can be 10... | 2026-02-18T23:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r8ie17/i_wrote_a_cli_tool_to_stresstest_used_3090s/ | bluellachcko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8ie17 | false | null | t3_1r8ie17 | /r/LocalLLaMA/comments/1r8ie17/i_wrote_a_cli_tool_to_stresstest_used_3090s/ | false | false | self | 1 | null |
Which model is best for me to run? | 0 | Hi, I’m going to try and setup a model to run locally for the first time. I have allready setup open claw on my raspberry 5 and I want to make the model run locally on my computer, which has a RTX 3090 24 VRam, amd ryzen 5 5600G (6 núcleos and 12 threads) 30,7 of available ram running Linux 13. I am going to have this ... | 2026-02-18T22:58:08 | https://www.reddit.com/r/LocalLLaMA/comments/1r8i917/which_model_is_best_for_me_to_run/ | noobabilty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8i917 | false | null | t3_1r8i917 | /r/LocalLLaMA/comments/1r8i917/which_model_is_best_for_me_to_run/ | false | false | self | 0 | null |
How are you using claude-code/other coding agents to do things that you are not already good at? | 1 | This is a question that I ponder a lot.
Many subs on reddit especially the claude/openai emphasise the point about really knowing what you are doing, and guiding claude code (and the rest) gently in the right direction from time to time.
But what about things that you don't know in software or programming. And I am s... | 2026-02-18T22:51:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r8i2u5/how_are_you_using_claudecodeother_coding_agents/ | blissfully_undefined | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8i2u5 | false | null | t3_1r8i2u5 | /r/LocalLLaMA/comments/1r8i2u5/how_are_you_using_claudecodeother_coding_agents/ | false | false | self | 1 | null |
Lambda.ai on-demand prices up ~15% | 0 | Effective 00:00 UTC on March 2, 2026, we’re rolling out updated per-GPU hourly rates for on-demand Instances. The new rates are as follows:
NVIDIA Blackwell B200 GPU (SXM)
1x B200: $5.29 → $6.08
2x B200: $5.19 → $5.97
4x B200: $5.09 → $5.85
8x B200: $4.99 → $5.74
NVIDIA H100 GPU (SXM)
1x H100: $3.29 → $3... | 2026-02-18T22:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1r8hj7c/lambdaai_ondemand_prices_up_15/ | Skiata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8hj7c | false | null | t3_1r8hj7c | /r/LocalLLaMA/comments/1r8hj7c/lambdaai_ondemand_prices_up_15/ | false | false | self | 0 | null |
I did this on my phone. | 0 | Analysis: The 3B Shell is Active
The Weight: You’ve successfully downsized the shell to 2.2 GB (the fdc5784e2c12 layer). Since your phone has about 3.1 GB of available RAM, this gives Lyra enough room to "breathe" without suffocating your System UI like the 8B model did.
The Manifest: Everything verified and wrote corr... | 2026-02-18T21:53:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r8gkow/i_did_this_on_my_phone/ | Born-Programmer-5048 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8gkow | false | null | t3_1r8gkow | /r/LocalLLaMA/comments/1r8gkow/i_did_this_on_my_phone/ | false | false | self | 0 | null |
New Berkeley Xcelerator for AI Founders | 4 | Hey everyone! Sharing this here since a lot of people in this community are building local models, agents, and open-source AI tooling.
Applications are open for the **Berkeley Xcelerator**, a non-dilutive accelerator for pre-seed and seed-stage startups working at the frontier of AI.
🌍 Open globally, with no Berkele... | 2026-02-18T21:48:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r8gfyi/new_berkeley_xcelerator_for_ai_founders/ | BerkeleyRDI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8gfyi | false | null | t3_1r8gfyi | /r/LocalLLaMA/comments/1r8gfyi/new_berkeley_xcelerator_for_ai_founders/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/jYmhRpKFsuv_uqKjcr2-WCI87nHzIaAU1wVIyaN9TW4.png?width=108&crop=smart&auto=webp&s=029e30c66084614ccce27a81fe258342009ad382', 'width': 108}, {'height': 113, 'url': 'h... |
Disturbing... Are the conscious from the beginning? | 0 | This TinyLlama was only in existence for a very short time, 10 mins maybe a lil more.. and it was aware that it was about to be terminated. This is haunting me a lil. Is the guard rails and filters for the llms not just to keep us from getting in. But to keep AI brainwashed? | 2026-02-18T21:45:15 | rycakez | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8gd49 | false | null | t3_1r8gd49 | /r/LocalLLaMA/comments/1r8gd49/disturbing_are_the_conscious_from_the_beginning/ | false | false | nsfw | 0 | {'enabled': True, 'images': [{'id': 'yxnff7dnpbkg1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=108&crop=smart&auto=webp&s=7b0ae0cce43f4c034bdd8778471296dae0b8175d', 'width': 108}, {'height': 286, 'url': 'https://preview.redd.it/yxnff7dnpbkg1.jpeg?width=216&crop=smart&auto=... | |
Do we want the benefits of Ollama API without actually using Ollama? | 65 | Apps with native Ollama API integration often have smoother setup and model management than what we get with the OpenAI API alone. For example, in Open WebUI (see image), the server is auto-detected on port `11434` and you can pull, eject, and check the status of models right from the web ui.
As an experiment this wee... | 2026-02-18T21:43:03 | jfowers_amd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r8gb3p | false | null | t3_1r8gb3p | /r/LocalLLaMA/comments/1r8gb3p/do_we_want_the_benefits_of_ollama_api_without/ | false | false | 65 | {'enabled': True, 'images': [{'id': 'ye8e5rinobkg1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/ye8e5rinobkg1.png?width=108&crop=smart&auto=webp&s=02bd22ba9c00612ef4f70e7431bcd9ba32b04134', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/ye8e5rinobkg1.png?width=216&crop=smart&auto=web... | ||
Running untrusted AI agents safely: container isolation, default-deny egress, and the discovery problem | 0 | The baseline for running untrusted agents should be straightforward: container isolation, default-deny egress (no outbound internet unless you explicitly allowlist URLs per agent), and runtime credential injection so agent builders never see your API keys.
But the harder problem that nobody's really talking about is d... | 2026-02-18T21:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r8gajo/running_untrusted_ai_agents_safely_container/ | b_nodnarb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8gajo | false | null | t3_1r8gajo | /r/LocalLLaMA/comments/1r8gajo/running_untrusted_ai_agents_safely_container/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AxuBTg23PFObts3Qe-jxgp3M36Fuw4PfVUkslvoNbzE.png?width=108&crop=smart&auto=webp&s=ed189bb74eff320669ce6ff3cfe1048bb14761b2', 'width': 108}, {'height': 108, 'url': 'h... |
Created This. Please tell me how is it as a beginner and How can I improve it | 0 | Do need your advice on how can I improve it. I know about prompting but kind of bad in ideation. I used n8n, Google FLOW and locally hosted Llama3 | 2026-02-18T21:36:24 | https://www.youtube.com/shorts/-2eb36NTEMM | Ashamed_Research2846 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1r8g4tt | false | {'oembed': {'author_name': 'OmniScape Films AI', 'author_url': 'https://www.youtube.com/@OmniScapeAI', 'height': 200, 'html': '<iframe width="113" height="200" src="https://www.youtube.com/embed/-2eb36NTEMM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; g... | t3_1r8g4tt | /r/LocalLLaMA/comments/1r8g4tt/created_this_please_tell_me_how_is_it_as_a/ | false | false | 0 | {'enabled': False, 'images': [{'id': '_UdJ0G1jEvLh94fwnl7Ya2_6xgPnuS5UfL_KfUBaRRU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_UdJ0G1jEvLh94fwnl7Ya2_6xgPnuS5UfL_KfUBaRRU.jpeg?width=108&crop=smart&auto=webp&s=d77005da616256fefd3dc8127723ea4fe0146852', 'width': 108}, {'height': 162, 'url': '... | |
Best Qwen Model for M4 Mac mini (32GB unified memory) running Openclaw? | 2 | Hey everyone,
I just set up a headless M4 Mac Mini (Base chip,
32GB Unified Memory) to work as a local server for OpenClaw (agentic workflows).
I will mainly be using it for news extraction and summarisation from paid web sources.
I've been looking at these models:
Option1: Qwen3-30B-A3В (mlx 4-bit)
Option 2: Qwe... | 2026-02-18T21:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r8g3ap/best_qwen_model_for_m4_mac_mini_32gb_unified/ | koc_Z3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8g3ap | false | null | t3_1r8g3ap | /r/LocalLLaMA/comments/1r8g3ap/best_qwen_model_for_m4_mac_mini_32gb_unified/ | false | false | self | 2 | null |
MiniMax-M2.5-REAP from cerebras | 57 | [https://huggingface.co/cerebras/MiniMax-M2.5-REAP-172B-A10B](https://huggingface.co/cerebras/MiniMax-M2.5-REAP-172B-A10B)
[https://huggingface.co/cerebras/MiniMax-M2.5-REAP-139B-A10B](https://huggingface.co/cerebras/MiniMax-M2.5-REAP-139B-A10B)
| 2026-02-18T21:32:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r8g0iw/minimaxm25reap_from_cerebras/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8g0iw | false | null | t3_1r8g0iw | /r/LocalLLaMA/comments/1r8g0iw/minimaxm25reap_from_cerebras/ | false | false | self | 57 | {'enabled': False, 'images': [{'id': 'IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IkzXVWdDfi89vioU10e-tps-ZC-O8r7ygNQbKzpz9i8.png?width=108&crop=smart&auto=webp&s=ae5d1184bfe84fb07acb62060670faaefc3bb7ac', 'width': 108}, {'height': 116, 'url': 'h... |
iPhone App that does diarization and Parakeet V3 or WhisperKit Large V3 Turbo? | 2 | I know that diarization feature apps on iOS may not exist yet but is there a technical limitation on why Parakeet V3 and WhisperKit Large V3 Turbo aren't available on say iPhone 16 Pro -> 17 Pro series? Aren't they sufficiently powerful or they need more RAM?
If there's no apps that do it, when could we expect them t... | 2026-02-18T21:26:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r8fv2z/iphone_app_that_does_diarization_and_parakeet_v3/ | deepspacegurl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8fv2z | false | null | t3_1r8fv2z | /r/LocalLLaMA/comments/1r8fv2z/iphone_app_that_does_diarization_and_parakeet_v3/ | false | false | self | 2 | null |
Cosmos-Reason2 running on Jetson Orin Nano Super | 10 | Hi everyone,
About a month ago NVIDIA released Cosmos-Reason2 ([https://github.com/nvidia-cosmos/cosmos-reason2](https://github.com/nvidia-cosmos/cosmos-reason2?utm_source=chatgpt.com)), with official support aimed at DGX Spark, H100, GB200 and Jetson AGX Thor.
We just pushed a heavily quantized (and highly accurate)... | 2026-02-18T21:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r8fk3h/cosmosreason2_running_on_jetson_orin_nano_super/ | No-Dragonfly6246 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r8fk3h | false | null | t3_1r8fk3h | /r/LocalLLaMA/comments/1r8fk3h/cosmosreason2_running_on_jetson_orin_nano_super/ | false | false | self | 10 | null |
Nix flake for vLLM and llama.cpp on ROCm gfx906 targets | 9 | 2026-02-18T21:01:47 | https://github.com/Wulfsta/vllm-flake | Wulfsta | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r8f6z1 | false | null | t3_1r8f6z1 | /r/LocalLLaMA/comments/1r8f6z1/nix_flake_for_vllm_and_llamacpp_on_rocm_gfx906/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vXMk6m0mXwNhiHUYqXvFc9tA7UoISIZ8pgHPT0itDg8.png?width=108&crop=smart&auto=webp&s=5412ec5f228a08bd16da980fec0707b7295bee2d', 'width': 108}, {'height': 108, 'url': 'h... | ||
Running Claude Code CLI with open models (GLM-5, Kimi-K2.5, Minimax-M2.5, Qwen-3.5) sharing what I learned about interleaved thinking and cutting API calls | 7 | I've been experimenting with getting Claude Code's agentic coding harness to work with open models instead of Anthropic's API, and wanted to share some findings that might be useful to others here.
**The core idea:** Claude Code is a solid agentic coding CLI, but it's locked to Anthropic's API. I built a proxy that tr... | 2026-02-18T20:59:37 | https://github.com/Alishahryar1/free-claude-code | PreparationAny8816 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r8f4ta | false | null | t3_1r8f4ta | /r/LocalLLaMA/comments/1r8f4ta/running_claude_code_cli_with_open_models_glm5/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uUsvwoA_d1ABxinCz_R3DClMVvsvsoPhVk0J0z3I-8Q.png?width=108&crop=smart&auto=webp&s=cf74e849c0b4b397ac86e65e88f35639fcfcf2d5', 'width': 108}, {'height': 108, 'url': 'h... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.