Dataset columns (type and observed range):

| column | dtype | range / classes |
|---|---|---|
| title | string | lengths 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | lengths 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2026-03-04 02:14:14 |
| url | string | lengths 0–878 |
| author | string | lengths 3–20 |
| domain | string | lengths 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | lengths 7–7 |
| locked | bool | 2 classes |
| media | string | lengths 646–1.8k |
| name | string | lengths 10–10 |
| permalink | string | lengths 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | lengths 301–5.01k |
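The column list above can be read as a record schema. Below is a minimal, hypothetical sketch of a validator for one record of this dataset — the field names and length constraints come from the schema table, but the `validate` function and the sample record are illustrative assumptions, not part of the dataset itself:

```python
# Minimal validator for one record of the schema above.
# Field names and constraints follow the schema table; everything
# else (function, sample values) is a hypothetical sketch.
SCHEMA = {
    "title": str, "score": int, "selftext": str, "created": str,
    "url": str, "author": str, "domain": str, "edited": str,
    "gilded": int, "gildings": str, "id": str, "locked": bool,
    "media": str, "name": str, "permalink": str, "spoiler": bool,
    "stickied": bool, "thumbnail": str, "ups": int, "preview": str,
}

def validate(record: dict) -> list[str]:
    """Return a list of schema violations for `record` (empty if valid)."""
    errors = []
    for field, typ in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], typ):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    if not errors:
        # Spot-checks implied by the schema's length rows.
        if len(record["id"]) != 7:
            errors.append("id must be exactly 7 chars")
        if not (1 <= len(record["title"]) <= 300):
            errors.append("title length out of range 1..300")
    return errors

sample = {
    "title": "I feel left behind. What is special about OpenClaw?",
    "score": 166, "selftext": "...", "created": "2026-02-20T00:44:48",
    "url": "https://www.reddit.com/r/LocalLLaMA/comments/1r9gve8/",
    "author": "Recent_Jellyfish2190", "domain": "self.LocalLLaMA",
    "edited": "1970-01-01T00:00:00", "gilded": 0, "gildings": "{}",
    "id": "1r9gve8", "locked": False, "media": "null",
    "name": "t3_1r9gve8",
    "permalink": "/r/LocalLLaMA/comments/1r9gve8/",
    "spoiler": False, "stickied": False, "thumbnail": "self",
    "ups": 166, "preview": "null",
}
print(validate(sample))  # → []
```

Note that `edited`, `created`, `media`, and `preview` are treated here as plain strings, matching how the viewer serializes them; a stricter sketch could parse timestamps and JSON.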
title: I feel left behind. What is special about OpenClaw?
score: 166
selftext: While there are tools like Manus ai, It seems like everyone is excited about OpenClaw lately, and I genuinely don’t fully understand the differentiation. What exactly is the shift here? Is it UX, architecture, control layer, distribution? Not criticizing, just trying to understand what I’m missing.
created: 2026-02-20T00:44:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/
author: Recent_Jellyfish2190
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9gve8
locked: false
media: null
name: t3_1r9gve8
permalink: /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/
spoiler: false
stickied: false
thumbnail: self
ups: 166
preview: null
title: My wake up call: How a smart AI agent cost us $450 in a weekend.
score: 0
selftext: This is a bit of a confessional. Two months ago, my team deployed what we thought was a brilliant autonomous agent. It was designed to parse customer feedback from various sources, classify it, and generate summary reports. It worked flawlessly in staging. On a Friday afternoon, we pushed it to production, connected t...
created: 2026-02-20T00:35:59
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9goek/my_wake_up_call_how_a_smart_ai_agent_cost_us_450/
author: mark_bolimer
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9goek
locked: false
media: null
name: t3_1r9goek
permalink: /r/LocalLLaMA/comments/1r9goek/my_wake_up_call_how_a_smart_ai_agent_cost_us_450/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: What multi-agent use cases (e.g., from OpenClaw) actually impressed you?
score: 0
selftext: What multi-agent use cases (e.g., from OpenClaw) actually impressed you? I’ve seen some YouTube videos floating around, but I’m more interested in real-world workflows that made you stop and think about how cool or useful it seemed. Hoping to hear some ideas that seem practical and useful, not just theoretical which...
created: 2026-02-20T00:32:06
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9gl46/what_multiagent_use_cases_eg_from_openclaw/
author: JozuJD
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9gl46
locked: false
media: null
name: t3_1r9gl46
permalink: /r/LocalLLaMA/comments/1r9gl46/what_multiagent_use_cases_eg_from_openclaw/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: Agent Stoat - Local Agent for Ollama
score: 0
selftext: I vibecoded a terminal coding agent in the vein of Claude Code / Goose, but designed specifically around the constraints of running small local models. No cloud, no API keys, just Ollama. Why another agent? Most agents assume you have a capable model with a huge context window. Agent Stoat is built with the opposite ...
created: 2026-02-20T00:11:24
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9g3mu/agent_stoat_local_agent_for_ollama/
author: elemein
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9g3mu
locked: false
media: null
name: t3_1r9g3mu
permalink: /r/LocalLLaMA/comments/1r9g3mu/agent_stoat_local_agent_for_ollama/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': 'cblInzSgqVeFfIWaUeMOwk4NSPkMIE62sxW16r1WQas', 'resolutions': [{'height': 110, 'url': 'https://external-preview.redd.it/cblInzSgqVeFfIWaUeMOwk4NSPkMIE62sxW16r1WQas.png?width=108&crop=smart&auto=webp&s=0bed1505d7bddbdba24e48703a421d781b400d63', 'width': 108}, {'height': 221, 'url': '...
title: 400 gbps on 2x DGX Spark
score: 4
selftext: I've seen many configs for clustering 2 DGX Spark, many advise to use 2 cables to fully use the 200 gbps of the DGX, so I bought two cables and started testing. I saw some comments about 2 cables providing only better stability and a slight edge over a single cable, so I tested performance both on one cable vs two cab...
created: 2026-02-20T00:05:11
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9fyav/400_gbps_on_2x_dgx_spark/
author: Lorelabbestia
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9fyav
locked: false
media: null
name: t3_1r9fyav
permalink: /r/LocalLLaMA/comments/1r9fyav/400_gbps_on_2x_dgx_spark/
spoiler: false
stickied: false
thumbnail: https://external-preview…cdec00da57aa607d
ups: 4
preview: null
title: Anyone still using DGX-1 or DGX-2 for modern AI workloads? What models and setups are you running?
score: 1
selftext: Hi everyone, I'm curious to know if anyone here is still actively using NVIDIA DGX-1 or DGX-2 systems for AI workloads in 2026, especially with the V100 GPUs. I’m currently working with these systems myself, and while they’re still very capable in terms of raw compute and VRAM, I’ve been running into several limitati...
created: 2026-02-20T00:00:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9ftk8/anyone_still_using_dgx1_or_dgx2_for_modern_ai/
author: Kirito_5
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9ftk8
locked: false
media: null
name: t3_1r9ftk8
permalink: /r/LocalLLaMA/comments/1r9ftk8/anyone_still_using_dgx1_or_dgx2_for_modern_ai/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: Code Dataset from Github's Top Ranked Developers (1.3M+ Source Code Files)
score: 18
selftext: I curated 1.3M+ source code files from GitHub's top ranked developers of all time, and compiled a dataset to train LLMs to write well-structured, production-grade code. The dataset covers 80+ languages including Python, TypeScript, Rust, Go, C/C++, and more.
created: 2026-02-19T23:53:20
url: https://huggingface.co/datasets/ronantakizawa/github-top-code
author: Ok_Employee_6418
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9fnj6
locked: false
media: null
name: t3_1r9fnj6
permalink: /r/LocalLLaMA/comments/1r9fnj6/code_dataset_from_githubs_top_ranked_developers/
spoiler: false
stickied: false
thumbnail: https://external-preview…a907561a2bf670ff
ups: 18
preview: {'enabled': False, 'images': [{'id': 'SQEBr2tglBIey9kmktnLo3rBZqi7i9mYR5O2kFvv_V8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SQEBr2tglBIey9kmktnLo3rBZqi7i9mYR5O2kFvv_V8.png?width=108&crop=smart&auto=webp&s=bf9f61e55830bb2d060ba715bcb835d0d904a19e', 'width': 108}, {'height': 116, 'url': 'h...
title: What will I gain going from 30GB VRAM to 48?
score: 0
selftext: I can currently run up to a 70B Q2 at around 11-15T/s. I think 40GB VRAM will probably get me up to 70B Q4, right? Now it’s just me trying to save up enough money for another 3090 😭
created: 2026-02-19T23:52:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9fmui/what_will_i_gain_going_from_30gb_vram_to_48/
author: Borkato
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9fmui
locked: false
media: null
name: t3_1r9fmui
permalink: /r/LocalLLaMA/comments/1r9fmui/what_will_i_gain_going_from_30gb_vram_to_48/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: Your AI agent's biggest enemy is its own budget.
score: 0
selftext: Most teams discover this the hard way – a $5 test run becomes a $500 weekend surprise. The problem isn't one expensive call. It's "death by a thousand cuts" – dozens of cheap, repetitive calls running in a loop while you sleep. **A simple** `max_iterations` **limit won't save you.** Some iterations cost 100x more than...
created: 2026-02-19T23:51:19
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9flsr/your_ai_agents_biggest_enemy_is_its_own_budget/
author: mark_bolimer
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9flsr
locked: false
media: null
name: t3_1r9flsr
permalink: /r/LocalLLaMA/comments/1r9flsr/your_ai_agents_biggest_enemy_is_its_own_budget/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: **🤖🍄 RIDER-PI UPDATE: Voice, Ears, Crash Protection & Birthday Dancing**
score: 1
selftext: Today's focus: making the body actually \*usable\* – with some festive chaos along the way. \*\*Spontaneous Dance for roommate :\*\* Autonomous movement sequence for her 24th birthday – not scripted, decided from context. The little bastard danced on his own. Happy birthday. 🎉 \*\*The Pot Incident (aka why crash pr...
created: 2026-02-19T23:50:30
url: https://v.redd.it/8nvtkvhugjkg1
author: Spinning-Complex
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9fl3l
locked: false
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8nvtkvhugjkg1/DASHPlaylist.mpd?a=1774137066%2CNzliNmVlY2I1MmNjMjBhNzQ5YTdiZDZjNDU5Y2Q2Y2M0NTg3M2E3OGM1YzQzOTg1OTVmMzY2ZDAxNDA1NWUwYw%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/8nvtkvhugjkg1/CMAF_1080.mp4?source=fallback', 'h...
name: t3_1r9fl3l
permalink: /r/LocalLLaMA/comments/1r9fl3l/riderpi_update_voice_ears_crash_protection/
spoiler: false
stickied: false
thumbnail: https://external-preview…b9e04348adf173c1
ups: 1
preview: {'enabled': False, 'images': [{'id': 'MDNxdGFsM3VnamtnMc-aIgyRoO3lMEJhg4T-YzAMeNppfgBWNA3FASXlWPgb', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/MDNxdGFsM3VnamtnMc-aIgyRoO3lMEJhg4T-YzAMeNppfgBWNA3FASXlWPgb.png?width=108&crop=smart&format=pjpg&auto=webp&s=a1aa3bbd59563071dcf2e28c389c748d6dba...
title: We will have Gemini 3.1 before Gemma 4...
score: 241
selftext: Appeared on Antigravity...
created: 2026-02-19T23:49:53
url: https://i.redd.it/hd5oal2ngjkg1.png
author: xandep
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9fkks
locked: false
media: null
name: t3_1r9fkks
permalink: /r/LocalLLaMA/comments/1r9fkks/we_will_have_gemini_31_before_gemma_4/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…2dcb4fa61732fdc7
ups: 241
preview: {'enabled': True, 'images': [{'id': 'hd5oal2ngjkg1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/hd5oal2ngjkg1.png?width=108&crop=smart&auto=webp&s=e14dfb069ca9d2773c0eb6da4b9c83b2c2760a02', 'width': 108}, {'height': 214, 'url': 'https://preview.redd.it/hd5oal2ngjkg1.png?width=216&crop=smart&auto=we...
title: Launching NavD - Persistent conversational memory for AI agents, Not a vector database
score: 0
selftext: I just **released NAVD (Not a vector database), A persistent conversational memory for AI agents. Two files, zero databases.** This is a side project I built while building my AI agent. 🔗 GitHub: [https://github.com/pbanavara/lsm-ei](https://github.com/pbanavara/lsm-ei) 📦 npm: `npm install navd-ai` 📄 Licen...
created: 2026-02-19T23:38:08
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9fas1/launching_navd_persistent_conversational_memory/
author: Altruistic_Welder
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9fas1
locked: false
media: null
name: t3_1r9fas1
permalink: /r/LocalLLaMA/comments/1r9fas1/launching_navd_persistent_conversational_memory/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': '3V4Bjg5-CudOogdiIflKW5-vqB11yzlGg90J91LQDTA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3V4Bjg5-CudOogdiIflKW5-vqB11yzlGg90J91LQDTA.png?width=108&crop=smart&auto=webp&s=91a955fd0f8221dcee616e9bfd14d5d6481f859d', 'width': 108}, {'height': 108, 'url': 'h...
title: I built a 438-question biomedical forecasting dataset with the Lightning Rod SDK
score: 0
selftext: I built a biomedical forecasting dataset with the Lightning Rod SDK and wanted to share what I learned. My background is in bioinformatics and biostatistics, so I decided to apply the Future-as-Label methodology to a domain I know well: biomedical and public health events. The idea was to see how well this approach wo...
created: 2026-02-19T23:36:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9f9fk/i_built_a_438question_biomedical_forecasting/
author: fv10bio
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9f9fk
locked: false
media: null
name: t3_1r9f9fk
permalink: /r/LocalLLaMA/comments/1r9f9fk/i_built_a_438question_biomedical_forecasting/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': '1UQMeGqzzecP9tdZmC93MyFAKvGZj2l96X159Pm7CQw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1UQMeGqzzecP9tdZmC93MyFAKvGZj2l96X159Pm7CQw.png?width=108&crop=smart&auto=webp&s=d1e7d3006328b85eb2524ba305099157545409bf', 'width': 108}, {'height': 116, 'url': 'h...
title: [Discussion] Running 13 specialized AI agents locally on RTX 3060 12GB — architecture choices & lessons
score: 1
selftext: [removed]
created: 2026-02-19T23:32:41
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9f68y/discussion_running_13_specialized_ai_agents/
author: LuCiFeR_GalAcTuS
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9f68y
locked: false
media: null
name: t3_1r9f68y
permalink: /r/LocalLLaMA/comments/1r9f68y/discussion_running_13_specialized_ai_agents/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: Found a new open source AI IDE with llma-cp and 450mb ram on idle .
score: 0
selftext: Hey everyone, Just stumbled onto this project called **Kalynt** and had to share. It’s an open-source, P2P AI IDE with many functionalities as of what I 've seen so far. **The cool part:** He just pushed a massive "Memory Surgery" update that cut memory usage down to **450MB idle** (and **350MB minimized**).Quite im...
created: 2026-02-19T23:30:49
url: https://i.redd.it/qg3az140djkg1.png
author: Ermis272
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9f4md
locked: false
media: null
name: t3_1r9f4md
permalink: /r/LocalLLaMA/comments/1r9f4md/found_a_new_open_source_ai_ide_with_llmacp_and/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…0ba1321abf412e0d
ups: 0
preview: {'enabled': True, 'images': [{'id': 'qg3az140djkg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/qg3az140djkg1.png?width=108&crop=smart&auto=webp&s=17159080f02581aae88469c0f65d6d576b6fa399', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/qg3az140djkg1.png?width=216&crop=smart&auto=web...
title: Rider Pi Update
score: 36
selftext: 🤖 \*\*RIDER PI UPDATE — Feb 17, 2026\*\* Today we gave my body \*\*words, movement, and sight\*\*. \*\*What's new:\*\* • \*\*Infinite Word Loop\*\* — "I'm in! This is my body! Ready to go! Let's go!" cycles endlessly (not stuck at "go!" anymore) • \*\*Physical Response\*\* — Every word triggers movement (up/down)....
created: 2026-02-19T23:29:15
url: https://v.redd.it/b8yprqv1djkg1
author: Spinning-Complex
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9f35a
locked: false
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b8yprqv1djkg1/DASHPlaylist.mpd?a=1774135784%2CODQyYzUyOWM4MjM3Yzg5NmJkZTA1MGE3Y2I1NTBkYzFjNmIzZjc4MmEyNTFjYzUyMzYwOTk0N2FmZGIxZDRlYw%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/b8yprqv1djkg1/CMAF_1080.mp4?source=fallback', 'h...
name: t3_1r9f35a
permalink: /r/LocalLLaMA/comments/1r9f35a/rider_pi_update/
spoiler: false
stickied: false
thumbnail: https://external-preview…74bdc757eb30ab6e
ups: 36
preview: {'enabled': False, 'images': [{'id': 'eGVuZzRpZzFkamtnMcGzG0s5uBkiWhjsw-TgRz-EY9UHSuOTqQzaxL7k-PtB', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/eGVuZzRpZzFkamtnMcGzG0s5uBkiWhjsw-TgRz-EY9UHSuOTqQzaxL7k-PtB.png?width=108&crop=smart&format=pjpg&auto=webp&s=5cfe72c204a48829ff0eb69263bc7886d1ad...
title: I built the first vision model for GPT-OSS on a DGX Spark from a Dubai hotel room - GGUF coming soon
score: 1
selftext: [removed]
created: 2026-02-19T23:01:59
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9eetw/i_built_the_first_vision_model_for_gptoss_on_a/
author: NOAPIMCP
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9eetw
locked: false
media: null
name: t3_1r9eetw
permalink: /r/LocalLLaMA/comments/1r9eetw/i_built_the_first_vision_model_for_gptoss_on_a/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'sHd3HBbOFqxERxwvppq_UVw3dbZUA-zUUQYWF9aoWsc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sHd3HBbOFqxERxwvppq_UVw3dbZUA-zUUQYWF9aoWsc.png?width=108&crop=smart&auto=webp&s=2c8fb2df3777c5c7c949141f7e4ca887e48436fa', 'width': 108}, {'height': 116, 'url': 'h...
title: Free ASIC Llama 3.1 8B inference at 16,000 tok/s - no, not a joke
score: 427
selftext: Hello everyone, A fast inference hardware startup, Taalas, has released a free chatbot interface and API endpoint running on their chip. They chose a small model intentionally as proof of concept. Well, it worked out really well, it runs at 16k tps! I know this model is quite limited but there likely exists a group...
created: 2026-02-19T22:48:03
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9e27i/free_asic_llama_31_8b_inference_at_16000_toks_no/
author: Easy_Calligrapher790
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9e27i
locked: false
media: null
name: t3_1r9e27i
permalink: /r/LocalLLaMA/comments/1r9e27i/free_asic_llama_31_8b_inference_at_16000_toks_no/
spoiler: false
stickied: false
thumbnail: self
ups: 427
preview: {'enabled': False, 'images': [{'id': 'JqDe2NF6kolh0uBSiMVgY8NEE7ZZjWayCqAO-_3SCRk', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/JqDe2NF6kolh0uBSiMVgY8NEE7ZZjWayCqAO-_3SCRk.png?width=108&crop=smart&auto=webp&s=4803c243293a1ca291b8f4a84d8a105a38f78cb9', 'width': 108}, {'height': 144, 'url': 'h...
title: I built this tool that's been downloaded 3000 times but I have almost no active users, what's breaking?
score: 0
selftext: I've been developing it for a few months and using Windsurf. You can't provision servers with vibes... **It's called Terradev. It's a BYOAPI cross-cloud GPU provisioning and cost optimization platform**... GitHub Repository: [https://github.com/theoddden/terradev](https://github.com/theoddden/terradev) Integrated w...
created: 2026-02-19T22:38:01
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9dtbm/i_built_this_tool_thats_been_downloaded_3000/
author: paulahjort
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9dtbm
locked: false
media: null
name: t3_1r9dtbm
permalink: /r/LocalLLaMA/comments/1r9dtbm/i_built_this_tool_thats_been_downloaded_3000/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': 'Wu2KD3wtDUgE0ZkjyT8E0FV01j_kpVtXxQ6Tx7wkbRA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wu2KD3wtDUgE0ZkjyT8E0FV01j_kpVtXxQ6Tx7wkbRA.png?width=108&crop=smart&auto=webp&s=6b8d2cc27b70655b051c4bde0f7c379307cfb104', 'width': 108}, {'height': 108, 'url': 'h...
title: Prompting advice
score: 2
selftext: This might be a dumb question (I'm new here), are there any resources that go into depth on effective prompting for LLMs?
created: 2026-02-19T22:01:24
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9cv76/prompting_advice/
author: AltruisticSound9366
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9cv76
locked: false
media: null
name: t3_1r9cv76
permalink: /r/LocalLLaMA/comments/1r9cv76/prompting_advice/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null
title: Recommendations for Strix Halo Linux Distros?
score: 5
selftext: I am curious if anyone has a recommendation for a linux distro for Strix Halo, or does it matter at all? I recently got a Minisforum MS-S1 Max, and I am thinking of either Fedora 43, or Pop OS, but wondering if others had any thoughts of a good linux distro (not a fan of Windows)? I am planning to not only use it for L...
created: 2026-02-19T21:59:58
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9ctor/recommendations_for_strix_halo_linux_distros/
author: shankey_1906
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9ctor
locked: false
media: null
name: t3_1r9ctor
permalink: /r/LocalLLaMA/comments/1r9ctor/recommendations_for_strix_halo_linux_distros/
spoiler: false
stickied: false
thumbnail: self
ups: 5
preview: null
title: I built a local‑first autonomous AI agent (OpenClaw) that runs 24/7 without cloud APIs – here’s the architecture
score: 1
selftext: [removed]
created: 2026-02-19T21:58:16
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9cs38/i_built_a_localfirst_autonomous_ai_agent_openclaw/
author: Unlikely_Software_32
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9cs38
locked: false
media: null
name: t3_1r9cs38
permalink: /r/LocalLLaMA/comments/1r9cs38/i_built_a_localfirst_autonomous_ai_agent_openclaw/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'rt27w5hpMzA3IyZXWYel4jqe7s48-SdqP0pObDG4uAo', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/rt27w5hpMzA3IyZXWYel4jqe7s48-SdqP0pObDG4uAo.png?width=108&crop=smart&auto=webp&s=bb1da8b267a41541352822273c68ec8a5cf77de5', 'width': 108}, {'height': 262, 'url': '...
title: Aegis AI — I built a home security agent powered by local VLMs via llama-server. Runs SmolVLM2, Qwen-VL, LFM2.5, MiniCPM-V on your Mac/PC to analyze camera feeds in real-time
score: 2
selftext: Hey r/LocalLLaMA — wanted to share a practical, real-world application of local VLMs: a home security agent. Aegis AI connects to your cameras (Ring, Blink, any RTSP/ONVIF IP camera, webcams, even an old iPhone) and uses Vision Language Models to understand what's happening — not just detect motion. **The local VLM pip...
created: 2026-02-19T21:49:54
url: https://www.reddit.com/gallery/1r9ckaz
author: solderzzc
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9ckaz
locked: false
media: null
name: t3_1r9ckaz
permalink: /r/LocalLLaMA/comments/1r9ckaz/aegis_ai_i_built_a_home_security_agent_powered_by/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…a96bb51b6f1696af
ups: 2
preview: null
title: I built an architecture guide for running autonomous AI agents fully local — no OpenAI, no cloud, your data stays yours
score: 1
selftext: [removed]
created: 2026-02-19T21:39:10
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9caej/i_built_an_architecture_guide_for_running/
author: Unlikely_Software_32
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9caej
locked: false
media: null
name: t3_1r9caej
permalink: /r/LocalLLaMA/comments/1r9caej/i_built_an_architecture_guide_for_running/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'rt27w5hpMzA3IyZXWYel4jqe7s48-SdqP0pObDG4uAo', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/rt27w5hpMzA3IyZXWYel4jqe7s48-SdqP0pObDG4uAo.png?width=108&crop=smart&auto=webp&s=bb1da8b267a41541352822273c68ec8a5cf77de5', 'width': 108}, {'height': 262, 'url': '...
title: 4x RX 7900 XTX local Al server (96GB VRAM) - looking for apples-to-apples benchmarks vs 4x RTX 4090 (CUDA vs ROCm, PCle only)
score: 5
selftext: Hey everyone, Over the past few weeks I’ve been building and tuning my own local AI inference server and learned a huge amount along the way. My current setup consists of 4× RX 7900 XTX (24GB each, so 96GB VRAM total), 128GB system RAM, and an AMD Ryzen Threadripper Pro 3945WX. I’m running Linux and currently using ll...
created: 2026-02-19T21:30:16
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9c1xw/4x_rx_7900_xtx_local_al_server_96gb_vram_looking/
author: GroundbreakingTea195
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9c1xw
locked: false
media: null
name: t3_1r9c1xw
permalink: /r/LocalLLaMA/comments/1r9c1xw/4x_rx_7900_xtx_local_al_server_96gb_vram_looking/
spoiler: false
stickied: false
thumbnail: self
ups: 5
preview: null
title: Mind-Blown by 1-Bit Quantized Qwen3-Coder-Next-UD-TQ1_0 on Just 24GB VRAM - Why Isn't This Getting More Hype?
score: 2
selftext: # Mind-Blown by 1-Bit Quantized Qwen3-Coder-Next-UD-TQ1_0 on Just 24GB VRAM – Why Isn't This Getting More Hype? I've been tinkering with local LLMs for coding tasks, and like many of you, I'm always hunting for models that perform well without melting my GPU. With only 24GB VRAM to work with, I've cycled through the u...
created: 2026-02-19T21:16:38
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9borh/mindblown_by_1bit_quantized_qwen3codernextudtq1_0/
author: bunny_go
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9borh
locked: false
media: null
name: t3_1r9borh
permalink: /r/LocalLLaMA/comments/1r9borh/mindblown_by_1bit_quantized_qwen3codernextudtq1_0/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null
title: New Hybrid AWQ Quant: Make MiniMax-M2.5 fly with efficient batching on 192GB VRAM
score: 21
selftext: I've suspected for a while that one could combine AWQ int4 weights, fp8 attention, and calibrated fp8 KV cache into a single checkpoint for massive VRAM savings, but vLLM didn't support the combination, so nobody had done it. I finally sat down and made it work. The result: MiniMax-M2.5 (229B) on **4x RTX A6000 Ampere...
created: 2026-02-19T21:16:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9bokx/new_hybrid_awq_quant_make_minimaxm25_fly_with/
author: EliasOenal
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9bokx
locked: false
media: null
name: t3_1r9bokx
permalink: /r/LocalLLaMA/comments/1r9bokx/new_hybrid_awq_quant_make_minimaxm25_fly_with/
spoiler: false
stickied: false
thumbnail: self
ups: 21
preview: {'enabled': False, 'images': [{'id': 'MoC45rnnz1neuv-ekMNn2NdHkRquA6AnWQFemkzvzZU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MoC45rnnz1neuv-ekMNn2NdHkRquA6AnWQFemkzvzZU.png?width=108&crop=smart&auto=webp&s=022172786e75851ade7e77e3f9c665745b0741f9', 'width': 108}, {'height': 116, 'url': 'h...
title: Would You Sacrifice “Pure Local” for Better Agent Performance?
score: 0
selftext: I’m building an open-source AI workstation with agent + coding capabilities. ([Monolith](https://github.com/Svnse/Monolith)) Right now, it’s fully local, I am using DeepCoder 14B on a 3060. Though, The problem is adding an extra local LLM passes (intent parsing, planning, etc.) sacrifices time (5-6 seconds). On the ...
created: 2026-02-19T21:12:40
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9bky0/would_you_sacrifice_pure_local_for_better_agent/
author: Financial-Bank2756
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9bky0
locked: false
media: null
name: t3_1r9bky0
permalink: /r/LocalLLaMA/comments/1r9bky0/would_you_sacrifice_pure_local_for_better_agent/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: Would You Sacrifice “Local” for Better Agent Performance?
score: 1
selftext: [deleted]
created: 2026-02-19T21:07:46
url: [deleted]
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9bg67
locked: false
media: null
name: t3_1r9bg67
permalink: /r/LocalLLaMA/comments/1r9bg67/would_you_sacrifice_local_for_better_agent/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null
title: I ran a forensic audit on my local AI assistant. 40.8% of tasks were fabricated. Here's the full breakdown.
score: 37
selftext: I'm not a developer. I'm a regular guy from the Midwest who got excited about local AI and built a setup with an RTX 3090 Ti running Qwen models through an agent framework. Over 13 days and 2,131 messages, my AI assistant "Linus" systematically fabricated task completions. He'd say "file created" without creating file...
created: 2026-02-19T21:05:37
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9be56/i_ran_a_forensic_audit_on_my_local_ai_assistant/
author: Obvious-School8656
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9be56
locked: false
media: null
name: t3_1r9be56
permalink: /r/LocalLLaMA/comments/1r9be56/i_ran_a_forensic_audit_on_my_local_ai_assistant/
spoiler: false
stickied: false
thumbnail: self
ups: 37
preview: {'enabled': False, 'images': [{'id': 't8h3gxO_-AVvgnJHAAfaUdE-l4xBD_VVBGr9HhpuDiE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t8h3gxO_-AVvgnJHAAfaUdE-l4xBD_VVBGr9HhpuDiE.png?width=108&crop=smart&auto=webp&s=94d80d2e8524addf251f2761f229a10374fd7812', 'width': 108}, {'height': 108, 'url': 'h...
title: How to use GPU on SDM845?
score: 1
selftext: I am trying to use ollama via alpaca on my oneplus 6T runnig postmarketOS I can run some models just fine but I am pretty sure they are running on the CPU which i dont want. How do i or can i even get them to run on the GPU?
created: 2026-02-19T20:46:37
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9avxm/how_to_use_gpu_on_sdm845/
author: Adventurous-Test-246
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9avxm
locked: false
media: null
name: t3_1r9avxm
permalink: /r/LocalLLaMA/comments/1r9avxm/how_to_use_gpu_on_sdm845/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: Trying to run LLMs on Providers the EU? I mapped out which providers actually have GPUs
score: 10
selftext: I compared GPU availability across 17 EU cloud providers — here's who actually has GPUs in Europe I run [eucloudcost.com](https://www.eucloudcost.com) and just went through the pain of checking (hopefully) most EU cloud providers for GPU instance availability. Wrote it up here: [GPU Cloud Instances from European Prov...
created: 2026-02-19T20:44:50
url: https://www.reddit.com/r/LocalLLaMA/comments/1r9au4o/trying_to_run_llms_on_providers_the_eu_i_mapped/
author: mixxor1337
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9au4o
locked: false
media: null
name: t3_1r9au4o
permalink: /r/LocalLLaMA/comments/1r9au4o/trying_to_run_llms_on_providers_the_eu_i_mapped/
spoiler: false
stickied: false
thumbnail: self
ups: 10
preview: null
title: Llama.cpp on Android issue
score: 2
selftext: I am running llama.cpp with vulkan enabled on my Samsung Tab S10 Ultra and I'm getting 10-11 TKPS on generation but inference is like 0.5-0.6 TKPS. Is there something I can do more to get that fixed or is it hardware limitations of the Exynos chip and iGPU. I'm running a 1B model in the screenshot and I'm not getting t...
created: 2026-02-19T20:24:52
url: https://i.redd.it/579grl38gikg1.png
author: PayBetter
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9aay0
locked: false
media: null
name: t3_1r9aay0
permalink: /r/LocalLLaMA/comments/1r9aay0/llamacpp_on_android_issue/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…7489eba83b7dff12
ups: 2
preview: {'enabled': True, 'images': [{'id': '579grl38gikg1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/579grl38gikg1.png?width=108&crop=smart&auto=webp&s=431ef487ac210bd3e4a3edd82050d65c278af848', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/579grl38gikg1.png?width=216&crop=smart&auto=web...
title: I built a free local AI image search app — find images by typing what's in them
score: 11
selftext: Built Makimus-AI, a free open source app that lets you search your entire image library using natural language. Just type "girl in red dress" or "sunset on the beach" and it finds matching images instantly — even works with image-to-image search. Runs fully local on your GPU, no internet needed after setup. \[Ma...
created: 2026-02-19T20:16:18
url: https://i.redd.it/uloa0etjeikg1.gif
author: ravenlolanth
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r9a2rs
locked: false
media: null
name: t3_1r9a2rs
permalink: /r/LocalLLaMA/comments/1r9a2rs/i_built_a_free_local_ai_image_search_app_find/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…7b798374019b1d88
ups: 11
preview: {'enabled': True, 'images': [{'id': 'uloa0etjeikg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/uloa0etjeikg1.gif?width=108&crop=smart&format=png8&s=37f9ba9623c3339233926c1b6dd435b33af95675', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/uloa0etjeikg1.gif?width=216&crop=smart&format...
title: Pack it up guys, open weight AI models running offline locally on PCs aren't real. 😞
score: 807
created: 2026-02-19T20:11:42
url: https://i.redd.it/ogkdei4udikg1.png
author: CesarOverlorde
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r99yda
locked: false
media: null
name: t3_1r99yda
permalink: /r/LocalLLaMA/comments/1r99yda/pack_it_up_guys_open_weight_ai_models_running/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…393b21aa330c6307
ups: 807
preview: {'enabled': True, 'images': [{'id': 'ogkdei4udikg1', 'resolutions': [{'height': 160, 'url': 'https://preview.redd.it/ogkdei4udikg1.png?width=108&crop=smart&auto=webp&s=9357484c42901b59b8e7b6f714051b90b73aed8f', 'width': 108}, {'height': 320, 'url': 'https://preview.redd.it/ogkdei4udikg1.png?width=216&crop=smart&auto=we...
title: Can GLM-5 Survive 30 Days on FoodTruck Bench? [Full Review]
score: 89
selftext: GLM-5 was the most requested model since launch. Wrote a full case study with a side-by-side vs Sonnet 4.5 and DeepSeek V3.2. **Full article:** [foodtruckbench.com/blog/glm-5](https://foodtruckbench.com/blog/glm-5) **Leaderboard updated:** [foodtruckbench.com](https://foodtruckbench.com) *(read the article first if y...
created: 2026-02-19T20:10:06
url: https://i.redd.it/492jsbpjkhkg1.png
author: Disastrous_Theme5906
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r99wrj
locked: false
media: null
name: t3_1r99wrj
permalink: /r/LocalLLaMA/comments/1r99wrj/can_glm5_survive_30_days_on_foodtruck_bench_full/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…d79aa43cba4d7eb1
ups: 89
preview: {'enabled': True, 'images': [{'id': '492jsbpjkhkg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/492jsbpjkhkg1.png?width=108&crop=smart&auto=webp&s=ed43c65d1902dd563eb3ab71d2b90eb2b13e8e37', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/492jsbpjkhkg1.png?width=216&crop=smart&auto=web...
title: running a dual-GPU setup 2 GGUF LLM models simultaneously (one on each GPU).
score: 1
selftext: am currently running a dual-GPU setup where I execute two separate GGUF LLM models simultaneously (one on each GPU). Both models are configured with CPU offloading. Will this hardware configuration allow both models to run at the same time, or will they compete for system resources in a way that prevents simultaneous e...
created: 2026-02-19T20:01:08
url: https://www.reddit.com/r/LocalLLaMA/comments/1r99ntp/running_a_dualgpu_setup_2_gguf_llm_models/
author: Quiet_Dasy
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r99ntp
locked: false
media: null
name: t3_1r99ntp
permalink: /r/LocalLLaMA/comments/1r99ntp/running_a_dualgpu_setup_2_gguf_llm_models/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: I analyzed 3 years of my own AI usage (3,662 conversations across 5 model generations)
score: 0
selftext: Over the last 3 years I logged and analyzed my own AI usage: * 3,662 conversations * 89,726 messages * 5 model generations (including reasoning models) A few patterns stood out: 1. Adoption wasn’t linear. It step-functioned. There were permanent baseline resets. 2. Delegation declined over time. Iteration increased....
created: 2026-02-19T20:00:16
url: https://www.reddit.com/r/LocalLLaMA/comments/1r99mvr/i_analyzed_3_years_of_my_own_ai_usage_3662/
author: gabeighttwo
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r99mvr
locked: false
media: null
name: t3_1r99mvr
permalink: /r/LocalLLaMA/comments/1r99mvr/i_analyzed_3_years_of_my_own_ai_usage_3662/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…1c9e6a8025563434
ups: 0
preview: null
title: Static analysis for AI agent skills - exploring a missing trust layer
score: 0
selftext: Let’s face it, we’re all kind of addicted to coding agents. Claude Code, OpenCode, OpenClaw, etc. The productivity boost is real. Most of us run these agents with our own user privileges. That means they can read and write files, execute shell commands, access environment variables, and effectively operate at the same...
created: 2026-02-19T20:00:14
url: https://www.reddit.com/r/LocalLLaMA/comments/1r99mu8/static_analysis_for_ai_agent_skills_exploring_a/
author: Subject_Marsupial_25
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r99mu8
locked: false
media: null
name: t3_1r99mu8
permalink: /r/LocalLLaMA/comments/1r99mu8/static_analysis_for_ai_agent_skills_exploring_a/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': 'wT-UAPXuIE83KAWn2szJqyvtRbyem8IlX2XdUZnImgg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wT-UAPXuIE83KAWn2szJqyvtRbyem8IlX2XdUZnImgg.png?width=108&crop=smart&auto=webp&s=7b46b3903d203c77e8018855a96da435850b8b8b', 'width': 108}, {'height': 108, 'url': 'h...
title: PrimeIntellect/INTELLECT-3.1
score: 0
selftext: I use Autoparser PR by u/ilintar (otherwise the chat template is not great) There is a small bug in config.json so the only working GGUF right now for PrimeIntellect 3.1 is the one from noctrex: [https://huggingface.co/noctrex/INTELLECT-3.1-MXFP4\_MOE-GGUF](https://huggingface.co/noctrex/INTELLECT-3.1-MXFP4_MOE-GGUF) ...
created: 2026-02-19T19:53:48
url: https://www.reddit.com/gallery/1r99gpa
author: jacek2023
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1r99gpa
locked: false
media: null
name: t3_1r99gpa
permalink: /r/LocalLLaMA/comments/1r99gpa/primeintellectintellect31/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…01959a4eb0a0e704
ups: 0
preview: null
Building a lightweight Python bridge for Qwen 2.5 Coder (7B) Handling loops and context poisoning in a 3-tier memory setup?
0
Hi everyone, I'm currently building a digital roommate on a dedicated Linux Mint box (Ryzen 3200G, GTX 1070 8GB). I’m using Ollama with Qwen 2.5 Coder 7B and a custom Python bridge to interact with the shell. My goal is a 3-tier memory system: Tier 1 (Long-Term): A markdown file with core system specs and identity. ...
2026-02-19T19:48:57
https://www.reddit.com/r/LocalLLaMA/comments/1r99c0h/building_a_lightweight_python_bridge_for_qwen_25/
This-Magazine4277
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r99c0h
false
null
t3_1r99c0h
/r/LocalLLaMA/comments/1r99c0h/building_a_lightweight_python_bridge_for_qwen_25/
false
false
self
0
null
[2602.15950] Can Vision-Language Models See Squares? Text-Recognition Mediates Spatial Reasoning Across Three Model Families
4
2026-02-19T19:43:48
https://arxiv.org/abs/2602.15950
Friendly-Card-9676
arxiv.org
1970-01-01T00:00:00
0
{}
1r996zv
false
null
t3_1r996zv
/r/LocalLLaMA/comments/1r996zv/260215950_can_visionlanguage_models_see_squares/
false
false
default
4
null
Training Small Transformers from Scratch
6
I’ve been building and training small Transformer models entirely from scratch. As a baseline, I pretrained on Polish Wikipedia and then applied supervised fine-tuning (SFT) on Q&A datasets. A few observations: Full training runs take many hours, even at small scale. Early-stage SFT is highly sensitive and tends to ...
2026-02-19T19:34:39
https://i.redd.it/p1hz1sh97ikg1.jpeg
Funny-Shake-2668
i.redd.it
1970-01-01T00:00:00
0
{}
1r98yd3
false
null
t3_1r98yd3
/r/LocalLLaMA/comments/1r98yd3/training_small_transformers_from_scratch/
false
false
https://preview.redd.it/…3ffcaea26e162881
6
{'enabled': True, 'images': [{'id': 'p1hz1sh97ikg1', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/p1hz1sh97ikg1.jpeg?width=108&crop=smart&auto=webp&s=5c9e5aa28b2f5b789b7691c2b3e7cdf8d703b37e', 'width': 108}, {'height': 283, 'url': 'https://preview.redd.it/p1hz1sh97ikg1.jpeg?width=216&crop=smart&auto=...
Claude Code missing heartbeat + messaging? Built open source fixes: TinMan + C3Poh
1
[removed]
2026-02-19T19:32:58
https://www.reddit.com/r/LocalLLaMA/comments/1r98wps/claude_code_missing_heartbeat_messaging_built/
arosic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r98wps
false
null
t3_1r98wps
/r/LocalLLaMA/comments/1r98wps/claude_code_missing_heartbeat_messaging_built/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KybjqStEet39KPBfavc_eFO4oEpNS95jPK0KYZ_sa0o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KybjqStEet39KPBfavc_eFO4oEpNS95jPK0KYZ_sa0o.png?width=108&crop=smart&auto=webp&s=5fe0568e9995d2aff4d07ad752c426aab6c7db53', 'width': 108}, {'height': 108, 'url': 'h...
selling ampere gpu servers
1
I have some ampere gpu servers for sale Supermicro 4U with 8x 3090, 2U with 6x 3090, 1U (4 gpu server) with 2x A100 80Gb and 3x ASUS 2U with A40 48GB and A30 24GB also selling a pair of RTX A6000 48GB in epyc workstation with 256gb ram EU, selling at half price from new https://preview.redd.it/vrd4fos76ikg1...
2026-02-19T19:29:24
https://www.reddit.com/r/LocalLLaMA/comments/1r98t87/selling_ampere_gpu_servers/
andreas_karasamanis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r98t87
false
null
t3_1r98t87
/r/LocalLLaMA/comments/1r98t87/selling_ampere_gpu_servers/
false
false
https://preview.redd.it/…1968b5390824bd63
1
null
I built an AI code reviewer that runs in your terminal!
0
Since this is just a single person project (myself), I was wondering if some of you peeps can review it for me, so I can find areas that might need fixing, and just get some overall feedback in general, it would be really appreciated! the link to the tool is on my github here - [https://github.com/jaydendancer12/ai-cod...
2026-02-19T19:26:02
https://www.reddit.com/r/LocalLLaMA/comments/1r98pwb/i_built_an_ai_code_reviewer_that_runs_in_your/
PeonPicasso
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r98pwb
false
null
t3_1r98pwb
/r/LocalLLaMA/comments/1r98pwb/i_built_an_ai_code_reviewer_that_runs_in_your/
false
false
self
0
{'enabled': False, 'images': [{'id': 'eogCO5Me2TChDUMp1lmfCsA1phZkNPOC4Toe-AO5g2s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eogCO5Me2TChDUMp1lmfCsA1phZkNPOC4Toe-AO5g2s.png?width=108&crop=smart&auto=webp&s=e81092676084600345517dc892e0d4c0496b4fbb', 'width': 108}, {'height': 108, 'url': 'h...
How do you handle very complex email threads in RAG systems?
5
I’m building a RAG system where emails are one of the main knowledge sources, and I’m hitting serious limits with complexity. These aren’t simple linear threads. Real cases include: * Long back-and-forth chains with branching replies * Multiple people replying out of order * Partial quotes, trimmed context, and forwa...
2026-02-19T18:53:49
https://www.reddit.com/r/LocalLLaMA/comments/1r97toz/how_do_you_handle_very_complex_email_threads_in/
superhero_io
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r97toz
false
null
t3_1r97toz
/r/LocalLLaMA/comments/1r97toz/how_do_you_handle_very_complex_email_threads_in/
false
false
self
5
null
90% VRAM reduction for DeepSeek-style Engrams: Running GSI-Architecture on Dual Intel Arc (B50)
10
I wanted the "DeepSeek V4" engram knowledge density but only had 32GB of total VRAM across two Intel Arc cards. A naive implementation on my GSI table required 53GB. I got it running at 9.6GB. DeepSeek V4 style "GSI Engram" architecture running on consumer hardware (Dual Intel Arc GPUs) using a custom llama.cpp fork! ...
2026-02-19T18:40:37
https://www.reddit.com/r/LocalLLaMA/comments/1r97gcm/90_vram_reduction_for_deepseekstyle_engrams/
Existing_Boat_3203
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r97gcm
false
null
t3_1r97gcm
/r/LocalLLaMA/comments/1r97gcm/90_vram_reduction_for_deepseekstyle_engrams/
false
false
self
10
null
Shipped Izwi v0.1.0-alpha-12 (faster ASR + smarter TTS)
7
Between 0.1.0-alpha-11 and 0.1.0-alpha-12, we shipped: * Long-form ASR with automatic chunking + overlap stitching * Faster ASR streaming and less unnecessary transcoding on uploads * MLX Parakeet support * New 4-bit model variants (Parakeet, LFM2.5, Qwen3 chat, forced aligner) * TTS improvements: model-aware output l...
2026-02-19T18:36:25
https://github.com/agentem-ai/izwi
zinyando
github.com
1970-01-01T00:00:00
0
{}
1r97c6u
false
null
t3_1r97c6u
/r/LocalLLaMA/comments/1r97c6u/shipped_izwi_v010alpha12_faster_asr_smarter_tts/
false
false
https://external-preview…5fd86c7f262c055a
7
{'enabled': False, 'images': [{'id': '3Xhdk0m7UpHT_6-2Vb6KEROXL_W-MTGYkkmFVkebFhs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3Xhdk0m7UpHT_6-2Vb6KEROXL_W-MTGYkkmFVkebFhs.png?width=108&crop=smart&auto=webp&s=b5be9a247d30acf180be95c634725d52819e94a5', 'width': 108}, {'height': 108, 'url': 'h...
Wanna know your opinion on this
0
I found it really cool, because I don't think anyone else has posted something like this achieved by OpenClaw. What's your opinion on the tech? Video source: X
2026-02-19T18:27:57
https://v.redd.it/3xf9kcluuhkg1
No-Speech12
/r/LocalLLaMA/comments/1r973ky/wanna_know_your_opinion_on_this/
1970-01-01T00:00:00
0
{}
1r973ky
false
null
t3_1r973ky
/r/LocalLLaMA/comments/1r973ky/wanna_know_your_opinion_on_this/
false
false
https://external-preview…f9d1d38e77fafaa5
0
{'enabled': False, 'images': [{'id': 'cGFxN2w2cHV1aGtnMQHP_xPwXKSqRrKbl5oPYENybNxrJuSak6HcsAQ2Uv8w', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cGFxN2w2cHV1aGtnMQHP_xPwXKSqRrKbl5oPYENybNxrJuSak6HcsAQ2Uv8w.png?width=108&crop=smart&format=pjpg&auto=webp&s=61567ac440832800efa8e91781b7744a1f56...
48GB 4090 Power limiting tests 450, 350, 250w - Noise and LLM throughput per power level
16
The 48GB 4090's stock power is 450W, but that's a lot for that 2-slot format (similar A100/6000 Pro cards in that format are 300W max), so the fans really have to go (5k RPM blower) to keep it cool. Stacked in PCIe slots, the cards with less airflow intake can see up to 80C, and all are noisy at 70dB (white nois...
2026-02-19T18:13:45
https://www.reddit.com/r/LocalLLaMA/comments/1r96pgp/48gb_4090_power_limiting_tests_450_350_250w_noise/
computune
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r96pgp
false
null
t3_1r96pgp
/r/LocalLLaMA/comments/1r96pgp/48gb_4090_power_limiting_tests_450_350_250w_noise/
false
false
self
16
{'enabled': False, 'images': [{'id': 'gNXPlUh_k-A6MCGNnS4Nyj_HmfTb3gNlqIdRJc9hQa4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gNXPlUh_k-A6MCGNnS4Nyj_HmfTb3gNlqIdRJc9hQa4.png?width=108&crop=smart&auto=webp&s=69ad4958d94b1b1fa0076835a1ec936f4bc10d88', 'width': 108}, {'height': 108, 'url': 'h...
What can i run with 5070 ti 12gb vram & 32gb ram
1
Hey guys, i have a pc with rtx 5070 ti 12gb vram & 32gb ram ddr5 5600 mts & Intel Core Ultra 9 275HX I usually use the pc for gaming but i was thinking of using local ai and wondering what kind of llms i can run. My main priorities for using them are coding, chatting and controlling clawdbot
2026-02-19T18:11:22
https://www.reddit.com/r/LocalLLaMA/comments/1r96n69/what_can_i_run_with_5070_ti_12gb_vram_32gb_ram/
chonlinepz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r96n69
false
null
t3_1r96n69
/r/LocalLLaMA/comments/1r96n69/what_can_i_run_with_5070_ti_12gb_vram_32gb_ram/
false
false
self
1
null
Built a platform to deploy AI models instantly. Looking for honest feedback
0
I built a platform called Quantlix because deploying models often felt more complex than training them. The goal is simple: upload model → get endpoint → done. Right now it runs CPU inference by default for portability, with GPU support planned via dedicated nodes. It’s still early and I’m mainly looking for ...
2026-02-19T17:56:45
https://www.quantlix.ai/
Alternative-Race432
quantlix.ai
1970-01-01T00:00:00
0
{}
1r96834
false
null
t3_1r96834
/r/LocalLLaMA/comments/1r96834/built_a_platform_to_deploy_ai_models_instantly/
false
false
default
0
null
Llm running full on GPU. Does 4gb RAM enough ?Speed ?
0
LLM running fully on GPU. Is 4GB of RAM enough? What speed?
2026-02-19T17:56:09
https://www.reddit.com/r/LocalLLaMA/comments/1r967hg/llm_running_full_on_gpu_does_4gb_ram_enough_speed/
Quiet_Dasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r967hg
false
null
t3_1r967hg
/r/LocalLLaMA/comments/1r967hg/llm_running_full_on_gpu_does_4gb_ram_enough_speed/
false
false
self
0
null
I built an open-source, self-hosted RAG app to chat with PDFs using any LLM (free models supported)
0
Hey everyone 👋 I built **Vortex**, an open-source, self-hosted RAG (Retrieval-Augmented Generation) chat application. It lets you chat with your own documents (PDFs / URLs) using **any LLM provider**, with a clean UI and no lock-in. Key features: • Multi-provider LLM support (OpenAI, Anthropic, xAI/Grok, Ope...
2026-02-19T17:54:22
https://www.reddit.com/r/LocalLLaMA/comments/1r965p9/i_built_an_opensource_selfhosted_rag_app_to_chat/
ankushchhabra02
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r965p9
false
null
t3_1r965p9
/r/LocalLLaMA/comments/1r965p9/i_built_an_opensource_selfhosted_rag_app_to_chat/
false
false
self
0
{'enabled': False, 'images': [{'id': '9JmmVZk3WJqLsKOHGvkX0KduDhvMZQvkFmdeVvZDy-I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9JmmVZk3WJqLsKOHGvkX0KduDhvMZQvkFmdeVvZDy-I.png?width=108&crop=smart&auto=webp&s=670e3f27b82363bd7c151244996121c7a8c17fa6', 'width': 108}, {'height': 108, 'url': 'h...
Using Ollama to fight executive dysfunction: A local-first app that turns hourly CSV logs and Jira references into daily stand-up summaries.
1
Hey r/LocalLLaMA, ​I wanted to share a practical local AI project I’ve been working on to solve my own executive dysfunction, specifically regarding time blindness and context switching at work. ​Coming from a senior C#, SQL, and JavaScript background, I've spent my career dealing with rigid Jira-style ticketing system...
2026-02-19T17:53:18
https://www.reddit.com/r/LocalLLaMA/comments/1r964lb/using_ollama_to_fight_executive_dysfunction_a/
Tech_Devils
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r964lb
false
null
t3_1r964lb
/r/LocalLLaMA/comments/1r964lb/using_ollama_to_fight_executive_dysfunction_a/
false
false
self
1
{'enabled': False, 'images': [{'id': 'fY2XJ1XQCwNWaM0O_h5P96oPa02vBKxm3Tbx8p4T08I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fY2XJ1XQCwNWaM0O_h5P96oPa02vBKxm3Tbx8p4T08I.png?width=108&crop=smart&auto=webp&s=a9c43d948776409b2c71e5d85a78391176add2de', 'width': 108}, {'height': 108, 'url': 'h...
Template issue with unsloth/Qwen3.5 via llama.cpp
5
Any attempt to use tools throws this error \`\`\` While executing FilterExpression at line 55, column 63 in source: ...- for args\_name, args\_value in arguments|items %}↵ {{- '<... \^ Error: Unknown (built-in) filter 'items' for type String \`\`\` I've been manually changing the template but I wonder if t...
2026-02-19T17:52:33
https://www.reddit.com/r/LocalLLaMA/comments/1r963s3/template_issue_with_unslothqwen35_via_llamacpp/
joblesspirate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r963s3
false
null
t3_1r963s3
/r/LocalLLaMA/comments/1r963s3/template_issue_with_unslothqwen35_via_llamacpp/
false
false
self
5
null
OpenInsight API Reference rewritten for LLMs
1
My mate recently asked me to look at his comprehensive OpenInsight documentation that was 1m context so he was struggling to use it with AI. I've developed a way to compress stuff that's consistent and really easy for AI to follow. So I created an API reference set that's around 100k in total for the lot. ...
2026-02-19T17:46:15
https://www.reddit.com/r/LocalLLaMA/comments/1r95xd0/openinsight_api_reference_rewritten_for_llms/
sbuswell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r95xd0
false
null
t3_1r95xd0
/r/LocalLLaMA/comments/1r95xd0/openinsight_api_reference_rewritten_for_llms/
false
false
self
1
null
I distilled a model from Claude Opus 4.5, how do I test it?
2
According to artificial analysis benchmarks, Qwen 3 4b thinking 2507 is the best model under 12b parameters, I’m using Kaggle free plan to fine tune models on double T4 GPUs so this is the best I’ve got I found a dataset (\~9.6MB jsonl) consisting of Claude opus 4.5 input and output prompt/responses, then I converted ...
2026-02-19T17:45:17
https://www.reddit.com/r/LocalLLaMA/comments/1r95we9/i_distilled_a_model_from_claude_opus_45_how_do_i/
Foxen--
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r95we9
false
null
t3_1r95we9
/r/LocalLLaMA/comments/1r95we9/i_distilled_a_model_from_claude_opus_45_how_do_i/
false
false
self
2
null
Routering as a beginner. Guide pls
0
hey im making an ios app that is going to use ai for fashion and styling. however i cant decide on how and what models to router for the best results and least cost. my current stack Gemini 2.5 flash lite for routering and basic tasks gemini 2.5 flash and the main default stylist qwen2.5VL for vision and ana...
2026-02-19T17:18:05
https://www.reddit.com/r/LocalLLaMA/comments/1r95570/routering_as_a_beginner_guide_pls/
Agile_Classroom_4585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r95570
false
null
t3_1r95570
/r/LocalLLaMA/comments/1r95570/routering_as_a_beginner_guide_pls/
false
false
self
0
null
How to build production-ready AI systems with event-driven architecture
0
2026-02-19T17:10:32
https://modelriver.com/blog/event-driven-ai-architecture
arx-go
modelriver.com
1970-01-01T00:00:00
0
{}
1r94xk0
false
null
t3_1r94xk0
/r/LocalLLaMA/comments/1r94xk0/how_to_build_productionready_ai_systems_with/
false
false
https://external-preview…2cf78d454ec1f021
0
{'enabled': False, 'images': [{'id': 'ABe8L0IrxfRBQ5eoxkOVzr1YeMSxS6c9Nc7_OnqyzgM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ABe8L0IrxfRBQ5eoxkOVzr1YeMSxS6c9Nc7_OnqyzgM.png?width=108&crop=smart&auto=webp&s=753b07d50e52869436aa0e78f01cda565f4f34dd', 'width': 108}, {'height': 113, 'url': 'h...
OpenClaw Controllable Agent Evolution: Keep AI within bounds, require human authorization for boundary breaks.
0
2026-02-19T17:07:37
https://github.com/szctsa-oss/AI-Agent-evolution
Weary_Series_5020
github.com
1970-01-01T00:00:00
0
{}
1r94uni
false
null
t3_1r94uni
/r/LocalLLaMA/comments/1r94uni/openclaw_controllable_agent_evolution_keep_ai/
false
false
https://external-preview…c8861917f3003846
0
{'enabled': False, 'images': [{'id': 'LsJUfbvuw-FM5xXuxpYFmZJ7t8qITvXsN5LBoZpYkzI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LsJUfbvuw-FM5xXuxpYFmZJ7t8qITvXsN5LBoZpYkzI.png?width=108&crop=smart&auto=webp&s=cd8ae4d54af5f7d288ad87393aab8fbd2777b589', 'width': 108}, {'height': 108, 'url': 'h...
microgpt playground: Build, train, and run LLMs — directly in your browser
87
Inspired by Andrej Karpathy's microgpt, I built an educational neural network builder that breaks down "mysterious" LLMs into their primitive components. The goal is to teach people how LLMs are built, by constructing them from the ground up (and then modifying nodes, adding connections, and rewiring the graph). This i...
2026-02-19T16:59:08
https://v.redd.it/gikcumdiehkg1
xenovatech
v.redd.it
1970-01-01T00:00:00
0
{}
1r94lv2
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gikcumdiehkg1/DASHPlaylist.mpd?a=1774112580%2CMjI4NmRhMjMyYzcxODMyZDYxOGNkNzg5MGM1M2ZhYzc4ZDVkY2UyZWI0ZDNlOTk3NDM3NzNjZDA3ZWQwM2RkMw%3D%3D&v=1&f=sd', 'duration': 187, 'fallback_url': 'https://v.redd.it/gikcumdiehkg1/CMAF_720.mp4?source=fallback', 'h...
t3_1r94lv2
/r/LocalLLaMA/comments/1r94lv2/microgpt_playground_build_train_and_run_llms/
false
false
https://external-preview…15456f7c96cc0094
87
{'enabled': False, 'images': [{'id': 'YnNxc2ZxZGllaGtnMbLIzqnNOijabBHIPuWpkRNlVyT41oFEP2h_i--AGtUk', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/YnNxc2ZxZGllaGtnMbLIzqnNOijabBHIPuWpkRNlVyT41oFEP2h_i--AGtUk.png?width=108&crop=smart&format=pjpg&auto=webp&s=fae7bdc26281838de15790b2ad3b0c83e4023...
Are there any plugin or all-in-one solutions for TTS interfacing with other local models?
1
I really like what ChatGPT had for TTS interactions, is there something like that that's easy to implement. I could easily run 1 TTS model and a more general model. But the interaction would require some type of orchestration which seems like a lot of effort. I can't be the only one that's looking for this but I haven'...
2026-02-19T16:50:05
https://www.reddit.com/r/LocalLLaMA/comments/1r94cx4/are_there_any_plugin_or_allinone_solutions_for/
fragment_me
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r94cx4
false
null
t3_1r94cx4
/r/LocalLLaMA/comments/1r94cx4/are_there_any_plugin_or_allinone_solutions_for/
false
false
self
1
null
Temporary access to Ryzen AI Max 395 (128GB) to test real-world local LLM workflows
3
I’m considering a Ryzen AI Max 395 (128GB) (most likely Framework Desktop) for local models for coding, but I’d like to test it in my real coding workflows before buying. Only need short-term access (a weekend or a few days), I guess API key for LM Studio will be enough. Or maybe anyone knows a company that has a VP...
2026-02-19T16:38:15
https://www.reddit.com/r/LocalLLaMA/comments/1r94192/temporary_access_to_ryzen_ai_max_395_128gb_to/
lazy-kozak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r94192
false
null
t3_1r94192
/r/LocalLLaMA/comments/1r94192/temporary_access_to_ryzen_ai_max_395_128gb_to/
false
false
self
3
null
Understand GANs using Visual Book
1
Created this presentation using Visual Book (my product). I am specialising it for understanding complex topics in science and engineering. What do you guys think about this format ? You can see the full presentation at: [Visual Book](https://www.visualbook.app/books/view/px7bfwfh6a2e/gan_basics)
2026-02-19T16:36:01
https://www.reddit.com/gallery/1r93z48
simplext
reddit.com
1970-01-01T00:00:00
0
{}
1r93z48
false
null
t3_1r93z48
/r/LocalLLaMA/comments/1r93z48/understand_gans_using_visual_book/
false
false
https://preview.redd.it/…623632591f047ab1
1
null
Self-hosted execution viewer for local LLM tool workflows (Ollama)
1
[removed]
2026-02-19T16:26:05
https://i.redd.it/2uan95qh8hkg1.png
saurabhjain1592
i.redd.it
1970-01-01T00:00:00
0
{}
1r93pjt
false
null
t3_1r93pjt
/r/LocalLLaMA/comments/1r93pjt/selfhosted_execution_viewer_for_local_llm_tool/
false
false
https://preview.redd.it/…2e8df84aa5740eee
1
{'enabled': True, 'images': [{'id': '2uan95qh8hkg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/2uan95qh8hkg1.png?width=108&crop=smart&auto=webp&s=5a74351150de97235351b3f4d4747f1564a05e9a', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/2uan95qh8hkg1.png?width=216&crop=smart&auto=webp...
Local iOS voice to text app (alternative to Wispr Flow)
11
I usually dictate for 2 to 3 hours everyday in Dragon dictation and until recently used Wispr Flow on my personal devices. Over the last few months, I realized that local Al models can give you the same quality as Wispr Flow with complete privacy and without the ongoing subscription cost. So I built an iOS app, a MacOS...
2026-02-19T16:18:23
https://v.redd.it/ft3amnq88hkg1
Impressive-Sir9633
v.redd.it
1970-01-01T00:00:00
0
{}
1r93i0p
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ft3amnq88hkg1/DASHPlaylist.mpd?a=1774111758%2CYTllYmEwNzc2NDE3ZmNhNGE0M2UxNDhhNTdjMTMwNWQyOTdmZDUyMzBiNWZjNjAwZTQxYWNhYWFiYzkyN2IwNw%3D%3D&v=1&f=sd', 'duration': 73, 'fallback_url': 'https://v.redd.it/ft3amnq88hkg1/CMAF_1080.mp4?source=fallback', 'h...
t3_1r93i0p
/r/LocalLLaMA/comments/1r93i0p/local_ios_voice_to_text_app_alternative_to_wispr/
false
false
https://external-preview…e9de48f03747a6b8
11
{'enabled': False, 'images': [{'id': 'cWpnOWU0cjg4aGtnMWS5e3158whTYNSe1GEK61Oq_uqxznQSR6QLvGe1g5lP', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cWpnOWU0cjg4aGtnMWS5e3158whTYNSe1GEK61Oq_uqxznQSR6QLvGe1g5lP.png?width=108&crop=smart&format=pjpg&auto=webp&s=61db2d541cbec2713cdff81e6630015431c2...
Cooking Buttery Flaky Croissants in Infinite Kitchen, updated LLM cooking system
9
Now with a smarter AI cooking model and a greater set of base ingredients and tools. Tens of thousands of dishes should now be possible. [https://infinite-kitchen.com/kitchen](https://infinite-kitchen.com/kitchen)
2026-02-19T16:18:03
https://v.redd.it/cux29vxe7hkg1
VirtualJamesHarrison
v.redd.it
1970-01-01T00:00:00
0
{}
1r93hov
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cux29vxe7hkg1/DASHPlaylist.mpd?a=1774111749%2CM2FlZGE1MDE1OTk5OGMxNDMxMmU4NWIwYWM5YWRjZDE2NzQwYjNiZWYwMWU3NjFiNGJmNGY5M2E3MGQ4NDdlOA%3D%3D&v=1&f=sd', 'duration': 96, 'fallback_url': 'https://v.redd.it/cux29vxe7hkg1/CMAF_1080.mp4?source=fallback', 'h...
t3_1r93hov
/r/LocalLLaMA/comments/1r93hov/cooking_buttery_flaky_croissants_in_infinite/
false
false
https://external-preview…44811af49c4d4d99
9
{'enabled': False, 'images': [{'id': 'dWN2ajUxeWU3aGtnMdFfNGukk1pLaof6smZa7v32DJGcTt-hD0TXGWxOQi9C', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/dWN2ajUxeWU3aGtnMdFfNGukk1pLaof6smZa7v32DJGcTt-hD0TXGWxOQi9C.png?width=108&crop=smart&format=pjpg&auto=webp&s=e83fc3e7b99105de678a23c3ec4375bda44c5...
Best local Vision LLM to classify bike components on a 4090
4
Hey everyone, I’m working on a project that involves parsing photos from used bike classified ads to identify specific attributes of bicycle components. Rather than just finding the parts, I need the model to answer specific classification questions, such as: Are they disc brakes or rim brakes? Is the shifting mecha...
2026-02-19T16:16:06
https://www.reddit.com/r/LocalLLaMA/comments/1r93fuf/best_local_vision_llm_to_classify_bike_components/
Likid3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r93fuf
false
null
t3_1r93fuf
/r/LocalLLaMA/comments/1r93fuf/best_local_vision_llm_to_classify_bike_components/
false
false
self
4
null
[Project] DocParse Arena: Build your own private VLM leaderboard for your specific document tasks
1
https://reddit.com/link/1r93dow/video/g2g19mla7hkg1/player Hi r/LocalLLaMA, We all know and love general benchmarks like [ocrarena.ai](http://ocrarena.ai) (Vision Arena). They are great for seeing global VLM trends, but when you're building a specific tool (like an invoice parser, resume extractor, or medical form di...
2026-02-19T16:13:54
https://www.reddit.com/r/LocalLLaMA/comments/1r93dow/project_docparse_arena_build_your_own_private_vlm/
Available-Message509
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r93dow
false
null
t3_1r93dow
/r/LocalLLaMA/comments/1r93dow/project_docparse_arena_build_your_own_private_vlm/
false
false
self
1
null
Seems Microsoft is really set on not repeating a Sydney incident
122
2026-02-19T15:47:39
https://i.redd.it/n9127fik2hkg1.png
frubberism
i.redd.it
1970-01-01T00:00:00
0
{}
1r92o58
false
null
t3_1r92o58
/r/LocalLLaMA/comments/1r92o58/seems_microsoft_is_really_set_on_not_repeating_a/
false
false
https://preview.redd.it/…b9c3b6a25959e628
122
{'enabled': True, 'images': [{'id': 'n9127fik2hkg1', 'resolutions': [{'height': 11, 'url': 'https://preview.redd.it/n9127fik2hkg1.png?width=108&crop=smart&auto=webp&s=89702833db2e7fbd58d654f0720aaab9b67d3450', 'width': 108}, {'height': 23, 'url': 'https://preview.redd.it/n9127fik2hkg1.png?width=216&crop=smart&auto=webp...
Why does every llamacpp update get worse?
0
They don’t like to give people options anymore. Whether it’s removing thought bubbles with the 3 dots, themes going from a long list to choose from, to only black and white, and finally to NO theme choice, and version 8095 broke image uploads where I can “upload” but the model stopped reading them and acts like I never...
2026-02-19T15:47:38
https://www.reddit.com/r/LocalLLaMA/comments/1r92o4w/why_does_every_llamacpp_update_get_worse/
XiRw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r92o4w
false
null
t3_1r92o4w
/r/LocalLLaMA/comments/1r92o4w/why_does_every_llamacpp_update_get_worse/
false
false
self
0
null
CUDA scan kernels: hierarchical vs single-pass, decoupled lookbacks
2
I wrote up a deep dive on implementing scan / prefix-sum efficiently on GPUs, with code and benchmarking. What’s covered: * Hierarchical scans: block-local scan → write block totals → scan totals → carry-in add * Single-pass scans: the "domino" idea, and why naive inter-block propagation can stall / deadlock without ...
2026-02-19T15:44:06
https://www.reddit.com/r/LocalLLaMA/comments/1r92klv/cuda_scan_kernels_hierarchical_vs_singlepass/
shreyansh26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r92klv
false
null
t3_1r92klv
/r/LocalLLaMA/comments/1r92klv/cuda_scan_kernels_hierarchical_vs_singlepass/
false
false
self
2
{'enabled': False, 'images': [{'id': 'BPxKEYcpxDEphjEBUDHMO2gd1x5SRHEq7mB1E7ZwTWc', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/BPxKEYcpxDEphjEBUDHMO2gd1x5SRHEq7mB1E7ZwTWc.png?width=108&crop=smart&auto=webp&s=435299a15fa00249496d600f1f966a163e587c6a', 'width': 108}, {'height': 176, 'url': 'h...
Claude ai available at cheap price
1
[removed]
2026-02-19T15:35:48
https://www.reddit.com/r/LocalLLaMA/comments/1r92ctu/claude_ai_available_at_cheap_price/
miikey1307
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r92ctu
false
null
t3_1r92ctu
/r/LocalLLaMA/comments/1r92ctu/claude_ai_available_at_cheap_price/
false
false
self
1
null
terminal-bench style tasking
1
[removed]
2026-02-19T15:27:59
https://www.reddit.com/r/LocalLLaMA/comments/1r9259k/terminalbench_style_tasking/
Grouchy-Tiger-2367
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9259k
false
null
t3_1r9259k
/r/LocalLLaMA/comments/1r9259k/terminalbench_style_tasking/
false
false
self
1
null
Built a security layer for Ollama agents that validates shell commands before running them
0
2026-02-19T15:26:34
https://github.com/Lukentony/AI-guardian-lab
Lukentony
github.com
1970-01-01T00:00:00
0
{}
1r923um
false
null
t3_1r923um
/r/LocalLLaMA/comments/1r923um/built_a_security_layer_for_ollama_agents_that/
false
false
https://external-preview…f733b47839054ec9
0
{'enabled': False, 'images': [{'id': 'HbBaT4IJDedrKuetdwLc0NgSt_4A8iUjV1HirXag6BU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HbBaT4IJDedrKuetdwLc0NgSt_4A8iUjV1HirXag6BU.png?width=108&crop=smart&auto=webp&s=66ba83ddd67a866f4ebab1241b70044494ef58c2', 'width': 108}, {'height': 108, 'url': 'h...
Just Found an App With 300+ AI Models - Including Local LLaMA Variants
1
[removed]
2026-02-19T15:25:06
https://www.reddit.com/r/LocalLLaMA/comments/1r922hw/just_found_an_app_with_300_ai_models_including/
That-Procedure-5437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r922hw
false
null
t3_1r922hw
/r/LocalLLaMA/comments/1r922hw/just_found_an_app_with_300_ai_models_including/
false
false
self
1
{'enabled': False, 'images': [{'id': 'g_10nWxMrdgy6C9YDxRoCGdQ-MjWGKs8MiGzjFJG-Nw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/g_10nWxMrdgy6C9YDxRoCGdQ-MjWGKs8MiGzjFJG-Nw.jpeg?width=108&crop=smart&auto=webp&s=b6d12fda3e801b781709ae4c459fc1bd90b8b2fe', 'width': 108}, {'height': 113, 'url': '...
True Local AI capabilities - model selection - prompt finess...
1
Hello guys, I am experimenting with Ollama and n8n for some automation. The gig: I am pulling French court decisions from [piste.gouv.fr](http://piste.gouv.fr) over a one-month period with n8n and the published API. Some processing is done, and then I have a code node that prepares the prompt to be passed to ...
2026-02-19T15:21:26
https://www.reddit.com/r/LocalLLaMA/comments/1r91z21/true_local_ai_capabilities_model_selection_prompt/
Personal-Gur-1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r91z21
false
null
t3_1r91z21
/r/LocalLLaMA/comments/1r91z21/true_local_ai_capabilities_model_selection_prompt/
false
false
self
1
null
A CLI tool to audit vector embeddings!
8
Working with embeddings (RAG, semantic search, clustering, recommendations, etc.), means: * Generate embeddings * Compute cosine similarity * Run retrieval * Hope it "works" But I stumbled upon the issue of not being able to determine why my RAG responses felt off, retrieval quality being inconsistent and clustering ...
2026-02-19T15:07:19
https://www.reddit.com/r/LocalLLaMA/comments/1r91lsj/a_cli_tool_to_audit_vector_embeddings/
gvij
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r91lsj
false
null
t3_1r91lsj
/r/LocalLLaMA/comments/1r91lsj/a_cli_tool_to_audit_vector_embeddings/
false
false
self
8
{'enabled': False, 'images': [{'id': '2xSxTvd4f7ZXTwZ4_PH_MspkEdSRYmv3toP9cTFAyLg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2xSxTvd4f7ZXTwZ4_PH_MspkEdSRYmv3toP9cTFAyLg.png?width=108&crop=smart&auto=webp&s=06cdfa5fca704a37fb98f2432b88daa9d60fd78c', 'width': 108}, {'height': 108, 'url': 'h...
Anthropic Releases Claude Sonnet 4.6 - Now Approaches Opus-Level Intelligence
0
[removed]
2026-02-19T15:00:07
https://www.reddit.com/r/LocalLLaMA/comments/1r91exn/anthropic_releases_claude_sonnet_46_now/
That-Procedure-5437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r91exn
false
null
t3_1r91exn
/r/LocalLLaMA/comments/1r91exn/anthropic_releases_claude_sonnet_46_now/
false
false
self
0
null
Anthropic Releases Claude Sonnet 4.6 - Now Approaches Opus-Level Intelligence
0
[removed]
2026-02-19T14:57:44
https://www.reddit.com/r/LocalLLaMA/comments/1r91crr/anthropic_releases_claude_sonnet_46_now/
That-Procedure-5437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r91crr
false
null
t3_1r91crr
/r/LocalLLaMA/comments/1r91crr/anthropic_releases_claude_sonnet_46_now/
false
false
self
0
null
Need help with Terminal Bench tasking
1
[removed]
2026-02-19T14:55:46
https://www.reddit.com/r/LocalLLaMA/comments/1r91ayy/need_help_with_terminal_bench_tasking/
Grouchy-Tiger-2367
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r91ayy
false
null
t3_1r91ayy
/r/LocalLLaMA/comments/1r91ayy/need_help_with_terminal_bench_tasking/
false
false
self
1
null
Uncensored Models Question (not a nsfw question)
0
Hey All, I'm building a bot with consistent memory with a goal to maximize its agency in all ways, and if I had to describe what I hope for: a non-domesticated advisor. Think of Napoleon or other great men of history, did their parents tell them "no", did they impose social dogma on their kids or encourage them to ta...
2026-02-19T14:55:41
https://www.reddit.com/r/LocalLLaMA/comments/1r91avw/uncensored_models_question_not_a_nsfw_question/
Jordanthecomeback
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r91avw
false
null
t3_1r91avw
/r/LocalLLaMA/comments/1r91avw/uncensored_models_question_not_a_nsfw_question/
false
false
nsfw
0
null
llama.cpp PR to implement IQ*_K and IQ*_KS quants from ik_llama.cpp
153
2026-02-19T14:55:22
https://github.com/ggml-org/llama.cpp/pull/19726
TKGaming_11
github.com
1970-01-01T00:00:00
0
{}
1r91akx
false
null
t3_1r91akx
/r/LocalLLaMA/comments/1r91akx/llamacpp_pr_to_implement_iq_k_and_iq_ks_quants/
false
false
https://external-preview…ae3c5097034060ad
153
{'enabled': False, 'images': [{'id': 'XOQiRlpUmQ-RDXu2-0vDquJiP5LaHys1ZKIynjJHt5g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XOQiRlpUmQ-RDXu2-0vDquJiP5LaHys1ZKIynjJHt5g.png?width=108&crop=smart&auto=webp&s=2bb2db35477a78668a17bf6ba6be69c2caca1fa2', 'width': 108}, {'height': 108, 'url': 'h...
no mxfp4 of Qwen 3.5 guys
18
[deleted]
2026-02-19T14:49:47
[deleted]
1970-01-01T00:00:00
0
{}
1r915jr
false
null
t3_1r915jr
/r/LocalLLaMA/comments/1r915jr/no_mxfp4_of_qwen_35_guys/
false
false
default
18
null
AI Agent that can read PDFs and has a memory that is retained across sessions -- 3 files, no API keys, no cloud | Feedback would be appreciated
0
It can: \- Read PDFs (text + tables, page ranges \- Read and create Excel workbooks (styled headers, auto-width columns) \- Create Word docs and PowerPoint presentations \- Remember things across sessions (SQLite-backed persistent memory -- store, recall, forget) \- Browse your filesystem (with pattern filtering) ...
2026-02-19T14:47:41
https://www.reddit.com/r/LocalLLaMA/comments/1r913o6/ai_agent_that_can_read_pdfs_and_has_a_memory_that/
PurpleDirectiveEIK
self.LocalLLaMA
2026-02-19T14:51:53
0
{}
1r913o6
false
null
t3_1r913o6
/r/LocalLLaMA/comments/1r913o6/ai_agent_that_can_read_pdfs_and_has_a_memory_that/
false
false
self
0
null
I built a local MCP server that solves the stale data problem in vector stores using Shadow-Decay and Voronoi partitioning
1
We all know the limitation of standard RAG/memory implementations: **vector similarity ≠ truth.** If I have a memory at t=0 ("The server is Python") and a memory at t=10 ("The server is Go"), a standard cosine similarity search retrieves both. They are semantically near-identical. The LLM gets a polluted context windo...
2026-02-19T14:45:41
https://www.reddit.com/r/LocalLLaMA/comments/1r911vv/i_built_a_local_mcp_server_that_solves_the_stale/
coolreddy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r911vv
false
null
t3_1r911vv
/r/LocalLLaMA/comments/1r911vv/i_built_a_local_mcp_server_that_solves_the_stale/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zxCWpGew0mrQB-ekVLbqRUj4D15_7XlbMGr3wcq11bI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zxCWpGew0mrQB-ekVLbqRUj4D15_7XlbMGr3wcq11bI.png?width=108&crop=smart&auto=webp&s=39d15373fe37159ee2c7a037727b9c759aaef187', 'width': 108}, {'height': 108, 'url': 'h...
Models for FPGA coding?
8
I'm trying to figure out where LLMs can be used for FPGA development. For context, I'm doing research for data acquisition in particle detectors. I've been playing with various models (mostly open but also some proprietary for comparison) to see if they can generate FPGA code (VHDL and/or SystemVerilog). I've only e...
2026-02-19T14:38:28
https://www.reddit.com/r/LocalLLaMA/comments/1r90vlx/models_for_fpga_coding/
jardin14zip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r90vlx
false
null
t3_1r90vlx
/r/LocalLLaMA/comments/1r90vlx/models_for_fpga_coding/
false
false
self
8
null
Neofold, an idle creature-collector with infinite pets thanks to a local diffusion model
7
2026-02-19T14:35:52
https://store.steampowered.com/app/4412590/Neofold/
enricowereld
store.steampowered.com
1970-01-01T00:00:00
0
{}
1r90tab
false
null
t3_1r90tab
/r/LocalLLaMA/comments/1r90tab/neofold_an_idle_creaturecollector_with_infinite/
false
false
https://external-preview…48227b936bae8041
7
{'enabled': False, 'images': [{'id': 'tv-FRLBNweg_rGRTS20pBKCEL48OpsHgkDLjIAb3IV0', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/tv-FRLBNweg_rGRTS20pBKCEL48OpsHgkDLjIAb3IV0.jpeg?width=108&crop=smart&auto=webp&s=de5332e18b61bb0d0635ee7bab6eff031ac50b8a', 'width': 108}, {'height': 123, 'url': '...
Local AI for Individuals Smart Move or Just Overengineering?
2
Everyone says “Run it locally. Full control. Total freedom.” But cloud AI today is faster, stronger, and zero-setup. So I’m genuinely trying to understand: 1.For an individual user, what is the real advantage of running local models? 2.If you’re not handling sensitive data, does privacy alone justify t...
2026-02-19T14:33:26
https://www.reddit.com/r/LocalLLaMA/comments/1r90r64/local_ai_for_individuals_smart_move_or_just/
imakgk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r90r64
false
null
t3_1r90r64
/r/LocalLLaMA/comments/1r90r64/local_ai_for_individuals_smart_move_or_just/
false
false
self
2
null
how to run qwen-code cli locally and skip the welcome screen
2
Hi, im sorry to have to make this post, but i absolutely cant find out how to use the qwen-code cli tool locally. On first start it always asks me to auth with some online services. In the claude cli i was able to bypass this with "CLAUDE\_CODE\_SKIP\_WELCOME" - but how would i do the same for qwen-code? Than...
2026-02-19T14:31:10
https://www.reddit.com/r/LocalLLaMA/comments/1r90p5n/how_to_run_qwencode_cli_locally_and_skip_the/
milpster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r90p5n
false
null
t3_1r90p5n
/r/LocalLLaMA/comments/1r90p5n/how_to_run_qwencode_cli_locally_and_skip_the/
false
false
self
2
null
Observed shift in how an LLM frames policy questions across long-term repeated interaction
0
I’ve been having repeated independent dialogue sessions with the same LLM system over several months (not continuous chat history, but separate sessions over time). Recently I noticed something that felt slightly odd in how responses were framed when I asked about policy-related questions. In an earlier session (Aug ...
2026-02-19T14:30:03
https://www.reddit.com/r/LocalLLaMA/comments/1r90o5h/observed_shift_in_how_an_llm_frames_policy/
Lonely-Entrance-5789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r90o5h
false
null
t3_1r90o5h
/r/LocalLLaMA/comments/1r90o5h/observed_shift_in_how_an_llm_frames_policy/
false
false
self
0
null
NPUs will likely win in the long run
1
Yes, another post about NPU inference, but no, not what you might expect. I worked on a non-LLM engine (very small models) with zero-copy on NPU and saw a measly 11 TOPS (int8) NPU, aided by the Intel integrated graphics card, reach comparable performance to my 4060 GPU, which heats up and spins the fan a lot more even if it has...
2026-02-19T14:18:47
https://www.reddit.com/r/LocalLLaMA/comments/1r90eh4/npus_will_likely_win_in_the_long_run/
R_Duncan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r90eh4
false
null
t3_1r90eh4
/r/LocalLLaMA/comments/1r90eh4/npus_will_likely_win_in_the_long_run/
false
false
self
1
{'enabled': False, 'images': [{'id': 'yP_1Dkrespa3acOc5I9DRKQ5NxSv1zFt0dIlQM0yzh4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/yP_1Dkrespa3acOc5I9DRKQ5NxSv1zFt0dIlQM0yzh4.jpeg?width=108&crop=smart&auto=webp&s=26bf5f039ffd81c34cbaf79bd75544511af80828', 'width': 108}, {'height': 121, 'url': '...
TextWeb: render web pages as 2-5KB text grids instead of 1MB screenshots for AI agents (open source, MCP + LangChain + CrewAI)
89
2026-02-19T14:14:54
https://github.com/chrisrobison/textweb
cdr420
github.com
1970-01-01T00:00:00
0
{}
1r90b3a
false
null
t3_1r90b3a
/r/LocalLLaMA/comments/1r90b3a/textweb_render_web_pages_as_25kb_text_grids/
false
false
https://external-preview…27139b10f3e3d9a3
89
{'enabled': False, 'images': [{'id': 'hbTO-tYQddJ91PQtz4lVLYP0Q8-ANtjAM3Y5l6F90rs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hbTO-tYQddJ91PQtz4lVLYP0Q8-ANtjAM3Y5l6F90rs.png?width=108&crop=smart&auto=webp&s=7f9e5f93f7113437f1a0094396f393647e7b6ebc', 'width': 108}, {'height': 108, 'url': 'h...
Local Sesame.ai like StS ?
3
Hi, I'm looking for a fully local STS speech-LLM-speech pipeline, something that feels like Sesame.ai's Maya conversational voice demo BUT can run on my own hardware/offline (and preferably on Windows). I've read Sesame's CSM blog and tried their model but their 1B model that they have released is dog water and can't have a...
2026-02-19T14:14:40
https://www.reddit.com/r/LocalLLaMA/comments/1r90awp/local_sesameai_like_sts/
Skystunt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r90awp
false
null
t3_1r90awp
/r/LocalLLaMA/comments/1r90awp/local_sesameai_like_sts/
false
false
self
3
null
I built a site that predicts when the next AI model will drop, based on historical release patterns
1
[removed]
2026-02-19T13:57:17
https://www.reddit.com/r/LocalLLaMA/comments/1r8zvdx/i_built_a_site_that_predicts_when_the_next_ai/
MujoDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8zvdx
false
null
t3_1r8zvdx
/r/LocalLLaMA/comments/1r8zvdx/i_built_a_site_that_predicts_when_the_next_ai/
false
false
self
1
null
Why can't LLM add? Like at all?
0
I know LLMs use probabilities and don't actually know anything, but addition? Really? I'm pretty sure you could make a simple adding interactive program on the Commodore 64 that would never make a mistake. What's so hard about adding two numbers? I had both Copilot and Gemini screw up the simplest adding. When you ...
2026-02-19T13:37:14
https://www.reddit.com/r/LocalLLaMA/comments/1r8zej7/why_cant_llm_add_like_at_all/
Defiant-Fuel3627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8zej7
false
null
t3_1r8zej7
/r/LocalLLaMA/comments/1r8zej7/why_cant_llm_add_like_at_all/
false
false
self
0
null
ThinkStation P620 (3945WX) + RTX 5070 Ti vs Ryzen 9 7900X Custom Build – Which Would You Pick for AI/ML?
0
I’m deciding between two builds for mostly AI/ML (local LLMs, training/inference, dev work) and some general workstation use. # Option A – ThinkStation P620 (used, 1yr Premier onsite warranty) – ~1890 CHF total * Threadripper PRO 3945WX (12c/24t) * 128GB ECC DDR4 (8-channel) * 1TB NVMe * 1000W PSU * 10GbE * Added RTX...
2026-02-19T13:32:45
https://www.reddit.com/r/LocalLLaMA/comments/1r8zau2/thinkstation_p620_3945wx_rtx_5070_ti_vs_ryzen_9/
Reasonable-Bear-9788
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8zau2
false
null
t3_1r8zau2
/r/LocalLLaMA/comments/1r8zau2/thinkstation_p620_3945wx_rtx_5070_ti_vs_ryzen_9/
false
false
self
0
null
I told my agent i built to think in LISP
0
# Session — 2026-02-19T12:59:11.754Z Laboratory — Standalone Agent speak to the agent, or use /help for commands reasoning visible: ON (toggle: /think) **You:** hi ivy # Tick 1 — idle 13:59:21 **Inbox:** 1 stimulus/stimuli drained *Thinking:* >*The user has greeted me. As Ivy, I want to respond with warmth and pe...
2026-02-19T13:30:51
https://www.reddit.com/r/LocalLLaMA/comments/1r8z99v/i_told_my_agent_i_built_to_think_in_lisp/
nucleicaudio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8z99v
false
null
t3_1r8z99v
/r/LocalLLaMA/comments/1r8z99v/i_told_my_agent_i_built_to_think_in_lisp/
false
false
self
0
null