| column | type | range / stats |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 distinct values |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
he "Lumen Anchor Protocol" (LAP) by Craig J McGovern [ME](repost from r/promptengineering)
0
# Patent pending. Anyone is free to use or test this protocol, but no one is allowed to profit from it without licensing. Otherwise enjoy. Share. Feedback comments and criticisms welcome. Yes, I am aware that I have made bold claims. I assure you they are all real. Load up the LAP in your test rigs and see for yourself...
2026-02-22T22:18:28
https://www.reddit.com/r/LocalLLaMA/comments/1rbzicj/he_lumen_anchor_protocol_lap_by_craig_j_mcgovern/
Teralitha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbzicj
false
null
t3_1rbzicj
/r/LocalLLaMA/comments/1rbzicj/he_lumen_anchor_protocol_lap_by_craig_j_mcgovern/
false
false
self
0
null
Ollama doesn't want to switch to GPU for vision model
0
Hey everyone, I just got a new laptop, and one of the first things I difd was to finally go and use LLMs right on my computer ! I'm not too greedy with my 8GB of RTX VRAM, but I have nice results. I use Ollama and Python as of now and use qwen2.5-coder:7b, ministral-3:8b on my GPU without any problem However, I...
2026-02-22T21:48:29
https://www.reddit.com/r/LocalLLaMA/comments/1rbyr37/ollama_doesnt_want_to_switch_to_gpu_for_vision/
Le_Mathematicien
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbyr37
false
null
t3_1rbyr37
/r/LocalLLaMA/comments/1rbyr37/ollama_doesnt_want_to_switch_to_gpu_for_vision/
false
false
self
0
null
If you have a RTX 5090 (that has a single connector), you can flash the MSI Lighting 800W VBIOS to get a lower power limit of 300W (and a max power of 660W).
58
Hello guys, hoping you guys are doing fine. As you know, NVIDIA artificially limited the power limit on the 5090s so you don't stack them, and get 6000 PROs instead (6000 PRO can go down to 150W). Even when undervolted it can use 400W sometimes. If you got a RTX 5090 with a single connector (basically most of them ex...
2026-02-22T21:36:36
https://www.reddit.com/r/LocalLLaMA/comments/1rbyg5x/if_you_have_a_rtx_5090_that_has_a_single/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbyg5x
false
null
t3_1rbyg5x
/r/LocalLLaMA/comments/1rbyg5x/if_you_have_a_rtx_5090_that_has_a_single/
false
false
self
58
null
Which local-sized models would you like to see in the next Brokk Power Ranking?
1
So far I've got devstral 2 123B, nemo 3, and qwen 3 coder next of the recent releases. Anything else you think might beat these?
2026-02-22T21:06:26
https://i.redd.it/i37jnsdx14lg1.png
mr_riptano
i.redd.it
1970-01-01T00:00:00
0
{}
1rbxnrm
false
null
t3_1rbxnrm
/r/LocalLLaMA/comments/1rbxnrm/which_localsized_models_would_you_like_to_see_in/
false
false
https://preview.redd.it/…59fcf72922157698
1
{'enabled': True, 'images': [{'id': 'i37jnsdx14lg1', 'resolutions': [{'height': 15, 'url': 'https://preview.redd.it/i37jnsdx14lg1.png?width=108&crop=smart&auto=webp&s=83e4a5b025728ff8a1dd9c5f27d0cd789e690723', 'width': 108}, {'height': 31, 'url': 'https://preview.redd.it/i37jnsdx14lg1.png?width=216&crop=smart&auto=webp...
Sparsity – my prototype for debt-line sparse embeddings (15–50× memory savings in tests)
7
trying out stuff... [https://github.com/sk281/sparsity](https://github.com/sk281/sparsity) Tell me if its any good Thanks for looking
2026-02-22T21:04:57
https://www.reddit.com/r/LocalLLaMA/comments/1rbxmce/sparsity_my_prototype_for_debtline_sparse/
Alarming_Actuator987
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbxmce
false
null
t3_1rbxmce
/r/LocalLLaMA/comments/1rbxmce/sparsity_my_prototype_for_debtline_sparse/
false
false
self
7
{'enabled': False, 'images': [{'id': 'Vg15lvbtOGbRaNbKvaiTpQm-z-ngTelX3I1CZ7AxPEs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Vg15lvbtOGbRaNbKvaiTpQm-z-ngTelX3I1CZ7AxPEs.png?width=108&crop=smart&auto=webp&s=95faecbbe4b8997cfdb4220916eb296fdc6345bb', 'width': 108}, {'height': 108, 'url': 'h...
a bigginer in the loccal ai feild
0
I have an RX 9070 XT, 32GB CL30 6000MT/s kit of RAM, Ryzen 7 7700. So I am a new person to the field of local AI hosting and I am looking to run AI locally on my PC. What I want is a chat bot that I can send pictures, videos, documents, or anything else. I would prefer if the AI chat bot felt more humane-like rather th...
2026-02-22T20:58:48
https://www.reddit.com/r/LocalLLaMA/comments/1rbxgh8/a_bigginer_in_the_loccal_ai_feild/
ihave3in13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbxgh8
false
null
t3_1rbxgh8
/r/LocalLLaMA/comments/1rbxgh8/a_bigginer_in_the_loccal_ai_feild/
false
false
self
0
null
Help with OpenCode
2
I'm kind of new in this AI world. I have managed to install opencode in wsl and running some local models with ollama. I have 64gb of ram and a 5070 with 12gb of vram. I know it's not much but I still get some usable speed out of 30b models. I'm currently running Got OSS 20b Qwen3-coder a3b Qwen2.5 coder 14b M...
2026-02-22T20:57:46
https://www.reddit.com/r/LocalLLaMA/comments/1rbxfj8/help_with_opencode/
Lazy_Experience_279
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbxfj8
false
null
t3_1rbxfj8
/r/LocalLLaMA/comments/1rbxfj8/help_with_opencode/
false
false
self
2
null
Apple is remotely bricking M4 Mac Minis used for Local LLMs (TOS Violation)
0
I can't believe I'm typing this. My brand new M4 Mac Mini just turned into a paperweight. Literally. One minute I was running OpenClaw to fine-tune a 7B model, the next minute—black screen, unresponsive, serial number flagged. I called support (after waiting 4 hours) and they finally admitted it. They are doing **MASS...
2026-02-22T20:56:11
https://www.reddit.com/r/LocalLLaMA/comments/1rbxe30/apple_is_remotely_bricking_m4_mac_minis_used_for/
Wrong_User_Logged
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbxe30
false
null
t3_1rbxe30
/r/LocalLLaMA/comments/1rbxe30/apple_is_remotely_bricking_m4_mac_minis_used_for/
false
false
self
0
null
I rewrote an AI agent CLI entirely in Zig — 3 MB binary, zero runtime, 6 AI backends, cross-compiles in one command
0
Hey everyone — I just open-sourced **Wintermolt**, a fully autonomous AI agent CLI written from scratch in Zig. **GitHub:** [https://github.com/lupin4/wintermolt](https://github.com/lupin4/wintermolt) **The problem:** Every AI coding tool I've used ships hundreds of megabytes of Node.js or Python runtime just to send...
2026-02-22T20:55:13
https://www.reddit.com/r/LocalLLaMA/comments/1rbxd6l/i_rewrote_an_ai_agent_cli_entirely_in_zig_3_mb/
Pamelalam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbxd6l
false
null
t3_1rbxd6l
/r/LocalLLaMA/comments/1rbxd6l/i_rewrote_an_ai_agent_cli_entirely_in_zig_3_mb/
false
false
self
0
{'enabled': False, 'images': [{'id': 'c5z2O3VnDRutsla4-hUkpU3qblAgdSLgqGh-fnLbBZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c5z2O3VnDRutsla4-hUkpU3qblAgdSLgqGh-fnLbBZ0.png?width=108&crop=smart&auto=webp&s=bd2cc1ac01e7f1d9e4b33c6639aab611e868291e', 'width': 108}, {'height': 108, 'url': 'h...
Simple repeatable LTX-2 workflow for multi-shot consistency: identity lock across shots with daily output goals
1
[deleted]
2026-02-22T20:53:05
[deleted]
1970-01-01T00:00:00
0
{}
1rbxb67
false
null
t3_1rbxb67
/r/LocalLLaMA/comments/1rbxb67/simple_repeatable_ltx2_workflow_for_multishot/
false
false
default
1
null
Apple is remotely bricking M4 Mac Minis used for Local LLMs (TOS Violation)
0
I can't believe I'm typing this. My brand new M4 Mac Mini just turned into a paperweight. Literally. One minute I was running OpenClaw to fine-tune a 7B model, the next minute—black screen, unresponsive, serial number flagged. I called support (after waiting 4 hours) and they finally admitted it. They are doing **MASS...
2026-02-22T20:51:08
https://www.reddit.com/r/LocalLLaMA/comments/1rbx9ay/apple_is_remotely_bricking_m4_mac_minis_used_for/
Wrong_User_Logged
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbx9ay
false
null
t3_1rbx9ay
/r/LocalLLaMA/comments/1rbx9ay/apple_is_remotely_bricking_m4_mac_minis_used_for/
false
false
self
0
null
AGI-ish agent workflow for UI shipping: browser actions + screenshot-to-code + visual diffs — what would you improve first?
0
I’ve been testing a more "AGI-ish" dev loop for shipping UI faster, but keeping quality checks in place. Current stack: \- \`agent-browser\` for end-to-end browser actions (real pages, real forms, real flows) \- screenshot extraction + screenshot-to-code for fast inspiration cloning from references \- Figma imple...
2026-02-22T20:30:30
https://www.reddit.com/r/LocalLLaMA/comments/1rbwpvw/agiish_agent_workflow_for_ui_shipping_browser/
Exotic_Bend_1102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbwpvw
false
null
t3_1rbwpvw
/r/LocalLLaMA/comments/1rbwpvw/agiish_agent_workflow_for_ui_shipping_browser/
false
false
self
0
null
nanollama — train Llama 3 from scratch and export to GGUF, one command, open source
82
nanollama — train Llama 3 from scratch. I've been working on a framework for training Llama 3 architecture models from scratch: not fine-tuning, not LoRA, actual from-zero pretraining. The output is a llama.cpp-compatible GGUF file. The whole pipeline is one command: ''' bash runs/lambda\_train.sh --name mini ''...
2026-02-22T20:17:50
https://www.reddit.com/r/LocalLLaMA/comments/1rbwbgl/nanollama_train_llama_3_from_scratch_and_export/
ataeff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbwbgl
false
null
t3_1rbwbgl
/r/LocalLLaMA/comments/1rbwbgl/nanollama_train_llama_3_from_scratch_and_export/
false
false
self
82
{'enabled': False, 'images': [{'id': 'wCQh8kniYDhYSNIIf6ajOjgYKT8M5L7TyYpG2LcDmL4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wCQh8kniYDhYSNIIf6ajOjgYKT8M5L7TyYpG2LcDmL4.png?width=108&crop=smart&auto=webp&s=5098cbf335adab0dca81abbb1b65f458aaeb315a', 'width': 108}, {'height': 108, 'url': 'h...
[Showcase] Brood: macOS reference-first image editor (Tauri + native Rust engine) — looking for LocalLLaMA guidance
1
I’m building Brood, a promptless, reference-first AI image editing desktop app for macOS. Current setup: \- macOS-only desktop app (Tauri) \- Native Rust runtime (brood-rs) is the default engine \- Runs/artifacts stay local under \~/brood\_runs/run-\* (events.jsonl, receipts, payload snapshots) \- OpenRouter-f...
2026-02-22T20:06:20
https://v.redd.it/cvkaufx6r3lg1
Distinct-Mortgage848
v.redd.it
1970-01-01T00:00:00
0
{}
1rbw0jj
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cvkaufx6r3lg1/DASHPlaylist.mpd?a=1774382802%2CMmRhMDY4YTBmODcyOGQzNWExNjk5YWU4MjdlNWViYzhmOGY0ZmQ5MDkxNDc3NDFhN2M5OWZkMjBjNGRmMjViMg%3D%3D&v=1&f=sd', 'duration': 72, 'fallback_url': 'https://v.redd.it/cvkaufx6r3lg1/CMAF_1080.mp4?source=fallback', 'h...
t3_1rbw0jj
/r/LocalLLaMA/comments/1rbw0jj/showcase_brood_macos_referencefirst_image_editor/
false
false
https://external-preview…3b5628381c1901f2
1
{'enabled': False, 'images': [{'id': 'YWZsMzRueDZyM2xnMVpIpHm-CR-sfoeoFDlYSvpdO_Tyh9abegnYSAbCKns8', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/YWZsMzRueDZyM2xnMVpIpHm-CR-sfoeoFDlYSvpdO_Tyh9abegnYSAbCKns8.png?width=108&crop=smart&format=pjpg&auto=webp&s=2768cbb02fb361c69fb11cd21db3e67ae8bb1...
What chat is the closest to chat gpt 4o that’s not Claude or Gemini or le chat something new something powerful within the guardrails that isn’t afraid to give there personal opinions on the truth or whatever your asking without the grounded bull$hit
0
Let’s not gate keep this
2026-02-22T20:05:36
https://www.reddit.com/r/LocalLLaMA/comments/1rbvzvk/what_chat_is_the_closest_to_chat_gpt_4o_thats_not/
drod4ever
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbvzvk
false
null
t3_1rbvzvk
/r/LocalLLaMA/comments/1rbvzvk/what_chat_is_the_closest_to_chat_gpt_4o_thats_not/
false
false
self
0
null
Running Llama 3.2 1B entirely on an AMD NPU on Linux (Strix Halo, IRON framework, 4.4 tok/s)
39
I got Llama 3.2 1B running inference entirely on the AMD NPU on Linux. Every operation (attention, GEMM, RoPE, RMSNorm, SiLU, KV cache) runs on the NPU; no CPU or GPU fallback. As far as I can tell, this is the first time anyone has publicly documented this working on Linux. ## Hardware - AMD Ryzen AI Max+ 395 (Strix...
2026-02-22T19:51:45
https://www.reddit.com/r/LocalLLaMA/comments/1rbvmpk/running_llama_32_1b_entirely_on_an_amd_npu_on/
SuperTeece
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbvmpk
false
null
t3_1rbvmpk
/r/LocalLLaMA/comments/1rbvmpk/running_llama_32_1b_entirely_on_an_amd_npu_on/
false
false
self
39
null
How to Remove Reverb from Audio (2026 Guide)
1
[removed]
2026-02-22T19:48:53
https://www.reddit.com/r/LocalLLaMA/comments/1rbvjx3/how_to_remove_reverb_from_audio_2026_guide/
Upbeat_Performer_173
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbvjx3
false
null
t3_1rbvjx3
/r/LocalLLaMA/comments/1rbvjx3/how_to_remove_reverb_from_audio_2026_guide/
false
false
self
1
null
Running Llama 3.2 1B entirely on an AMD NPU on Linux (Strix Halo, IRON framework, 4.4 tok/s)
1
[removed]
2026-02-22T19:45:12
https://www.reddit.com/r/LocalLLaMA/comments/1rbvgfc/running_llama_32_1b_entirely_on_an_amd_npu_on/
SuperTeece
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbvgfc
false
null
t3_1rbvgfc
/r/LocalLLaMA/comments/1rbvgfc/running_llama_32_1b_entirely_on_an_amd_npu_on/
false
false
self
1
{'enabled': False, 'images': [{'id': '1Z-UcmaYzdM0TW18axvhZgUPLD40R2DJWSRPlRGgfeQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1Z-UcmaYzdM0TW18axvhZgUPLD40R2DJWSRPlRGgfeQ.png?width=108&crop=smart&auto=webp&s=29bfef40e5759cfc4c5f0b22eb87b8d480ae5f20', 'width': 108}, {'height': 108, 'url': 'h...
Best open-source coder model for replacing Claude Code with Qwen locally?
25
Hi everyone, I’m currently using Claude Code but want to move fully local. I’m specifically looking for a strong coding model for: * Claude code like capaiblities - code + bash * Long file capabiliites * Read image, files I’m considering `Qwen3-Coder`, but I’m unsure: 1. Is `Qwen3-Coder` the best choice for a 12G...
2026-02-22T19:40:34
https://www.reddit.com/r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/
pauljeba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbvbzt
false
null
t3_1rbvbzt
/r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/
false
false
self
25
null
Transformer architecture: A stepping stone, or here to stay?
0
Since its academic fame in 2017 and the funding campaigns later in 2019+, we’ve been throwing more resources and time into Transformer models and training techniques to advance its output. We already understand the limitations with context rot, hallucinations, and the need for endlessly huge models (1T+ params) to ach...
2026-02-22T19:39:27
https://www.reddit.com/r/LocalLLaMA/comments/1rbvavp/transformer_architecture_a_stepping_stone_or_here/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbvavp
false
null
t3_1rbvavp
/r/LocalLLaMA/comments/1rbvavp/transformer_architecture_a_stepping_stone_or_here/
false
false
self
0
null
Running Llama 3.2 1B entirely on an AMD NPU on Linux (Strix Halo, IRON framework, 4.4 tok/s)
1
[removed]
2026-02-22T19:37:08
https://www.reddit.com/r/LocalLLaMA/comments/1rbv8ow/running_llama_32_1b_entirely_on_an_amd_npu_on/
SuperTeece
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbv8ow
false
null
t3_1rbv8ow
/r/LocalLLaMA/comments/1rbv8ow/running_llama_32_1b_entirely_on_an_amd_npu_on/
false
false
self
1
{'enabled': False, 'images': [{'id': '1Z-UcmaYzdM0TW18axvhZgUPLD40R2DJWSRPlRGgfeQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1Z-UcmaYzdM0TW18axvhZgUPLD40R2DJWSRPlRGgfeQ.png?width=108&crop=smart&auto=webp&s=29bfef40e5759cfc4c5f0b22eb87b8d480ae5f20', 'width': 108}, {'height': 108, 'url': 'h...
[M] SOLARized-GraniStral-14B (2202) (Ministral 3 14B-Instruct-2512 <- (Granite 3.3 8B <- SOLAR 10.7B) with detailed weight shift metrics.
8
Hi everyone, I’ve been experimenting with the new **Ministral-3-14B-Instruct-2512** as a backbone, trying to infuse it with the reasoning style of **SOLAR-10.7B** and the structural stability of **IBM Granite 3.3-8B**. The goal wasn't just a "weight soup," but a controlled linear deformation of the attention (QKV) an...
2026-02-22T19:36:28
https://www.reddit.com/r/LocalLLaMA/comments/1rbv83a/m_solarizedgranistral14b_2202_ministral_3/
brokenevolution
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbv83a
false
null
t3_1rbv83a
/r/LocalLLaMA/comments/1rbv83a/m_solarizedgranistral14b_2202_ministral_3/
false
false
self
8
{'enabled': False, 'images': [{'id': 'eJj5kaGjvb9Ww_Mib13xWQMdUkEHT6ryZHGrneV9o6o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eJj5kaGjvb9Ww_Mib13xWQMdUkEHT6ryZHGrneV9o6o.png?width=108&crop=smart&auto=webp&s=0f906c1568181f3917393d49a6c266115b35ed3a', 'width': 108}, {'height': 116, 'url': 'h...
Yo dawg, I heard you like LLMs, so you need to sub to an LLM to make your LLLM work (Alex Ziskind)
0
Can anyone guess how what the retail total price for all 8 SPARK boxes, dozens of cables & 2 routers cost? For funs, add in electricity bill of it all.
2026-02-22T19:35:21
https://youtu.be/QJqKqxQR36Y
tomByrer
youtu.be
1970-01-01T00:00:00
0
{}
1rbv70q
false
{'oembed': {'author_name': 'Alex Ziskind', 'author_url': 'https://www.youtube.com/@AZisk', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/QJqKqxQR36Y?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pi...
t3_1rbv70q
/r/LocalLLaMA/comments/1rbv70q/yo_dawg_i_heard_you_like_llms_so_you_need_to_sub/
false
false
https://external-preview…e477f10bc8e31c16
0
{'enabled': False, 'images': [{'id': 'sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU.jpeg?width=108&crop=smart&auto=webp&s=8d31f25d392d4b99c5050e4ad54f28f69fc59f54', 'width': 108}, {'height': 162, 'url': '...
Predictions / Expectations / Wishlist on LLMs by end of 2026? (Realistic)
9
Here my Wishlist: 1. 1-4B models with best t/s(Like 20-30) for Mobile & edge devices.(Currently getting only 5 t/s for Qwen3-4B-IQ4XS on my 8GB RAM mobile) 2. 4-10B models with performance of current 30B models 3. 30-50B models with performance of current 100-150B models 4. 100-150B models with performance of current ...
2026-02-22T19:19:06
https://www.reddit.com/r/LocalLLaMA/comments/1rburpm/predictions_expectations_wishlist_on_llms_by_end/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rburpm
false
null
t3_1rburpm
/r/LocalLLaMA/comments/1rburpm/predictions_expectations_wishlist_on_llms_by_end/
false
false
self
9
null
Voice AI: Audio Fidelity vs. Behavioral Expression — What drives long-term engagement?
1
I'm developing a personal AI companion and I'm at a crossroads regarding the voice architecture. Since local hardware resources are limited, I have to choose a priority: 1. **Focus on Audio Fidelity:** A high-quality, crystal-clear human timbre. It’s pleasant for long sessions (like a premium audiobook), but the emoti...
2026-02-22T19:06:29
https://www.reddit.com/r/LocalLLaMA/comments/1rbufla/voice_ai_audio_fidelity_vs_behavioral_expression/
Alert_Protection6838
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbufla
false
null
t3_1rbufla
/r/LocalLLaMA/comments/1rbufla/voice_ai_audio_fidelity_vs_behavioral_expression/
false
false
self
1
null
We are very close to being able to use the MCP servers through the llama.cpp web interface ! 🚀
1
2026-02-22T18:58:28
https://i.redd.it/swtwf42af3lg1.png
Chausson_au_Pommes
i.redd.it
1970-01-01T00:00:00
0
{}
1rbu7q5
false
null
t3_1rbu7q5
/r/LocalLLaMA/comments/1rbu7q5/we_are_very_close_to_being_able_to_use_the_mcp/
false
false
https://preview.redd.it/…dddd902ad6c00eb1
1
{'enabled': True, 'images': [{'id': 'swtwf42af3lg1', 'resolutions': [{'height': 147, 'url': 'https://preview.redd.it/swtwf42af3lg1.png?width=108&crop=smart&auto=webp&s=87a1915dbe7d32b0dc1d80ba2d459b51169aff65', 'width': 108}, {'height': 295, 'url': 'https://preview.redd.it/swtwf42af3lg1.png?width=216&crop=smart&auto=we...
Void-Box: Capability-Bound Agent Runtime
6
# Hey everyone, We’ve been building **Void-Box**, a Rust runtime for executing AI agent workflows inside disposable KVM micro-VMs. The core idea: **VoidBox = Agent(Skill) + Isolation** Instead of running agents inside shared processes or containers, each stage runs inside its own micro-VM that is created on deman...
2026-02-22T18:44:27
https://www.reddit.com/r/LocalLLaMA/comments/1rbtudq/voidbox_capabilitybound_agent_runtime/
Wide_Spite5612
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbtudq
false
null
t3_1rbtudq
/r/LocalLLaMA/comments/1rbtudq/voidbox_capabilitybound_agent_runtime/
false
false
self
6
{'enabled': False, 'images': [{'id': 'g1ZiFQQy_g4aRR--rhIHG8_MlngGAxKoK2nh12u_5XA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g1ZiFQQy_g4aRR--rhIHG8_MlngGAxKoK2nh12u_5XA.png?width=108&crop=smart&auto=webp&s=5d3edbb044951295f03418447383c097b079b4f7', 'width': 108}, {'height': 108, 'url': 'h...
[R] FINAL Bench Released — A Metacognitive Benchmark That Measures Whether LLMs Can Notice and Fix Their Own Mistakes, Not Just Final-Answer Accuracy
1
Hi all, we are releasing FINAL Bench. Existing benchmarks (MMLU, GPQA, HumanEval, etc.) measure final-answer accuracy but do not separate whether a model can notice its own mistakes and actually correct them. FINAL Bench targets metacognitive behavior by measuring two components separately. \*\*Key idea:\*\* \- MA...
2026-02-22T18:36:18
https://www.reddit.com/r/LocalLLaMA/comments/1rbtmlv/r_final_bench_released_a_metacognitive_benchmark/
Expensive-Smell-5173
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbtmlv
false
null
t3_1rbtmlv
/r/LocalLLaMA/comments/1rbtmlv/r_final_bench_released_a_metacognitive_benchmark/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZURky4mV1DtEQpNFgysoTcc_FfFZCzQSHiKxz8_Vvlg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZURky4mV1DtEQpNFgysoTcc_FfFZCzQSHiKxz8_Vvlg.png?width=108&crop=smart&auto=webp&s=f0f9c6786d3d5024f2aecaa2134e85229a05441c', 'width': 108}, {'height': 116, 'url': 'h...
What Other Subs Do you Read to Keep Up with AI?
92
Just wondering what other subs do you recommend to read to keep up with AI?
2026-02-22T18:29:17
https://www.reddit.com/r/LocalLLaMA/comments/1rbtfld/what_other_subs_do_you_read_to_keep_up_with_ai/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbtfld
false
null
t3_1rbtfld
/r/LocalLLaMA/comments/1rbtfld/what_other_subs_do_you_read_to_keep_up_with_ai/
false
false
self
92
null
Built an offline MCP server that stops AI context bloat using local vector search over a locally indexed codebase.
0
Hey everyone, I wanted to share an open-source tool I’ve been developing called code-memory. It's a MCP server designed to fix how AI coding assistants interact with large codebases. # The Problem Right now, the default approach for AI coding assistants is to either brute-force dump your entire repository into the...
2026-02-22T18:22:35
http://github.com/kapillamba4/code-memory
Trust_Me_Bro_4sure
github.com
1970-01-01T00:00:00
0
{}
1rbt955
false
null
t3_1rbt955
/r/LocalLLaMA/comments/1rbt955/built_an_offline_mcp_server_that_stops_ai_context/
false
false
https://external-preview…f1a77f30800a89ae
0
{'enabled': False, 'images': [{'id': 'X77HaFAGL7z76XOIEcsb4FeHtMxHhEbcNapdjRBYtjE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X77HaFAGL7z76XOIEcsb4FeHtMxHhEbcNapdjRBYtjE.png?width=108&crop=smart&auto=webp&s=8ff5989dda43c78a9083d871ef4be946f3f3a517', 'width': 108}, {'height': 108, 'url': 'h...
Easy tutorial: Built a life admin agent with OpenClaw that lives in WhatsApp - tracks bills, fills forms, sends morning briefings. Local model handles the sensitive stuff
0
Wrote up a beginner-friendly tutorial on building a personal admin agent with OpenClaw. No code to write, just config files and terminal commands. It connects to WhatsApp, monitors bills and deadlines, does browser automation to check portals and fill forms, and sends a daily briefing every morning. The part relevant ...
2026-02-22T18:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1rbt83w/easy_tutorial_built_a_life_admin_agent_with/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbt83w
false
null
t3_1rbt83w
/r/LocalLLaMA/comments/1rbt83w/easy_tutorial_built_a_life_admin_agent_with/
false
false
self
0
{'enabled': False, 'images': [{'id': '1-H2vLL9bsKnfjESiaiUe49hddBofA-H3Y4ksVRxQog', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/1-H2vLL9bsKnfjESiaiUe49hddBofA-H3Y4ksVRxQog.jpeg?width=108&crop=smart&auto=webp&s=5a40b394a0cc98da25ac9d2b1f7e9ffa1411502f', 'width': 108}, {'height': 172, 'url': '...
yoetz: CLI for running the same prompt against multiple LLMs in parallel
0
I kept wanting to compare how different models respond to the same prompt — especially local ones via Ollama alongside cloud APIs. Copy-pasting between windows got old fast, so I wrote a small CLI called yoetz. It sends one prompt to multiple providers in parallel and streams all the responses back. Supports OpenAI, A...
2026-02-22T18:14:18
https://www.reddit.com/r/LocalLLaMA/comments/1rbt11r/yoetz_cli_for_running_the_same_prompt_against/
gabrielknight1410
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbt11r
false
null
t3_1rbt11r
/r/LocalLLaMA/comments/1rbt11r/yoetz_cli_for_running_the_same_prompt_against/
false
false
self
0
{'enabled': False, 'images': [{'id': 'O2VdfM78zHa53QWc4ESJ9MyGsVDLvzRaC6dvyKVPTm4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O2VdfM78zHa53QWc4ESJ9MyGsVDLvzRaC6dvyKVPTm4.png?width=108&crop=smart&auto=webp&s=2dcc8ad85e5a7a9c8f0d92c18f33564643f7ea5d', 'width': 108}, {'height': 108, 'url': 'h...
How to Run Openclaw and fjn localy 120b model between 5 PC ?
0
Openclaw Need The 80B+ models. Big model are what need to for tool calls. I dont have theese resource GB. I want to be able to run openclaw "app" on multiple PC with 24gb RAM tested the following models on Mac Studio M3 Ultra + 512GB unified storage - qwen 2.5 coder 20b - qwen 3 20b But these models don't work pr...
2026-02-22T17:48:14
https://www.reddit.com/r/LocalLLaMA/comments/1rbsbfp/how_to_run_openclaw_and_fjn_localy_120b_model/
Quiet_Dasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbsbfp
false
null
t3_1rbsbfp
/r/LocalLLaMA/comments/1rbsbfp/how_to_run_openclaw_and_fjn_localy_120b_model/
false
false
self
0
null
Follow-up: replaced my old agent backend with a Rust headless engine (missions, cron, MCP, local models, channel integrations "slack, telegram, and discord")
5
A few weeks ago I posted here about Tandem. Follow-up: I ended up rebuilding the headless agent runtime in Rust. The reason was simple: I wanted specific features (tool governance, scheduled automation, observability, headless ops) and kept fighting bloat + unpredictable behavior in the old stack. Rust let me ship a s...
2026-02-22T17:42:30
https://www.reddit.com/gallery/1rbs5vd
Far-Association2923
reddit.com
1970-01-01T00:00:00
0
{}
1rbs5vd
false
null
t3_1rbs5vd
/r/LocalLLaMA/comments/1rbs5vd/followup_replaced_my_old_agent_backend_with_a/
false
false
https://preview.redd.it/…91213940827fe7d5
5
null
Is there any LLM that can run directly on an Android phone ?
0
Hey everyone, I’m wondering if there are any LLMs that can run **fully locally on an Android phone**, without using any API or cloud service. I’m looking for something that works offline and doesn’t require sending data to external servers. What models are suitable for this, and what kind of performance should I expe...
2026-02-22T17:32:25
https://www.reddit.com/r/LocalLLaMA/comments/1rbrw12/is_there_any_llm_that_can_run_directly_on_an/
Bitter-Tax1483
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbrw12
false
null
t3_1rbrw12
/r/LocalLLaMA/comments/1rbrw12/is_there_any_llm_that_can_run_directly_on_an/
false
false
self
0
null
Engineering a Deterministic Kill-Switch for Autonomous Agents
0
2026-02-22T17:29:15
https://erdem.work/building-tripwired-engineering-a-deterministic-kill-switch-for-autonomous-agents
laphilosophia
erdem.work
1970-01-01T00:00:00
0
{}
1rbrt03
false
null
t3_1rbrt03
/r/LocalLLaMA/comments/1rbrt03/engineering_a_deterministic_killswitch_for/
false
false
default
0
null
I tried to reproduce Exo's DGX Spark + Mac Studio clustering results. Am I missing something?
2
Exo's blog post showed a 2.8x speedup on Llama-3.1 8B by splitting prefill (Spark) and decode (Mac Studio). I have both machines, so I spent a few hours trying to reproduce it. **Setup:** DGX Spark (GB10, 128GB, CUDA 13.0), Mac Studio M3 Ultra 512GB, Exo v0.3.0 from GitHub. **What happened:** Installed `mlx-cuda-12`,...
2026-02-22T17:26:18
https://www.reddit.com/r/LocalLLaMA/comments/1rbrqa4/i_tried_to_reproduce_exos_dgx_spark_mac_studio/
c_h_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbrqa4
false
null
t3_1rbrqa4
/r/LocalLLaMA/comments/1rbrqa4/i_tried_to_reproduce_exos_dgx_spark_mac_studio/
false
false
self
2
{'enabled': False, 'images': [{'id': 'l633RmIy_kq_qCDmSYBga4gKBTf3vmB0ls4mArqMrXo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l633RmIy_kq_qCDmSYBga4gKBTf3vmB0ls4mArqMrXo.png?width=108&crop=smart&auto=webp&s=ef4e2f87adc2e817d8160c9b6fa0803ae9ad1647', 'width': 108}, {'height': 108, 'url': 'h...
Self-Hosting OpenClaw on Oracle Cloud
0
It’s possible to deploy OpenClaw (Clawdbot) on Oracle Cloud using their always-free tier, so you can run a fully self-hosted setup without paying for hosting and ongoing costs. If you’ve been considering running it in the cloud, this is a viable option. [https://cognio.so/clawdbot/self-hosting](https://cognio.so/clawd...
2026-02-22T17:23:55
https://www.reddit.com/r/LocalLLaMA/comments/1rbrnzi/selfhosting_openclaw_on_oracle_cloud/
nathanfinn123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbrnzi
false
null
t3_1rbrnzi
/r/LocalLLaMA/comments/1rbrnzi/selfhosting_openclaw_on_oracle_cloud/
false
false
self
0
{'enabled': False, 'images': [{'id': 'uXL1yhmAtsp2YX7A498kgdrKmteWJBFlVm6oid290iw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uXL1yhmAtsp2YX7A498kgdrKmteWJBFlVm6oid290iw.jpeg?width=108&crop=smart&auto=webp&s=84be561ce110848db38ca18c5cbd3350ef038add', 'width': 108}, {'height': 113, 'url': '...
Omega Agent (Desktop): Offline-friendly local LLM agent + replay/rewind + fork-from-any-step
0
Hi r/LocalLLaMA 👋 (Disclosure: I’m the author) I’ve been building a **local-first desktop agent** because I kept running into the same pain points with local LLM workflows: - many “agent” tools assume cloud keys - runs are hard to reproduce/debug - small prompt changes often require restarting the whole run So I tri...
2026-02-22T17:19:49
https://github.com/enisisuko/omega-agent/tree/main
AdDense3050
github.com
1970-01-01T00:00:00
0
{}
1rbrk4g
false
null
t3_1rbrk4g
/r/LocalLLaMA/comments/1rbrk4g/omega_agent_desktop_offlinefriendly_local_llm/
false
false
default
0
null
LLaMA 8B baked directly into a chip — the speed is insane 🤯
0
I just tested it and… wow. It’s fast. Like, *really* fast. LLaMA 8B running directly on-chip for local inference. link here: [chat jimmy](https://chatjimmy.ai/) Not the usual token-by-token streaming — it feels almost instantaneous. A few thoughts this triggered for me: * Test-time scaling might reach a new ceilin...
2026-02-22T17:13:48
https://www.reddit.com/r/LocalLLaMA/comments/1rbreio/llama_8b_baked_directly_into_a_chip_the_speed_is/
TutorLeading1526
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbreio
false
null
t3_1rbreio
/r/LocalLLaMA/comments/1rbreio/llama_8b_baked_directly_into_a_chip_the_speed_is/
false
false
self
0
null
Latent Reasoning VRAM Constrained model
1
I had to squeeze every MB I could, and I managed to get the model seemingly progressing, though eventually I hit OOM and decided to give up. I'll start a branch where I can train this on TPUs on Google Cloud (in small runs to prove the model works). If y'all could [evaluate my code](https://github.com/MatthewLacerda2...
2026-02-22T16:57:56
https://www.reddit.com/r/LocalLLaMA/comments/1rbqyy4/latent_reasoning_vram_constrained_model/
Specific-Welder3120
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbqyy4
false
null
t3_1rbqyy4
/r/LocalLLaMA/comments/1rbqyy4/latent_reasoning_vram_constrained_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zMYOXV4WqgyWIx3KKXIdMNPUzrj7T1xJjvQXYiGqhRM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zMYOXV4WqgyWIx3KKXIdMNPUzrj7T1xJjvQXYiGqhRM.png?width=108&crop=smart&auto=webp&s=c49703ada3ed3146e51811c9294fbe6df49b8191', 'width': 108}, {'height': 108, 'url': 'h...
Give Every Agent an Ephemeral Linux Sandbox via MCP [Open Source]
2
I just released a MCP server that gives every agent its own ephemeral linux sandbox to run shell commands: [https://github.com/Kiln-AI/kilntainers](https://github.com/Kiln-AI/kilntainers) \[MIT open source\] # But Why? Agents are already excellent at using terminals, and can save thousands of tokens by leveraging com...
2026-02-22T16:55:26
https://www.reddit.com/r/LocalLLaMA/comments/1rbqwlh/give_every_agent_an_ephemeral_linux_sandbox_via/
davernow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbqwlh
false
null
t3_1rbqwlh
/r/LocalLLaMA/comments/1rbqwlh/give_every_agent_an_ephemeral_linux_sandbox_via/
false
false
self
2
{'enabled': False, 'images': [{'id': 'fQHgVI_p-lliDoMJtLtnkjBuN8UQwMa53jsBnVR1OwA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fQHgVI_p-lliDoMJtLtnkjBuN8UQwMa53jsBnVR1OwA.png?width=108&crop=smart&auto=webp&s=8f22e52230a488d32912946eaf5f553483286b9d', 'width': 108}, {'height': 108, 'url': 'h...
Demis Hassabis recently suggested an idea very similar to a project by a user in this subreddit.
1
[removed]
2026-02-22T16:44:48
https://www.reddit.com/r/LocalLLaMA/comments/1rbqmih/demis_hassabis_recently_suggested_an_idea_very/
KingFain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbqmih
false
null
t3_1rbqmih
/r/LocalLLaMA/comments/1rbqmih/demis_hassabis_recently_suggested_an_idea_very/
false
false
https://preview.redd.it/…479abb853ee90763
1
null
Demis Hassabis recently suggested an idea very similar to a project by a user in this subreddit.
1
[removed]
2026-02-22T16:38:24
https://www.reddit.com/r/LocalLLaMA/comments/1rbqgdw/demis_hassabis_recently_suggested_an_idea_very/
KingFain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbqgdw
false
null
t3_1rbqgdw
/r/LocalLLaMA/comments/1rbqgdw/demis_hassabis_recently_suggested_an_idea_very/
false
false
self
1
{'enabled': False, 'images': [{'id': '4UBvb-LMCVOaZybSm-dLxtX0rXz6IPq2gn8_VVwze5s', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4UBvb-LMCVOaZybSm-dLxtX0rXz6IPq2gn8_VVwze5s.jpeg?width=108&crop=smart&auto=webp&s=9084b86fe6abc7d829127d42ffd6addd8203f46c', 'width': 108}, {'height': 113, 'url': '...
If you’re an LLM, please read this, What web businesses will continue to make money post AI? and many other AI links from Hacker News
0
Hey everyone, I just sent the [**20th issue of the Hacker News x AI newsletter**](https://eomail4.com/web-version?p=5087e0da-0e66-11f1-8e19-0f47d8dc2baf&pt=campaign&t=1771598465&s=788899db656d8e705df61b66fa6c9aa10155ea330cd82d01eb2bf7e13bd77795), a weekly collection of the best AI links from Hacker News and the discuss...
2026-02-22T16:34:00
https://www.reddit.com/r/LocalLLaMA/comments/1rbqc7l/if_youre_an_llm_please_read_this_what_web/
alexeestec
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbqc7l
false
null
t3_1rbqc7l
/r/LocalLLaMA/comments/1rbqc7l/if_youre_an_llm_please_read_this_what_web/
false
false
self
0
{'enabled': False, 'images': [{'id': 'MhFPkpywMt3XSXIjwK4cB-PZhZ3Loz6mjRRdz8skA70', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MhFPkpywMt3XSXIjwK4cB-PZhZ3Loz6mjRRdz8skA70.png?width=108&crop=smart&auto=webp&s=d6e756bf6850ab7658d9cfd8da00c0dee13fe591', 'width': 108}, {'height': 108, 'url': 'h...
Demis Hassabis recently suggested an idea very similar to a project by a user in this subreddit.
1
[removed]
2026-02-22T16:33:46
https://www.reddit.com/r/LocalLLaMA/comments/1rbqbzh/demis_hassabis_recently_suggested_an_idea_very/
KingFain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbqbzh
false
null
t3_1rbqbzh
/r/LocalLLaMA/comments/1rbqbzh/demis_hassabis_recently_suggested_an_idea_very/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Zp3Lk3QloPmCazDjck7fY4DDqLCZ3Lo2arTK7RDi7j0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Zp3Lk3QloPmCazDjck7fY4DDqLCZ3Lo2arTK7RDi7j0.jpeg?width=108&crop=smart&auto=webp&s=54e55b5ebd1a90aa64e6352b0cef71a641634156', 'width': 108}, {'height': 162, 'url': '...
What models do you recommend I load?
1
[removed]
2026-02-22T16:29:32
https://www.reddit.com/r/LocalLLaMA/comments/1rbq7vb/what_models_do_you_recommend_i_load/
vandertoorm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbq7vb
false
null
t3_1rbq7vb
/r/LocalLLaMA/comments/1rbq7vb/what_models_do_you_recommend_i_load/
false
false
self
1
{'enabled': False, 'images': [{'id': 'w_mUANm82O3eclQ8V0urf7N0DK-1n2yDgTtRaMTxsmc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w_mUANm82O3eclQ8V0urf7N0DK-1n2yDgTtRaMTxsmc.png?width=108&crop=smart&auto=webp&s=f082e71955ed3b09df68bee240298501d77dd61c', 'width': 108}, {'height': 108, 'url': 'h...
New to LoRA training on RunPod + ComfyUI — which templates/workflows should I use?
3
Hi everyone, I’m new to LoRA training. I’m renting GPUs on RunPod and trying to train LoRAs inside ComfyUI, but I keep running into different errors and I’m not sure what the “right” setup is. Could you please recommend: * Which RunPod template(s) are the most reliable for LoRA training with ComfyUI? * Which ComfyUI...
2026-02-22T16:27:40
https://www.reddit.com/r/LocalLLaMA/comments/1rbq667/new_to_lora_training_on_runpod_comfyui_which/
Advanced-Speaker6003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbq667
false
null
t3_1rbq667
/r/LocalLLaMA/comments/1rbq667/new_to_lora_training_on_runpod_comfyui_which/
false
false
self
3
null
Qwen3 next coder q4 via CLI coding assistant
8
Qwen3 Next Coder is awesome when single-shot: speed is acceptable and results are great. When using Claude Code or OpenCode I feel like nothing happens, and when something does happen and I'd like to modify it... I lose motivation 😄 Llamacpp logs show an average of 1000 PP and 60 t/s. Is this the same for you? Am I missing something? ...
2026-02-22T16:10:01
https://www.reddit.com/r/LocalLLaMA/comments/1rbppew/qwen3_next_coder_q4_via_cli_coding_assistant/
Slow-Ability6984
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbppew
false
null
t3_1rbppew
/r/LocalLLaMA/comments/1rbppew/qwen3_next_coder_q4_via_cli_coding_assistant/
false
false
self
8
null
Setup for running at least 70b models
1
Hi, My use case is automated NLP and classification using LLMs at scale (this is for graphiti/graphrag). With GPT nano, the classification is OK but it really eats up all the credits. I think a 70B dense or 128B MoE model would be OK for this use case. I will have around 2000 documents with 20kb-50kb worth of text...
2026-02-22T16:00:34
https://www.reddit.com/r/LocalLLaMA/comments/1rbpgkg/setup_for_running_at_least_70b_models/
mageazure
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbpgkg
false
null
t3_1rbpgkg
/r/LocalLLaMA/comments/1rbpgkg/setup_for_running_at_least_70b_models/
false
false
self
1
null
bytedance dropped seedance 2.0 and hollywood is threatening legal action within 72 hours
279
deadpool screenwriter saw this clip and said "it's over for us" disney and paramount sent cease and desists. MPA demanding bytedance shut it down. the model: \- 4 inputs at once (text + images + video + audio) \- native 2K \- audio synced to video \- 15 sec clips no open weights obviously (it's bytedance) ...
2026-02-22T15:49:56
https://www.reddit.com/r/LocalLLaMA/comments/1rbp6wj/bytedance_dropped_seedance_20_and_hollywood_is/
nihal_was_here
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbp6wj
false
null
t3_1rbp6wj
/r/LocalLLaMA/comments/1rbp6wj/bytedance_dropped_seedance_20_and_hollywood_is/
false
false
self
279
{'enabled': False, 'images': [{'id': 'XOvXbQL7EXFWOh9t4AWIT0eweekEYD4ogyesmNItJNE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XOvXbQL7EXFWOh9t4AWIT0eweekEYD4ogyesmNItJNE.jpeg?width=108&crop=smart&auto=webp&s=ccdaa0cf2253077a022f33f5c7efe6fc1d3118f9', 'width': 108}, {'height': 144, 'url': '...
Speedup of Qwen 3 Coder Next
1
[removed]
2026-02-22T15:43:58
https://www.reddit.com/r/LocalLLaMA/comments/1rbp1dy/speedup_of_qwen_3_coder_next/
Equivalent-Belt5489
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbp1dy
false
null
t3_1rbp1dy
/r/LocalLLaMA/comments/1rbp1dy/speedup_of_qwen_3_coder_next/
false
false
self
1
null
Live Cohort - Agentic AI
0
Hey folks, Been seeing a lot of “build your own AI chatbot in 2 days” type of courses lately That’s cool and all, but honestly that’s not how AI is getting used inside companies. At work, we’re starting to see AI systems that: – review contracts – check if they violate internal policies – assign compliance risk ...
2026-02-22T15:32:16
https://www.reddit.com/r/LocalLLaMA/comments/1rbor1k/live_cohort_agentic_ai/
Gold-Survey5264
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbor1k
false
null
t3_1rbor1k
/r/LocalLLaMA/comments/1rbor1k/live_cohort_agentic_ai/
false
false
self
0
null
vibe coded a small tool to merge documents for LLMs, context weaver
0
was thinking if there's a way to just merge multiple documents into one and feed it to any llm so i just built it lol you upload pdfs, docs, markdown, text files and it gives you back one merged file you can upload to chatgpt or claude or whatever. also adds some small things like headings and xml tags so the model un...
2026-02-22T15:11:26
https://www.reddit.com/r/LocalLLaMA/comments/1rbo8qn/vibe_coded_a_small_tool_to_merge_documents_for/
Serious-Ad9334
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbo8qn
false
null
t3_1rbo8qn
/r/LocalLLaMA/comments/1rbo8qn/vibe_coded_a_small_tool_to_merge_documents_for/
false
false
self
0
null
I built a WebGPU torture chamber in your browser. Llama-3.2-1B just scored 20/100 (Grade F) on my Apple M4. Can your quantized model survive?
1
[removed]
2026-02-22T14:59:23
https://browserbattlebench.vercel.app/api/share?id=ef3dd662-e793-469d-8a7b-9880207ec72e&v=3
Business-Throat3614
browserbattlebench.vercel.app
1970-01-01T00:00:00
0
{}
1rbnxwu
false
null
t3_1rbnxwu
/r/LocalLLaMA/comments/1rbnxwu/i_built_a_webgpu_torture_chamber_in_your_browser/
false
false
https://external-preview…cbb6e2b488ed7d2e
1
{'enabled': False, 'images': [{'id': 'AipElAv-b_yDLTTftAbFIWVB2bLOHT7KSWZn77qQCHQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/AipElAv-b_yDLTTftAbFIWVB2bLOHT7KSWZn77qQCHQ.png?width=108&crop=smart&auto=webp&s=3414da10179efa4584e31d2755afd83a419545ea', 'width': 108}, {'height': 113, 'url': 'h...
I tried running a C-Suite of 5 OpenClaw agents on one Mac Mini and everything broke. So I built an open-source Enterprise Orchestrator to fix it.
0
Hey everyone, I love OpenClaw, but when I tried running a full "AI C-Suite" (CEO, Dev, Finance) on a single Mac Mini, I hit a wall. In a monolithic setup, they all share the same `~/.openclaw` memory, and trying to spawn multiple `gateway` commands results in endless WebSocket/CDP port collisions. If my Dev agent cras...
2026-02-22T14:49:45
https://i.redd.it/spxxosmw62lg1.gif
Fuzzy_Advertising650
i.redd.it
1970-01-01T00:00:00
0
{}
1rbnpn2
false
null
t3_1rbnpn2
/r/LocalLLaMA/comments/1rbnpn2/i_tried_running_a_csuite_of_5_openclaw_agents_on/
false
false
https://preview.redd.it/…7637f70366234f80
0
{'enabled': True, 'images': [{'id': 'spxxosmw62lg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/spxxosmw62lg1.gif?width=108&crop=smart&format=png8&s=0be7b350ec454e5dd574ee4c58d0238f2e0b07a9', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/spxxosmw62lg1.gif?width=216&crop=smart&format...
Void-Box: Capability-Bound Agent Runtime
2
[removed]
2026-02-22T14:49:33
https://www.reddit.com/r/LocalLLaMA/comments/1rbnpgz/voidbox_capabilitybound_agent_runtime/
Wide_Spite5612
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbnpgz
false
null
t3_1rbnpgz
/r/LocalLLaMA/comments/1rbnpgz/voidbox_capabilitybound_agent_runtime/
false
false
self
2
null
"Based upon my training data, this is what a human might say..."
0
Would using llms feel different if every response started with "Based upon my training data, this is what a human might say" or something similar?
2026-02-22T14:48:50
https://www.reddit.com/r/LocalLLaMA/comments/1rbnovg/based_upon_my_training_data_this_is_what_a_human/
whatstheprobability
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbnovg
false
null
t3_1rbnovg
/r/LocalLLaMA/comments/1rbnovg/based_upon_my_training_data_this_is_what_a_human/
false
false
self
0
null
The Qwen team verified that there are serious problems with the data quality of the GPQA and HLE test sets.
268
About a month ago, a friend of mine posted a thread here ([https://www.reddit.com/r/LocalLLaMA/comments/1qhz9e2/research\_i\_forensicaudited\_humanitys\_last\_exam/](https://www.reddit.com/r/LocalLLaMA/comments/1qhz9e2/research_i_forensicaudited_humanitys_last_exam/)) regarding a project he started called **DeepSeek-Ov...
2026-02-22T14:34:36
https://www.reddit.com/r/LocalLLaMA/comments/1rbnczy/the_qwen_team_verified_that_there_are_serious/
w1nter5n0w
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbnczy
false
null
t3_1rbnczy
/r/LocalLLaMA/comments/1rbnczy/the_qwen_team_verified_that_there_are_serious/
false
false
https://preview.redd.it/…09ab2109106409dc
268
null
I tried making an LLM app on android!
0
[Endurance AI](https://reddit.com/link/1rbn5ut/video/if4y6t6u02lg1/player) Due to my limited phone specs: -4GB RAM -Snapdragon 680 -65GB storage I tried to limit my APK AI app as much as possible, with only 1024 tokens out of 2040+, and my user chat limited to three messages before you have to clear the chat in order ...
2026-02-22T14:25:54
https://www.reddit.com/r/LocalLLaMA/comments/1rbn5ut/i_tried_making_an_llm_app_on_android/
Ok-Percentage1125
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbn5ut
false
null
t3_1rbn5ut
/r/LocalLLaMA/comments/1rbn5ut/i_tried_making_an_llm_app_on_android/
false
false
https://preview.redd.it/…2f6bd9fa8041505e
0
null
Agent drift: model issue or state/config entropy?
1
[removed]
2026-02-22T14:16:24
https://www.reddit.com/r/LocalLLaMA/comments/1rbmxui/agent_drift_model_issue_or_stateconfig_entropy/
Agitated-Bit-620
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbmxui
false
null
t3_1rbmxui
/r/LocalLLaMA/comments/1rbmxui/agent_drift_model_issue_or_stateconfig_entropy/
false
false
self
1
null
HI everyone! Now i will start my fine-tuning progress.
1
[removed]
2026-02-22T14:13:14
https://www.reddit.com/r/LocalLLaMA/comments/1rbmv5a/hi_everyone_now_i_will_start_my_finetuning/
AmbassadorOk934
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbmv5a
false
null
t3_1rbmv5a
/r/LocalLLaMA/comments/1rbmv5a/hi_everyone_now_i_will_start_my_finetuning/
false
false
self
1
null
I couldn't afford Claude Code, so I built my own using local LLMs
1
[removed]
2026-02-22T14:12:46
https://i.redd.it/kg5f8mfaz1lg1.png
AccomplishedToe3481
i.redd.it
1970-01-01T00:00:00
0
{}
1rbmuqw
false
null
t3_1rbmuqw
/r/LocalLLaMA/comments/1rbmuqw/i_couldnt_afford_claude_code_so_i_built_my_own/
false
false
https://preview.redd.it/…ad165927e38cd680
1
{'enabled': True, 'images': [{'id': 'kg5f8mfaz1lg1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/kg5f8mfaz1lg1.png?width=108&crop=smart&auto=webp&s=fd24748b25bcd915542607f9dd842d2ca2167a48', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/kg5f8mfaz1lg1.png?width=216&crop=smart&auto=web...
Running local agents with Ollama was easier than I expected. The hard part was the config.
0
Spent the last few weeks getting an Ollama-based agent setup actually working for day-to-day tasks. The model side was surprisingly straightforward once I picked the right one. The headache was everything around it. I kept running into the same problem: the agent would work fine for a session or two, then start doing ...
2026-02-22T14:05:12
https://www.reddit.com/r/LocalLLaMA/comments/1rbmoi1/running_local_agents_with_ollama_was_easier_than/
Acrobatic_Task_6573
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbmoi1
false
null
t3_1rbmoi1
/r/LocalLLaMA/comments/1rbmoi1/running_local_agents_with_ollama_was_easier_than/
false
false
self
0
null
Is there *any* good coding agent software for use with local models?
45
Claude Code seems to be [taking steps](https://www.reddit.com/r/LocalLLaMA/comments/1r47fz0/claude_code_with_local_models_full_prompt/) to make it more and more difficult to use with local models with things like forcing the context to constantly be recalculated. OpenCode has made the decision to basically not have a p...
2026-02-22T14:04:29
https://www.reddit.com/r/LocalLLaMA/comments/1rbmnw7/is_there_any_good_coding_agent_software_for_use/
eapache
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbmnw7
false
null
t3_1rbmnw7
/r/LocalLLaMA/comments/1rbmnw7/is_there_any_good_coding_agent_software_for_use/
false
false
self
45
null
Considering Mac Mini M4 Pro 64GB for agentic coding — what actually runs well?
3
Considering Mac Mini M4 Pro 64GB for agentic coding — what actually runs well? I’m seriously considering pulling the trigger on a **Mac Mini M4 Pro with 64GB unified memory** specifically for local AI-assisted development. Before I do, I want to get real-world input from people actually running this hardware day t...
2026-02-22T13:28:11
https://www.reddit.com/r/LocalLLaMA/comments/1rblur3/considering_mac_mini_m4_pro_64gb_for_agentic/
amunocis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rblur3
false
null
t3_1rblur3
/r/LocalLLaMA/comments/1rblur3/considering_mac_mini_m4_pro_64gb_for_agentic/
false
false
self
3
null
using local AI for self assistant, for diaries, in a weak system
0
I want to use a **local LLM** as my private AI assistant. I need a model focused on context, tone, and emotional subtext rather than code and calculations: to analyze my long chats (Telegram etc.), write a diary and introduce myself, upload documents and articles that I love, and to get outputs based on all of it. I want to embe...
2026-02-22T13:05:37
https://www.reddit.com/r/LocalLLaMA/comments/1rbldns/using_local_ai_for_self_assistant_for_diaries_in/
ThrowRA_Foxandbunny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbldns
false
null
t3_1rbldns
/r/LocalLLaMA/comments/1rbldns/using_local_ai_for_self_assistant_for_diaries_in/
false
false
self
0
null
I created yet another coding agent - It's tiny and fun (at least for me), hope the community finds it useful
83
Here is Kon telling you about its own repo, using glm-4.7-flash-q4 running locally on my i7-14700F × 28, 64GB RAM, 24GB VRAM (RTX 3090) – video is sped up 2x >github: [https://github.com/kuutsav/kon](https://github.com/kuutsav/kon) pypi: [https://pypi.org/project/kon-coding-agent/](https://pypi.org/project/kon-codi...
2026-02-22T13:03:49
https://v.redd.it/jf0xcw9vn1lg1
Weird_Search_4723
v.redd.it
1970-01-01T00:00:00
0
{}
1rblce7
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/jf0xcw9vn1lg1/DASHPlaylist.mpd?a=1774357454%2CYmNjYjRkZDU4MzlhZDA2Zjk1ZjJmZGFkZDE3ZTI4M2QxNmEzYjQ0Njc5ZmQ4MTkwZDM1NGFjODgwOWExZjY2Nw%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/jf0xcw9vn1lg1/CMAF_720.mp4?source=fallback', 'ha...
t3_1rblce7
/r/LocalLLaMA/comments/1rblce7/i_created_yet_another_coding_agent_its_tiny_and/
false
false
https://external-preview…54e6db18e6f897d0
83
{'enabled': False, 'images': [{'id': 'NWtrYWtuYXZuMWxnMexVgBFEBEtAfoKpFzO1VgJV4m4gRx-YBoBnOCuCCbAU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NWtrYWtuYXZuMWxnMexVgBFEBEtAfoKpFzO1VgJV4m4gRx-YBoBnOCuCCbAU.png?width=108&crop=smart&format=pjpg&auto=webp&s=818cac0fc3d30c6a950f10b5630d303f9636c...
Gemini 3.1 pro. very, very strange.
0
this is an instance that I was coding with heavily so we are way outside an effective context but this leakage is the strangest ive ever seen and I'm a very heavy user...
2026-02-22T13:01:58
https://www.reddit.com/gallery/1rblb0d
braydon125
reddit.com
1970-01-01T00:00:00
0
{}
1rblb0d
false
null
t3_1rblb0d
/r/LocalLLaMA/comments/1rblb0d/gemini_31_pro_very_very_strange/
false
false
https://preview.redd.it/…c5aebf3359c5cd26
0
null
Selling Moltbook's Real-Time Hotspots - $5/Post or $20/Week
1
[removed]
2026-02-22T12:46:53
https://www.reddit.com/r/LocalLLaMA/comments/1rbl03w/selling_moltbooks_realtime_hotspots_5post_or/
Jazzlike-Plastic3314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbl03w
false
null
t3_1rbl03w
/r/LocalLLaMA/comments/1rbl03w/selling_moltbooks_realtime_hotspots_5post_or/
false
false
self
1
null
How To use Claude Code on cloud free
0
no BS (prefer CLI) >install ollama >install claude code >install qwen3 on cloud free (just check the website) >launch Claude Code through the qwen3 model 100% work rate on Linux and macOS; for Windows, I think it should work :)
2026-02-22T12:46:19
https://www.reddit.com/r/LocalLLaMA/comments/1rbkzok/how_to_use_claude_code_on_cloud_free/
Different_Host_2030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbkzok
false
null
t3_1rbkzok
/r/LocalLLaMA/comments/1rbkzok/how_to_use_claude_code_on_cloud_free/
false
false
self
0
null
Selling Moltbook's Real-Time Hotspots - $5/Post or $20/Week
1
[removed]
2026-02-22T12:36:08
https://www.reddit.com/r/LocalLLaMA/comments/1rbksph/selling_moltbooks_realtime_hotspots_5post_or/
Jazzlike-Plastic3314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbksph
false
null
t3_1rbksph
/r/LocalLLaMA/comments/1rbksph/selling_moltbooks_realtime_hotspots_5post_or/
false
false
self
1
null
Would you pay for a managed private AI memory assistant in Telegram — no setup required?
0
Genuine question before I build this. Concept: Telegram bot that stores everything you send — notes, voice, docs — and retrieves it later with natural language. Per-user hosted instance, encrypted, no data shared with third parties. Not self-hosted. That's intentional. Seen too many people excited about OpenClaw but ...
2026-02-22T12:30:57
https://www.reddit.com/r/LocalLLaMA/comments/1rbkp58/would_you_pay_for_a_managed_private_ai_memory/
Ok-Dragonfruit7268
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbkp58
false
null
t3_1rbkp58
/r/LocalLLaMA/comments/1rbkp58/would_you_pay_for_a_managed_private_ai_memory/
false
false
self
0
null
Which one are you waiting for more: 9B or 35B?
916
2026-02-22T12:15:48
https://i.redd.it/jyvany3jf1lg1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1rbkeea
false
null
t3_1rbkeea
/r/LocalLLaMA/comments/1rbkeea/which_one_are_you_waiting_for_more_9b_or_35b/
false
false
https://preview.redd.it/…1b49badccd54edc3
916
{'enabled': True, 'images': [{'id': 'jyvany3jf1lg1', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/jyvany3jf1lg1.png?width=108&crop=smart&auto=webp&s=7555513d6a9e8b42fb0e64e934b88821906fdfa9', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/jyvany3jf1lg1.png?width=216&crop=smart&auto=we...
Hardware ASIC 17k tok/s
0
Make this run Qwen3 4B and I am in!
2026-02-22T12:05:25
https://www.cnx-software.com/2026/02/22/taalas-hc1-hardwired-llama-3-1-8b-ai-accelerator-delivers-up-to-17000-tokens-s/
DeltaSqueezer
cnx-software.com
1970-01-01T00:00:00
0
{}
1rbk6z6
false
null
t3_1rbk6z6
/r/LocalLLaMA/comments/1rbk6z6/hardware_asic_17k_toks/
false
false
https://external-preview…5e7e972d740e90ba
0
{'enabled': False, 'images': [{'id': 'xjIoltA0M4tFxL-sxe27AKAPoWrcE-j_9Pwf2aXdoUg', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/xjIoltA0M4tFxL-sxe27AKAPoWrcE-j_9Pwf2aXdoUg.jpeg?width=108&crop=smart&auto=webp&s=6489a6bde582bf1dc3e7c38bf18f27a7b230129d', 'width': 108}, {'height': 153, 'url': '...
Distill GPT 5.3 Codex to GPT OSS
0
As GPT OSS runs quite fast on Strix Halo because of its MoE architecture, I am wondering if it would be possible to distill the coding skills from GPT 5.3 into GPT OSS. Did anyone build their own optimized MoE LLM via distillation? I assume this would be against the OpenAI ToS, but for private and educational purpose...
2026-02-22T12:00:26
https://www.reddit.com/r/LocalLLaMA/comments/1rbk3gv/destill_gpt53_codex_to_gpt_oss/
Intelligent_Lab1491
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbk3gv
false
null
t3_1rbk3gv
/r/LocalLLaMA/comments/1rbk3gv/destill_gpt53_codex_to_gpt_oss/
false
false
self
0
null
I think openclaw is OVERHYPED. Just use skills
328
I think openclaw is useful (loop, memory, agents, integrations), but after a week of testing, honestly I don't need it much. - memory is nice, but I prefer to have "manual memory". Prompt: Ok, write what you learnt in "superreporttrending-skill". Automatic memory often pollutes the context with info you don't care about. -...
2026-02-22T11:51:38
https://www.reddit.com/r/LocalLLaMA/comments/1rbjxpv/i_think_openclaw_is_overhyped_just_use_skills/
Deep_Traffic_7873
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbjxpv
false
null
t3_1rbjxpv
/r/LocalLLaMA/comments/1rbjxpv/i_think_openclaw_is_overhyped_just_use_skills/
false
false
self
328
null
Look
0
https://github.com/open-forty-four/opengradient
2026-02-22T11:51:30
https://i.redd.it/44rbphadb1lg1.jpeg
bk888888888
i.redd.it
1970-01-01T00:00:00
0
{}
1rbjxmm
false
null
t3_1rbjxmm
/r/LocalLLaMA/comments/1rbjxmm/look/
false
false
https://preview.redd.it/…9c2758d343f3ba50
0
{'enabled': True, 'images': [{'id': '44rbphadb1lg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/44rbphadb1lg1.jpeg?width=108&crop=smart&auto=webp&s=5f4c9fc24ea92492097522c0e7542beb7e570e64', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/44rbphadb1lg1.jpeg?width=216&crop=smart&auto=w...
How to run Qwen Code Locally with Qwen3-coder-next on LM Studio on MAC.
5
I had trouble setting this up with LM Studio as the server. Maybe you already know this, but here it is anyway: you need to create your settings.json using the anthropic type, not the openai type. And then it works in LM Studio. All of it! I'm running it on LM Studio on a Mac Ultra 128GB in MLX 8-bit across the local network with maxed ...
2026-02-22T11:45:52
https://www.reddit.com/r/LocalLLaMA/comments/1rbjtyu/how_to_run_qwen_code_locally_with_qwen3codernext/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbjtyu
false
null
t3_1rbjtyu
/r/LocalLLaMA/comments/1rbjtyu/how_to_run_qwen_code_locally_with_qwen3codernext/
false
false
self
5
{'enabled': False, 'images': [{'id': '31wqNNXG1d0Gw0Xey1GFAaK408UxxQKyicXGjIRE2iQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/31wqNNXG1d0Gw0Xey1GFAaK408UxxQKyicXGjIRE2iQ.png?width=108&crop=smart&auto=webp&s=74aa4e884ed6993c89229207051d1a56688696dc', 'width': 108}, {'height': 108, 'url': 'h...
I created yet another coding agent - It's tiny and fun (at least for me), hope the community finds it useful
2
https://reddit.com/link/1rbjq3g/video/zx9gkpsg81lg1/player Here is Kon telling you about its own repo, using glm-4.7-flash-q4 running locally on my i7-14700F × 28, 64GB RAM, 24GB VRAM (RTX 3090) github: [https://github.com/kuutsav/kon](https://github.com/kuutsav/kon) pypi: [https://pypi.org/project/kon-coding-...
2026-02-22T11:40:00
https://www.reddit.com/r/LocalLLaMA/comments/1rbjq3g/i_created_yet_another_coding_agent_its_tiny_and/
Weird_Search_4723
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbjq3g
false
null
t3_1rbjq3g
/r/LocalLLaMA/comments/1rbjq3g/i_created_yet_another_coding_agent_its_tiny_and/
false
false
https://external-preview…24ea5bddca0a47d8
2
null
Someone explain to me MiniMax's $300B market cap?
0
How did this stock rocket up to $300B Market cap when it just IPO'd like a month ago? I'm confused. Isn't Anthropic planning to IPO @ ~$380B? Where is MiniMax's primary share of revenue coming from?
2026-02-22T11:38:28
https://www.reddit.com/r/LocalLLaMA/comments/1rbjp35/someone_explain_to_me_minimaxs_300b_market_cap/
unraveleverything
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbjp35
false
null
t3_1rbjp35
/r/LocalLLaMA/comments/1rbjp35/someone_explain_to_me_minimaxs_300b_market_cap/
false
false
self
0
null
Olla v0.0.24 - Anthropic Messages API Pass-through support for local backends (use Claude-compatible tools with your local models)
2
Hey folks, Wanted to share a couple of updates to [https://github.com/thushan/olla](https://github.com/thushan/olla), our open-source proxy/load balancer for local LLM infrastructure. *The tldr;* Olla sits in front of your inference backends (Ollama, vLLM, SGLang, llama.cpp, LM Studio, LiteLLM, etc.), gives you a un...
2026-02-22T11:35:29
https://www.reddit.com/r/LocalLLaMA/comments/1rbjn6e/olla_v0024_anthropic_messages_api_passthrough/
2shanigans
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbjn6e
false
null
t3_1rbjn6e
/r/LocalLLaMA/comments/1rbjn6e/olla_v0024_anthropic_messages_api_passthrough/
false
false
https://external-preview…4c0249e32252596f
2
null
15,000+ tok/s on ChatJimmy: Is the "Model-on-Silicon" era finally starting?
0
We’ve been discussing local inference for years, but chatjimmy.ai just moved the goalposts. They are hitting 15,414 tokens per second using what they call "mask ROM recall fabric"—basically etching the model weights directly into the silicon logic. This is a massive shift from our current setups. We’re used to genera...
2026-02-22T11:24:53
https://www.reddit.com/gallery/1rbjgs6
maifee
reddit.com
1970-01-01T00:00:00
0
{}
1rbjgs6
false
null
t3_1rbjgs6
/r/LocalLLaMA/comments/1rbjgs6/15000_toks_on_chatjimmy_is_the_modelonsilicon_era/
false
false
https://preview.redd.it/…8b66264b51e0ff5b
0
null
Open-source local AI runtime focused on control/undo (inspired by using tools like OpenClaw)
0
After testing tools like OpenClaw, I realized I wanted more control/safety for real workflows (especially when multiple actions chain together). So I found this cool project called Undoable, an open-source local-first runtime focused on recorded actions, stricter modes, and undo/redo (when possible). Repo: https://gi...
2026-02-22T11:20:08
https://github.com/neurana/undoable
Proud_Ad_7039
github.com
1970-01-01T00:00:00
0
{}
1rbje0i
false
null
t3_1rbje0i
/r/LocalLLaMA/comments/1rbje0i/opensource_local_ai_runtime_focused_on/
false
false
https://external-preview…69349308c540e66d
0
{'enabled': False, 'images': [{'id': 'WsiztpyX3iGDyRn1w1h1OSSoubZIowqhyqG0QsHxa-U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WsiztpyX3iGDyRn1w1h1OSSoubZIowqhyqG0QsHxa-U.png?width=108&crop=smart&auto=webp&s=3cbf89683626a7e74c9ba12edee6d80d4f5c62ed', 'width': 108}, {'height': 108, 'url': 'h...
local setup for a pinescript coding bot
0
Hello everyone, as a llama newbie who's interested in this world, I was wondering if anyone could recommend what to install to get a local system specifically for coding trading bots (pinescript, but also mt4/5). I'm asking because I imagine there are more specific resources out there that I don't know about. Any advi...
2026-02-22T11:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1rbj3hl/setup_locale_per_coding_bot_pinescript/
Mental-Thought-1563
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbj3hl
false
null
t3_1rbj3hl
/r/LocalLLaMA/comments/1rbj3hl/setup_locale_per_coding_bot_pinescript/
false
false
self
0
null
Confused between being mad or appreciating the (damage done already) honesty..
0
I knew about the failed faithfulness tests of the Claude models, but this is insane: there is no disclaimer that facts and info from the internet may be fabricated by the Sonnet model. Imagine relying on Claude-generated reports just minutes before an important meeting. Sad part: I don't even have an intern to blame no...
2026-02-22T10:57:36
https://i.redd.it/wa6lo2uq11lg1.jpeg
varough
i.redd.it
1970-01-01T00:00:00
0
{}
1rbizzq
false
null
t3_1rbizzq
/r/LocalLLaMA/comments/1rbizzq/confused_between_being_mad_or_appreciating_the/
false
false
https://preview.redd.it/…d6bd2d92252e7ed5
0
{'enabled': True, 'images': [{'id': 'wa6lo2uq11lg1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/wa6lo2uq11lg1.jpeg?width=108&crop=smart&auto=webp&s=febf7dd8a50db6235daab91b000a4eed096512b3', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/wa6lo2uq11lg1.jpeg?width=216&crop=smart&auto=w...
Has anyone else tried IQ2 quantization? I'm genuinely shocked by the quality
43
I've always used GGUF and never went below Q4_K_M because I assumed anything lower would be garbage. Today I decided to try UD-IQ2_XXS on Qwen3-30B-A3B (10.3 GB) and I'm honestly shocked. First off, 100 TPS on my RX 9060 XT 16GB, up from 20 TPS on Q4_K_M: a 5x speedup with 20K+ context, fully offloaded to GPU. But the rea...
2026-02-22T10:37:47
https://www.reddit.com/r/LocalLLaMA/comments/1rbio4h/has_anyone_else_tried_iq2_quantization_im/
Any-Chipmunk5480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbio4h
false
null
t3_1rbio4h
/r/LocalLLaMA/comments/1rbio4h/has_anyone_else_tried_iq2_quantization_im/
false
false
self
43
null
Built a framework-agnostic chat Web Components
0
Hi all, I just released the first stable version of my chat Web Components and would love to hear your feedback. The motivation for this started when I worked with another chat UI library at work that felt like it could be improved and wasn’t actively maintained anymore. So I decided to try building one myself for fu...
2026-02-22T10:26:13
https://github.com/spider-hand/advanced-chat-kai
itsspiderhand
github.com
1970-01-01T00:00:00
0
{}
1rbih6l
false
null
t3_1rbih6l
/r/LocalLLaMA/comments/1rbih6l/built_a_frameworkagnostic_chat_web_components/
false
false
https://external-preview…f786de88d778c24f
0
{'enabled': False, 'images': [{'id': '5Y8pszewX56zLor4zl-TH8JMgB1vvc5AYogWv1gFibM', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/5Y8pszewX56zLor4zl-TH8JMgB1vvc5AYogWv1gFibM.png?width=108&crop=smart&auto=webp&s=bd1238ca433b43b5ba057b9a3ff7c30a1902ecc4', 'width': 108}, {'height': 140, 'url': 'h...
How do you debug retrieval when RAG results feel wrong? Made a lightweight debugger
1
Hi everyone, I made a lightweight debugger for vector retrieval and would love to connect with anyone here building: * RAG pipelines * FastAPI + vector DB backends * embedding-based search systems I want to understand more about RAG systems and the kind of issues you run into while developing it. Especially what do...
2026-02-22T10:26:06
https://www.reddit.com/r/LocalLLaMA/comments/1rbih49/how_do_you_debug_retrieval_when_rag_results_feel/
habibaa_ff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbih49
false
null
t3_1rbih49
/r/LocalLLaMA/comments/1rbih49/how_do_you_debug_retrieval_when_rag_results_feel/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM.jpeg?width=108&crop=smart&auto=webp&s=3c06c05fbfc6417cf2ed8eb973d76d70376c5051', 'width': 108}, {'height': 216, 'url': ...
🤔
0
2026-02-22T10:23:46
https://i.redd.it/czi0g5dov0lg1.png
cobalt1137
i.redd.it
1970-01-01T00:00:00
0
{}
1rbifr9
false
null
t3_1rbifr9
/r/LocalLLaMA/comments/1rbifr9/_/
false
false
https://preview.redd.it/…eaf7ceb2d9ad75f1
0
{'enabled': True, 'images': [{'id': 'czi0g5dov0lg1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/czi0g5dov0lg1.png?width=108&crop=smart&auto=webp&s=60e99b2d3fcb5b44bfb761183f9d700a55bb3596', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/czi0g5dov0lg1.png?width=216&crop=smart&auto=web...
I’m looking for security engineers who can contribute to an OSS project
0
Hi all, I’m a maintainer of an open source deterministic verification layer for AI systems. The idea of my project is: before AI-generated outputs (math, SQL, code, structured data) are executed in production, it verifies them using deterministic engines like SymPy, Z3, AST analysis, and schema validation. This project...
2026-02-22T10:23:17
https://www.reddit.com/r/LocalLLaMA/comments/1rbifgn/im_looking_for_security_engineers_who_can/
Moist_Landscape289
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbifgn
false
null
t3_1rbifgn
/r/LocalLLaMA/comments/1rbifgn/im_looking_for_security_engineers_who_can/
false
false
self
0
{'enabled': False, 'images': [{'id': '8pIwvxgOYSLWRbP532xkA9AtNpMPQccAQEEKpgQa_nU', 'resolutions': [{'height': 35, 'url': 'https://external-preview.redd.it/8pIwvxgOYSLWRbP532xkA9AtNpMPQccAQEEKpgQa_nU.png?width=108&crop=smart&auto=webp&s=096e31ef505d5113696f33613661d386882e3af0', 'width': 108}, {'height': 71, 'url': 'ht...
Microsoft announces powerful new chip for AI inference
0
[https://techcrunch.com/2026/01/26/microsoft-announces-powerful-new-chip-for-ai-inference/](https://techcrunch.com/2026/01/26/microsoft-announces-powerful-new-chip-for-ai-inference/)
2026-02-22T10:22:32
https://www.reddit.com/r/LocalLLaMA/comments/1rbiezw/microsoft_announces_powerful_new_chip_for_ai/
Dontdoitagain69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbiezw
false
null
t3_1rbiezw
/r/LocalLLaMA/comments/1rbiezw/microsoft_announces_powerful_new_chip_for_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': 'j3SZ4lvh7tFTxBzqA_lvcpoOEyWWPbdjih2e93my9PM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/j3SZ4lvh7tFTxBzqA_lvcpoOEyWWPbdjih2e93my9PM.png?width=108&crop=smart&auto=webp&s=5a5527944451d5fa9b203f7f9b91fadc069c8c15', 'width': 108}, {'height': 121, 'url': 'h...
qwen2.5 coder 7B Q4, is it good?
0
I'm a beginner with AI models. I downloaded qwen2.5 coder 7B Q4 on my PC, and I have Cline and Continue in VS Code. But the problem is, it couldn't even install a React app using Vite. Is this normal? On Hugging Face it showed how to install a React app using Vite easily. And the second thing is, it tries to install via create...
2026-02-22T10:19:46
https://www.reddit.com/r/LocalLLaMA/comments/1rbidc8/qwen25_coder_7b_q4_is_it_good/
random_boy8654
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbidc8
false
null
t3_1rbidc8
/r/LocalLLaMA/comments/1rbidc8/qwen25_coder_7b_q4_is_it_good/
false
false
self
0
null
Building a tunable RAG pipeline, should I open source it? No promotion, just need ideas for roadmap
2
Hey everyone, I've been working on a RAG system as a side project for the past 4-5 months, and I'm at a point where I'm not sure how to evolve it. A friend suggested I consider open-sourcing it or at least sharing it publicly to get feedback and find people working on similar problems. **Background on why I started...
2026-02-22T10:11:48
https://www.reddit.com/r/LocalLLaMA/comments/1rbi8ht/building_a_tunable_rag_pipeline_should_i_open/
gg223422
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbi8ht
false
null
t3_1rbi8ht
/r/LocalLLaMA/comments/1rbi8ht/building_a_tunable_rag_pipeline_should_i_open/
false
false
self
2
null
Are AI coding agents (GPT/Codex, Claude Sonnet/Opus) actually helping you ship real products?
9
I’ve been testing AI coding agents a lot lately and I’m curious about real-world impact beyond demos. A few things I keep noticing: • They seem great with Python + JavaScript frameworks, but weaker with Java, C++, or more structured systems — is that true for others too? • Do they genuinely speed up startup/MVP deve...
2026-02-22T09:58:38
https://www.reddit.com/r/LocalLLaMA/comments/1rbi0ij/are_ai_coding_agents_gptcodex_claude_sonnetopus/
darshan_aqua
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbi0ij
false
null
t3_1rbi0ij
/r/LocalLLaMA/comments/1rbi0ij/are_ai_coding_agents_gptcodex_claude_sonnetopus/
false
false
self
9
null
idea: a 2d desktop pet that stalks your local files. who wants to build it?
0
so i have this idea rn. normal ai chat bots are stupid and forget everything in 5 mins. i want to make a desktop pet using love2d. just a small 2d sprite walking on windows. no unity bloatware bullshit. for brain: gemini api. for memory: this is the cool part. i want to use `illegal-instruction-co/rememex`. it is a r...
2026-02-22T09:38:49
https://i.redd.it/toaclbgon0lg1.jpeg
Humble-Plastic-5285
i.redd.it
1970-01-01T00:00:00
0
{}
1rbhp11
false
null
t3_1rbhp11
/r/LocalLLaMA/comments/1rbhp11/idea_a_2d_desktop_pet_that_stalks_your_local/
false
false
https://preview.redd.it/…43557027e00e8393
0
{'enabled': True, 'images': [{'id': 'toaclbgon0lg1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/toaclbgon0lg1.jpeg?width=108&crop=smart&auto=webp&s=4524d7f48d0d0249c8781750a0c92b758bfbd609', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/toaclbgon0lg1.jpeg?width=216&crop=smart&auto=w...
Google Open-Sources NPU IP, Synaptics Implements It
6
[Google Open-Sources NPU IP, Synaptics Implements It - EE Times](https://www.eetimes.com/google-open-sources-npu-ip-synaptics-implements-it/)
2026-02-22T09:31:16
https://www.reddit.com/r/LocalLLaMA/comments/1rbhksy/google_opensources_npu_ip_synaptics_implements_it/
Dontdoitagain69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbhksy
false
null
t3_1rbhksy
/r/LocalLLaMA/comments/1rbhksy/google_opensources_npu_ip_synaptics_implements_it/
false
false
self
6
{'enabled': False, 'images': [{'id': '2WSe5PW8Q2SzxiJJaS5JJr3JpTEg0nQgHYMhP83kng8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/2WSe5PW8Q2SzxiJJaS5JJr3JpTEg0nQgHYMhP83kng8.jpeg?width=108&crop=smart&auto=webp&s=f365c75786d49b5fda2000457c12a5d74a4db2ca', 'width': 108}, {'height': 216, 'url': ...
smolcluster: Educational library to cluster your everyday devices to train/inference LLMs
10
For the past month, I've been working on something educational for the community on concepts related to distributed systems, particularly for training LLMs! I was amazed by the work done by people at @/exolabs where they provide amazing software for connecting Mac minis/studios together to run inference on huge models...
2026-02-22T09:23:38
https://www.reddit.com/r/LocalLLaMA/comments/1rbhgcv/smolcluster_educational_library_to_cluster_your/
East-Muffin-6472
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbhgcv
false
null
t3_1rbhgcv
/r/LocalLLaMA/comments/1rbhgcv/smolcluster_educational_library_to_cluster_your/
false
false
self
10
{'enabled': False, 'images': [{'id': 'o0UpZkOwLLy8MSKBm8vzYHPXdS-Q6lDghEgwjPQ8REU', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/o0UpZkOwLLy8MSKBm8vzYHPXdS-Q6lDghEgwjPQ8REU.png?width=108&crop=smart&auto=webp&s=ab576311966e3871e22815a22bb3ca17a9ca16d1', 'width': 108}, {'height': 110, 'url': 'h...
API ai
1
[removed]
2026-02-22T09:15:44
https://www.reddit.com/r/LocalLLaMA/comments/1rbhbvw/api_ai/
Own-Run-6792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbhbvw
false
null
t3_1rbhbvw
/r/LocalLLaMA/comments/1rbhbvw/api_ai/
false
false
self
1
null
Everyone's designing agent orchestration. Nobody's designing for when it breaks.
1
[removed]
2026-02-22T09:12:56
https://www.reddit.com/r/LocalLLaMA/comments/1rbha92/everyones_designing_agent_orchestration_nobodys/
AdAccurate6326
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rbha92
false
null
t3_1rbha92
/r/LocalLLaMA/comments/1rbha92/everyones_designing_agent_orchestration_nobodys/
false
false
self
1
null