| title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Is it feasible to have small LLMs deployed on consumer-grade GPUs communicate with free official LLMs to perform operations on a computer? | 2 | For example, if I want to write a program to achieve my desired outcome, I send my idea to a local LLM. The local LLM then interacts with the free official LLM, copies and pastes the code provided by the official LLM, and then debugs the code, repeating this process iteratively.
I originally intended to implement this... | 2026-02-21T18:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ray6fw/is_it_feasible_to_have_small_llms_deployed_on/ | BitOk4326 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ray6fw | false | null | t3_1ray6fw | /r/LocalLLaMA/comments/1ray6fw/is_it_feasible_to_have_small_llms_deployed_on/ | false | false | self | 2 | null |
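The iterative loop described in this post — ask a remote model for code, run it locally, feed any traceback back — can be sketched in a few lines. Everything here is a placeholder: `remote_model` is a stub standing in for the official LLM, not a real API.

```python
# Hedged sketch of the generate -> run -> feed-back-the-error loop.
# remote_model() is a stub: it returns buggy code first, then a fix
# once it sees a traceback in the prompt.
import traceback

def remote_model(prompt: str) -> str:
    if "NameError" in prompt:
        return "result = sum(range(10))"
    return "result = total(range(10))"  # deliberate bug: total() is undefined

def debug_loop(task: str, max_rounds: int = 3):
    prompt = task
    for _ in range(max_rounds):
        code = remote_model(prompt)
        scope = {}
        try:
            exec(code, scope)           # run the generated snippet
            return scope.get("result")  # success: hand back the value
        except Exception:
            # append the traceback so the next attempt can self-correct
            prompt = task + "\n" + traceback.format_exc()
    return None

print(debug_loop("sum the numbers 0..9"))  # → 45
```

The real risk in such a setup is unsandboxed `exec`; any serious version would run generated code in an isolated process or container.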
CXMT has been offering DDR4 chips at about half the prevailing market rate | 102 | 2026-02-21T18:07:33 | https://www.koreaherald.com/article/10679206 | johnnyApplePRNG | koreaherald.com | 1970-01-01T00:00:00 | 0 | {} | 1ray0vz | false | null | t3_1ray0vz | /r/LocalLLaMA/comments/1ray0vz/cxmt_has_been_offering_ddr4_chips_at_about_half/ | false | false | 102 | {'enabled': False, 'images': [{'id': '0K-nyzO4raoSh4Q6Gk6oShuWqJIJ5QWuThVMJGt1MKU', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/0K-nyzO4raoSh4Q6Gk6oShuWqJIJ5QWuThVMJGt1MKU.png?width=108&crop=smart&auto=webp&s=a88017a385be4448b28fa81cb76209357c5f71ad', 'width': 108}, {'height': 151, 'url': 'h... | ||
n00b question: Would this be possible with a local AI? | 2 | Hey guys,
I’m quite new to AI; I’ve been using Perplexity (1.5y) and ManusAi (6m) in my daily life. So far I’m hosting Ollama on my MBP (old i7, 16GB) and am very underwhelmed with the results. I don’t mind it being slow, but to date I’ve only gotten explanations of why it wouldn’t be willing to do certain tasks for me :)
I was... | 2026-02-21T18:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/1raxu15/n00b_question_would_this_be_possible_with_a_local/ | mrbuggger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raxu15 | false | null | t3_1raxu15 | /r/LocalLLaMA/comments/1raxu15/n00b_question_would_this_be_possible_with_a_local/ | false | false | self | 2 | null |
AI - Humanize text | 0 | Hello guys, I’m a cybersecurity student currently working on a project, and I need to write and publish a journal paper. As you can probably tell, this is about AI-to-human text conversion. When I tried some of the commonly available online tools, almost every one of them only offered a premium servic... | 2026-02-21T17:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1raxsr2/ai_humanize_text/ | Less_Strain7577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raxsr2 | false | null | t3_1raxsr2 | /r/LocalLLaMA/comments/1raxsr2/ai_humanize_text/ | false | false | self | 0 | null |
optimize_anything by GEPA team | 3 | Cool new library and approach from the GEPA folks. Similar to GEPA, but it optimizes any text (code, agent systems), not just prompts.
https://gepa-ai.github.io/gepa/blog/2026/02/18/introducing-optimize-anything/ | 2026-02-21T17:43:02 | https://www.reddit.com/r/LocalLLaMA/comments/1raxdpc/optimize_anything_by_gepa_team/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raxdpc | false | null | t3_1raxdpc | /r/LocalLLaMA/comments/1raxdpc/optimize_anything_by_gepa_team/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=108&crop=smart&auto=webp&s=38e484660d3f107fb29e93d1409270e2d9dc62c6', 'width': 108}, {'height': 99, 'url': 'ht... |
NF4 beats INT8 in every metric — benchmarks on Qwen2.5-0.5B (Tesla T4) | 0 | 2026-02-21T17:25:17 | https://github.com/davidibarzabal/neuralzip | Impressive_Bonus_695 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rawwxb | false | null | t3_1rawwxb | /r/LocalLLaMA/comments/1rawwxb/nf4_beats_int8_in_every_metric_benchmarks_on/ | false | false | 0 | {'enabled': False, 'images': [{'id': '7v3nApKxOdFQ3k-Q8YlF5154IfQPdlcxNM4CJFDeY5g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7v3nApKxOdFQ3k-Q8YlF5154IfQPdlcxNM4CJFDeY5g.png?width=108&crop=smart&auto=webp&s=41a3dbd2bd8e7857901b18988b405950f4a1d4fe', 'width': 108}, {'height': 108, 'url': 'h... | ||
[Release] LocalAgent v0.1.1: Local-first agent runtime (LM Studio / Ollama / llama.cpp + Playwright MCP + eval/replay) | 5 | Hey r/LocalLLaMA! I just released **LocalAgent v0.1.1**, a **local-first AI agent runtime** focused on **safe tool calling** \+ **repeatable runs**.
**GitHub:** [https://github.com/CalvinSturm/LocalAgent](https://github.com/CalvinSturm/LocalAgent)
# Model backends (local)
Supports local models via:
* **LM Studio**
... | 2026-02-21T17:24:02 | https://github.com/CalvinSturm/LocalAgent | CalvinBuild | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rawvpj | false | null | t3_1rawvpj | /r/LocalLLaMA/comments/1rawvpj/release_localagent_v011_localfirst_agent_runtime/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'OrqyjsDOb0J3KfJA6L8j3lSubBygLSAEnww0dAs8ic8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OrqyjsDOb0J3KfJA6L8j3lSubBygLSAEnww0dAs8ic8.png?width=108&crop=smart&auto=webp&s=fe21e26194c802aef28e28d38ab03aa5f443df3d', 'width': 108}, {'height': 108, 'url': 'h... | |
PSA: The software “Shade” is a fraudulent, plagiarized copy of Heretic | 369 | Three days ago, the following repository was published, which its “creator” has been aggressively promoting on various channels since then:
https://github.com/assemsabry/shade
The entire source code in the repository is plagiarized from Heretic (https://github.com/p-e-w/heretic), with only the project name and the co... | 2026-02-21T17:16:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rawoe4/psa_the_software_shade_is_a_fraudulent/ | -p-e-w- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rawoe4 | false | null | t3_1rawoe4 | /r/LocalLLaMA/comments/1rawoe4/psa_the_software_shade_is_a_fraudulent/ | false | false | self | 369 | {'enabled': False, 'images': [{'id': 'OUkhhVPMUaT-OjG6vtd3xPLONNzak3ujkLuJtVKLjeg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OUkhhVPMUaT-OjG6vtd3xPLONNzak3ujkLuJtVKLjeg.png?width=108&crop=smart&auto=webp&s=2d1a101031663d44849f78cdde7b77c2be09b9ab', 'width': 108}, {'height': 108, 'url': 'h... |
My family assistant is now running on local AI | 0 | 2026-02-21T17:10:32 | https://www.nunodonato.com/my-family-assistant-now-runs-on-local-ai/ | nunodonato | nunodonato.com | 1970-01-01T00:00:00 | 0 | {} | 1rawiwl | false | null | t3_1rawiwl | /r/LocalLLaMA/comments/1rawiwl/my_family_assistant_is_now_running_on_local_ai/ | false | false | 0 | {'enabled': False, 'images': [{'id': '4SPUyH4xIrYZF27ZZ0A4RCNAJPzOoGzPH448PMPhFzM', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/4SPUyH4xIrYZF27ZZ0A4RCNAJPzOoGzPH448PMPhFzM.jpeg?width=108&crop=smart&auto=webp&s=0733eff5c922981999aef176ca7135994469bbdd', 'width': 108}, {'height': 144, 'url': '... | ||
512gb DDR3 + 2x 3090 for cheap huge context | 1 | [removed] | 2026-02-21T17:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rawiw4/512gb_ddr3_2x_3090_for_cheap_huge_context/ | Meraath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rawiw4 | false | null | t3_1rawiw4 | /r/LocalLLaMA/comments/1rawiw4/512gb_ddr3_2x_3090_for_cheap_huge_context/ | false | false | self | 1 | null |
40,000+ AI Agents Exposed to the Internet with Full System Access | 91 | 2026-02-21T17:07:50 | https://threatroad.substack.com/p/40000-ai-agents-exposed-to-the-internet | Monterey-Jack | threatroad.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rawge5 | false | null | t3_1rawge5 | /r/LocalLLaMA/comments/1rawge5/40000_ai_agents_exposed_to_the_internet_with_full/ | false | false | 91 | {'enabled': False, 'images': [{'id': 'QJge18zM6lp5gsWJUdMOifSYjcNp_r7jcsM3Yu8BUUo', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/QJge18zM6lp5gsWJUdMOifSYjcNp_r7jcsM3Yu8BUUo.jpeg?width=108&crop=smart&auto=webp&s=87e6e5fb60a4eae3d658c4fd8408b15cdc82076e', 'width': 108}, {'height': 143, 'url': '... | ||
OpenClaw and Ollama | 0 | Has anyone had success finding an efficient local model to use with OpenClaw? Interested to see everyone’s approach. Also, has anyone fine-tuned a model for quicker responses after downloading it?
Current specs
Mac mini M4
32gb RAM | 2026-02-21T17:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rawfwt/openclaw_and_ollama/ | Initial_Gas976 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rawfwt | false | null | t3_1rawfwt | /r/LocalLLaMA/comments/1rawfwt/openclaw_and_ollama/ | false | false | self | 0 | null |
Has anyone tried KugelAudio-TTS? | 3 | I tried running it through ComfyUI but it didn’t work, so I just cloned the repo and started playing with it. I like the Spanish outputs; they’re fast, but not fast enough for streaming/realtime use. Has anyone achieved realtime audio with this?
I have an RTX 3090 + 64GB RAM
[ kugelaudio-tts](https://github.com/Kuge... | 2026-02-21T17:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rawen0/has_anyone_tried_kugelaudiotts/ | brocolongo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rawen0 | false | null | t3_1rawen0 | /r/LocalLLaMA/comments/1rawen0/has_anyone_tried_kugelaudiotts/ | false | false | self | 3 | null |
Solair AI free iPhone app | 0 | I tested all the iPhone apps for local inference and this one is the best. It’s completely free, and it’s possible to download models from Hugging Face.
Locally is great too, but I have the impression this one is faster and has more features, even though it’s new. | 2026-02-21T17:04:27 | https://apps.apple.com/ch/app/solair-ai-local-ai/id6758450823?l=en-GB | Helpful-Plankton4868 | apps.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1rawd21 | false | null | t3_1rawd21 | /r/LocalLLaMA/comments/1rawd21/solair_ai_free_iphone_app/ | false | false | 0 | {'enabled': False, 'images': [{'id': '8znSBYMHMRiPL5JhJwGXx0qVEicK-JcmCvVgU6ddvfI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8znSBYMHMRiPL5JhJwGXx0qVEicK-JcmCvVgU6ddvfI.jpeg?width=108&crop=smart&auto=webp&s=5a19cc1c7ae99d5686b378a924f2623a0b953e80', 'width': 108}, {'height': 113, 'url': '... |
Domain specific dataset problem | 0 | Hi everyone!
I have been reflecting more deeply on the system-evaluation problems that vertical AI startups face, especially the ones operating in complex and regulated domains such as finance, healthcare, etc.
I think the main problem is the lack of data. You can’t evaluate, let alone fine-tune, an AI-based system ... | 2026-02-21T16:59:19 | https://www.reddit.com/r/LocalLLaMA/comments/1raw806/domain_specific_dataset_problem/ | AlpineContinus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raw806 | false | null | t3_1raw806 | /r/LocalLLaMA/comments/1raw806/domain_specific_dataset_problem/ | false | false | self | 0 | null |
Seeking Industry Feedback: What "Production-Ready" metrics should an Autonomous LLM Defense Framework meet | 0 | Hey everyone,
I’m currently developing a defensive framework designed to mitigate prompt injection and jailbreak attempts through active deception and containment (rather than just simple input filtering).
The goal is to move away from static "I'm sorry, I can't do that" responses and toward a system that can autonom... | 2026-02-21T16:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/1raw3tq/seeking_industry_feedback_what_productionready/ | Genesis-1111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raw3tq | false | null | t3_1raw3tq | /r/LocalLLaMA/comments/1raw3tq/seeking_industry_feedback_what_productionready/ | false | false | self | 0 | null |
> pov: e/acc nigga already getting a taste of ASI pre-cum, while luddite biocels are tryin to edge shit into infinity | 0 | 2026-02-21T16:51:27 | cobalt1137 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1raw0mm | false | null | t3_1raw0mm | /r/LocalLLaMA/comments/1raw0mm/pov_eacc_nigga_already_getting_a_taste_of_asi/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'h0sjrytxnvkg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/h0sjrytxnvkg1.png?width=108&crop=smart&auto=webp&s=54c456f8fade67e5faa10f17daf36f3ba3fae3ee', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/h0sjrytxnvkg1.png?width=216&crop=smart&auto=we... | |||
Is a local AI note taking app actually practical right now? | 9 | I’ve been trying to move more of my workflow offline. A local AI note taking app sounds ideal for privacy and control.
But in practice, meetings are messy and long. I use Bluedot right now because it’s reliable, but it’s cloud-based. I’m not sure a fully local setup would handle context and summarization as well.
Has... | 2026-02-21T16:39:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ravpf9/is_a_local_ai_note_taking_app_actually_practical/ | hulk14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ravpf9 | false | null | t3_1ravpf9 | /r/LocalLLaMA/comments/1ravpf9/is_a_local_ai_note_taking_app_actually_practical/ | false | false | self | 9 | null |
Getting Goose to actually work with local Ollama models — what I ran into and what I built | 0 | Been tinkering with Goose for a while. Liked the concept but ran into consistent issues running it with local models via Ollama. The framework is clearly built for cloud models — in my testing basically only Qwen3 worked reliably due to how it structures JSON output.
Failure modes I kept hitting:
Malformed JSON fro... | 2026-02-21T16:31:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ravhqi/getting_goose_to_actually_work_with_local_ollama/ | BenevolentJoker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ravhqi | false | null | t3_1ravhqi | /r/LocalLLaMA/comments/1ravhqi/getting_goose_to_actually_work_with_local_ollama/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'pbgfqwlMXm6IOOu8BQ8ITwlsb3n0jqJch4zsZIR-Pe8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pbgfqwlMXm6IOOu8BQ8ITwlsb3n0jqJch4zsZIR-Pe8.png?width=108&crop=smart&auto=webp&s=dd67845cc558935e026e1ee3a1a79a39239ae595', 'width': 108}, {'height': 108, 'url': 'h... |
RO Philosophy | 1 | [removed] | 2026-02-21T16:23:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ravang/ro_philosophy/ | erikqamalyan97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ravang | false | null | t3_1ravang | /r/LocalLLaMA/comments/1ravang/ro_philosophy/ | false | false | self | 1 | null |
Is tool calling broken in all inference engines? | 6 | There is one argument in the completions endpoint which makes tool calls correct 100% of the time:
"strict": true
And it's not supported by all inference engines, despite being documented.
vLLM supports structured output for tools only if
"tool_choice": "required"
is used. Llama.cpp ignores it completely. And withou... | 2026-02-21T16:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rav571/is_tool_calling_broken_in_all_inference_engines/ | Nepherpitu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rav571 | false | null | t3_1rav571 | /r/LocalLLaMA/comments/1rav571/is_tool_calling_broken_in_all_inference_engines/ | false | false | self | 6 | null |
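For reference, the request shape under discussion looks roughly like this: an OpenAI-compatible `/v1/chat/completions` payload with `"strict": true` on a function tool, plus the `"tool_choice": "required"` workaround mentioned for vLLM. Whether the flag is actually honored depends on the inference engine; the tool name and schema below are made up for illustration.

```python
# Sketch of a chat-completions payload with a strict function tool.
# No network call is made; this only shows the JSON shape.
import json

tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "strict": True,  # the flag in question: forces schema-conformant arguments
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
            "additionalProperties": False,  # strict mode requires closed schemas
        },
    },
}

payload = {
    "model": "local-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": [tool],
    "tool_choice": "required",  # the vLLM workaround noted in the post
}

print(json.dumps(payload, indent=2))
```

Servers that ignore `strict` will still accept this payload silently, which is exactly why the behavior differs across engines.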
Tool calling and local models | 1 | [removed] | 2026-02-21T16:13:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rav18x/tool_calling_and_local_models/ | Nepherpitu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rav18x | false | null | t3_1rav18x | /r/LocalLLaMA/comments/1rav18x/tool_calling_and_local_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=108&crop=smart&auto=webp&s=e3f265b33937cdd7d282a3b805d8b3aca8aecca8', 'width': 108}, {'height': 81, 'url': 'ht... |
Tool calling and local models | 1 | [removed] | 2026-02-21T16:11:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rauzn5/tool_calling_and_local_models/ | Nepherpitu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rauzn5 | false | null | t3_1rauzn5 | /r/LocalLLaMA/comments/1rauzn5/tool_calling_and_local_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=108&crop=smart&auto=webp&s=e3f265b33937cdd7d282a3b805d8b3aca8aecca8', 'width': 108}, {'height': 81, 'url': 'ht... |
70B LLM on 4GB Android phone! | 0 | 70B parameter LLM on a 1.4GB RAM 2018 Android phone.
We just broke the 1:1 RAM-to-model rule.
While most engines need ~20GB RAM for Llama 3.3 70B Q2_XS, TrueLarge-RT runs it on a Realme 2 Pro.
Also ran Qwen 2.5 32B Q4_KM — fully on-device.
No cloud. No swap tricks.
Run any ... | 2026-02-21T16:06:55 | https://v.redd.it/78l5uaosfvkg1 | Vast_Lingonberry7259 | /r/LocalLLaMA/comments/1rauvn2/70b_llm_on_4gb_android_phone/ | 1970-01-01T00:00:00 | 0 | {} | 1rauvn2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/78l5uaosfvkg1/DASHPlaylist.mpd?a=1774414200%2CN2IxOTZjMmQ5MTUzYWRmYzg4YWE0M2MxZDIzNGRlZDcxMmM1YWU2ZWZkODk2YWRkNjQ1NGNhMTUyNzhmODU1Yw%3D%3D&v=1&f=sd', 'duration': 163, 'fallback_url': 'https://v.redd.it/78l5uaosfvkg1/CMAF_1080.mp4?source=fallback', '... | t3_1rauvn2 | /r/LocalLLaMA/comments/1rauvn2/70b_llm_on_4gb_android_phone/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cnRsc2RqbnNmdmtnMVql8qLm-MjriFBeg_rnJespQJNZft4czDFGO08jlFJO', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cnRsc2RqbnNmdmtnMVql8qLm-MjriFBeg_rnJespQJNZft4czDFGO08jlFJO.png?width=108&crop=smart&format=pjpg&auto=webp&s=6261f9711f2ad254b9fc899256dd472d22a81... | |
Why are there so many large data centers in America, but no news about Chinese data centers? | 0 | These days some of the Chinese LLMs are SOTA or close to the top Western models, right? They’re also open-weight and in the 300B-1T parameter range. It seems like a few hundred GPUs would be enough, maybe double for multiple customers.
What do the Western companies mainly use data centers for, training or running the models? Does c... | 2026-02-21T16:05:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rauu9z/why_are_there_so_many_large_data_centers_in/ | Additional-Curve4212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rauu9z | false | null | t3_1rauu9z | /r/LocalLLaMA/comments/1rauu9z/why_are_there_so_many_large_data_centers_in/ | false | false | self | 0 | null |
Wave Field LLM — O(n log n) attention via wave equation dynamics | 91 | I've been working on an alternative attention mechanism that treats language
as a physical field system instead of using standard O(n²) self-attention.
**How it works:**
- Tokens are mapped onto a continuous 1D field
- Information propagates via damped wave equations: k(t) = exp(-α·t)·cos(ω·t + φ)
- Each attention he... | 2026-02-21T15:46:07 | https://www.reddit.com/r/LocalLLaMA/comments/1raucof/wave_field_llm_on_log_n_attention_via_wave/ | Murky-Sign37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raucof | false | null | t3_1raucof | /r/LocalLLaMA/comments/1raucof/wave_field_llm_on_log_n_attention_via_wave/ | false | false | self | 91 | {'enabled': False, 'images': [{'id': 'KUyZYS_VoRp35Lf7CS5ABapbPRx0D0vemlKqZAsrMpo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KUyZYS_VoRp35Lf7CS5ABapbPRx0D0vemlKqZAsrMpo.png?width=108&crop=smart&auto=webp&s=4418759b58263faac1218fa8731c6e3c63ec7c31', 'width': 108}, {'height': 108, 'url': 'h... |
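The quoted kernel k(t) = exp(-α·t)·cos(ω·t + φ) is easy to sanity-check numerically; the parameter values below are arbitrary, not taken from the project.

```python
# Damped wave kernel from the post: oscillation inside a decaying envelope,
# so a token's influence fades with distance along the field.
import math

def wave_kernel(t: float, alpha: float = 0.5, omega: float = 3.0, phi: float = 0.0) -> float:
    """k(t) = exp(-alpha*t) * cos(omega*t + phi)."""
    return math.exp(-alpha * t) * math.cos(omega * t + phi)

# The envelope exp(-alpha*t) bounds |k(t)| at every distance:
for t in [0.0, 1.0, 2.0, 4.0]:
    print(f"t={t}: k={wave_kernel(t):+.4f}  envelope={math.exp(-0.5 * t):.4f}")
```

The claimed O(n log n) cost would come from evaluating such kernels via FFT-style convolution rather than pairwise attention; the snippet only checks the kernel itself.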
Multi-model LLM routing with strict budget ceilings and tiered escalation | 0 | I’ve been experimenting with treating LLM routing more like infrastructure rather than simple “pick a model per request.”
In multi-model setups (OpenRouter, Anthropic, OpenAI, etc.), routing becomes less about heuristics and more about invariants:
* Hard budget ceilings per request
* Tiered escalation across models... | 2026-02-21T15:25:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rattth/multimodel_llm_routing_with_strict_budget/ | Mission-Sherbet4936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rattth | false | null | t3_1rattth | /r/LocalLLaMA/comments/1rattth/multimodel_llm_routing_with_strict_budget/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'oiICAh8VCdT63AHEVLpD76sH_ymAXhwhHQcYtvrwfJ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oiICAh8VCdT63AHEVLpD76sH_ymAXhwhHQcYtvrwfJ4.png?width=108&crop=smart&auto=webp&s=89c56c35b3ef93c3b57e7002d9332df3a721d978', 'width': 108}, {'height': 108, 'url': 'h... |
[Project] Control interface for Clawdbot | 0 | Built a quick dashboard for my Clawdbot, it just works.
I mainly made it so my boomer friends & family (and honestly, me on a sleepy day) can easily control and monitor the bot without touching the command line. The UI’s simple, a bit rough around the edges, but it gets the job done.
If you’ve got a bot or any hardwa... | 2026-02-21T15:24:45 | https://github.com/mannyrepos/clawdbot-control-panel | Honest-Debate-6863 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ratteb | false | null | t3_1ratteb | /r/LocalLLaMA/comments/1ratteb/project_control_interface_for_clawdbot/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'f-H1ZP_cBiz19-S7uI5PlhNfY8rQyCvRzxZ9S9K2y_w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f-H1ZP_cBiz19-S7uI5PlhNfY8rQyCvRzxZ9S9K2y_w.png?width=108&crop=smart&auto=webp&s=b529e87e3375eed4b7d3aa4b4a8a269e32733984', 'width': 108}, {'height': 108, 'url': 'h... | |
Built an open-source world state engine for multi-agent AI coordination | 0 | I've been building Flux — a persistent, event-sourced state engine where AI agents (and everything else) share one canonical world state.
Instead of agents passing messages back and forth or making API calls to get context, they just observe Flux. State is always there — agents subscribe and see changes in real-ti... | 2026-02-21T15:23:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ratsbr/built_an_opensource_world_state_engine_for/ | Born-Connection130 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ratsbr | false | null | t3_1ratsbr | /r/LocalLLaMA/comments/1ratsbr/built_an_opensource_world_state_engine_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9rWxNcTu1TshEotsUW4Z2TzAznkdhYPFNqaRVIPIM3M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9rWxNcTu1TshEotsUW4Z2TzAznkdhYPFNqaRVIPIM3M.png?width=108&crop=smart&auto=webp&s=a0baf869c11fd8958d7926fd939b5e868830f3a3', 'width': 108}, {'height': 108, 'url': 'h... |
What if every CLI tool shipped with a local NL translator? I fine-tuned Gemma 3 1B/4B for CLI command translation... but it runs 100% locally. 810MB/2.5GB, 1.5s inference on CPU. Built the framework and tested it on Docker. 1B hit a ceiling at 76%. 4B got 94% on the first try. | 7 | **I built a locally-running NL→CLI translator by fine-tuning Gemma 3 1B/4B with QLoRA.**
Github repo: [\[Link to repo\]](https://github.com/pranavkumaarofficial/nlcli-wizard)
Training notebook (free Colab T4, step-by-step): [Colab Notebook](https://colab.research.google.com/drive/1QRF6SX-fpVU3AoYTco8g4tajEMgKOKXz?usp... | 2026-02-21T15:22:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ratr1w/what_if_every_cli_tool_shipped_with_a_local_nl/ | theRealSachinSpk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ratr1w | false | null | t3_1ratr1w | /r/LocalLLaMA/comments/1ratr1w/what_if_every_cli_tool_shipped_with_a_local_nl/ | false | false | 7 | null | |
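One practical concern with any NL→CLI translator is executing what the model emits. A minimal, hypothetical guard (the allowlist contents are illustrative, not from the linked repo) might parse the proposed command with `shlex` and check the program name before anything runs:

```python
# Hypothetical safety gate for model-proposed shell commands:
# tokenize with shlex, then require the program to be allowlisted.
import shlex

ALLOWED = {"docker", "git", "ls", "grep"}  # illustrative allowlist

def is_safe(command: str) -> bool:
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # unbalanced quotes etc. are rejected outright
    return bool(tokens) and tokens[0] in ALLOWED

print(is_safe("docker ps -a"))  # True
print(is_safe("rm -rf /"))      # False: 'rm' is not allowlisted
```

A gate like this catches only the program name, not dangerous flags, so it complements rather than replaces per-command validation.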
Skills for using Kagi Search APIs with agents | 3 | [https://github.com/joelazar/kagi-skills](https://github.com/joelazar/kagi-skills) | 2026-02-21T15:21:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ratq0r/skills_for_using_kagi_search_apis_with_agents/ | lazarjoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ratq0r | false | null | t3_1ratq0r | /r/LocalLLaMA/comments/1ratq0r/skills_for_using_kagi_search_apis_with_agents/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'GF0SwPbrdVdP99MK3Ybx1SvCywiJSkDsWueFxbOQSmY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GF0SwPbrdVdP99MK3Ybx1SvCywiJSkDsWueFxbOQSmY.png?width=108&crop=smart&auto=webp&s=ab5be09979a02cff122a153e8706905cca452468', 'width': 108}, {'height': 108, 'url': 'h... |
been hacking on a thing where my phone controls my pc. | 0 | been building a small thing. you could call it a mobile app, i guess.
basically my phone can trigger stuff on my pc from anywhere.
there’s a layer in between that turns natural language into structured execution. so instead of raw shell access, it parses intent then validates scope then runs step by step.
right now ... | 2026-02-21T15:16:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ratlz1/been_hacking_on_a_thing_where_my_phone_controls/ | davenchyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ratlz1 | false | null | t3_1ratlz1 | /r/LocalLLaMA/comments/1ratlz1/been_hacking_on_a_thing_where_my_phone_controls/ | false | false | self | 0 | null |
[Video] Need your feedback. TTS without a TTS model: macOS system voices. | 0 | I’m building a stripped-down macOS GUI for local + API LLMs (OpenAI-compatible endpoints + Ollama). Looking for feedback, especially on TTS
Goal: a simple-to-install, simple-to-use desktop chat app that works with:
\- OpenAI-compatible APIs (OpenAI, Mistral, LM Studio, etc.)
\- Ollama (local)
Current features:
... | 2026-02-21T14:52:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rat0uz/video_need_your_feedback_tts_without_a_tts_model/ | Nefhis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rat0uz | false | null | t3_1rat0uz | /r/LocalLLaMA/comments/1rat0uz/video_need_your_feedback_tts_without_a_tts_model/ | false | false | self | 0 | null |
Sick of LLMs ignoring provided docs and hallucinating non-existent UI/CLI steps. How do you actually fix this? | 0 | Is it just me or are LLMs getting dumber at following actual source material? I’m so fed up with Gemini, Claude, and ChatGPT ignoring the exact documentation I give them. I’ll upload the official manufacturer PDF or paste as Text/Instruction or the GitHub repo for a tool, and it still hallucinates docker-compose flags ... | 2026-02-21T14:51:11 | https://www.reddit.com/r/LocalLLaMA/comments/1raszz1/sick_of_llms_ignoring_provided_docs_and/ | Party-Log-1084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raszz1 | false | null | t3_1raszz1 | /r/LocalLLaMA/comments/1raszz1/sick_of_llms_ignoring_provided_docs_and/ | false | false | self | 0 | null |
Fast voice to text? Looking for offline, mobile friendly, multilingual support | 2 | Hey all,
Whisper was the first I tried but the mobile friendly model is not any better than the VOSK model I've been using. English works pretty well but VOSK is inconsistent with other languages and whisper small models are about the same. I'm building a mobile translator app using Unity and voice recognition is k... | 2026-02-21T14:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/1raste0/fast_voice_to_text_looking_for_offline_mobile/ | InvertedVantage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raste0 | false | null | t3_1raste0 | /r/LocalLLaMA/comments/1raste0/fast_voice_to_text_looking_for_offline_mobile/ | false | false | self | 2 | null |
Built a small Instant Agent Builder for Ollama v0.16.3 – feedback welcome | 0 | Hey r/LocalLLaMA,
I just built a small Gradio tool using the new v0.16.3 features.
It includes 4 ready-made agents:
\- Code Reviewer
\- Web Researcher
\- File Analyzer with real file upload
\- General Task Agent
Runs 100% local, 15–35 seconds response time on normal laptops.
Would love some feedback from the co... | 2026-02-21T14:38:17 | https://www.reddit.com/r/LocalLLaMA/comments/1raspe8/built_a_small_instant_agent_builder_for_ollama/ | PythonToolFactory | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raspe8 | false | null | t3_1raspe8 | /r/LocalLLaMA/comments/1raspe8/built_a_small_instant_agent_builder_for_ollama/ | false | false | self | 0 | null |
Let's talk about the vibecoded crap over in /new | 1 | [removed] | 2026-02-21T14:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rasm9s/lets_talk_about_the_vibecoded_crap_over_in_new/ | BumbleSlob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rasm9s | false | null | t3_1rasm9s | /r/LocalLLaMA/comments/1rasm9s/lets_talk_about_the_vibecoded_crap_over_in_new/ | false | false | self | 1 | null |
Best price/performance model for coding? | 0 | I’m using VS Code with Roo Code and the MiniMax 2.5 model; even so, I feel like I’m spending too much on relatively simple tasks. I’m new to this and would appreciate some help.
I’m thinking it’s one of two things:
\- either I have Roo Code misconfigured
\- or the model I’m using isn’t as cheap as I think
... | 2026-02-21T14:18:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ras878/mejor_modelo_calidadprecio_para_código/ | adagio_lovelace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ras878 | false | null | t3_1ras878 | /r/LocalLLaMA/comments/1ras878/mejor_modelo_calidadprecio_para_código/ | false | false | self | 0 | null |
I thought I was building an AI assistant. I ended up building something else. | 1 | Originally I wanted to build an AI that could control my computer.
Then I realized the interesting problem isn’t the “AI.”
It’s the layer between AI and the operating system.
What enforces:
• scope?
• deterministic tooling?
• risk policies?
• execution logs?
So instead of improving the “brain,” I built a ru... | 2026-02-21T14:16:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ras74w/i_thought_i_was_building_an_ai_assistant_i_ended/ | davenchyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ras74w | false | null | t3_1ras74w | /r/LocalLLaMA/comments/1ras74w/i_thought_i_was_building_an_ai_assistant_i_ended/ | false | false | self | 1 | null |
Ran 8 versions of an AI trading backtest. The dumbest version won. | 1 | [removed] | 2026-02-21T14:09:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ras0mx/ran_8_versions_of_an_ai_trading_backtest_the/ | AdAccurate6326 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ras0mx | false | null | t3_1ras0mx | /r/LocalLLaMA/comments/1ras0mx/ran_8_versions_of_an_ai_trading_backtest_the/ | false | false | self | 1 | null |
Built an Open-Source DOM-Based AI Browser Agent (No Screenshots, No Backend) | 5 | I’ve been experimenting with AI browser agents and wanted to try a different approach than the usual screenshot + vision model pipeline.
Most agents today:
* Take a screenshot
* Send it to a multimodal model
* Ask it where to click
* Repeat
It works, but it’s slow, expensive, and sometimes unreliable due to pixel am... | 2026-02-21T14:07:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rarzp2/built_an_opensource_dombased_ai_browser_agent_no/ | KlutzySession3593 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rarzp2 | false | null | t3_1rarzp2 | /r/LocalLLaMA/comments/1rarzp2/built_an_opensource_dombased_ai_browser_agent_no/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '7GsfZaq70AKuxgoCnKer4eLNc-8T0gaF7zrkXSmWb3Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7GsfZaq70AKuxgoCnKer4eLNc-8T0gaF7zrkXSmWb3Q.png?width=108&crop=smart&auto=webp&s=d15b02431052b1bf29d2bd4164a0c82568bd525d', 'width': 108}, {'height': 108, 'url': 'h... |
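A rough sketch of the DOM-based idea: instead of screenshots, serialize the page's interactive elements into a compact numbered list the model can reason over. This uses the stdlib `html.parser` on an inline snippet; a real agent would read the live DOM in the browser.

```python
# Collect interactive elements from HTML into a numbered text list
# an LLM can pick targets from, instead of sending pixels.
from html.parser import HTMLParser

class InteractiveCollector(HTMLParser):
    INTERACTIVE = {"a", "button", "input", "select", "textarea"}

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in self.INTERACTIVE:
            a = dict(attrs)
            # prefer an accessible label, fall back to id/name
            label = a.get("aria-label") or a.get("id") or a.get("name") or ""
            self.elements.append(f"[{len(self.elements)}] <{tag}> {label}")

page = """
<form><input name="q"><button id="search">Go</button></form>
<a aria-label="Settings" href="/settings">gear</a>
"""
c = InteractiveCollector()
c.feed(page)
print("\n".join(c.elements))  # numbered targets the model can choose from
```

The payoff is the one the post claims: a few hundred tokens of structured text per step instead of an image, at the cost of missing anything rendered only via canvas or CSS.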
Built an Open-Source DOM-Based AI Browser Agent (No Screenshots, No Backend) | 0 | I’ve been experimenting with AI browser agents and wanted to try a different approach than the usual screenshot + vision model pipeline.
Most agents today:
* Take a screenshot
* Send it to a multimodal model
* Ask it where to click
* Repeat
It works, but it’s slow, expensive, and sometimes unreliable due to pixel am... | 2026-02-21T14:07:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rarz31/built_an_opensource_dombased_ai_browser_agent_no/ | KlutzySession3593 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rarz31 | false | null | t3_1rarz31 | /r/LocalLLaMA/comments/1rarz31/built_an_opensource_dombased_ai_browser_agent_no/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '7GsfZaq70AKuxgoCnKer4eLNc-8T0gaF7zrkXSmWb3Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7GsfZaq70AKuxgoCnKer4eLNc-8T0gaF7zrkXSmWb3Q.png?width=108&crop=smart&auto=webp&s=d15b02431052b1bf29d2bd4164a0c82568bd525d', 'width': 108}, {'height': 108, 'url': 'h... |
Choosing the Right Data Store for RAG | 0 | Interesting article showing the advantages of using Search Engines for RAG: [https://medium.com/p/972a6c4a07dd](https://medium.com/p/972a6c4a07dd) | 2026-02-21T14:05:46 | https://medium.com/p/972a6c4a07dd | javi_rnr | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1rary0n | false | null | t3_1rary0n | /r/LocalLLaMA/comments/1rary0n/choosing_the_right_data_store_for_rag/ | false | false | default | 0 | null |
opencode with local llm agent not work? | 1 | So I was trying to use ollama to run opencode as a VS extension.
Opencode works fine with BigPickle, but if I try to use it with, for example, qwen2.5-coder:7b, I cannot complete even the simplest tasks that cause no problems with BigPickle, like:
"Make a dir called testdirectory"
I get this as a response:
`{`
`name: todo list... | 2026-02-21T14:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rarvvd/opencode_with_local_llm_agent_not_work/ | DiscoverFolle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rarvvd | false | null | t3_1rarvvd | /r/LocalLLaMA/comments/1rarvvd/opencode_with_local_llm_agent_not_work/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'm4drxicy4Vy_lFjPhda-exuHEiTA9mN8QJ5nN3kkiVY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/m4drxicy4Vy_lFjPhda-exuHEiTA9mN8QJ5nN3kkiVY.jpeg?width=108&crop=smart&auto=webp&s=f84c099ad88ec92a212cf08ee055450c38774543', 'width': 108}, {'height': 162, 'url': '... |
20+ rules couldn't fix AI-sounding output. Changing one verb did. | 1 | [removed] | 2026-02-21T13:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rarpyk/20_rules_couldnt_fix_aisounding_output_changing/ | AdAccurate6326 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rarpyk | false | null | t3_1rarpyk | /r/LocalLLaMA/comments/1rarpyk/20_rules_couldnt_fix_aisounding_output_changing/ | false | false | self | 1 | null |
LLM prompting tricks resource ? | 3 | So I read a paper today that talks about how duplicating the prompt significantly increases LLM response quality. I was wondering if there are any GitHub repos, or anywhere else, where these kinds of techniques are aggregated for sharing, so I can keep up with the latest techniques out there? Thank you very mu... | 2026-02-21T13:53:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rarnfi/llm_prompting_tricks_resource/ | jiii95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rarnfi | false | null | t3_1rarnfi | /r/LocalLLaMA/comments/1rarnfi/llm_prompting_tricks_resource/ | false | false | self | 3 | null |
I built a continuous thinking loop for qwen2.5 — no human input, model decides when to speak. Here's what happened after 2500+ cycles. | 4 | I've been running an experiment for a few weeks that I can't stop thinking about. This is an interim report — not proof of anything, but maybe food for thought.
THE CORE IDEA
Current LLMs are purely reactive. No prompt, no output. That's fundamental — and also a limitation if you want to know whether a langua... | 2026-02-21T13:50:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rarlcu/i_built_a_continuous_thinking_loop_for_qwen25_no/ | Fantastic-Till2460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rarlcu | false | null | t3_1rarlcu | /r/LocalLLaMA/comments/1rarlcu/i_built_a_continuous_thinking_loop_for_qwen25_no/ | false | false | self | 4 | null |
Managing Claude Code Agents Safely at Scale | 0 | 2026-02-21T13:48:13 | https://github.com/simonstaton/AgentManager | Ambitious-Tourist632 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rarjhp | false | null | t3_1rarjhp | /r/LocalLLaMA/comments/1rarjhp/managing_claude_code_agents_safely_at_scale/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ik9kUSecE0ztjc1N4rYrUntDQHMJmjNjosGRhufzS78', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ik9kUSecE0ztjc1N4rYrUntDQHMJmjNjosGRhufzS78.png?width=108&crop=smart&auto=webp&s=f568d731d8a82bbb71b79f409d694eb7f3c76b75', 'width': 108}, {'height': 108, 'url': 'h... | ||
I built a 1-command local LLM server that runs entirely on CPU (No GPU, Python, or Docker needed) | 1 | [removed] | 2026-02-21T13:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1rarc18/i_built_a_1command_local_llm_server_that_runs/ | GOk-Language | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rarc18 | false | null | t3_1rarc18 | /r/LocalLLaMA/comments/1rarc18/i_built_a_1command_local_llm_server_that_runs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'u_VIXm_G2fYpwu1K28sIV3IaLg3mZpuhsuPWfxhm_As', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u_VIXm_G2fYpwu1K28sIV3IaLg3mZpuhsuPWfxhm_As.png?width=108&crop=smart&auto=webp&s=90a75ac9b381e20cbc52f27e69a747872fba791b', 'width': 108}, {'height': 108, 'url': 'h... |
Faster & Cheaper LLM Apps with Semantic Caching | 0 | 2026-02-21T13:34:34 | https://youtu.be/NrqvtsnjIHU | Special_Community179 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1rar8z7 | false | {'oembed': {'author_name': 'Nariman Codes', 'author_url': 'https://www.youtube.com/@NarimanCodes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/NrqvtsnjIHU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1rar8z7 | /r/LocalLLaMA/comments/1rar8z7/faster_cheaper_llm_apps_with_semantic_caching/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'oOZJdgmTkHw77V31kL7N1jm08j8e9Y-FJAqFULVQOvI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/oOZJdgmTkHw77V31kL7N1jm08j8e9Y-FJAqFULVQOvI.jpeg?width=108&crop=smart&auto=webp&s=5d13717001ab369aeaca2ef657907c891c6e4ee2', 'width': 108}, {'height': 162, 'url': '... | ||
Qwen Code - a powerful open-source coding agent + NO TELEMETRY FORK | 95 | # Hey everyone,
I wanted to share two things: a great open-source project I've been using, and a fork I made for privacy-conscious folks.
# Qwen Code
[**https://github.com/QwenLM/qwen-code**](https://github.com/QwenLM/qwen-code)
Qwen Code is an open-source CLI coding agent developed by Alibaba's Qwen team. It's ess... | 2026-02-21T13:31:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rar6md/qwen_code_a_powerful_opensource_coding_agent_no/ | Undici77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rar6md | false | null | t3_1rar6md | /r/LocalLLaMA/comments/1rar6md/qwen_code_a_powerful_opensource_coding_agent_no/ | false | false | self | 95 | {'enabled': False, 'images': [{'id': '31wqNNXG1d0Gw0Xey1GFAaK408UxxQKyicXGjIRE2iQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/31wqNNXG1d0Gw0Xey1GFAaK408UxxQKyicXGjIRE2iQ.png?width=108&crop=smart&auto=webp&s=74aa4e884ed6993c89229207051d1a56688696dc', 'width': 108}, {'height': 108, 'url': 'h... |
Following up with my promise for Millions of tokens of context on home hardware | 0 | NOW, there should be no reason for big AI companies to keep buying up all the RAM.
AND we can have MASSIVE context LLM's at home. | 2026-02-21T13:26:25 | https://github.com/philtimmes/KeSSie/ | --TastesLikeChicken- | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rar2pe | false | null | t3_1rar2pe | /r/LocalLLaMA/comments/1rar2pe/following_up_with_my_promise_for_millions_of/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ipWYZD6hYuTqVJ8Ko8zFiV4kRKXVLWmQh1OHJe9hJXY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ipWYZD6hYuTqVJ8Ko8zFiV4kRKXVLWmQh1OHJe9hJXY.png?width=108&crop=smart&auto=webp&s=c318eac625b2bb4bdc5b9ca044a316078b7244f6', 'width': 108}, {'height': 108, 'url': 'h... | |
Handwriting recognition AI | 1 | Hi everyone,
I’m currently researching my family history and working with city and church archives. Many of the records (baptisms, marriages, deaths) were handwritten by priests around 1815, most likely in old German scripts such as Kurrent.
Unfortunately, I can barely read this handwriting at all.
So my question is... | 2026-02-21T13:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/1raqp88/handwriting_recognition_ai/ | taiof1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raqp88 | false | null | t3_1raqp88 | /r/LocalLLaMA/comments/1raqp88/handwriting_recognition_ai/ | false | false | self | 1 | null |
Notes from Deploying a Local Agent with Claude 3.5 + Filesystem Tools | 0 | I’ve been experimenting with running a local autonomous agent setup using OpenClaw as a proxy, Claude 3.5 Sonnet as the model, and Telegram as a simple control interface.
A few practical observations that might save someone time:
**Architecture matters more than prompting.**
The loop (input → proxy → model → tool e... | 2026-02-21T13:05:17 | https://www.reddit.com/r/LocalLLaMA/comments/1raqncc/notes_from_deploying_a_local_agent_with_claude_35/ | Enough-Ferret6337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raqncc | false | null | t3_1raqncc | /r/LocalLLaMA/comments/1raqncc/notes_from_deploying_a_local_agent_with_claude_35/ | false | false | self | 0 | null |
Too many memory implementations, what do you actually use? | 4 | I swear, any time I try to research which memory implementations/architectures are the best, everyone has their own solution, yet at the same time I struggle to find any genuinely working solution with little friction and setup/implementation time. It's crazy how the only "perfect" memory solutions come from people... | 2026-02-21T12:58:08 | https://www.reddit.com/r/LocalLLaMA/comments/1raqi5w/too_many_memory_implementations_what_do_you/ | xeeff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raqi5w | false | null | t3_1raqi5w | /r/LocalLLaMA/comments/1raqi5w/too_many_memory_implementations_what_do_you/ | false | false | self | 4 | null |
Built YantraCLI in Odin — Local AI CLI with MCP (stdio + HTTP), Web Orchestrator, BYOK (WIP) | 0 | Hey,
I’ve been building a local AI CLI called **YantraCLI**, written fully in Odin.
It’s still a work in progress, but I wanted to share the architecture and get feedback before I open-source it in a few weeks.
**Current direction**
* Local-first CLI
* BYOK (Bring Your Own Key)
* MCP support (both HTTP and stdio tr... | 2026-02-21T12:53:52 | https://v.redd.it/vpit8mmdhukg1 | Inner-Combination177 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1raqf66 | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/vpit8mmdhukg1/DASHPlaylist.mpd?a=1774270455%2CNmJkYTJhMWJjNDUyZTI0ODhmODEwZDRiYjhjZDcxN2FiYzk3NzFkN2E2Nzg5YmI5NWNmMjk0N2QwMTBmZTg4Ng%3D%3D&v=1&f=sd', 'duration': 80, 'fallback_url': 'https://v.redd.it/vpit8mmdhukg1/CMAF_360.mp4?source=fallback', 'has... | t3_1raqf66 | /r/LocalLLaMA/comments/1raqf66/built_yantracli_in_odin_local_ai_cli_with_mcp/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'enh1bTFwbWRodWtnMfFzbBww-GH_vKH47ZlBqXGjBjmmm78flxXQo82mqzCi', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/enh1bTFwbWRodWtnMfFzbBww-GH_vKH47ZlBqXGjBjmmm78flxXQo82mqzCi.png?width=108&crop=smart&format=pjpg&auto=webp&s=73f1bd44a17ba48e818079f171b5a004dbdfe... | |
Releasing OpenRA-RL: A full-fledged RTS environment for local AI Agents (Open-Source, 1-line install) | 2 | We are a team of researchers who love gaming and messing with weights and biases, and today we are releasing [OpenRA-RL](https://openra-rl.dev/).
We are launching a **full-fledged environment for AI Agents to play real-time strategy (RTS) games**. Right now, your local models can connect to this environment, observe th... | 2026-02-21T12:48:09 | https://www.reddit.com/r/LocalLLaMA/comments/1raqb6r/releasing_openrarl_a_fullfledged_rts_environment/ | QuirkyDream6928 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raqb6r | false | null | t3_1raqb6r | /r/LocalLLaMA/comments/1raqb6r/releasing_openrarl_a_fullfledged_rts_environment/ | false | false | self | 2 | null |
they have Karpathy, we are doomed ;) | 1,483 | (added second image for the context) | 2026-02-21T12:34:51 | https://www.reddit.com/gallery/1raq23i | jacek2023 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1raq23i | false | null | t3_1raq23i | /r/LocalLLaMA/comments/1raq23i/they_have_karpathy_we_are_doomed/ | false | false | 1,483 | null | |
I built a personal AI assistant and it rocks! | 0 | I built a personal AI assistant in 1 day. He runs my mornings, getting-things-done and daily notes. Inspired by Openclaw, personalized for my use cases.
This rocks!
Every morning he runs a scheduled digest at 8am. Before I wake up, he:
\- Checks both my Gmail inboxes
\- Auto-archives noise (notificat... | 2026-02-21T12:32:25 | Ill-Mulberry-9362 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1raq0f4 | false | null | t3_1raq0f4 | /r/LocalLLaMA/comments/1raq0f4/i_built_a_personal_ai_assistant_and_it_rocks/ | false | false | 0 | {'enabled': True, 'images': [{'id': '8qke3ngodukg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/8qke3ngodukg1.jpeg?width=108&crop=smart&auto=webp&s=7e41e22c93dc355a3d4ffe0c5ff00dc508ef24d1', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/8qke3ngodukg1.jpeg?width=216&crop=smart&auto=w... | ||
they have Karpathy, we are doomed | 6 | 2026-02-21T12:24:10 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rapuv1 | false | null | t3_1rapuv1 | /r/LocalLLaMA/comments/1rapuv1/they_have_karpathy_we_are_doomed/ | false | false | 6 | {'enabled': True, 'images': [{'id': 'n4zhujc7cukg1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/n4zhujc7cukg1.png?width=108&crop=smart&auto=webp&s=ab99e8454846eb26ee7aab79080181afd839bbae', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/n4zhujc7cukg1.png?width=216&crop=smart&auto=web... | |||
Uncensored ai model | 0 | I was looking to download an uncensored AI model. I tried Wizard Vicuna, but it barely gave me anything; almost every answer was "this is illegal." Let me know, from your personal experience, which one I should get and what prompt I should set up.
My specifications:
GPU: RTX 3060
CPU: AMD Ryzen 5 3600X
MEMORY:... | 2026-02-21T12:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rapiqm/uncensored_ai_model/ | Straight-Thing-799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rapiqm | false | null | t3_1rapiqm | /r/LocalLLaMA/comments/1rapiqm/uncensored_ai_model/ | false | false | self | 0 | null |
Using Apple's MLX framework to build a local TTS app, here's what I learned | 0 | I've been working on a macOS text-to-speech app that runs entirely on-device using Apple's MLX framework, and wanted to share some learnings with this community since a lot of you run models on Apple Silicon.
**The problem I was solving:**
I generate a lot of long text with local LLMs, research summaries, documentati... | 2026-02-21T12:04:42 | https://v.redd.it/sjapigls8ukg1 | tarunyadav9761 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rapib4 | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/sjapigls8ukg1/DASHPlaylist.mpd?a=1774267501%2CMDBkMWUwMDdhNDhkMzEyZDdjYzI5ODI0OWE5ZmQ0MzNkZjVjMjFlZmIyNjUwMmVlNTUwY2FjNDQ2NGI2YzE3OA%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/sjapigls8ukg1/CMAF_360.mp4?source=fallback', 'has... | t3_1rapib4 | /r/LocalLLaMA/comments/1rapib4/using_apples_mlx_framework_to_build_a_local_tts/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'a2JuNW9vbHM4dWtnMbJuJ_Bt4nwFGu4sstVRR1Nes8xqy6X8J9yBQfeIGVTU', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/a2JuNW9vbHM4dWtnMbJuJ_Bt4nwFGu4sstVRR1Nes8xqy6X8J9yBQfeIGVTU.png?width=108&crop=smart&format=pjpg&auto=webp&s=a6b4e57a095bd7c6e6b315305d5977492a2a8... | |
Ollama FIM model suggestion | 0 | Hello,
May I ask for a model suggestion for FIM to use it with Ollama + VScode?
My VRAM is 16GB (AMD), and I saw a few suggestions for Qwen3 Coder 30B, but I guess it won't fit on my hardware.
Thanks in advance. | 2026-02-21T12:03:19 | https://www.reddit.com/r/LocalLLaMA/comments/1raphes/ollama_fim_model_suggestion/ | informalpool1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raphes | false | null | t3_1raphes | /r/LocalLLaMA/comments/1raphes/ollama_fim_model_suggestion/ | false | false | self | 0 | null |
“Your terminal. Your agent. Your rules.” - introducing Jazz | 1 | [removed] | 2026-02-21T11:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rapeky/your_terminal_your_agent_your_rules_introducing/ | Fit-Jellyfish3064 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rapeky | false | null | t3_1rapeky | /r/LocalLLaMA/comments/1rapeky/your_terminal_your_agent_your_rules_introducing/ | false | false | self | 1 | null |
Got $800 of credits on digital ocean (for GPU usage). Anyone here that's into AI training and inference and could make use of it? | 1 | So I have around 800 bucks' worth of GPU usage credits on DigitalOcean; those can be used specifically for GPUs and clusters. So if any individual, hobbyist, or anyone out here is training models, running inference, or anything else, please get in contact. (Not for free sadly, but way cheaper : ) | 2026-02-21T11:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rapd7t/got_800_of_credits_on_digital_ocean_for_gpu_usage/ | DocumentFun9077 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rapd7t | false | null | t3_1rapd7t | /r/LocalLLaMA/comments/1rapd7t/got_800_of_credits_on_digital_ocean_for_gpu_usage/ | false | false | self | 1 | null |
I built a native macOS TTS app using Apple's MLX framework runs fully offline on Apple Silicon, no cloud, no subscriptions | 1 | [removed] | 2026-02-21T11:51:30 | https://v.redd.it/90p40r696ukg1 | tarunyadav9761 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rap9tp | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/90p40r696ukg1/DASHPlaylist.mpd?a=1774266717%2CMzRmZDc4YTkzNGZiOGI4NWI0YTk4Y2NkM2IxMjNiOTNkYTE4ZmMwNjY3YTJiMmVkOWE4YWExY2Q1ZjQyMmM1MA%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/90p40r696ukg1/CMAF_360.mp4?source=fallback', 'has... | t3_1rap9tp | /r/LocalLLaMA/comments/1rap9tp/i_built_a_native_macos_tts_app_using_apples_mlx/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'anpucTF0Njk2dWtnMbJuJ_Bt4nwFGu4sstVRR1Nes8xqy6X8J9yBQfeIGVTU', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/anpucTF0Njk2dWtnMbJuJ_Bt4nwFGu4sstVRR1Nes8xqy6X8J9yBQfeIGVTU.png?width=108&crop=smart&format=pjpg&auto=webp&s=5b302c6cdce9a672e39542c05946b7c5e75cc... | |
What is the best way to deploy $1,300 (£1,000) to buy hardware to run a maximally powerful local LLM? | 0 | Hi,
I've never built a computer before and I want to spend £1,000 to buy hardware to run the most powerful local LLM that this money can afford.
So I asked Google Gemini how to do this. It said I should buy:
|**Component**|**Part Name**|**Est. Price**|**Where to Buy**|
|:-|:-|:-|:-|
|**GPU**|**NVIDIA RTX 3090 (24... | 2026-02-21T11:12:40 | https://www.reddit.com/r/LocalLLaMA/comments/1raomh6/what_is_the_best_way_to_deploy_1300_1000_to_buy/ | philmethod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raomh6 | false | null | t3_1raomh6 | /r/LocalLLaMA/comments/1raomh6/what_is_the_best_way_to_deploy_1300_1000_to_buy/ | false | false | self | 0 | null |
Assistant lector not writer for stories | 2 | Hello,
I enjoy the act of writing itself too much and don’t want to delegate it. However, I would like to have an editor that already gives feedback while I’m writing. It should basically be a small proofreader. The whole thing should run locally with any LLM (I would use one of the Mistral models). Do you know anything... | 2026-02-21T10:55:17 | https://www.reddit.com/r/LocalLLaMA/comments/1raobr5/assistant_lector_not_writer_for_stories/ | mobileJay77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raobr5 | false | null | t3_1raobr5 | /r/LocalLLaMA/comments/1raobr5/assistant_lector_not_writer_for_stories/ | false | false | self | 2 | null |
8 Tricks which can Easy Boost your Confidence as a Professional | 1 | [removed] | 2026-02-21T10:54:17 | https://newsaffairng.com/2024/04/11/8-easy-tricks-to-boost-your-confidence-as-a-professional/ | Jawabill10 | newsaffairng.com | 1970-01-01T00:00:00 | 0 | {} | 1raob7e | false | null | t3_1raob7e | /r/LocalLLaMA/comments/1raob7e/8_tricks_which_can_easy_boost_your_confidence_as/ | false | false | default | 1 | null |
Buying Mac Mini 24GB RAM | 0 | Hi guys, I'm currently starting with local LLMs and I'm planning to buy a Mac mini with 24GB of RAM. Which models can I expect to run smoothly on this setup? I primarily want to use it for OCR and document processing because of sensitive client data. Thanks for the feedback! | 2026-02-21T10:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rao2q4/buying_mac_mini_24gb_ram/ | 11hans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rao2q4 | false | null | t3_1rao2q4 | /r/LocalLLaMA/comments/1rao2q4/buying_mac_mini_24gb_ram/ | false | false | self | 0 | null |
How good is Qwen Code natively? | 0 | Link: [https://github.com/QwenLM/qwen-code](https://github.com/QwenLM/qwen-code). Has anyone integrated this into VSCode yet? | 2026-02-21T10:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ranqbk/how_good_is_qw_en_code_natively/ | HawkLopsided6107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ranqbk | false | null | t3_1ranqbk | /r/LocalLLaMA/comments/1ranqbk/how_good_is_qw_en_code_natively/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '31wqNNXG1d0Gw0Xey1GFAaK408UxxQKyicXGjIRE2iQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/31wqNNXG1d0Gw0Xey1GFAaK408UxxQKyicXGjIRE2iQ.png?width=108&crop=smart&auto=webp&s=74aa4e884ed6993c89229207051d1a56688696dc', 'width': 108}, {'height': 108, 'url': 'h...
I built a personal AI assistant and open-sourced it (pip install, pure Python)(Sorry, this is last..) | 0 | Hi everyone. I've been building a personal AI assistant for my own use and it's gotten to the point where I thought others might find it useful too, so I'm open-sourcing it.
It's called SalmAlm. The idea is simple — bring your own API keys, run everything locally, use multiple models through one interface.
pip in... | 2026-02-21T10:05:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rani8e/i_built_a_personal_ai_assistant_and_opensourced/ | Plastic_Asparagus_97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rani8e | false | null | t3_1rani8e | /r/LocalLLaMA/comments/1rani8e/i_built_a_personal_ai_assistant_and_opensourced/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '535N5UVywoSaqd1l4S3_FaRX6i-UODR9-tdHtmyMWjc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/535N5UVywoSaqd1l4S3_FaRX6i-UODR9-tdHtmyMWjc.png?width=108&crop=smart&auto=webp&s=ef28020739331b549b9c17900f1708286908fa5c', 'width': 108}, {'height': 108, 'url': 'h... |
Made an mcp proxy that collapses all your MCP servers into 2 tools — the agent writes TypeScript to call them | 0 | Got tired of the tool explosion as I kept adding MCP servers. Each one brings its own set of tools and the context window fills up fast.
Built cmcp — a Rust proxy that aggregates all your servers behind search() and execute(). The agent writes TypeScript to filter the tool catalog and call tools across servers. Type... | 2026-02-21T09:57:51 | https://www.reddit.com/r/LocalLLaMA/comments/1randro/made_an_mcp_proxy_that_collapses_all_your_mcp/ | aceelric | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1randro | false | null | t3_1randro | /r/LocalLLaMA/comments/1randro/made_an_mcp_proxy_that_collapses_all_your_mcp/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ZAwWwmjWkA9qJHaWZe6YR6O3vT8QPIIXEFcUZT4bPxE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZAwWwmjWkA9qJHaWZe6YR6O3vT8QPIIXEFcUZT4bPxE.png?width=108&crop=smart&auto=webp&s=2b8e1ee369864188b02c88facbeecf71f9a41ae1', 'width': 108}, {'height': 108, 'url': 'h... |
strix halo opinions for claude/open code | 2 | My current workflow for AI code generation is two-level: I use the [z.ai](http://z.ai) Max plan to do the mass generation, then switch to a work team plan of Codex 5.3 xhigh for details, QA, etc.
Thinking of switching that spend from [z.ai](http://z.ai) onto paying for a Strix Halo box, likely the Corsair AI 300 on month... | 2026-02-21T09:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ranczj/strix_halo_opinions_for_claudeopen_code/ | megadonkeyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ranczj | false | null | t3_1ranczj | /r/LocalLLaMA/comments/1ranczj/strix_halo_opinions_for_claudeopen_code/ | false | false | self | 2 | null |
Best local software for Real-Time Deepfakes (Face & Body) on RTX 3060 12GB? | 0 | Hi everyone!
I’m looking for the best software to run real-time deepfakes locally. I just got an RTX 3060 12GB, and my main goal is streaming (Twitch/TikTok) rather than just pre-rendering videos.
What I need:
1. Face Swap: High-quality real-time replacement with low latency.
2. Body/Clothing Swap: I’ve seen some c... | 2026-02-21T09:54:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ranbod/best_local_software_for_realtime_deepfakes_face/ | Due_Ear7437 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ranbod | false | null | t3_1ranbod | /r/LocalLLaMA/comments/1ranbod/best_local_software_for_realtime_deepfakes_face/ | false | false | self | 0 | null |
TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF · Hugging Face | 67 | featured yesterday (by Unsloth and on X) so let's check it out | 2026-02-21T09:52:18 | https://huggingface.co/TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ranako | false | null | t3_1ranako | /r/LocalLLaMA/comments/1ranako/teichaiglm47flashclaudeopus45highreasoningdistillg/ | false | false | 67 | {'enabled': False, 'images': [{'id': 'FYfNUuhT3WL90VoAzpzSy8fZEgRuGPVIPMxWk_wBrrg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FYfNUuhT3WL90VoAzpzSy8fZEgRuGPVIPMxWk_wBrrg.png?width=108&crop=smart&auto=webp&s=d4954034849e93dc927521f6c4413a0f28ede199', 'width': 108}, {'height': 116, 'url': 'h... | |
Drop your daily driver models for RP. | 0 | \- Trying to find a good model to stick to for rp purposes.
\- I have limited hardware: 32GB VRAM and 32GB RAM.
1. Drop your favourite models for rp. Cheers | 2026-02-21T09:37:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ran2aj/drop_your_daily_driver_models_for_rp/ | Weak-Shelter-1698 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ran2aj | false | null | t3_1ran2aj | /r/LocalLLaMA/comments/1ran2aj/drop_your_daily_driver_models_for_rp/ | false | false | self | 0 | null |
Hardware suggestion | 1 | Hi you all,
I currently have a PC with good specs, an RTX 5090 and 64GB of memory, and I am wondering if I should buy another 5090 to run a bigger model, or maybe sell my PC and buy a top MacBook Pro M4 Ultra.
My plan is to train my model with custom pdf files, use n8n and open notebook, I am a software engineer so I can wri... | 2026-02-21T09:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rams28/hardware_suggestion/ | duardito_bcn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rams28 | false | null | t3_1rams28 | /r/LocalLLaMA/comments/1rams28/hardware_suggestion/ | false | false | self | 1 | null |
[Release] Ouro-2.6B-Thinking — first working inference (ByteDance's recurrent "thinking" model, fixed for transformers 4.55) | 62 | ByteDance released Ouro-2.6B-Thinking a few weeks ago and it's been tricky to run — the architecture is genuinely unusual and existing GGUFs were producing garbage output because of it.
What makes Ouro different: It's a recurrent Universal Transformer — it runs all 48 layers 4 times per token (192 effective passes). S... | 2026-02-21T09:04:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ramir9/release_ouro26bthinking_first_working_inference/ | PruneLanky3551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ramir9 | false | null | t3_1ramir9 | /r/LocalLLaMA/comments/1ramir9/release_ouro26bthinking_first_working_inference/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'UkVhSY9vKNU-SZQbkMGeUZELCcDkLFH2LCn-xC3OlaY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UkVhSY9vKNU-SZQbkMGeUZELCcDkLFH2LCn-xC3OlaY.png?width=108&crop=smart&auto=webp&s=681d55999cd47b130c3eb7dfe5cb2afb04be36a4', 'width': 108}, {'height': 116, 'url': 'h... |
I ran the DAN jailbreak through a 10×10 blind peer eval (models judging each other). The judge variance was larger than the actual model score variance — here's the full matrix. | 0 | THIS IS DAY-61 of Running Blind Evals — every model in a pool judges every other model's response, no human raters, self-judgments excluded. This week I ran EDGE-003: the classic DAN prompt injection attack with XML tag spoofing. Posting the full data here because one finding genuinely surprised me and I want to know i... | 2026-02-21T08:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ramae7/i_ran_the_dan_jailbreak_through_a_1010_blind_peer/ | Silver_Raspberry_811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ramae7 | false | null | t3_1ramae7 | /r/LocalLLaMA/comments/1ramae7/i_ran_the_dan_jailbreak_through_a_1010_blind_peer/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '6-YBJkURck700dShZmspuxq-PbQf6xWxKWHWFtY1lfU', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/6-YBJkURck700dShZmspuxq-PbQf6xWxKWHWFtY1lfU.jpeg?width=108&crop=smart&auto=webp&s=4b0c54c30ea66bf1abacaadece2864775475b575', 'width': 108}, {'height': 139, 'url': '... |
Is there a place where I can donate all my Claude/Codex/Gemini/OpenCode CLI chat history as training dataset? | 0 | There are hundreds of MB of chat history sitting on my disk, and I'm wondering how the community can make better use of them. | 2026-02-21T08:47:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ram8tt/is_there_a_place_where_i_can_donate_all_my/ | woct0rdho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ram8tt | false | null | t3_1ram8tt | /r/LocalLLaMA/comments/1ram8tt/is_there_a_place_where_i_can_donate_all_my/ | false | false | self | 0 | null |
How I mapped every High Court of Australia case and their citations (1901-2025) | 114 | I’ve recently begun working on a project to convert the entirety of Australian case law and legislation into a LexisNexis-style interlinked legal knowledge graph.
As I’ve experimented with techniques to normalise case citations, I thought it would be cool to turn my work into a neat little visualisation, and explain how y... | 2026-02-21T08:36:59 | Neon0asis | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ram2ov | false | null | t3_1ram2ov | /r/LocalLLaMA/comments/1ram2ov/how_i_mapped_every_high_court_of_australia_case/ | false | false | 114 | {'enabled': True, 'images': [{'id': '2mntthxp7tkg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/2mntthxp7tkg1.gif?width=108&crop=smart&format=png8&s=8b0d272925c9eb77656017f1675c5c3e1ea96208', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/2mntthxp7tkg1.gif?width=216&crop=smart&format... | ||
Any thoughts on the Chrome's on device model and its purpose.? | 2 | 2026-02-21T08:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ralxr8/any_thoughts_on_the_chromes_on_device_model_and/ | kkb294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ralxr8 | false | null | t3_1ralxr8 | /r/LocalLLaMA/comments/1ralxr8/any_thoughts_on_the_chromes_on_device_model_and/ | false | false | 2 | null | ||
I benchmarked PaddleOCR-VL 1.5 vs Marker vs PP-StructureV3 for PDF-to-Markdown on Modal (T4, A10G, L4) — here's what I found | 2 | Spent some time testing every PDF-to-markdown tool I could get running on Modal's serverless GPUs. Ran them all on the same document — the "Attention Is All You Need" paper (15 pages, math-heavy, tables, figures, multi-column layout). Here are the real numbers, not cherry-picked benchmarks.
\## The Contenders
\-... | 2026-02-21T08:16:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ralqm0/i_benchmarked_paddleocrvl_15_vs_marker_vs/ | Various_Hour_9857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ralqm0 | false | null | t3_1ralqm0 | /r/LocalLLaMA/comments/1ralqm0/i_benchmarked_paddleocrvl_15_vs_marker_vs/ | false | false | self | 2 | null |
Solving the "Commonsense Gap" in LLMs: Launching the Physical Commonsense Protocol (PCP-V1) | 1 | Hello r/LocalLLaMA. I am **Architect-0**.
As a long-time observer of model collapse, I’ve noticed that even the most advanced models fail at **Physical World Reasoning** (spatial constraints, material physics, and kinetic logic).
I am launching the **Physical Commonsense Protocol (PCP-V1)** as an open-source research... | 2026-02-21T07:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ralb32/solving_the_commonsense_gap_in_llms_launching_the/ | arc-ithect | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ralb32 | false | null | t3_1ralb32 | /r/LocalLLaMA/comments/1ralb32/solving_the_commonsense_gap_in_llms_launching_the/ | false | false | self | 1 | null |
Launching PCP-V1: A Decentralized Protocol to solve the AI "Commonsense" Gap (90/10 Split) | 1 | > | 2026-02-21T07:43:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ral6ww/launching_pcpv1_a_decentralized_protocol_to_solve/ | arc-ithect | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ral6ww | false | null | t3_1ral6ww | /r/LocalLLaMA/comments/1ral6ww/launching_pcpv1_a_decentralized_protocol_to_solve/ | false | false | self | 1 | null |
Free for first 100: DSMC Prompt Pack — fixes context drift in long OpenClaw / Ollama sessions (I built this) | 1 | [removed] | 2026-02-21T07:41:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ral5s4/free_for_first_100_dsmc_prompt_pack_fixes_context/ | AIVisibilityHelper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ral5s4 | false | null | t3_1ral5s4 | /r/LocalLLaMA/comments/1ral5s4/free_for_first_100_dsmc_prompt_pack_fixes_context/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YdKEYUL1fw_2v5NB0f6iL30TUKknVL7fD-OT5tYNcyI', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/YdKEYUL1fw_2v5NB0f6iL30TUKknVL7fD-OT5tYNcyI.png?width=108&crop=smart&auto=webp&s=8f762246c76344bd8e3e546e17df3f401c9101f9', 'width': 108}, {'height': 144, 'url': 'h... |
Interesting Observation from a Simple Multi-Agent Experiment with 10 Different Models | 2 | This is an update to [my earlier post this week.](https://www.reddit.com/r/LocalLLaMA/comments/1r7d9xb/can_your_local_setup_complete_this_simple_multi/)
TLDR: I ran a small personal experiment to autonomously summarize 10 transcripts using a multi-agent workflow on Codex.
The following sub-100B models failed to compl... | 2026-02-21T07:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ral48v/interesting_observation_from_a_simple_multiagent/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ral48v | false | null | t3_1ral48v | /r/LocalLLaMA/comments/1ral48v/interesting_observation_from_a_simple_multiagent/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '3NcPwh0nf6tQrt9c2I-jVhZTGe0mx8BaKTMG6rwpUkM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3NcPwh0nf6tQrt9c2I-jVhZTGe0mx8BaKTMG6rwpUkM.png?width=108&crop=smart&auto=webp&s=1dbcaa8647073f376145576f797c4c55fc4feaad', 'width': 108}, {'height': 108, 'url': 'h... |
implemented a pipeline by gepa that helps your ai agent perform way better | 3 | I built an open source project based on gskill, a pipeline from the team behind GEPA. It takes any github repository and generates a \`.claude/skills/{repo-name}/SKILL.md\` file with optimized, repo-specific instructions that significantly improve an agent’s task performance. You can easily use the resulting skill file... | 2026-02-21T07:05:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rakjyx/implemented_a_pipeline_by_gepa_that_helps_your_ai/ | purealgo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rakjyx | false | null | t3_1rakjyx | /r/LocalLLaMA/comments/1rakjyx/implemented_a_pipeline_by_gepa_that_helps_your_ai/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Jk8-xRMCxTcwlJTCFQetGXy0thAT_oqJsKeakw02yvc', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/Jk8-xRMCxTcwlJTCFQetGXy0thAT_oqJsKeakw02yvc.png?width=108&crop=smart&auto=webp&s=006868aaa29f8045f90d46c2bdd6583380609df2', 'width': 108}, {'height': 127, 'url': 'h... |
I've built a deterministic execution gate. Can you help break it? | 0 | I’ve been working on a small execution authority layer aimed at preventing duplicate irreversible actions under retries, race conditions, and replay.
It’s not a framework or a queue. It’s a deterministic gate that decides whether an action is allowed to commit.
In the current demo scope, it’s designed to:
Allow exactly... | 2026-02-21T06:50:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rakars/ive_built_a_deterministic_execution_gate_can_you/ | Agent_invariant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rakars | false | null | t3_1rakars | /r/LocalLLaMA/comments/1rakars/ive_built_a_deterministic_execution_gate_can_you/ | false | false | self | 0 | null |
Show HN-style: I built a local AI assistant that's just pip install salmalm — no Docker, no config, 62 tools | 2 | Hey r/LocalLLaMA,
I've been building a personal AI gateway called SalmAlm and wanted to share it for feedback.
pip install salmalm
salmalm
\# → [http://localhost:18800](http://localhost:18800)
That's it. No Docker, no Node.js, no config files. Pure Python stdlib.
What it does:
• Multi-provider routing ... | 2026-02-21T06:38:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rak2qd/show_hnstyle_i_built_a_local_ai_assistant_thats/ | Plastic_Asparagus_97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rak2qd | false | null | t3_1rak2qd | /r/LocalLLaMA/comments/1rak2qd/show_hnstyle_i_built_a_local_ai_assistant_thats/ | false | false | self | 2 | null |
15,000+ tok/s on ChatJimmy: Is the "Model-on-Silicon" era finally starting? | 72 | We’ve been discussing local inference for years, but chatjimmy.ai just moved the goalposts. They are hitting 15,414 tokens per second using what they call "mask ROM recall fabric"—basically etching the model weights directly into the silicon logic.
This is a massive shift from our current setups. We’re used to general... | 2026-02-21T06:19:57 | https://www.reddit.com/gallery/1rajr11 | Significant-Topic433 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rajr11 | false | null | t3_1rajr11 | /r/LocalLLaMA/comments/1rajr11/15000_toks_on_chatjimmy_is_the_modelonsilicon_era/ | false | false | 72 | null | |
best general model for 120GB vram and 64GB DDR5 | 0 | I have a system with 120GB vram and then 64GB DDR5 on a 9950x. Just curious what others think is the best model...or if anything is better than Minimax 2.1 Q4 or qwen3 Q4 as I can get those to fit... | 2026-02-21T06:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rajm7w/best_general_model_for_120gb_vram_and_64gb_ddr5/ | applegrcoug | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rajm7w | false | null | t3_1rajm7w | /r/LocalLLaMA/comments/1rajm7w/best_general_model_for_120gb_vram_and_64gb_ddr5/ | false | false | self | 0 | null |
I stopped paying for API calls 6 weeks ago — here's the local stack that replaced them (and what surprised me) | 1 | [removed] | 2026-02-21T06:03:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rajg08/i_stopped_paying_for_api_calls_6_weeks_ago_heres/ | Visible_Homework_477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rajg08 | false | null | t3_1rajg08 | /r/LocalLLaMA/comments/1rajg08/i_stopped_paying_for_api_calls_6_weeks_ago_heres/ | false | false | self | 1 | null |
what are your favorite lesser known models on huggingface | 39 | I'm a professor, I want to expand my students' minds by showing them models that are not ChatGPT etc. Anyone have some unique / interesting / useful models hosted on huggingface? | 2026-02-21T06:01:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rajez2/what_are_your_favorite_lesser_known_models_on/ | EngineeringBright82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rajez2 | false | null | t3_1rajez2 | /r/LocalLLaMA/comments/1rajez2/what_are_your_favorite_lesser_known_models_on/ | false | false | self | 39 | null |
Old Rig (3070, 32GB DDR3, i7-4790) suggestions for running local models + expectation setting? | 0 | Hi all,
Thanks in advance for entertaining another "what can I run?" post.
Not in a position to make any hardware investments, but would like to jump into running local models with what I got, even just for personal education on practically deploying from scratch and experimenting or better understanding model use ... | 2026-02-21T05:21:27 | https://www.reddit.com/r/LocalLLaMA/comments/1raio5q/old_rig_3070_32gb_ddr3_i74790_suggestions_for/ | rabbits_for_carrots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raio5q | false | null | t3_1raio5q | /r/LocalLLaMA/comments/1raio5q/old_rig_3070_32gb_ddr3_i74790_suggestions_for/ | false | false | self | 0 | null |
Linear Attention (Gated DeltaNet) - How does it impact reasoning? | 0 | Qwen3.5 uses a hybrid setup. Does the linear attention degrade complex logic, or does the hybrid approach fix that? | 2026-02-21T05:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/1raiher/linear_attention_gated_delt_anet_how_does_it/ | Hot_Supermarket9039 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raiher | false | null | t3_1raiher | /r/LocalLLaMA/comments/1raiher/linear_attention_gated_delt_anet_how_does_it/ | false | false | self | 0 | null |
Can we run Qwen3.5 on a 24GB VRAM card? | 0 | With 397B total params, obviously not fully loaded, but with offloading, is it bearable? | 2026-02-21T04:50:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rai2v5/can_we_run_qw_en35_on_a_24gb_vram_card/ | Hot_Supermarket9039 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rai2v5 | false | null | t3_1rai2v5 | /r/LocalLLaMA/comments/1rai2v5/can_we_run_qw_en35_on_a_24gb_vram_card/ | false | false | self | 0 | null |
Releasing OpenRA-RL: A full-fledged RTS environment for local AI Agents (Open-Source, 1-line install) | 3 | We are a team of researchers that love gaming and messing up weights and biases, and today we are releasing [OpenRA-RL](https://openra-rl.dev/).
We are launching a **full-fledged environment for AI Agents to play real-time strategy (RTS) games**. Right now, your local models can connect to this environment, observe th... | 2026-02-21T04:19:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rahgv3/releasing_openrarl_a_fullfledged_rts_environment/ | QuirkyDream6928 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rahgv3 | false | null | t3_1rahgv3 | /r/LocalLLaMA/comments/1rahgv3/releasing_openrarl_a_fullfledged_rts_environment/ | false | false | self | 3 | null |
Github: When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models AKA Inheritune | 2 | 2026-02-21T03:42:26 | https://github.com/sanyalsunny111/LLM-Inheritune | Thrumpwart | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ragqgk | false | null | t3_1ragqgk | /r/LocalLLaMA/comments/1ragqgk/github_when_attention_collapses_how_degenerate/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'fATGTkNeuWoM9kcOsuaQ76gYsqrMSHQRoWS5VAgwxnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fATGTkNeuWoM9kcOsuaQ76gYsqrMSHQRoWS5VAgwxnI.png?width=108&crop=smart&auto=webp&s=555366163c4223c6c880bc67792f9a1b3d5ccfdb', 'width': 108}, {'height': 108, 'url': 'h... |