| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
meanwhile in China | 532 | 2026-02-24T04:42:30 | https://v.redd.it/j4ujf22ngdlg1 | Tiny_Judge_2119 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd64c5 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/j4ujf22ngdlg1/DASHPlaylist.mpd?a=1774500172%2CMDNhNTQxMTI2N2IxMDIxMTU2NDkzNjlkZWViZThhNjY2Yjc3Nzg2ZmUzZDMyNjg3YjMwNWVkNGYxN2U1YTNhMg%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/j4ujf22ngdlg1/CMAF_720.mp4?source=fallback', 'ha... | t3_1rd64c5 | /r/LocalLLaMA/comments/1rd64c5/meanwhile_in_china/ | false | false | 532 | {'enabled': False, 'images': [{'id': 'bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW.png?width=108&crop=smart&format=pjpg&auto=webp&s=c64c0c3b585b95d4ffce8df38f03fead49abd... | ||
Claude sonnet 4.6 says it's DeepSeek when system prompt is empty | 0 | Empty the system prompt and ask its name in Chinese,it will response it’s DeepSeek. Apparently distilled from DeepSeek and other Chinese models but accusing them , how ironic and double standard | 2026-02-24T04:37:49 | https://www.reddit.com/gallery/1rd5y2u | Separate_Tip_8215 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rd5y2u | false | null | t3_1rd5y2u | /r/LocalLLaMA/comments/1rd5y2u/claude_sonnet_46_says_its_deepseek_when_system/ | false | false | 0 | null | |
experimented with openclaw - am I missing something? | 1 | I like the interface, and being able to queue off tasks but for the most part it's just as interactive as using the website. I also tried to link it to chrome with the openclaw extension but had a lot of difficulty getting that to work (it kept saying 18792 relay not connected). No matter what token I used. I ended up ... | 2026-02-24T03:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rd4ekg/experimented_with_openclaw_am_i_missing_something/ | retrorays | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd4ekg | false | null | t3_1rd4ekg | /r/LocalLLaMA/comments/1rd4ekg/experimented_with_openclaw_am_i_missing_something/ | false | false | self | 1 | null |
Hot Take: 90% of RAG failure is bad data parsing, not the LLM. | 1 | [removed] | 2026-02-24T03:41:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rd3yes/hot_take_90_of_rag_failure_is_bad_data_parsing/ | Thisath_Thewnitha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd3yes | false | null | t3_1rd3yes | /r/LocalLLaMA/comments/1rd3yes/hot_take_90_of_rag_failure_is_bad_data_parsing/ | false | false | self | 1 | null |
People are getting it wrong; Anthropic doesn't care about the distillation, they just want to counter the narrative about Chinese open-source models catching up with closed-source frontier models | 763 | Why would they care about distillation when they probably have done the same with OpenAI models and the Chinese labs are paying for the tokens? This is just their attempt to explain to investors and the US government that cheap Chinese models will never be as good as their models without distillation or stealing model ... | 2026-02-24T02:54:22 | obvithrowaway34434 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd2x61 | false | null | t3_1rd2x61 | /r/LocalLLaMA/comments/1rd2x61/people_are_getting_it_wrong_anthropic_doesnt_care/ | false | false | 763 | {'enabled': True, 'images': [{'id': '1ulaheylwclg1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?width=108&crop=smart&auto=webp&s=a4af69a728630c1c414b9af1441c3eba5c75fafb', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?width=216&crop=smart&auto=web... | ||
Round 2: Quick MoE quantization comparison: LFM2-8B-A1B, OLMoE-1B-7B-0924-Instruct, granite-4.0-h-tiny | 35 | I chose three small, recent, and different MoE models that fit my VRAM for a quick assessment (these are not models I actually use). The goal is to check on MXFP4 and evaluate the smallest quantization variants. For the non initiated: KLD (KL Divergence): Measures "Faithfulness." It shows how much the quantized mode... | 2026-02-24T02:28:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rd2cdu/round_2_quick_moe_quantization_comparison/ | TitwitMuffbiscuit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd2cdu | false | null | t3_1rd2cdu | /r/LocalLLaMA/comments/1rd2cdu/round_2_quick_moe_quantization_comparison/ | false | false | 35 | null |
Rasbery Pi 5 16 GB 9k context running byteshape devstral and goose ai agent coder framework. by extending timeout. roo code kilo code on rasbery pi next? | 0 | # ByteShape Devstral Time Out Increased scripts for Raspberry Pi 5 16GB running Goose Ai Agent Coder Framework I got goose to run on rasbary pi 5 16gb with devstral a vision model at 12k context 98 minute response time. 53 minutes 9k context I think. What SYSTEM prompt would you use to stylise your assistant agent co... | 2026-02-24T02:15:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rd223u/rasbery_pi_5_16_gb_9k_context_running_byteshape/ | Josheeg39 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd223u | false | null | t3_1rd223u | /r/LocalLLaMA/comments/1rd223u/rasbery_pi_5_16_gb_9k_context_running_byteshape/ | false | false | self | 0 | null |
Exclusive: China's DeepSeek trained AI model on Nvidia's best chip despite US ban, official says | 188 | 2026-02-24T02:05:11 | https://www.reuters.com/world/china/chinas-deepseek-trained-ai-model-nvidias-best-chip-despite-us-ban-official-says-2026-02-24/ | blahblahsnahdah | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 1rd1tj9 | false | null | t3_1rd1tj9 | /r/LocalLLaMA/comments/1rd1tj9/exclusive_chinas_deepseek_trained_ai_model_on/ | false | false | 188 | {'enabled': False, 'images': [{'id': 'LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY.jpeg?width=108&crop=smart&auto=webp&s=29b2cbcf357039ad05158e65f169dd591768095a', 'width': 108}, {'height': 113, 'url': '... | ||
American vs Chinese AI is a false narrative. | 235 | **TL;DR:** The real war (IF there is one) is between closed source and open source. Don't fall/propagate the America vs China narrative which is just tactics to get investors to loosen pursestrings and lawmakers/politicians to acquiesce to demands. -------------- There's been an uptick of nationalistic posts (mostly... | 2026-02-24T01:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rd1lmz/american_vs_chinese_ai_is_a_false_narrative/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd1lmz | false | null | t3_1rd1lmz | /r/LocalLLaMA/comments/1rd1lmz/american_vs_chinese_ai_is_a_false_narrative/ | false | false | self | 235 | null |
The Physical Renaissance: The Brutal Aesthetics of Hardwiring AI into Silicon | 1 | [removed] | 2026-02-24T01:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rd1jg3/the_physical_renaissance_the_brutal_aesthetics_of/ | MarsQiu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd1jg3 | false | null | t3_1rd1jg3 | /r/LocalLLaMA/comments/1rd1jg3/the_physical_renaissance_the_brutal_aesthetics_of/ | false | false | 1 | null | |
Experimenting with Qwen3-VL-32B | 2 | I'd like to put a model specifically of this size to the test to see the performance gap between smaller models and medium-sized models for my complex ternary (three-way) text classification task. I will tune using RL-esque methods. Should I tune Qwen 3 32B VL Thinking or Instruct? Which is the best one to tune for 1,... | 2026-02-24T01:52:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rd1h6s/experimenting_with_qwen3vl32b/ | Extra-Campaign7281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd1h6s | false | null | t3_1rd1h6s | /r/LocalLLaMA/comments/1rd1h6s/experimenting_with_qwen3vl32b/ | false | false | self | 2 | null |
Hey Is anyone interested in Pretraining a 3b Or 7b model from scratch? | 1 | As the title says and to keep the Cost down We'll not train more then 100-150B tokens in Pretraining. For 3b the Cost might be 300-500 USD, 7B 2K USD or similar in range. If anyone is interested then surely DM, and ofcourse we'll open source it | 2026-02-24T01:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rd1gb3/hey_is_anyone_interested_in_pretraining_a_3b_or/ | Vegetable_Prompt_583 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd1gb3 | false | null | t3_1rd1gb3 | /r/LocalLLaMA/comments/1rd1gb3/hey_is_anyone_interested_in_pretraining_a_3b_or/ | false | false | self | 1 | null |
Running autonomous agents locally feels reckless. Am I overthinking this? | 4 | I’ve been experimenting with OpenClaw-style autonomous agents recently. The thing that keeps bothering me: They have filesystem access. They have network access. They can execute arbitrary code. Even if the model isn’t “malicious,” a bad tool call or hallucinated shell command could do real damage. I realized m... | 2026-02-24T01:21:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rd0mj6/running_autonomous_agents_locally_feels_reckless/ | tallen0913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd0mj6 | false | null | t3_1rd0mj6 | /r/LocalLLaMA/comments/1rd0mj6/running_autonomous_agents_locally_feels_reckless/ | false | false | self | 4 | null |
Steerling-8B, a language model that can explain any token it generates | 1 | [removed] | 2026-02-24T01:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rd0mcc/steerling8b_a_language_model_that_can_explain_any/ | luulinh90s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd0mcc | false | null | t3_1rd0mcc | /r/LocalLLaMA/comments/1rd0mcc/steerling8b_a_language_model_that_can_explain_any/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=108&crop=smart&auto=webp&s=bf79eb94119bcc41fbb34bcef106e0a0aef0cfbe', 'width': 108}, {'height': 113, 'url': 'h... |
Steerling-8B, a language model that can explain any token it generates | 1 | We are releasing Steerling-8B, the first interpretable model that can trace any token it generates to its input context, concepts a human can understand, and its training data. Trained on 1.35 trillion tokens, the model achieves downstream performance within range of models trained on 2–7× more data. Steerling-8B unlo... | 2026-02-24T01:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rd0jal/steerling8b_a_language_model_that_can_explain_any/ | luulinh90s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd0jal | false | null | t3_1rd0jal | /r/LocalLLaMA/comments/1rd0jal/steerling8b_a_language_model_that_can_explain_any/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=108&crop=smart&auto=webp&s=bf79eb94119bcc41fbb34bcef106e0a0aef0cfbe', 'width': 108}, {'height': 113, 'url': 'h... |
The Physical Renaissance: The Brutal Aesthetics of Hardwiring AI into Silicon | 0 | While the world scrambles for NVIDIA’s high-end GPUs, a Toronto-based startup called **Taalas** is bucking the trend. They are ditching liquid cooling, abandoning HBM (High Bandwidth Memory), and sacrificing general-purpose flexibility. Through what they call “Physical Aesthetics,” they are etching Large Language Model... | 2026-02-24T01:09:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rd0ans/the_physical_renaissance_the_brutal_aesthetics_of/ | MarsQiu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd0ans | false | null | t3_1rd0ans | /r/LocalLLaMA/comments/1rd0ans/the_physical_renaissance_the_brutal_aesthetics_of/ | false | false | 0 | null | |
Seeking reliable AI tools/scripts for batch tagging thousands of legal/academic PDFs and DOCX files | 3 | Hi all, I have thousands of documents (.docx and PDFs) accumulated over years, covering legal/political/economic topics. They're in folders but lack consistent metadata or tags, making thematic searches impossible without manual review—which isn't feasible. I'm looking for practical solutions to auto-generate tag... | 2026-02-24T00:25:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rcz6nx/seeking_reliable_ai_toolsscripts_for_batch/ | jatovarv88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcz6nx | false | null | t3_1rcz6nx | /r/LocalLLaMA/comments/1rcz6nx/seeking_reliable_ai_toolsscripts_for_batch/ | false | false | self | 3 | null |
Which model to chose? | 3 | Hello guys, I have an RTX 4080 with 16GB VRAM and 64GB of DDR5 RAM. I want to run some coding models where I can give a task either via a prompt or an agent and let the model work on it while I do something else. I am not looking for speed. My goal is to submit a task to the model and have it produce quality code for... | 2026-02-24T00:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rcyzvl/which_model_to_chose/ | toorhax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcyzvl | false | null | t3_1rcyzvl | /r/LocalLLaMA/comments/1rcyzvl/which_model_to_chose/ | false | false | self | 3 | null |
What models are you eagerly anticipating or wishing for? | 24 | Just out of curiosity, I've been wishing for three particular LLMs, and curious what other people are wishing for also. | 2026-02-24T00:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rcyy8j/what_models_are_you_eagerly_anticipating_or/ | jinnyjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcyy8j | false | null | t3_1rcyy8j | /r/LocalLLaMA/comments/1rcyy8j/what_models_are_you_eagerly_anticipating_or/ | false | false | self | 24 | null |
Qwen 3 coder next ud-q8-xl F16 filling up the two orin rpc mesh! | 25 | running great and as you can see here llama.cpp -fit is doing a great job at splitting this evenly . the largest piece of traffic between these two during initial tensor transfer was <5Gbps | 2026-02-23T23:47:24 | https://v.redd.it/hvlsxvdyzblg1 | braydon125 | /r/LocalLLaMA/comments/1rcy5wv/qwen_3_coder_next_udq8xl_f16_filling_up_the_two/ | 1970-01-01T00:00:00 | 0 | {} | 1rcy5wv | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hvlsxvdyzblg1/DASHPlaylist.mpd?a=1774618433%2CMGYyNTRiZGJkYzdkMjIzYTU3ODY0MzIxNmJmMjNkNjg4NGVmNTJhNmRlN2ZjNDU3YTM2NDcwNTNlZjBkZGQwYw%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/hvlsxvdyzblg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rcy5wv | /r/LocalLLaMA/comments/1rcy5wv/qwen_3_coder_next_udq8xl_f16_filling_up_the_two/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y.png?width=108&crop=smart&format=pjpg&auto=webp&s=761720f3d3efeefa84c6e0666fb6f7f8e563... | |
Today, I’m introducing something we’ve been quietly building. Meet Keovil. | 1 | [removed] | 2026-02-23T23:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rcxvm5/today_im_introducing_something_weve_been_quietly/ | Interesting-Yam2001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcxvm5 | false | null | t3_1rcxvm5 | /r/LocalLLaMA/comments/1rcxvm5/today_im_introducing_something_weve_been_quietly/ | false | false | self | 1 | null |
How Do Backends Like Ollama, LMStudio, etc. Adapt to All The Different Chat Templates of The Various Models They Support? | 6 | Same as Title, I go through the chat templates of different small local models (GLM-4.7-Flash, Nanbeige-4.1-3b, GPT-OSS-20B, etc.) and see that all of them have different chat templates and formats. I am trying to use mlx-lm to run these models and parse the response into reasoning and content blocks but the change in ... | 2026-02-23T23:31:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rcxrs4/how_do_backends_like_ollama_lmstudio_etc_adapt_to/ | Solus23451 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcxrs4 | false | null | t3_1rcxrs4 | /r/LocalLLaMA/comments/1rcxrs4/how_do_backends_like_ollama_lmstudio_etc_adapt_to/ | false | false | self | 6 | null |
Opencode Manager - New Release | 6 | [https://github.com/chriswritescode-dev/opencode-manager](https://github.com/chriswritescode-dev/opencode-manager) * [Optional Memory Plugin ](https://www.npmjs.com/package/@opencode-manager/memory) * Enhanced Git commit view https://reddit.com/link/1rcwsl2/video/l073ir0aqblg1/player | 2026-02-23T22:52:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rcwsl2/opencode_manager_new_release/ | getfitdotus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcwsl2 | false | null | t3_1rcwsl2 | /r/LocalLLaMA/comments/1rcwsl2/opencode_manager_new_release/ | false | false | 6 | null |
I don't have any clue on the code that claude wrote | 0 | I've always been fascinated of people that know how to code, remember functions and ending up building something that doesn't crash, like a real language they could speak. I can have ideas and understand logic, but never got into learning these languages, fortunately I can get some help now. Everytime something works... | 2026-02-23T22:39:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rcwg9p/i_dont_have_any_clue_on_the_code_that_claude_wrote/ | CRYPT_EXE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcwg9p | false | null | t3_1rcwg9p | /r/LocalLLaMA/comments/1rcwg9p/i_dont_have_any_clue_on_the_code_that_claude_wrote/ | false | false | self | 0 | null |
Technical question about MOE and Active Parameters | 3 | Minimax's model card on LM Studio says: \> MiniMax-M2 is a Mixture of Experts (MoE) model (230 billion total parameters with 10 billion active parameters) \> To run the smallest minimax-m2, you need at least 121 GB of RAM. Does that mean my VRAM only needs to hold 10b parameters at a time? And I can hold the rest on... | 2026-02-23T22:33:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rcwa1d/technical_question_about_moe_and_active_parameters/ | _manteca | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcwa1d | false | null | t3_1rcwa1d | /r/LocalLLaMA/comments/1rcwa1d/technical_question_about_moe_and_active_parameters/ | false | false | self | 3 | null |
Inference Engineering (Book) | 1 | 2026-02-23T22:32:38 | philipkiely | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcw9dw | false | null | t3_1rcw9dw | /r/LocalLLaMA/comments/1rcw9dw/inference_engineering_book/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'zfvtaz8kmblg1', 'resolutions': [{'height': 204, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?width=108&crop=smart&auto=webp&s=cf085b4d6d8fec83b2ed0d9e2680475c868d8dec', 'width': 108}, {'height': 408, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?width=216&crop=smart&auto=we... | |||
Qwen 3 Next Coder Hallucinating Tools? | 4 | Anyone else experiencing this? I was workshopping a website prototype when I noticed it got stuck in a loop continuously attempting to "make" the website infrastructor itself. [Qwen 3 Coder Next hallucinating tool call in LM Studio](https://preview.redd.it/d147gfsolblg1.png?width=1218&format=png&auto=webp&s=e8319a814e... | 2026-02-23T22:27:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rcw4sk/qwen_3_next_coder_hallucinating_tools/ | CSEliot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcw4sk | false | null | t3_1rcw4sk | /r/LocalLLaMA/comments/1rcw4sk/qwen_3_next_coder_hallucinating_tools/ | false | false | 4 | null |
Looking to switch away from codex | 2 | Anything similar in the open source you recommend for coding purpose | 2026-02-23T22:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rcvuw0/looking_to_switch_away_from_codex/ | apunker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcvuw0 | false | null | t3_1rcvuw0 | /r/LocalLLaMA/comments/1rcvuw0/looking_to_switch_away_from_codex/ | false | false | self | 2 | null |
Distillation when you do it. Training when we do it. | 3,196 | 2026-02-23T22:04:41 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcvimv | false | null | t3_1rcvimv | /r/LocalLLaMA/comments/1rcvimv/distillation_when_you_do_it_training_when_we_do_it/ | false | false | 3,196 | {'enabled': True, 'images': [{'id': '9rc0jqbohblg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/9rc0jqbohblg1.jpeg?width=108&crop=smart&auto=webp&s=7d4a14a37954dc95a98e8a432e3f2822dc1af836', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/9rc0jqbohblg1.jpeg?width=216&crop=smart&auto=... | |||
Amber ICI | 0 | If you run ollama models for OSINT work or need to keep things local with zero telemetry, you may be in interested this slick interface that allows you to put agents and chains to work for your needs. [https://github.com/gs-ai/AMBER-ICI](https://github.com/gs-ai/AMBER-ICI) | 2026-02-23T21:58:13 | FreonMuskOfficial | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcvc8d | false | null | t3_1rcvc8d | /r/LocalLLaMA/comments/1rcvc8d/amber_ici/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'au60fketfblg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/au60fketfblg1.png?width=108&crop=smart&auto=webp&s=6190d9de66e00f869296e06a557437f87eed0bde', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/au60fketfblg1.png?width=216&crop=smart&auto=web... | ||
Looking for local AI agent driven coding environment. | 0 | Was wanting to get some recommends for a local dev environment. I'm wanting something that is AI driven to write the code but allows me to follow along in a IDE and make changes manually if I choose to do so. Generally I want to write web apps in react, node.js, java script or just html. But, I want something that ca... | 2026-02-23T21:53:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rcv7zc/looking_for_local_ai_agent_driven_coding/ | nealhamiltonjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcv7zc | false | null | t3_1rcv7zc | /r/LocalLLaMA/comments/1rcv7zc/looking_for_local_ai_agent_driven_coding/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY.png?width=108&crop=smart&auto=webp&s=906fe58b744d379700b6668d2aea8b08a559c006', 'width': 108}, {'height': 108, 'url': 'h... |
The Model Is the Orchestrator | 0 | \# \*\*Lessons from 10 Autonomous Multi-Agent Software Builds Without Programmatic Scaffolding — A Case Study\*\* February 2026 · Working Draft Corpus: 88 Codex worker sessions · 10 Claude orchestrator sessions · 295M tokens · 6.1M lines of worker output · 3 controlled ablation experiments · 1 scope contamina... | 2026-02-23T21:53:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rcv7pn/the_model_is_the_orchestrator/ | No-Student6539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcv7pn | false | null | t3_1rcv7pn | /r/LocalLLaMA/comments/1rcv7pn/the_model_is_the_orchestrator/ | false | false | self | 0 | null |
Best Waifu/gooning AI you've ever used under 30b ? | 0 | Curious too hear | 2026-02-23T21:51:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rcv5cq/best_waifugooning_ai_youve_ever_used_under_30b/ | Opening-Ad6258 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcv5cq | false | null | t3_1rcv5cq | /r/LocalLLaMA/comments/1rcv5cq/best_waifugooning_ai_youve_ever_used_under_30b/ | false | false | self | 0 | null |
Building infrastructure that lets local agents hire & pay humans for real-world tasks (looking for perspectives & critiques) | 0 | Hey everyone, *(Disclosure: I’m the builder. Posting here because this community consistently has some of the strongest opinions on agents and tooling, and I’m much more interested in technical feedback than hype.)* You’ve probably seen [RentAHuman.ai](http://RentAHuman.ai) go viral recently. That wave resonated with... | 2026-02-23T21:45:01 | https://v.redd.it/vkty64dwcblg1 | IntelligentAbroad729 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcuzc9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vkty64dwcblg1/DASHPlaylist.mpd?a=1774475127%2CYzY0YTIwNDg3MDE5N2I5YzMxYzIzMjVlMGVhMmZiNWJmZTJiOTk4MTlkNTc1Njc3NTMwOTNmZjlkNDNkMGQ1Mg%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/vkty64dwcblg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rcuzc9 | /r/LocalLLaMA/comments/1rcuzc9/building_infrastructure_that_lets_local_agents/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=c0d6f74dd15aac388b10003cd26fd33e0ff9e... |
the internet scrapers are now devastated that someone scraped their weights | 14 | turns out scraping is only bad when it happens to you. | 2026-02-23T21:38:23 | dictionizzle | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcuswx | false | null | t3_1rcuswx | /r/LocalLLaMA/comments/1rcuswx/the_internet_scrapers_are_now_devastated_that/ | false | false | 14 | {'enabled': True, 'images': [{'id': 'hwncjpizcblg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/hwncjpizcblg1.jpeg?width=108&crop=smart&auto=webp&s=c189cb50fe4c8302714c2a5ff770eb2244ceca54', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/hwncjpizcblg1.jpeg?width=216&crop=smart&auto=... | ||
Building an infrastructure that lets local agents hire & pay humans for real-world tasks (looking for perspectives & critiques) | 1 | 2026-02-23T21:34:59 | https://v.redd.it/fwb4qdwxbblg1 | IntelligentAbroad729 | /r/LocalLLaMA/comments/1rcupp1/building_an_infrastructure_that_lets_local_agents/ | 1970-01-01T00:00:00 | 0 | {} | 1rcupp1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fwb4qdwxbblg1/DASHPlaylist.mpd?a=1774609869%2CYmNkN2E2N2ZjMmU0N2IxNzY3NDEyYThjNmVkNTdhZWNjYjcwZmYzMGM3MWQwZmZjZTZhYzkwMmUxMDI5NDU4MA%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/fwb4qdwxbblg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rcupp1 | /r/LocalLLaMA/comments/1rcupp1/building_an_infrastructure_that_lets_local_agents/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=bd7e4b1807428594b7126dd3c6409882f765a... | ||
What LLM subscriptions are you using for coding in 2026? | 3 | I've evaluated Chutes, Kimi, MiniMax, and [Z.ai](http://Z.ai) for coding workflows but want to hear from the community. What LLM subscriptions are you paying for in 2026? Any standout performers for code generation, debugging, or architecture discussions? | 2026-02-23T21:33:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rcuosf/what_llm_subscriptions_are_you_using_for_coding/ | Embarrassed_Bread_16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcuosf | false | null | t3_1rcuosf | /r/LocalLLaMA/comments/1rcuosf/what_llm_subscriptions_are_you_using_for_coding/ | false | false | self | 3 | null |
Building a service and PWA for Ollama (and other models) with SQLite RAG and artifacts. Is this project interesting to the community? | 1 | Hi everyone! For almost a year, I’ve been working on a project that serves as a smart, functional, and secure UI for LLM models. There are many ready-made solutions, but most often they require complex Docker setups or writing configurations. Projects with a simpler launch but similar functionality are usually paid. M... | 2026-02-23T21:29:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rcukq3/building_a_service_and_pwa_for_ollama_and_other/ | pokemondodo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcukq3 | false | null | t3_1rcukq3 | /r/LocalLLaMA/comments/1rcukq3/building_a_service_and_pwa_for_ollama_and_other/ | false | false | self | 1 | null |
Serious question: do you think Dario (or any other major AI players or political players) have enough power and influence that they will get Chinese local AI and/or local AI in general banned in the U.S.? What do you think the odds are? | 30 | I guess I'll put Dario in the title, since he's the most relevant hater of the day, and I guess fairly powerful in regards to this as far as any one specific guy goes, but, obviously if something like this happened, it would involve a lot more people combining their powers than just Dario alone. Anyway, curious what y... | 2026-02-23T21:19:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rcuaip/serious_question_do_you_think_dario_or_any_other/ | DeepOrangeSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcuaip | false | null | t3_1rcuaip | /r/LocalLLaMA/comments/1rcuaip/serious_question_do_you_think_dario_or_any_other/ | false | false | self | 30 | null |
Why crypto UX is broken & how agents might fix it | 0 | **Why First-Time DeFi Users Abandon Transactions: The Crypto Onboarding Problem** According to data from Dune analytics, roughly 73% of first-time DeFi users abandon their transaction after they encounter their first error or failure. A significant portion of those new users (37%) only perform a single transaction, wh... | 2026-02-23T21:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rcu9f5/why_crypto_ux_is_broken_how_agents_might_fix_it/ | AgentAiLeader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcu9f5 | false | null | t3_1rcu9f5 | /r/LocalLLaMA/comments/1rcu9f5/why_crypto_ux_is_broken_how_agents_might_fix_it/ | false | false | self | 0 | null |
Anthropic today | 301 | While I generally do not agree with the misuse of others' property, this statement is ironic coming from Anthropic. | 2026-02-23T21:16:10 | PaceImaginary8610 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcu741 | false | null | t3_1rcu741 | /r/LocalLLaMA/comments/1rcu741/anthropic_today/ | false | false | 301 | {'enabled': True, 'images': [{'id': 'mfd5i5tr8blg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/mfd5i5tr8blg1.jpeg?width=108&crop=smart&auto=webp&s=765389e3ed63aeebfcc989a4ebff01d06862f0c6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/mfd5i5tr8blg1.jpeg?width=216&crop=smart&auto=... | ||
Best small local LLM to run on a phone? | 10 | Hey folks, what is the best local LLM to run on your phone? Looking for a small enough model that actually feels smooth and useful. I have tried **Llama 3.2 3B**, **Gemma 1.1 2B** and they are somewhat ok for small stuff, but wanted to know if anyone has tried it. Also curious if anyone has experience running models f... | 2026-02-23T20:56:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rctpx4/best_small_local_llm_to_run_on_a_phone/ | alexndb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rctpx4 | false | null | t3_1rctpx4 | /r/LocalLLaMA/comments/1rctpx4/best_small_local_llm_to_run_on_a_phone/ | false | false | self | 10 | null |
Which embedding model do you suggest that is compatible with "Zvec", that I can fit entirely on 8 GB VRAM? | 1 | https://github.com/alibaba/zvec
This tool relies on the CPU: good single-core speed and AVX/SIMD support matter, as Zvec uses these to speed up vector math without a GPU.
I'm planning to run an AI model (like Llama-3 or Mistral) alongside Zvec:
. the Embedding Model (which turns text into vectors for Zvec to store) usually... | 2026-02-23T20:55:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rctou1/which_embedding_model_do_you_suggest_that_is/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rctou1 | false | null | t3_1rctou1 | /r/LocalLLaMA/comments/1rctou1/which_embedding_model_do_you_suggest_that_is/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM.png?width=108&crop=smart&auto=webp&s=90c30868330e27846184849956b6eaa463135baa', 'width': 108}, {'height': 108, 'url': 'h... |
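The split described in the Zvec post above (an embedding model turns text into vectors, which a CPU-side store then indexes and searches) can be sketched generically. The `VectorStore` class below is a toy stand-in, not Zvec's real API, and the hand-written 2-d vectors stand in for real embedding-model output:

```python
import math

class VectorStore:
    """Toy stand-in for a CPU-side vector store like Zvec (hypothetical API)."""
    def __init__(self):
        self.items = []  # list of (vector, text) pairs

    def add(self, vec, text):
        self.items.append((vec, text))

    def search(self, query, k=1):
        # Rank stored texts by cosine similarity to the query vector.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb + 1e-9)
        ranked = sorted(self.items, key=lambda it: cosine(it[0], query), reverse=True)
        return [(text, cosine(vec, query)) for vec, text in ranked[:k]]

# Toy 2-d vectors; in the poster's setup these would come from an embedding
# model on the 8 GB GPU, while the store itself stays on the CPU.
store = VectorStore()
store.add([1.0, 0.0], "doc about GPUs")
store.add([0.0, 1.0], "doc about cooking")
print(store.search([0.9, 0.1], k=1))
```

This also shows why the embedding model and the store are independent choices: any model whose output dimension is fixed per collection will do, so the VRAM budget constrains only the embedder.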
we can't upvote Elon Musk, this is reddit :) | 326 | 2026-02-23T20:47:05 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rctg3y | false | null | t3_1rctg3y | /r/LocalLLaMA/comments/1rctg3y/we_cant_upvote_elon_musk_this_is_reddit/ | false | false | 326 | {'enabled': True, 'images': [{'id': '4sskgcvr3blg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?width=108&crop=smart&auto=webp&s=87ac07799d3574ce5dd4b5792978c1c18078ab4c', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?width=216&crop=smart&auto=web... | |||
I’m building a tool to help ML engineers automatically optimize their models for lower energy consumption. | 0 | Would you use it? What’s the biggest pain point? | 2026-02-23T20:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rctamr/im_building_a_tool_to_help_ml_engineers/ | Loud-Association7455 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rctamr | false | null | t3_1rctamr | /r/LocalLLaMA/comments/1rctamr/im_building_a_tool_to_help_ml_engineers/ | false | false | self | 0 | null |
We analyzed 10,000 OpenClaw GitHub stars. Here’s what we found. | 0 | A lot of people here questioned OpenClaw’s star growth right before the OpenAI acquisition.
The curve looked almost too clean. Sudden spike. Perfect timing.
Last week there was even a viral thread here raising the same concern. Plenty of engineers suspected bot-driven hype.
Instead of speculating, we pulled data.
W... | 2026-02-23T20:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rct01j/we_analyzed_10000_openclaw_github_stars_heres/ | Fancy-Exit-6954 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rct01j | false | null | t3_1rct01j | /r/LocalLLaMA/comments/1rct01j/we_analyzed_10000_openclaw_github_stars_heres/ | false | false | self | 0 | null |
MiniMax 2.5 with 8x+ concurrency using RTX 3090s HW Requirements. | 13 | [https://huggingface.co/mratsim/MiniMax-M2.5-BF16-INT4-AWQ/](https://huggingface.co/mratsim/MiniMax-M2.5-BF16-INT4-AWQ/)
So I have 7 RTX 3090s split across 2 servers.
I will need to buy a minimum of 1 more GPU and a better motherboard (to support having all 8 on it) just to trial this model.
However, I ne... | 2026-02-23T20:19:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rcsoju/minimax_25_with_8x_concurrency_using_rtx_3090s_hw/ | BigFoxMedia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcsoju | false | null | t3_1rcsoju | /r/LocalLLaMA/comments/1rcsoju/minimax_25_with_8x_concurrency_using_rtx_3090s_hw/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k.png?width=108&crop=smart&auto=webp&s=178e9735b3856b3d664ffdbbf1b4840c3650992b', 'width': 108}, {'height': 116, 'url': 'h... |
Running an autonomous Slack/Telegram agent swarm natively on a 2W Android phone Has anyone successfully run a local swarm on Termux/Android instead of a VPS? | 0 | I've been experimenting with getting away from cloud APIs. I managed to get a python agent swarm running flawlessly on an old $30 Android using Termux and Ollama (pulling only 2 Watts). It's acting as a Telegram gateway and can execute native bash scripts to check my server health. The hardest part was getting it to gr... | 2026-02-23T20:15:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rcskg0/running_an_autonomous_slacktelegram_agent_swarm/ | Anon-60330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcskg0 | false | null | t3_1rcskg0 | /r/LocalLLaMA/comments/1rcskg0/running_an_autonomous_slacktelegram_agent_swarm/ | false | false | self | 0 | null |
We analyzed 10,000 OpenClaw GitHub stars. Here’s what we found. | 0 | A lot of people here questioned OpenClaw’s star growth right before the OpenAI acquisition.
The curve looked almost too clean. Sudden spike. Perfect timing.
Last week there was even a viral thread here raising the same concern. Plenty of engineers suspected bot-driven hype.
Instead of speculating, we pulled data.
W... | 2026-02-23T20:11:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rcsg1x/we_analyzed_10000_openclaw_github_stars_heres/ | Fancy-Exit-6954 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcsg1x | false | null | t3_1rcsg1x | /r/LocalLLaMA/comments/1rcsg1x/we_analyzed_10000_openclaw_github_stars_heres/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0.png?width=108&crop=smart&auto=webp&s=8dd52bdfdc0e873f44822141b10a8b81960f1c0f', 'width': 108}, {'height': 108, 'url': 'h... |
Fun fact: Anthropic has never open-sourced any LLMs | 755 | I’ve been working on a little side project comparing tokenizer efficiency across different companies’ models for multilingual encoding.
Then I saw Anthropic’s announcement today and suddenly realized: there’s no way to analyze claude’s tokenizer lmao! | 2026-02-23T20:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rcseh1/fun_fact_anthropic_has_never_opensourced_any_llms/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcseh1 | false | null | t3_1rcseh1 | /r/LocalLLaMA/comments/1rcseh1/fun_fact_anthropic_has_never_opensourced_any_llms/ | false | false | self | 755 | null |
Free and Uncensored AI Videos | 1 | [removed] | 2026-02-23T20:06:17 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rcsalu | false | null | t3_1rcsalu | /r/LocalLLaMA/comments/1rcsalu/free_and_uncensored_ai_videos/ | false | false | default | 1 | null | ||
Talking to my to-do list | 138 | Been testing feeding all my to-do list and productivity and having this kinda of desk robot thing as a screen to talk to? all the stuff happens on the pc, the screen is just a display and still for now it is a cloud based ai but I can definitely see this all happening locally in the future *(also better for privacy stu... | 2026-02-23T20:05:37 | https://v.redd.it/xplqhdz7valg1 | llo7d | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcs9vr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xplqhdz7valg1/DASHPlaylist.mpd?a=1774469159%2CNzVkYjFmYzViZmMxMzE1NDQxZDdkZDE5NTUwZTZiNjgxZWY4YzZjNDk4YzNiODAzNTE3OTQzZmY0NTZjZjhjYw%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/xplqhdz7valg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rcs9vr | /r/LocalLLaMA/comments/1rcs9vr/talking_to_my_todo_list/ | false | false | 138 | {'enabled': False, 'images': [{'id': 'YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8.png?width=108&crop=smart&format=pjpg&auto=webp&s=3a77fbb746fee51fb050b44d3a40564f52f6... | |
We analyzed 10,000 OpenClaw GitHub stars. Here’s what we found. | 0 | 2026-02-23T19:55:35 | Fancy-Exit-6954 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcrzj9 | false | null | t3_1rcrzj9 | /r/LocalLLaMA/comments/1rcrzj9/we_analyzed_10000_openclaw_github_stars_heres/ | false | false | 0 | {'enabled': True, 'images': [{'id': '0vsz65imualg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0vsz65imualg1.png?width=108&crop=smart&auto=webp&s=8bfd8b77e444fdcb137a6455c5683e1faf2fec65', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0vsz65imualg1.png?width=216&crop=smart&auto=we... | |||
Strix Halo 128Gb: what models, which quants are optimal? | 19 | Strix Halo APU should not benefit from running large models that have been quantized using MXFP4 (as on Blackwell GPUs). So which models at which quants have you found that do shine on this architecture in GPU only mode? Could it benefit as well from usage of FP4/FP8 formats that are closer to the native format of thes... | 2026-02-23T19:55:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rcrzbn/strix_halo_128gb_what_models_which_quants_are/ | DevelopmentBorn3978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcrzbn | false | null | t3_1rcrzbn | /r/LocalLLaMA/comments/1rcrzbn/strix_halo_128gb_what_models_which_quants_are/ | false | false | self | 19 | null |
Agentic coding with GLM 5 on Mac M3u 512 gb | 14 | I'm running the MLX 4 bit quant and it's actually quite usable. Obviously not nearly as fast as Claude or another API, especially with prompt processing, but as long as you keep context below 50k or so, it feels very usable with a bit of patience.
Wouldn't work for something where you absolutely need 70k+ tokens in co... | 2026-02-23T19:52:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rcrw96/agentic_coding_with_glm_5_on_mac_m3u_512_gb/ | nomorebuttsplz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcrw96 | false | null | t3_1rcrw96 | /r/LocalLLaMA/comments/1rcrw96/agentic_coding_with_glm_5_on_mac_m3u_512_gb/ | false | false | self | 14 | null |
Hello everyone! | 0 | Hi, I just wanted to say hello. I'm new to Reddit and I don't really know how it works :) | 2026-02-23T19:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rcrhts/hola_a_todos/ | VirusPure2413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcrhts | false | null | t3_1rcrhts | /r/LocalLLaMA/comments/1rcrhts/hola_a_todos/ | false | false | self | 0 | null
4-layer memory architecture for local LLMs - full system breakdown | 1 | [removed] | 2026-02-23T19:37:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rcrhbb/4layer_memory_architecture_for_local_llms_full/ | OblivionLabz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcrhbb | false | null | t3_1rcrhbb | /r/LocalLLaMA/comments/1rcrhbb/4layer_memory_architecture_for_local_llms_full/ | false | false | self | 1 | null |
Running Ollama on a 3-node GPU cluster with automatic failover - lessons from building a production local LLM stack | 1 | [removed] | 2026-02-23T19:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rcrfx1/running_ollama_on_a_3node_gpu_cluster_with/ | AI_Engineering_AT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcrfx1 | false | null | t3_1rcrfx1 | /r/LocalLLaMA/comments/1rcrfx1/running_ollama_on_a_3node_gpu_cluster_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y.png?width=108&crop=smart&auto=webp&s=2cdcf5c714b9605fda10e9fa70b8ffdbd06ac2c7', 'width': 108}, {'height': 113, 'url': 'h... |
Hypocrisy? | 438 | 2026-02-23T19:31:17 | pmv143 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcrb2k | false | null | t3_1rcrb2k | /r/LocalLLaMA/comments/1rcrb2k/hypocrisy/ | false | false | 438 | {'enabled': True, 'images': [{'id': 'jxutlq8bqalg1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?width=108&crop=smart&auto=webp&s=3a3b10f745ee76a5ac4b4ad8340c54dd5ebdefc0', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?width=216&crop=smart&auto=w... | |||
gpumod - switching models with mcp | 3 | Hi. I have RTX4090 and when I see a new model, I wanted to test models and then check GGUF files exist or not. And I was testing which one would be the best fit with my machine. Even though I have only 24GB, I found that llama.cpp or vllm can be used with wake / sleep and I can use 1 model for 5 agents. After that, I c... | 2026-02-23T19:30:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rcra4h/gpumod_switching_models_with_mcp/ | jaigouk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcra4h | false | null | t3_1rcra4h | /r/LocalLLaMA/comments/1rcra4h/gpumod_switching_models_with_mcp/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8.png?width=108&crop=smart&auto=webp&s=9f07b7a761a7b2de789e5ea8db322cbf25efca7e', 'width': 108}, {'height': 108, 'url': 'h... |
Dario Is Scared | 176 | Why did Anthropic choose this exact moment to release that [statement](https://x.com/AnthropicAI/status/2025997928242811253)? Because he’s scared.
Ever since OpenClaw launched, token usage from both individuals and model companies has been booming. And yet, on OpenRouter, the top-ranked models are no longer Claude bu... | 2026-02-23T19:24:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rcr4ju/dario_is_scared/ | Doris_Dressy1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcr4ju | false | null | t3_1rcr4ju | /r/LocalLLaMA/comments/1rcr4ju/dario_is_scared/ | false | false | 176 | null | |
Chatterbox TTS Multilanguage cutting off audio when using custom voice clones | 1 | **Hi everyone,**
**I’m reaching out because I’ve hit a wall with Chatterbox TTS Multilanguage and I’m hoping someone here has encountered a similar issue.**
**The Problem**
**The system works perfectly fine when I use the built-in, provided voices—the entire sentence is generated without any issues. However, the mom... | 2026-02-23T19:22:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rcr254/chatterbox_tts_multilanguage_cutting_off_audio/ | Tomasz_NieMasz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcr254 | false | null | t3_1rcr254 | /r/LocalLLaMA/comments/1rcr254/chatterbox_tts_multilanguage_cutting_off_audio/ | false | false | self | 1 | null |
Should Anthropic acquire ZeroClaw? As a Claude user, I think this could reshape edge AI deployment | 0 | You all already know ZeroClaw — 15K+ stars, Rust, 3.4 MB, the whole thing. Not going to rehash what it does.
I'm not affiliated with ZeroClaw. I'm a Claude user who's been watching both projects and can't shake the feeling that these two belong together.
Anthropic has $50B committed to cloud infrastructure, the best ... | 2026-02-23T19:20:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rcqzrr/should_anthropic_acquire_zeroclaw_as_a_claude/ | nafigator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcqzrr | false | null | t3_1rcqzrr | /r/LocalLLaMA/comments/1rcqzrr/should_anthropic_acquire_zeroclaw_as_a_claude/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4.jpeg?width=108&crop=smart&auto=webp&s=94f5227ff8eb9023522f2fb53a1c2e3c7eb97d0e', 'width': 108}, {'height': 121, 'url': '... |
Models with 14B parameters or fewer are completely unfit for agent use cases, so I can only run larger models via shared RAM and VRAM, and I want to know how much the speed will slow down with this RAM, preferably with concrete examples. | 1 | [removed] | 2026-02-23T19:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rcqxk0/models_with_14b_parameters_or_fewer_are/ | BitOk4326 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcqxk0 | false | null | t3_1rcqxk0 | /r/LocalLLaMA/comments/1rcqxk0/models_with_14b_parameters_or_fewer_are/ | false | false | self | 1 | null |
Anthropic is claiming that Chinese labs play dirty | 50 | at least GLM is not mentioned (I'm a GLM fanboy)
anyway, seriously, do you think anthropic has the right to consider this illegal? | 2026-02-23T19:16:18 | keb_37 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcqvv2 | false | null | t3_1rcqvv2 | /r/LocalLLaMA/comments/1rcqvv2/anthropic_is_claiming_that_chinese_labs_play_dirty/ | false | false | 50 | {'enabled': True, 'images': [{'id': 'qj1y3zpmnalg1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?width=108&crop=smart&auto=webp&s=be495528ca8da7430a9e5994c27adafa269fa455', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?width=216&crop=smart&auto=w... | ||
A guide to building an ML research cluster | 8 | 2026-02-23T19:15:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rcqv6b/a_guide_to_building_an_ml_research_cluster/ | OriginalSpread3100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcqv6b | false | null | t3_1rcqv6b | /r/LocalLLaMA/comments/1rcqv6b/a_guide_to_building_an_ml_research_cluster/ | false | false | 8 | null | ||
Looking for a perfect "Deep Research" app which works with Llama.cpp | 6 | I have found something like Perplexica but can't get it to work with llamacpp. suggestions appreciated. | 2026-02-23T19:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rcqqlz/looking_for_a_perfect_deep_research_app_which/ | hackiv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcqqlz | false | null | t3_1rcqqlz | /r/LocalLLaMA/comments/1rcqqlz/looking_for_a_perfect_deep_research_app_which/ | false | false | self | 6 | null |
Need Linux help: Testing a hardware-aware 'Can I Run It' for Local LLMs (Early Beta) | 1 | [removed] | 2026-02-23T19:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rcqoo7/need_linux_help_testing_a_hardwareaware_can_i_run/ | RunItLocal001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcqoo7 | false | null | t3_1rcqoo7 | /r/LocalLLaMA/comments/1rcqoo7/need_linux_help_testing_a_hardwareaware_can_i_run/ | false | false | self | 1 | null |
Who here has been able to get MiniCPM-o 4.5 working? | 1 | It's extremely impressive in the demo: full-duplex audio and video, 10-frames-a-second video understanding, the ability to talk and listen at the same time. But for the life of me I can't get this damn thing to work. Anybody have any success? | 2026-02-23T19:02:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rcqhy5/who_here_has_been_able_to_get_minicpm_o_45_working/ | One_Hovercraft_7456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcqhy5 | false | null | t3_1rcqhy5 | /r/LocalLLaMA/comments/1rcqhy5/who_here_has_been_able_to_get_minicpm_o_45_working/ | false | false | self | 1 | null
Let's talk hardware | 2 | I want to run a local model for inference to do coding tasks and security review for personal programming projects.
Is getting something like the ASUS Ascent G10X going to be a better spend per $ than building another rig with a 5090? The costs to build a full rig for that would be 2x the G10X, but I don't see much d... | 2026-02-23T18:48:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rcq3p1/lets_talk_hardware/ | skmagiik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcq3p1 | false | null | t3_1rcq3p1 | /r/LocalLLaMA/comments/1rcq3p1/lets_talk_hardware/ | false | false | self | 2 | null |
I'm looking for the fastest instruct model from NVIDIA NIMs | 0 | I'm looking for the fastest, lowest-latency instruct model for a router layer.
A small context window or model size is fine.
Is llama-3.2-3b-instruct the fastest? What are your experiences like? | 2026-02-23T18:47:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rcq2ib/im_looking_for_the_fastest_instruct_model_from/ | IcyMushroom4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcq2ib | false | null | t3_1rcq2ib | /r/LocalLLaMA/comments/1rcq2ib/im_looking_for_the_fastest_instruct_model_from/ | false | false | self | 0 | null
Hmm new drama unlocked | 140 | 2026-02-23T18:43:04 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcpxs7 | false | null | t3_1rcpxs7 | /r/LocalLLaMA/comments/1rcpxs7/hmm_new_drama_unlocked/ | false | false | 140 | {'enabled': True, 'images': [{'id': 'fs0ubtgphalg1', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/fs0ubtgphalg1.jpeg?width=108&crop=smart&auto=webp&s=b8370499b250cea20a2fa4b1a1353506e06b9501', 'width': 108}, {'height': 222, 'url': 'https://preview.redd.it/fs0ubtgphalg1.jpeg?width=216&crop=smart&auto=... | |||
Anthropic: "We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax." 🚨 | 4,518 | 2026-02-23T18:32:45 | KvAk_AKPlaysYT | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcpmwn | false | null | t3_1rcpmwn | /r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/ | false | false | 4,518 | {'enabled': True, 'images': [{'id': '94fbimavfalg1', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/94fbimavfalg1.png?width=108&crop=smart&auto=webp&s=7587b814c5d1532762e664de796a897432709268', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/94fbimavfalg1.png?width=216&crop=smart&auto=web... | |||
Does anyone know when OpenClaw will be lean enough to run on a single M2 Pro? | 0 | From the OpenClaw local models docs:
>Local is doable, but OpenClaw expects large context + strong defenses against prompt injection. Small cards truncate context and leak safety. Aim high: **≥2 maxed-out Mac Studios or equivalent GPU rig (\~$30k+)**. A single **24 GB** GPU works only for lighter prompts with higher la... | 2026-02-23T18:28:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rcpiow/does_anyone_know_when_openclaw_will_be_lean/ | Bulbasaur2015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcpiow | false | null | t3_1rcpiow | /r/LocalLLaMA/comments/1rcpiow/does_anyone_know_when_openclaw_will_be_lean/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8.png?width=108&crop=smart&auto=webp&s=1f710f16cb810af49eb18390b76044dff0ee10af', 'width': 108}, {'height': 113, 'url': 'h... |
llama-server Production Ready? | 1 | [removed] | 2026-02-23T18:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rcpi7m/llamaserver_production_ready/ | Sudden_Tennis_2067 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcpi7m | false | null | t3_1rcpi7m | /r/LocalLLaMA/comments/1rcpi7m/llamaserver_production_ready/ | false | false | self | 1 | null |
This may be a stupid question | 0 | How much does RAM speed factor into overall llama.cpp performance? | 2026-02-23T18:18:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rcp85n/this_maybe_a_stupid_question/ | Insomniac24x7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcp85n | false | null | t3_1rcp85n | /r/LocalLLaMA/comments/1rcp85n/this_maybe_a_stupid_question/ | false | false | self | 0 | null
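For the RAM-speed question above: llama.cpp token generation is usually memory-bandwidth-bound, because each generated token streams the active model weights from system RAM once. A rough upper bound is tokens/s ≈ bandwidth ÷ model bytes. A minimal sketch with illustrative (not measured) bandwidth and model-size figures:

```python
def decode_tps_upper_bound(bandwidth_gb_s: float, model_gb: float) -> float:
    """Upper bound on decode speed when generation is memory-bound:
    each token requires streaming the active model weights from RAM once."""
    return bandwidth_gb_s / model_gb

# Illustrative figures only (check your own hardware's numbers):
# dual-channel DDR4-3200 (~51.2 GB/s) vs DDR5-6000 (~96 GB/s),
# running a 7B model quantized to roughly 4 GB.
for name, bw in [("DDR4-3200 dual-channel", 51.2), ("DDR5-6000 dual-channel", 96.0)]:
    print(f"{name}: <= {decode_tps_upper_bound(bw, 4.0):.1f} tok/s")
```

Real throughput lands below this bound (prompt processing is compute-bound, and other memory traffic competes for bandwidth), but the ratio explains why faster RAM translates almost directly into faster generation whenever the weights live in system memory.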
HOW TO GET 500USDT | 0 | I was browsing around and stumbled on Ratbet cc. They got this pr0m0: enter 'b0nus500' and supposedly snag 500 USDT as a bonus. LOL, it screams too good to be true, right? I mean, I'm convinced it's likely a scam—shady online casino with insane wagering requirements or hidden catches. But hey, curiosity's killing me...... | 2026-02-23T18:18:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rcp7no/how_to_get_500usdt/ | RATBETCC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcp7no | false | null | t3_1rcp7no | /r/LocalLLaMA/comments/1rcp7no/how_to_get_500usdt/ | false | false | self | 0 | null |
AI Agent Bug Bounty: Is "Zero-Click" Autonomous Draft Creation via external email a valid vulnerability? | 0 | Hey everyone. I'm currently researching a popular AI assistant platform that integrates with Gmail (it has read and write-draft permissions). I've found an interesting behavior and wanted to get your opinion on its severity before I submit a report.
**The Scenario:**
1. **A user connects their Gmail to the AI agent.*... | 2026-02-23T17:58:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rcomzd/ai_agent_bug_bounty_is_zeroclick_autonomous_draft/ | PresentSituation8736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcomzd | false | null | t3_1rcomzd | /r/LocalLLaMA/comments/1rcomzd/ai_agent_bug_bounty_is_zeroclick_autonomous_draft/ | false | false | self | 0 | null |
Thoughts on this benchmark? | 0 | Copied from X post:
"""
Introducing the latest results of our Long-Context Agentic Orchestration Benchmark.
• 31 high-complexity, non-coding scenarios (100k+ tokens) where the model must select the correct next-step action using proprietary orchestration logic with no public precedent — a pure test of instruction fo... | 2026-02-23T17:45:35 | KevinDurantXSnake | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rco9xh | false | null | t3_1rco9xh | /r/LocalLLaMA/comments/1rco9xh/thoughts_on_this_benchmark/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'uttfk16g7alg1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?width=108&crop=smart&auto=webp&s=1f39f8a8b8eb9e9550ef81197e034dd29cb9e442', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?width=216&crop=smart&auto=w... | ||
RWKV-7: O(1) memory inference, 16.39 tok/s on ARM Cortex-A76, beats LLaMA 3.2 3B. The local-first architecture nobody is talking about... | 54 | Wrote a deep-dive specifically because the deployment numbers don't get enough attention.
The headline stats for local inference:
* O(1) memory per token, no KV cache at all. Context length does not affect VRAM usage.
* 16.39 tok/s on ARM Cortex-A76 (7B model). That's a mid-range Android chip.
* 28.7 tok/s on Snapdra... | 2026-02-23T17:45:31 | https://medium.com/ai-advances/rwkv-7-beats-llama-3-2-rnn-constant-memory-46064bbf1f64 | Sensitive-Two9732 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1rco9v7 | false | null | t3_1rco9v7 | /r/LocalLLaMA/comments/1rco9v7/rwkv7_o1_memory_inference_1639_toks_on_arm/ | false | false | default | 54 | null |
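The O(1)-memory claim in the RWKV-7 post above can be made concrete: a transformer's KV cache grows linearly with context length, while a recurrent model carries a fixed-size state. A quick sketch with illustrative dimensions (not RWKV-7's actual configuration):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, ctx_len, dtype_bytes=2):
    """Transformer KV cache: keys and values per layer, linear in context."""
    return 2 * layers * kv_heads * head_dim * ctx_len * dtype_bytes

def rnn_state_bytes(layers, state_dim, dtype_bytes=2):
    """Recurrent state: fixed size, independent of context length."""
    return layers * state_dim * dtype_bytes

# Illustrative 32-layer model with 8 KV heads of dim 128, fp16 cache.
for ctx in (4096, 32768):
    mib = kv_cache_bytes(32, 8, 128, ctx) // 2**20
    print(f"transformer @ {ctx} ctx: {mib} MiB of KV cache")
print("recurrent state:", rnn_state_bytes(32, 65536) // 2**20, "MiB at any context")
```

The gap is what makes long-context inference on phones plausible for this architecture: the transformer figure keeps growing with the conversation, while the recurrent figure never moves.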
I gave my claw bot eyes and ears - how are you solving context beyond MCPs? | 0 | I've been working on my claw bot and recently gave it the ability to watch my screen and listen as I work. It learns from me in real time - picks up on what I'm doing, how I'm doing it, and starts adapting on the fly.
ngl, never felt this powerful.
But it got me thinking about a bigger problem which is context.
MCPs... | 2026-02-23T17:40:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rco4cn/i_gave_my_claw_bot_eyes_and_ears_how_are_you/ | Simple_Thing_5011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rco4cn | false | null | t3_1rco4cn | /r/LocalLLaMA/comments/1rco4cn/i_gave_my_claw_bot_eyes_and_ears_how_are_you/ | false | false | self | 0 | null |
Spent months doing this manually. Then I automated it. | 0 | The problem: LLMs approve their own work too easily. You ask Claude to plan and review its own plan – it says yes. Every time.
My solution: two agents with strictly separated roles and machine-readable contracts between them.
* Claude plans → Codex reviews (can reject with findings)
* Codex implements → Claude review... | 2026-02-23T17:38:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rco303/spent_months_doing_this_manually_then_i_automated/ | TheKnilch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rco303 | false | null | t3_1rco303 | /r/LocalLLaMA/comments/1rco303/spent_months_doing_this_manually_then_i_automated/ | false | false | self | 0 | null |
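The "machine-readable contracts" idea in the post above can be illustrated with a minimal verdict schema. The JSON field names here are hypothetical, not taken from the poster's actual tooling; the point is that a rejection must carry concrete findings rather than a bare no:

```python
import json

def parse_verdict(raw: str) -> dict:
    """Validate a reviewer agent's machine-readable verdict.
    Hypothetical schema: {"status": "approve"|"reject", "findings": [...]}."""
    verdict = json.loads(raw)
    if verdict.get("status") not in ("approve", "reject"):
        raise ValueError("status must be 'approve' or 'reject'")
    if verdict["status"] == "reject" and not verdict.get("findings"):
        raise ValueError("a rejection must carry concrete findings")
    return verdict

ok = parse_verdict('{"status": "approve", "findings": []}')
rej = parse_verdict('{"status": "reject", "findings": ["plan skips error handling"]}')
print(ok["status"], rej["findings"][0])
```

Forcing the reviewer through a strict schema is what keeps the two roles separated: the implementing agent cannot reinterpret a vague natural-language reply as approval.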
FoodTruck Bench update: tested Sonnet 4.6, Gemini 3.1 Pro, Qwen 3.5. Case studies with comparisons for each. | 0 | Three new models tested and added to the leaderboard since last week's post: Claude Sonnet 4.6, Gemini 3.1 Pro, and Qwen 3.5 397B. Wrote detailed case studies for each. Here's the summary.
Claude Sonnet 4.6 — massive leap from Sonnet 4.5. Genuine business reasoning, zero bankruptcies, $17.4K net worth. But here's the ... | 2026-02-23T17:36:05 | https://www.reddit.com/gallery/1rco0d9 | Disastrous_Theme5906 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rco0d9 | false | null | t3_1rco0d9 | /r/LocalLLaMA/comments/1rco0d9/foodtruck_bench_update_tested_sonnet_46_gemini_31/ | false | false | 0 | null | |
GLM-5 is the new top open-weights model on the Extended NYT Connections benchmark, with a score of 81.8, edging out Kimi K2.5 Thinking (78.3) | 129 | More info: [https://github.com/lechmazur/nyt-connections/](https://github.com/lechmazur/nyt-connections/) | 2026-02-23T17:31:02 | https://www.reddit.com/gallery/1rcnv9h | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rcnv9h | false | null | t3_1rcnv9h | /r/LocalLLaMA/comments/1rcnv9h/glm5_is_the_new_top_openweights_model_on_the/ | false | false | 129 | null | |
Developing an AI-powered SMTP/IMAP proxy to protect against prompt injection in mail. Looking for technical feedback/testers. | 1 | Hi everyone,
I've been working on a project called **CarapaMail** and just released the first public Beta (v0.9.0).
The motivation was simple: I wanted to give my AI agents access to my email via MCP, but I was terrified of prompt injection attacks buried in incoming emails or agents accidentally leaking secrets in o... | 2026-02-23T17:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rcnp2u/developing_an_aipowered_smtpimap_proxy_to_protect/ | FishermanExisting286 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcnp2u | false | null | t3_1rcnp2u | /r/LocalLLaMA/comments/1rcnp2u/developing_an_aipowered_smtpimap_proxy_to_protect/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg.png?width=108&crop=smart&auto=webp&s=4b3e0864dbf9ade9598bd6edccf8866ccf9c51eb', 'width': 108}, {'height': 108, 'url': 'h... |
Help With First Local LLM Build | 2 | I'm looking to build my first local LLM rig. I have done a ton of research and have a fairly good idea of terms like tokens, training vs inference, the difference between a 12B and a 70B, etc. But, like I said, I'm still very much in the learning phase. Current components available for my build (no cost, I already have th... | 2026-02-23T17:21:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rcnl97/help_with_first_local_llm_build/ | Sarsippius3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcnl97 | false | null | t3_1rcnl97 | /r/LocalLLaMA/comments/1rcnl97/help_with_first_local_llm_build/ | false | false | self | 2 | null
GPT 5.2 Continuity Regression — Executive Summary + Technical Memo | 0 |
EXECUTIVE SUMMARY
Summary:
GPT 5.2 introduces a significant continuity regression. While single-turn analytical performance improves, multi-turn reasoning stability degrades due to upstream masking/gating interfering with latent-state formation.
Problem:
The model fails to retain thread-level structure ac... | 2026-02-23T17:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rcnkjg/gpt_52_continuity_regression_executive_summary/ | whataboutAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcnkjg | false | null | t3_1rcnkjg | /r/LocalLLaMA/comments/1rcnkjg/gpt_52_continuity_regression_executive_summary/ | false | false | self | 0 | null |
Building a PC: RAM Speed? CPU Core Count? Single-Core Clock Speed? What is the recommended minimum setup? | 1 | I am currently building a PC on a tight budget, which means I cannot afford a high amount of VRAM at this time.
I am not looking for recommendations on the amount of GBs I need. Instead, I need to understand the architectural requirements for effective CPU offloading. Specifically, what are the recommended minimums f... | 2026-02-23T17:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1rcn2wu/building_pc_ram_speedcpu_core_countsinglecore/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcn2wu | false | null | t3_1rcn2wu | /r/LocalLLaMA/comments/1rcn2wu/building_pc_ram_speedcpu_core_countsinglecore/ | false | false | self | 1 | null |
lost in tools - assistant with persistent memory based on files? - suggest a modern tool(set) | 0 | Ok, I lost touch here. I used ollama and openwebui for the longest time...
I'm looking for a more modern toolset. I manage my personal knowledge base in obsidian and paperless-ngx right now. With all the recent buzz about openclaw and all the agentic tools out there, I thought it should be possible to have an AI person... | 2026-02-23T16:47:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rcmmbn/lost_in_tools_assistant_with_persistant_memory/ | momsi91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcmmbn | false | null | t3_1rcmmbn | /r/LocalLLaMA/comments/1rcmmbn/lost_in_tools_assistant_with_persistant_memory/ | false | false | self | 0 | null |
so is OpenClaw local or not | 948 | "Safety and alignment at Meta Superintelligence." | 2026-02-23T16:47:01 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcmlwk | false | null | t3_1rcmlwk | /r/LocalLLaMA/comments/1rcmlwk/so_is_openclaw_local_or_not/ | false | false | 948 | {'enabled': True, 'images': [{'id': '5rolok0mw9lg1', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?width=108&crop=smart&auto=webp&s=7aeccb59e1ff59967c7fe655f425b6669e5a8e8e', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?width=216&crop=smart&auto=web... | ||
What's everyone doing for AI agent safety? Built something after
getting burned — curious how others handle it | 2 | Been building agents that take real actions (file ops, API calls) and kept running into the same wall: nothing stops a hallucination from doing something irreversible before you even notice.
Tried a few approaches — model-level prompting, try/except wrappers, manual validation. None of them felt like infrastructure.
... | 2026-02-23T16:44:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rcmizz/whats_everyone_doing_for_ai_agent_safety_built/ | Time_Boat3625 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcmizz | false | null | t3_1rcmizz | /r/LocalLLaMA/comments/1rcmizz/whats_everyone_doing_for_ai_agent_safety_built/ | false | false | 2 | null | |
Can we build a Claude Code-like orchestrator in a couple hundred lines? | 1 | Hey folks,
I really like Claude Code and especially how it uses Bash for doing most things on a computer. That approach gives agents a lot more autonomy compared to typical tool-calling setups.
I wanted to build something similar, but for a different use case — mainly focused on local models and systems you can embed... | 2026-02-23T16:29:57 | https://github.com/liquidos-ai/Odyssey | Human_Hac3rk | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rcm4dx | false | null | t3_1rcm4dx | /r/LocalLLaMA/comments/1rcm4dx/can_we_build_claude_code_like_orchestrate_in/ | false | false | default | 1 | null |
Portable Workstation for Inference | 129 | Built a new portable workstation for gaming/AI workloads. One of the fans is a 12018 (120 × 18 mm) fan bought from AliExpress, derived from a fan on the 4090FE, allowing it to provide airflow equivalent to normal 25mm-thick fans despite being only 18mm thick.
Would've loved to get a Threadripper for additional memory bandwidt... | 2026-02-23T16:24:21 | https://www.reddit.com/gallery/1rclyvf | neintailedfoxx | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rclyvf | false | null | t3_1rclyvf | /r/LocalLLaMA/comments/1rclyvf/portable_workstation_for_inference/ | false | false | 129 | null | |
Looking for feedback: Building an Open Source one-shot installer for local AI. | 1 | Essentially what the title says: free to own, use, and modify / customize. Start with bare metal, run a 15-20 download script off of one CLI command, and end up with a fully baked, set-up local AI system with no bugs, with the apps and uses you want already baked in.
Two questions:
1.) Does this seem cool? Feels like it would solve ... | 2026-02-23T16:24:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rclypo/looking_for_feedback_building_an_open_source_one/ | Signal_Ad657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rclypo | false | null | t3_1rclypo | /r/LocalLLaMA/comments/1rclypo/looking_for_feedback_building_an_open_source_one/ | false | false | self | 1 | null |
Multi-Model Invoice OCR Pipeline | 3 | Built an open-source **invoice OCR pipeline** that combines multiple OCR / layout / extraction models into a single reproducible pipeline.
Repo: [https://github.com/dakshjain-1616/Multi-Model-Invoice-OCR-Pipeline](https://github.com/dakshjain-1616/Multi-Model-Invoice-OCR-Pipeline)
# What it does
* Runs **multiple OC... | 2026-02-23T16:11:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rclm3z/multimodel_invoice_ocr_pipeline/ | gvij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rclm3z | false | null | t3_1rclm3z | /r/LocalLLaMA/comments/1rclm3z/multimodel_invoice_ocr_pipeline/ | false | false | self | 3 | null |
Arij - OSS project - Another agent / project manager. Kanban powered by any agent CLI. | 3 | Beware, non ai slop text onward.
I present Arij to you (you can pronounce it how you want), a project / agent manager UI that lets you easily manage multiple agents across multiple CLIs / models and enforce an easy-to-read workflow.
The core idea was born from my own work habits. I usually work on many projects at the ... | 2026-02-23T16:09:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rclk23/arij_oss_project_another_agent_project_manager/ | Orolol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rclk23 | false | null | t3_1rclk23 | /r/LocalLLaMA/comments/1rclk23/arij_oss_project_another_agent_project_manager/ | false | false | self | 3 | null |
GLM-4.7-Flash vs Qwen3-Coder-Next vs GPT-OSS-120b | 0 | Which is the best to use with Openclaw? (I have been using Qwen3-Coder-Next, and so far it is great but slow, so I am looking to switch. Any hints?)
In my previous experience with GLM-4.7-Flash, tool calling was absolutely bad; however, I learned that it could be fixed (in Cline, for example) and by adjusti... | 2026-02-23T16:07:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/ | Potential_Block4598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rclied | false | null | t3_1rclied | /r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/ | false | false | self | 0 | null |
Will Llama-3.2-3B-Instruct be supported on the Raspberry Pi AI HAT+ 2? | 2 | I’m looking at the new Raspberry Pi AI HAT+ 2 (40 TOPS, 8 GB RAM) and noticed current documentation mentions support for smaller models like Qwen2 and DeepSeek-R1.
Are there hints from the community that *Llama-3.2-3B-Instruct* (or other larger LLMs) will be supported on this board in the future?
| 2026-02-23T15:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rckudv/will_llama323binstruct_be_supported_on_the/ | isaachwl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rckudv | false | null | t3_1rckudv | /r/LocalLLaMA/comments/1rckudv/will_llama323binstruct_be_supported_on_the/ | false | false | self | 2 | null |
Hardware requirements for training a ~3B Model From Scratch locally? | 29 | Hey all,
I’m a data science master’s student who’s posted on here a couple of times over the last year or two. Now I am working on my senior thesis, and I’m trying to figure out the feasibility of training a \~3B parameter transformer model from scratch (so not fine-tuning). I’m trying to figure out what’s realistically... | 2026-02-23T15:39:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rckqpp/hardware_requirements_for_training_a_3b_model/ | Any-Cobbler6161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rckqpp | false | null | t3_1rckqpp | /r/LocalLLaMA/comments/1rckqpp/hardware_requirements_for_training_a_3b_model/ | false | false | self | 29 | null |
Is there any model that does TTS, STS and vocal separation all in one, or at least in a pipeline? | 1 | I believe Seedance 2.0 can already do this besides making videos, but it's closed source. For the model, you basically give it text, audio or both, and it'd talk, sing or do anything possible with a mouth based on the combined input, as well as being able to train/save a custom voice. Any suggestions? | 2026-02-23T15:31:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rckjib/is_there_any_model_that_does_tts_sts_and_vocal/ | Jackw78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rckjib | false | null | t3_1rckjib | /r/LocalLLaMA/comments/1rckjib/is_there_any_model_that_does_tts_sts_and_vocal/ | false | false | self | 1 | null |