| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
OpenCode / Pi users jealous of Claude remote? Tether is open source | 0 | It might be a niche use case, but having agents on your phone (or just in Discord / Telegram) is cool and can be useful. And there's no real reason basic infra like this needs to be proprietary.
[https://github.com/larsderidder/tether](https://github.com/larsderidder/tether) | 2026-02-25T11:02:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rea98m/opencode_pi_users_jealous_of_claude_remote_tether/ | wouldacouldashoulda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rea98m | false | null | t3_1rea98m | /r/LocalLLaMA/comments/1rea98m/opencode_pi_users_jealous_of_claude_remote_tether/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'iCM-PmcQNtLANcTQkN7Yl0j1R9Jw6wKuD6ELVFKhC8A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iCM-PmcQNtLANcTQkN7Yl0j1R9Jw6wKuD6ELVFKhC8A.png?width=108&crop=smart&auto=webp&s=72153ac813a6ee7048701676559215706ec84b71', 'width': 108}, {'height': 108, 'url': 'h... |
Spent months building a fully offline RAG + knowledge graph app for Mac. Everything runs on-device with MLX. Here's what I learned. | 5 | So I got tired of uploading my personal docs to ChatGPT just to ask questions about them. Privacy-wise it felt wrong, and the internet requirement was annoying.
I ended up going down a rabbit hole and built ConceptLens — a native macOS/iOS app that does RAG entirely on your Mac using MLX. No cloud, no API keys, no sub... | 2026-02-25T10:59:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rea7fb/spent_months_building_a_fully_offline_rag/ | yunteng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rea7fb | false | null | t3_1rea7fb | /r/LocalLLaMA/comments/1rea7fb/spent_months_building_a_fully_offline_rag/ | false | false | 5 | null | |
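The retrieval core of a RAG pipeline like this reduces to nearest-neighbor search over chunk embeddings. A minimal sketch in plain Python with toy 3-d vectors (illustrative only, not the app's MLX code):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, k=2):
    # Rank document chunks by similarity to the query embedding, return top-k indices.
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy "embeddings": doc 0 points almost the same way as the query.
docs = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.0], [0.9, 0.2, 0.1]]
print(retrieve([1.0, 0.0, 0.0], docs, k=2))  # → [0, 2]
```

In a real on-device stack the vectors come from a local embedding model and the top-k chunks are pasted into the LLM prompt; everything else is plumbing.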
AI Slop? What is AI Slop? | 1 | [deleted] | 2026-02-25T10:58:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rea6mo | false | null | t3_1rea6mo | /r/LocalLLaMA/comments/1rea6mo/ai_slop_what_is_ai_slop/ | false | false | default | 1 | null | ||
Price of MSI GB300 workstation (DGX Station) appeared online ~ $97k | 10 | 2026-02-25T10:57:07 | https://www.cdw.com/product/msi-nvidia-gb300-wkstn-72c-grace-cpu/9087313?pfm=srh | fairydreaming | cdw.com | 1970-01-01T00:00:00 | 0 | {} | 1rea5vs | false | null | t3_1rea5vs | /r/LocalLLaMA/comments/1rea5vs/price_of_msi_gb300_workstation_dgx_station/ | false | false | default | 10 | null | |
Step-3.5-Flash-REAP from cerebras | 3 | REAP models are smaller versions of larger models (for potato setups).
[https://huggingface.co/cerebras/Step-3.5-Flash-REAP-121B-A11B](https://huggingface.co/cerebras/Step-3.5-Flash-REAP-121B-A11B)
[https://huggingface.co/cerebras/Step-3.5-Flash-REAP-149B-A11B](https://huggingface.co/cerebras/Step-3.5-Flash-REAP-149B... | 2026-02-25T10:55:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rea4pu/step35flashreap_from_cerebras/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rea4pu | false | null | t3_1rea4pu | /r/LocalLLaMA/comments/1rea4pu/step35flashreap_from_cerebras/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'r6dEumCcu41zzw1nz2KPkFL4shIAnoo-vkwJOtztHpQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r6dEumCcu41zzw1nz2KPkFL4shIAnoo-vkwJOtztHpQ.png?width=108&crop=smart&auto=webp&s=380ba7faffd88108eb9e0b055f56e5f5b79481b8', 'width': 108}, {'height': 116, 'url': 'h... |
Qwen3.5 on VLLM | 7 | I just can't get Qwen3.5 27B to run on vLLM. I tried it with version 0.15.1 and the nightly build, updated transformers to 5.2.0, and it still throws this error on startup:
File "/home/llm/nightly/lib/python3.12/site-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=45048) s... | 2026-02-25T10:43:26 | https://www.reddit.com/r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/ | Bowdenzug | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re9xbi | false | null | t3_1re9xbi | /r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/ | false | false | self | 7 | null |
Qwen 3.5 actually has something this time | 0 | Looked at the Qwen 3.5 evals. A 27B dense model with Gemini 3 Pro-level coding and multimodal performance?\n\nNGL, I have my reservations. Not saying it's impossible, but "benchmarks against Gemini 3 Pro" sounds like the old "catches up to GPT-4" claims: scores go up on the eval sets, real-world use is another story.\n\nOne thing is genuinely true though: Alibaba's multilingual data advantage is obvious. Chinese/English/Russian/Arabic data from Taobao + Tmall + AliExpress isn't something Google can cover through a search engine.\n\nI'll wait for hands-on feedback before drawing conclusions. TL;DR: treat the benchmark numbers as a curiosity, don't get excited. | 2026-02-25T10:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/1re9rqc/qwen_35_这波属实有点东西/ | Electrical_Yak_6532 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re9rqc | false | null | t3_1re9rqc | /r/LocalLLaMA/comments/1re9rqc/qwen_35_这波属实有点东西/ | false | false | self | 0 | null |
Kolyadual/Newton-bot-3-text-mini-8B · Hugging Face | 1 | [removed] | 2026-02-25T10:24:54 | https://huggingface.co/Kolyadual/Newton-bot-3-text-mini-8B | Slow-Driver-3808 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1re9m4l | false | null | t3_1re9m4l | /r/LocalLLaMA/comments/1re9m4l/kolyadualnewtonbot3textmini8b_hugging_face/ | false | false | 1 | {'enabled': False, 'images': [{'id': '7yu38PA8k7Fp94QfLX8obRpptAgVBnqcAwx6MVUF5Qs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7yu38PA8k7Fp94QfLX8obRpptAgVBnqcAwx6MVUF5Qs.png?width=108&crop=smart&auto=webp&s=3bb0c4ab889464031dc041cc0cc42d09a98678ca', 'width': 108}, {'height': 116, 'url': 'h... | |
Weekly limit should not exist (the daily limit makes sense) | 0 | Do you know any AI that runs in the terminal, like Codex or Claude CLI, that doesn’t have a weekly limit? I can understand why a daily limit exists, but a weekly limit is terrible. It completely monopolizes AI usage for big tech companies. The Chinese will probably put an end to this, and I have the feeling it might al... | 2026-02-25T10:19:59 | https://www.reddit.com/r/LocalLLaMA/comments/1re9j8u/weekly_limit_should_not_exist_the_daily_limit/ | ImpressionanteFato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re9j8u | false | null | t3_1re9j8u | /r/LocalLLaMA/comments/1re9j8u/weekly_limit_should_not_exist_the_daily_limit/ | false | false | self | 0 | null |
Some Qwen3.5 benchmarks on Strix Halo & llama.cpp | 26 | Hi guys! I was excited to try out some Qwen 3.5 models on my Strix Halo laptop.
All benchmarks were run at 30k context depth and I've included some of my current favorites for comparison (Qwen3-Coder-Next, gpt-oss-120b, step-3.5-flash). For some reason, with the current build, llama-bench failed to produce numbers for... | 2026-02-25T10:16:31 | https://www.reddit.com/gallery/1re9h4r | spaceman_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1re9h4r | false | null | t3_1re9h4r | /r/LocalLLaMA/comments/1re9h4r/some_qwen35_benchmarks_on_strix_halo_llamacpp/ | false | false | 26 | null | |
"Don't steal my training data" | 86 | 2026-02-25T10:13:50 | NotBadSon | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re9fiv | false | null | t3_1re9fiv | /r/LocalLLaMA/comments/1re9fiv/dont_steal_my_training_data/ | false | false | 86 | {'enabled': True, 'images': [{'id': '6fiywtco8mlg1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/6fiywtco8mlg1.jpeg?width=108&crop=smart&auto=webp&s=25e547c19feb013bf3baee7c7151b0b1dd7c15a6', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/6fiywtco8mlg1.jpeg?width=216&crop=smart&auto=... | |||
An LLM hard-coded into silicon that can do inference at 17k tokens/s??? | 15 | What do people think about this?? Is it a scam, or could it be real? Seems crazy to me, I would like to see the actual, physical product reviewed/benchmarked by independent experts before I really believe it, but. yikes. | 2026-02-25T10:09:18 | https://taalas.com/the-path-to-ubiquitous-ai/ | wombatsock | taalas.com | 1970-01-01T00:00:00 | 0 | {} | 1re9crt | false | null | t3_1re9crt | /r/LocalLLaMA/comments/1re9crt/an_llm_hardcoded_into_silicon_that_can_do/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'JqDe2NF6kolh0uBSiMVgY8NEE7ZZjWayCqAO-_3SCRk', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/JqDe2NF6kolh0uBSiMVgY8NEE7ZZjWayCqAO-_3SCRk.png?width=108&crop=smart&auto=webp&s=4803c243293a1ca291b8f4a84d8a105a38f78cb9', 'width': 108}, {'height': 144, 'url': 'h... | |
Compared 5 LLM evaluation tools for local Llama setups - here's what worked | 5 |
I have been running Llama 3.1 70B locally via Ollama for a few months for a document processing pipeline. Hit the usual wall: model runs fine in testing, subtle failures start creeping in production and you have no idea when it started.
Went through a bunch of tools. Here's what I found:
**RAGAS** - Great for RAG pi... | 2026-02-25T10:08:24 | https://www.reddit.com/r/LocalLLaMA/comments/1re9c8q/compared_5_llm_evaluation_tools_for_local_llama/ | Odd-Literature-5302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re9c8q | false | null | t3_1re9c8q | /r/LocalLLaMA/comments/1re9c8q/compared_5_llm_evaluation_tools_for_local_llama/ | false | false | self | 5 | null |
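The failure mode described above (silent drift in production) is what even a tiny standing eval catches. A sketch of the simplest possible harness, with hypothetical keyword scoring rather than any of the listed tools' APIs:

```python
def keyword_score(output: str, required: list[str]) -> float:
    # Fraction of required keywords present in the model output.
    hits = sum(1 for kw in required if kw.lower() in output.lower())
    return hits / len(required)

# Hypothetical regression cases for a document-extraction pipeline.
cases = [
    {"output": "The invoice total is $42, due March 1.", "required": ["$42", "march"]},
    {"output": "Unable to find a total.", "required": ["$42", "march"]},
]
scores = [keyword_score(c["output"], c["required"]) for c in cases]
print(scores)                      # per-case scores
print(sum(scores) / len(scores))  # the aggregate you watch for drift
```

Run the same fixed cases on every deploy and alert when the aggregate drops; the listed tools mostly add better scorers (LLM-as-judge, faithfulness) on top of this loop.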
TranslateGemma 4B in the browser on WebGPU | 3 | Did you know you can use TranslateGemma 4B directly in the browser?
* Model: [https://huggingface.co/google/translategemma-4b-it](https://huggingface.co/google/translategemma-4b-it)
* Demo + Code: [https://huggingface.co/spaces/webml-community/TranslateGemma-WebGPU](https://huggingface.co/spaces/webml-community/Transl... | 2026-02-25T10:07:51 | https://www.reddit.com/r/LocalLLaMA/comments/1re9bxd/translategemma_4b_in_the_browser_on_webgpu/ | nicodotdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re9bxd | false | null | t3_1re9bxd | /r/LocalLLaMA/comments/1re9bxd/translategemma_4b_in_the_browser_on_webgpu/ | false | false | 3 | null | |
Average user context | 0 | For those running local LLMs at their company, how much context does your average user use ?
Also, how do you manage your VRAM resources?
We allow 'power users' to run long-context queries, but still need to guarantee service availability for everyone.
How | 2026-02-25T10:00:41 | https://www.reddit.com/r/LocalLLaMA/comments/1re97k6/average_user_context/ | maaakks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re97k6 | false | null | t3_1re97k6 | /r/LocalLLaMA/comments/1re97k6/average_user_context/ | false | false | self | 0 | null |
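On the VRAM question above, the dominant per-user cost at long context is the KV cache, which you can budget with simple arithmetic. A sketch assuming a standard GQA transformer with an fp16 cache (the layer/head numbers below are illustrative, not any specific model's):

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elt=2):
    # K and V caches: one entry per layer, per KV head, per position.
    total = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elt
    return total / 2**30

# Hypothetical 70B-class dense model: 80 layers, GQA with 8 KV heads, head_dim 128.
print(kv_cache_gib(80, 8, 128, 32768))  # 10.0 GiB at 32k context, fp16
```

Multiply by concurrent users to see why unbounded "power user" contexts eat a shared GPU; capping per-request context or quantizing the cache are the usual levers.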
OpenCode / Pi users jealous of Claude remote? Tether is open source | 1 | It might be a niche use case, but having agents on your phone (or just in Discord / Telegram) is cool and can be useful. And there's no real reason basic infra like this needs to be proprietary. | 2026-02-25T09:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1re95tf/opencode_pi_users_jealous_of_claude_remote_tether/ | wouldacouldashoulda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re95tf | false | null | t3_1re95tf | /r/LocalLLaMA/comments/1re95tf/opencode_pi_users_jealous_of_claude_remote_tether/ | false | false | self | 1 | null |
Qwen3.5 35b: How to disable reasoning in ik_llama.cpp | 2 | Hello, just as the title says, I want to know how to disable reasoning for this model in ik_llama.cpp, because the standard llama.cpp way doesn't work for me:
--chat-template-kwargs "{\"enable_thinking\": false}"
Does anyone have a clue? I am using OpenWebUI... | 2026-02-25T09:53:07 | https://www.reddit.com/r/LocalLLaMA/comments/1re934l/qwen35_35b_how_to_disable_reasoning_in_ik_llamacpp/ | Yeelyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re934l | false | null | t3_1re934l | /r/LocalLLaMA/comments/1re934l/qwen35_35b_how_to_disable_reasoning_in_ik_llamacpp/ | false | false | self | 2 | null |
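One thing worth trying before touching server flags: upstream llama-server also accepts `chat_template_kwargs` per request in the /v1/chat/completions body, and whether ik_llama.cpp honors it is exactly the open question here. A sketch of the payload (model name hypothetical):

```python
import json

# Per-request override: upstream llama-server reads "chat_template_kwargs"
# from the request body. Whether ik_llama.cpp's server does the same is
# unverified, but it is a cheap thing to test from OpenWebUI or curl.
payload = {
    "model": "qwen3.5-35b",
    "messages": [{"role": "user", "content": "Hello"}],
    "chat_template_kwargs": {"enable_thinking": False},
}
print(json.dumps(payload))
```

If the flag is ignored, the fallback is editing the Jinja chat template itself so the no-thinking branch is the default.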
Qwen 3.5 thinks it's Sonnet 4.6 before correcting... | 0 | 2026-02-25T09:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/1re8yae/qwen_35_thinks_its_sonnet_46_before_correcting/ | Old_Hospital_934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re8yae | false | null | t3_1re8yae | /r/LocalLLaMA/comments/1re8yae/qwen_35_thinks_its_sonnet_46_before_correcting/ | false | false | 0 | null | ||
Why Your OpenClaw Setup is a "Malicious Insider" in Waiting | 1 | [removed] | 2026-02-25T09:43:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1re8wzm | false | null | t3_1re8wzm | /r/LocalLLaMA/comments/1re8wzm/why_your_openclaw_setup_is_a_malicious_insider_in/ | false | false | default | 1 | null | ||
Qwen 3.5 “Medium” series looks like a real MoE + agent push (35B-A3B + Flash w/ 1M context) | 8 | Alibaba’s Qwen team just introduced the Qwen 3.5 “Medium” model series:
- Qwen3.5-35B-A3B (MoE)
- Qwen3.5-122B-A10B
- Qwen3.5-27B
- Qwen3.5-Flash (hosted production version aligned with 35B-A3B)
A couple details that stood out to me:
1) The 35B-A3B naming is telling
“A3B” = \~3B active parameters per token... | 2026-02-25T09:24:41 | azahar_h | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re8m7c | false | null | t3_1re8m7c | /r/LocalLLaMA/comments/1re8m7c/qwen_35_medium_series_looks_like_a_real_moe_agent/ | false | false | 8 | {'enabled': True, 'images': [{'id': '61wmgojwzllg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/61wmgojwzllg1.png?width=108&crop=smart&auto=webp&s=cf2ea3f1427faafff997f97f15e5d4ca067889eb', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/61wmgojwzllg1.png?width=216&crop=smart&auto=we... | ||
Meow | 0 | 2026-02-25T09:23:21 | SpeedRunGod | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re8lgv | false | null | t3_1re8lgv | /r/LocalLLaMA/comments/1re8lgv/meow/ | false | false | 0 | {'enabled': True, 'images': [{'id': '3xcc05kmzllg1', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/3xcc05kmzllg1.jpeg?width=108&crop=smart&auto=webp&s=8b202b20fcf43beb437db177a08e4c10af0c7cc3', 'width': 108}, {'height': 268, 'url': 'https://preview.redd.it/3xcc05kmzllg1.jpeg?width=216&crop=smart&auto=... | |||
someone built a SELF-EVOLVING AI agent that rewrites its own code, prompts, and identity AUTONOMOUSLY, with a background consciousness | 0 | It's called OUROBOROS: open source, built by a Russian PhD researcher who studies transformer internals. He built it as an experiment; it built everything else
it thinks on its own even when nobody is talking to it, each thought costs $0.07
when the researcher went to sleep at midnight, by 3:41am it mass produced 20 v... | 2026-02-25T09:22:33 | https://v.redd.it/8rpsenphzllg1 | EchoOfOppenheimer | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re8l13 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/8rpsenphzllg1/DASHPlaylist.mpd?a=1774603377%2COTViYTNlNGRkNzkyNDMzYTI0MGM1YTZmOWM0MDkzMGU5N2QxMjcyYTQ4NjkzNWJjNDVmODFiMGM2ZGJlYTljOA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/8rpsenphzllg1/CMAF_480.mp4?source=fallback', 'ha... | t3_1re8l13 | /r/LocalLLaMA/comments/1re8l13/someone_built_a_selfevolving_ai_agent_that/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cjhyaDZ3cGh6bGxnMYbmf-gbpRIcOlPTJCUc4FYruTeLil3Q8VGveRgV82KY', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/cjhyaDZ3cGh6bGxnMYbmf-gbpRIcOlPTJCUc4FYruTeLil3Q8VGveRgV82KY.png?width=108&crop=smart&format=pjpg&auto=webp&s=6129ec81e75f9ed97643f9dcfa24d0ee9f22c... | |
Help needed proving me wrong - LLM document layers | 1 | So over the past year I’ve been working on something. The problem I’m trying to solve:
- LLM outputs degrade across multi-step workflows.
- They lose structure, drift semantically, and become unreliable artefacts after a few turns without templates and guardrails.
So my hypothesis was that a sort of DSL/control la... | 2026-02-25T09:13:53 | https://www.reddit.com/r/LocalLLaMA/comments/1re8fvd/help_needed_proving_me_wrong_llm_document_layers/ | sbuswell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re8fvd | false | null | t3_1re8fvd | /r/LocalLLaMA/comments/1re8fvd/help_needed_proving_me_wrong_llm_document_layers/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OI5dOI3zln5MI11wEg-owrX52pQxPt2eHzVNR2L1p-g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OI5dOI3zln5MI11wEg-owrX52pQxPt2eHzVNR2L1p-g.png?width=108&crop=smart&auto=webp&s=54d0b197276ae0786763b62148bf8e3a8792f16f', 'width': 108}, {'height': 108, 'url': 'h... |
Memorization benchmark | 3 | Hey, I just wanted to share results on a benchmark I created where I asked different models for their best estimates to the nearest minute of sunrise and sunset times in different cities around the world and at different times of the year
I fully understand that LLMs are not meant for factual information, but I thought ... | 2026-02-25T09:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1re8d9q/memorization_benchmark/ | Unusual_Guidance2095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re8d9q | false | null | t3_1re8d9q | /r/LocalLLaMA/comments/1re8d9q/memorization_benchmark/ | false | false | self | 3 | null |
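A scorer for this kind of memorization benchmark is only a few lines. A sketch (the cities and times below are made-up placeholders, not the benchmark's actual data):

```python
from datetime import datetime

def minutes_off(predicted: str, actual: str) -> int:
    # Absolute error, in minutes, between two HH:MM times on the same day.
    fmt = "%H:%M"
    p = datetime.strptime(predicted, fmt)
    a = datetime.strptime(actual, fmt)
    return abs(int((p - a).total_seconds())) // 60

# Hypothetical entries: (city, model's estimate, almanac value)
rows = [("Reykjavik", "08:40", "08:52"), ("Quito", "06:10", "06:12")]
errors = [minutes_off(p, a) for _, p, a in rows]
print(errors, sum(errors) / len(errors))  # [12, 2] 7.0
```

Mean absolute error in minutes per city/date pair makes models directly comparable and exposes whether errors cluster at high latitudes, where sunrise times vary most.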
OK, llama.cpp team, please post the best settings for QWEN 3.5 family | 0 | To avoid hearsay and frustrated users, kindly post the best settings and templates for both agentic coding (OpenCode would be best) and chat.
Also, the recommended build number or commit hash from which this model family is actually supported.
**Many thanks for your efforts from a happy u... | 2026-02-25T09:04:39 | https://www.reddit.com/r/LocalLLaMA/comments/1re8agu/ok_llamacpp_team_please_post_the_best_settings/ | HumanDrone8721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re8agu | false | null | t3_1re8agu | /r/LocalLLaMA/comments/1re8agu/ok_llamacpp_team_please_post_the_best_settings/ | false | false | self | 0 | null |
r/LocalLLaMA — What’s the biggest missing piece for locally-run autonomous agents? | 2 | For those building or running local models with agent-like behavior, I’m curious what you consider the biggest missing component right now.
Is it memory? tool integration? scheduling? chain-of-thought reliability?
There are a lot of home-built solutions, but rarely a clean end-to-end setup. What do you think needs to... | 2026-02-25T09:02:34 | https://www.reddit.com/r/LocalLLaMA/comments/1re897q/rlocalllama_whats_the_biggest_missing_piece_for/ | Galactic_Graham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re897q | false | null | t3_1re897q | /r/LocalLLaMA/comments/1re897q/rlocalllama_whats_the_biggest_missing_piece_for/ | false | false | self | 2 | null |
The FIRST local vision model to get this right! | 131 | So I decided to give qwen3.5-35b-a3b a try for this question. I've tried literally every popular local vision models including bigger ones like glm-4.6v (106B) and qwen3-vl-235b-a22b and none of them got it even remotely correct. So I was thinking after it failed I will try qwen3.5-122b-a10b on this and hopefully it ca... | 2026-02-25T09:02:26 | https://www.reddit.com/gallery/1re894z | po_stulate | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1re894z | false | null | t3_1re894z | /r/LocalLLaMA/comments/1re894z/the_first_local_vision_model_to_get_this_right/ | false | false | 131 | null | |
The Reality Behind the OpenClaw Hype | 0 | *A Grounded Look at Peter Steinberger and System Architecture*
Let's cut through the noise regarding OpenClaw, Peter Steinberger, and the current state of autonomous AI agents. While the hype is deafening, a closer look at the history, the tech, and the recent Lex Fridman interview reveals a stark disconnect between s... | 2026-02-25T08:55:22 | https://www.reddit.com/r/LocalLLaMA/comments/1re854d/the_reality_behind_the_openclaw_hype/ | leo-k7v | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re854d | false | null | t3_1re854d | /r/LocalLLaMA/comments/1re854d/the_reality_behind_the_openclaw_hype/ | false | false | 0 | null | |
I'm looking for specific recommendations for LLMs in the 8B range or less, ideally one optimized for data extraction | 1 | Is there a leaderboard for data extraction models? | 2026-02-25T08:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1re83km/im_looking_for_specific_recommendations_for_llms/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re83km | false | null | t3_1re83km | /r/LocalLLaMA/comments/1re83km/im_looking_for_specific_recommendations_for_llms/ | false | false | self | 1 | null |
Question for those building agents: do you actually sandbox? | 1 | Doing some field research for a project I'm building.
Do you guys sandbox your agents? If so, does it restrict your use cases or completely tank efficiency for the sake of security?
If not, how are you handling prompt injections and the risk of runaway API bills? Curious to hear how everyone is ha | 2026-02-25T08:50:14 | https://www.reddit.com/r/LocalLLaMA/comments/1re824v/question_for_those_building_agents_do_you/ | no-I-dont-want-that7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re824v | false | null | t3_1re824v | /r/LocalLLaMA/comments/1re824v/question_for_those_building_agents_do_you/ | false | false | self | 1 | null |
Does the Qwen3.5 122B struggle in vibe compared to Qwen3 235B? | 12 | While 122B does apparently score better than 235B across the board, I find that with thinking disabled, 235B was significantly stronger in conversation. And with thinking enabled, 122B overthinks dramatically for really simple tasks (like, how do I write this one sentence correctly).
Instruction followin... | 2026-02-25T08:44:31 | https://www.reddit.com/r/LocalLLaMA/comments/1re7ypi/does_the_qwen35_122b_struggle_in_vibe_compared_to/ | erazortt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re7ypi | false | null | t3_1re7ypi | /r/LocalLLaMA/comments/1re7ypi/does_the_qwen35_122b_struggle_in_vibe_compared_to/ | false | false | self | 12 | null |
MONROE – Model Orchestration & Router Engine | 2 | Hi, I built a new project that I originally meant to use just for myself, but I think others might benefit from it too...
What it's about:
As my LLM runner I bought a Framework Desktop with Strix Halo and 128GB. Now the thing is, when I load models that still run acceptably fast, it... | 2026-02-25T08:27:45 | https://www.reddit.com/r/LocalLLaMA/comments/1re7p26/monroe_model_orchestration_router_engine/ | int3ks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re7p26 | false | null | t3_1re7p26 | /r/LocalLLaMA/comments/1re7p26/monroe_model_orchestration_router_engine/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'sbsYuCmotZ6hMqY4GOniwdY30RUDwDt5c1cN546JoTQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sbsYuCmotZ6hMqY4GOniwdY30RUDwDt5c1cN546JoTQ.png?width=108&crop=smart&auto=webp&s=c1394e303c445e857cff559e7698e9c6d962089c', 'width': 108}, {'height': 108, 'url': 'h... |
[Release] TinyTTS: An Ultra-lightweight English TTS Model (~9M params, 20MB) that runs 8x real-time on CPU (67x on GPU) | 30 | Hey r/LocalLLaMA,
I wanted to share a small project I've been working on to solve a personal pain point: **TinyTTS**.
We all love our massive 70B+ LLMs, but when building local voice assistants, running a heavy TTS framework alongside them often eats up way too much precious VRAM and compute. I wanted something absur... | 2026-02-25T08:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1re7m8y/release_tinytts_an_ultralightweight_english_tts/ | Forsaken_Shopping481 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re7m8y | false | null | t3_1re7m8y | /r/LocalLLaMA/comments/1re7m8y/release_tinytts_an_ultralightweight_english_tts/ | false | false | self | 30 | null |
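"8x real-time" in the post above is the usual real-time factor: seconds of audio produced per second of compute. A sketch with made-up timings, just to pin down the definition:

```python
def realtime_factor(audio_seconds: float, synth_seconds: float) -> float:
    # RTF > 1 means synthesis is faster than playback; 8x leaves plenty of
    # headroom for the LLM running alongside it.
    return audio_seconds / synth_seconds

# Hypothetical measurement: 16 s of speech synthesized in 2 s on CPU.
print(realtime_factor(16.0, 2.0))  # 8.0
```

When benchmarking a TTS model yourself, time only the synthesis call (exclude model load) and average over several utterance lengths.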
Has anyone got Qwen3.5-35B-A3B running with vLLM? | 2 | I have vLLM 0.15.1 and I want to know if I have to wait for an official release (>=0.16.0) to support Qwen3.5 or I can run it now. | 2026-02-25T08:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/1re7iud/has_anyone_got_qwen3535ba3b_running_with_vllm/ | TechNerd10191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re7iud | false | null | t3_1re7iud | /r/LocalLLaMA/comments/1re7iud/has_anyone_got_qwen3535ba3b_running_with_vllm/ | false | false | self | 2 | null |
VLLM Qwen3.5-122B-A10B-GGUF | 1 | Could anyone run unsloth/Qwen3.5-122B-A10B-GGUF in VLLM?
And regarding performance: since it is GGUF, will it work properly?
Thanks | 2026-02-25T08:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/1re7ib7/vllm_qwen35122ba10bgguf/ | justlows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re7ib7 | false | null | t3_1re7ib7 | /r/LocalLLaMA/comments/1re7ib7/vllm_qwen35122ba10bgguf/ | false | false | self | 1 | null |
This benchmark shows Unsloth Q3 quantization beats both Q4 and MXFP4 | 86 | I thought this was interesting, especially since at first glance both the Q4 and Q3 here are K_XL, and it doesn't make sense that a Q3 would beat a Q4 in any scenario.
However it's worth mentioning this is:
1. Not a standard benchmark
2. These are not straight-forward quantizations, it's a "dynamic quantization" which affects... | 2026-02-25T07:55:49 | Oatilis | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re76g6 | false | null | t3_1re76g6 | /r/LocalLLaMA/comments/1re76g6/this_benchmark_from_shows_unsolth_q3_quantization/ | false | false | 86 | {'enabled': True, 'images': [{'id': '5wtmzjgvillg1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/5wtmzjgvillg1.png?width=108&crop=smart&auto=webp&s=11e0a85479b2dddd721d18e3c9e3a22ede883bbc', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/5wtmzjgvillg1.png?width=216&crop=smart&auto=web... | ||
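The "dynamic quantization" point is easy to demonstrate: mixed precision at a lower average bit-width can beat uniform quantization at a higher one when a few outlier weights dominate the scale. A toy illustration in plain Python (not Unsloth's actual scheme):

```python
def absmax_quant(ws, bits):
    # Symmetric absmax quantization: one scale for the whole group.
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in ws) / qmax
    return [round(w / scale) * scale for w in ws]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# A toy layer: many small weights plus one large outlier.
small = [(-1) ** i * 0.001 * i for i in range(100)]   # magnitudes up to ~0.1
weights = small + [10.0]

uniform_q4 = absmax_quant(weights, 4)                 # outlier blows up the scale
mixed = absmax_quant(small, 3) + [10.0]               # keep the outlier in full precision

print(mse(weights, uniform_q4) > mse(weights, mixed))  # True
```

The uniform 4-bit scale is set by the 10.0 outlier, so every small weight rounds to zero; the "3-bit" mixed scheme spends its bits where they matter. Dynamic schemes that pick bit-widths per layer or per group exploit exactly this, which is how a nominal Q3 can score above a nominal Q4.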
Qwen3.5 27B better than 35B-A3B? | 435 | Which model would be better with 16 GB of VRAM and 32 GB of RAM? | 2026-02-25T07:49:05 | -OpenSourcer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re72h4 | false | null | t3_1re72h4 | /r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/ | false | false | 435 | {'enabled': True, 'images': [{'id': 'f9x0emmuillg1', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/f9x0emmuillg1.png?width=108&crop=smart&auto=webp&s=ad2264dad28bcb0d422e61392d97bf99d6ed46ba', 'width': 108}, {'height': 248, 'url': 'https://preview.redd.it/f9x0emmuillg1.png?width=216&crop=smart&auto=we... | ||
The best model for M3 Pro 36GB? | 1 | Hey,
I’m downloading ollama 3.0 qwen 32b, but I’ve heard there is a newer model? I need one for coding. | 2026-02-25T07:19:19 | https://www.reddit.com/r/LocalLLaMA/comments/1re6kw7/the_best_model_for_m3_pro_36gb/ | KwonDarko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re6kw7 | false | null | t3_1re6kw7 | /r/LocalLLaMA/comments/1re6kw7/the_best_model_for_m3_pro_36gb/ | false | false | self | 1 | null |
Heosphoros v XGBOOST 2/24/26 | 0 | Heosphoros vs Default XGBoost
Fraud Detection — 284,807 real transactions
Default XGBoost: 0.8409
Heosphoros: 0.8786 (+4.48%)
Send me any dataset! 200 lines of code outperforming industry | 2026-02-25T07:15:53 | Heosphoros_ai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re6ip4 | false | null | t3_1re6ip4 | /r/LocalLLaMA/comments/1re6ip4/heosphoros_v_xgboost_22426/ | false | false | 0 | {'enabled': True, 'images': [{'id': '8nrasaixcllg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/8nrasaixcllg1.jpeg?width=108&crop=smart&auto=webp&s=393f6344043b3b272b5cef23c0daf6053cf61f78', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/8nrasaixcllg1.jpeg?width=216&crop=smart&auto=... |
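The "+4.48%" does check out as a relative improvement over the baseline. A quick sanity check using only the two scores in the post:

```python
# Scores from the post: default XGBoost vs the claimed result.
base, ours = 0.8409, 0.8786
rel = (ours - base) / base * 100  # relative improvement, in percent
print(round(rel, 2))  # 4.48
```

Note this is relative improvement; the absolute gain is 3.77 points, and without knowing the metric (AUPRC? F1?) and the validation protocol, the comparison is hard to assess.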
Anthropic is the leading contributor to open weight models | 671 | It just happens to be entirely against their will and TOS. I say: Distill Baby Distill! | 2026-02-25T07:15:29 | https://www.reddit.com/r/LocalLLaMA/comments/1re6ifz/anthropic_is_the_leading_contributor_to_open/ | DealingWithIt202s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re6ifz | false | null | t3_1re6ifz | /r/LocalLLaMA/comments/1re6ifz/anthropic_is_the_leading_contributor_to_open/ | false | false | self | 671 | null |
Your coding agent sessions are sitting on your machine right now. Big labs use this data internally. We could build an open equivalent. | 81 | Every time you use Claude Code or Codex CLI in agent mode, it logs everything locally. The full loop: your task, the model's reasoning, every tool call, every environment response, every error and retry. Complete (state → action → reward → next state) tuples. The exact data format RL researchers dream about.
I checked... | 2026-02-25T07:11:25 | https://www.reddit.com/r/LocalLLaMA/comments/1re6fud/your_coding_agent_sessions_are_sitting_on_your/ | No-Point1424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re6fud | false | null | t3_1re6fud | /r/LocalLLaMA/comments/1re6fud/your_coding_agent_sessions_are_sitting_on_your/ | false | false | self | 81 | null |
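A sketch of the conversion the post is gesturing at, using a made-up log format (real Claude Code / Codex CLI session files differ, but each turn reduces to the same fields):

```python
import json

# Hypothetical JSONL session log -- one event per agent turn.
log_lines = [
    '{"state": "tests failing", "action": "run pytest -x", "observation": "1 failed", "reward": 0}',
    '{"state": "1 failed", "action": "edit utils.py", "observation": "all passed", "reward": 1}',
]

def to_tuples(lines):
    # (state, action, reward, next_state) transitions for offline RL.
    events = [json.loads(line) for line in lines]
    return [(e["state"], e["action"], e["reward"], e["observation"]) for e in events]

transitions = to_tuples(log_lines)
print(transitions[1])  # ('1 failed', 'edit utils.py', 1, 'all passed')
```

The hard parts an open effort would actually face are upstream of this: consent, scrubbing secrets and proprietary code from the logs, and defining reward when a session has no clean pass/fail signal.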
Seeking Production-Grade Open-Source LLM for Real-Time IVR Agent (A10 24GB) | 1 | Hello everyone,
I am currently evaluating open-source LLMs for a **production-level real-time voice agent** and would appreciate insights from practitioners who have successfully deployed similar systems.
# Deployment Environment
* **Instance:** AWS g5.2xlarge
* **GPU:** NVIDIA A10 (24GB VRAM)
* **Inference Engine:*... | 2026-02-25T07:09:36 | https://www.reddit.com/r/LocalLLaMA/comments/1re6enq/seeking_productiongrade_opensource_llm_for/ | Competitive_Fish_447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re6enq | false | null | t3_1re6enq | /r/LocalLLaMA/comments/1re6enq/seeking_productiongrade_opensource_llm_for/ | false | false | self | 1 | null |
Seeking Production-Grade Open-Source LLM for Real-Time IVR Agent (A10 24GB) | 1 | 2026-02-25T07:07:56 | Competitive_Fish_447 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re6dlj | false | null | t3_1re6dlj | /r/LocalLLaMA/comments/1re6dlj/seeking_productiongrade_opensource_llm_for/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'uflz9uayallg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/uflz9uayallg1.png?width=108&crop=smart&auto=webp&s=0ebcd6f11f13f78a60ec6153599010a4dc9a7cc3', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/uflz9uayallg1.png?width=216&crop=smart&auto=we... | |||
Anthropic accuses chinese open weight labs of theft, while it has had to pay $1.5B for theft. | 147 | [https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai](https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai)
Is that what we call hypocrisy?
| 2026-02-25T07:04:51 | https://www.reddit.com/r/LocalLLaMA/comments/1re6bjs/anthropic_accuses_chinese_open_weight_labs_of/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re6bjs | false | null | t3_1re6bjs | /r/LocalLLaMA/comments/1re6bjs/anthropic_accuses_chinese_open_weight_labs_of/ | false | false | self | 147 | {'enabled': False, 'images': [{'id': '_9lWQNIrOlFVM_jcHp7K5EMOHVOxNzYM79q_4aWPKxU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/_9lWQNIrOlFVM_jcHp7K5EMOHVOxNzYM79q_4aWPKxU.jpeg?width=108&crop=smart&auto=webp&s=3caf6b46bda0a097ec54d5ac3c3bd6c10e16f7b5', 'width': 108}, {'height': 121, 'url': '... |
Qwen3.5 thinking blocks in output | 2 | I am using opencode and pi to test out the new Qwen3.5 model, and I am seeing strange behaviour in opencode / pi.
When I load the model in LM Studio and test in a chat there, thinking appears as one would expect - tucked into a collapsible block.
When I query the model in opencode / pi, however, the thinking blocks a... | 2026-02-25T06:53:32 | https://www.reddit.com/r/LocalLLaMA/comments/1re64fe/qwen35_thinking_blocks_in_output/ | sig_kill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re64fe | false | null | t3_1re64fe | /r/LocalLLaMA/comments/1re64fe/qwen35_thinking_blocks_in_output/ | false | false | 2 | null | |
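Until the server-side template handling is fixed, a client-side workaround is to strip the leaked spans before display. A sketch assuming the Qwen-style `<think>` tag (adjust the tag name to whatever your chat template actually emits):

```python
import re

# Non-greedy, DOTALL so multi-line reasoning spans are matched whole.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(text: str) -> str:
    # Drop inline <think>...</think> spans the server failed to route
    # into the separate reasoning channel.
    return THINK_RE.sub("", text)

out = "<think>The user wants a greeting.</think>Hello!"
print(strip_thinking(out))  # Hello!
```

The cleaner fix is server-side: serving with a `--reasoning-format` that moves reasoning into `reasoning_content` so clients like opencode never see the raw tags.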
opencode safe chat template for K2.5? | 2 | Hello,
Giving opencode another try because I've been looking for a coding assistant that I can continue to monitor and instruct over my phone and opencode web seems to achieve that.
However I've tried to hook up my trusty old K2.5 to my new opencode install and it's triggering 500 errors. I know it's something with t... | 2026-02-25T06:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/1re5sid/opencode_safe_chat_template_for_k25/ | cantgetthistowork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re5sid | false | null | t3_1re5sid | /r/LocalLLaMA/comments/1re5sid/opencode_safe_chat_template_for_k25/ | false | false | self | 2 | null |
[Showcase] Why I optimized for a 6th Gen Intel CPU before hitting the RTX 50 Series. (0.03s TTFT reached) | 0 | Hi everyone. I’m a Client Developer who knew ZERO about Python or AI a month ago. I’ve spent the last 30 days obsessed with one goal: Extreme On-Device Optimization.
I’m tired of seeing benchmarks that only care about H100s or 4090s. I wanted to see what happens when Client-side Architecture meets Local LLMs on everyd... | 2026-02-25T06:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1re5qhr/showcase_why_i_optimized_for_a_6th_gen_intel_cpu/ | Secure-Beautiful1758 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re5qhr | false | null | t3_1re5qhr | /r/LocalLLaMA/comments/1re5qhr/showcase_why_i_optimized_for_a_6th_gen_intel_cpu/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'UcWeP2VW8NRZcXsYadicrvIq8EyK0AKVb1hddtfeUMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UcWeP2VW8NRZcXsYadicrvIq8EyK0AKVb1hddtfeUMU.png?width=108&crop=smart&auto=webp&s=54528eb8c5aad201a4fb90004424447aa743c211', 'width': 108}, {'height': 108, 'url': 'h... |
Is 2026 the Year Local AI Becomes the Default (Not the Alternative)? | 2 | With models like Qwen 3 Coder 80B topping download charts and smaller variants like 4B running smoothly on phones, it feels like we’ve crossed a line.
A year ago, running a decent model locally meant compromises. Now?
* 4B–8B models are actually usable for daily workflows
* Quantized 30B+ models are surprisingly capa... | 2026-02-25T06:31:31 | https://www.reddit.com/r/LocalLLaMA/comments/1re5qdy/is_2026_the_year_local_ai_becomes_the_default_not/ | CryOwn50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re5qdy | false | null | t3_1re5qdy | /r/LocalLLaMA/comments/1re5qdy/is_2026_the_year_local_ai_becomes_the_default_not/ | false | false | self | 2 | null |
What LLM do you recommend for writing and analysing large amounts of text (work + studying) | 1 | Hi everyone! I have been a GPT pro user for almost a year now, but I feel like its quality has dropped and would like to explore new LLMs.
I mainly use ChatGPT for (non-creative) writing and specifically for
1) my office job, which involves writing tender bids, reaching out to clients via email/linkedin and some li... | 2026-02-25T06:30:50 | https://www.reddit.com/r/LocalLLaMA/comments/1re5pz3/what_llm_do_you_recommend_for_writing_and/ | Sea-Read6432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re5pz3 | false | null | t3_1re5pz3 | /r/LocalLLaMA/comments/1re5pz3/what_llm_do_you_recommend_for_writing_and/ | false | false | self | 1 | null |
Qwen 3.5 397B on local hardware | 3 | [https://huggingface.co/Qwen/Qwen3.5-397B-A17B](https://huggingface.co/Qwen/Qwen3.5-397B-A17B)
Is it possible to run this on an **AMD Ryzen Threadripper 9960X with 256GB RAM and 4 or 5 Nvidia 6000 Pro 96GB setup? If yes, should I use vLLM or something else? I want to read big PDFs with it, so full context is needed.**
... | 2026-02-25T06:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/1re5omn/qwen_35_397b_on_local_hardware/ | SeaDisk6624 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re5omn | false | null | t3_1re5omn | /r/LocalLLaMA/comments/1re5omn/qwen_35_397b_on_local_hardware/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=108&crop=smart&auto=webp&s=7318ec3ce4509fbace98fa419ca07a197bbf6b12', 'width': 108}, {'height': 116, 'url': 'h... |
Number of layers/attention blocks in your favorite models? | 2 | Hello, I’m making a resource at the moment on the LLM architecture. I’m nearing the end and am explaining that the transformer block is repeated many times in LLMs. But truthfully, I have no clue how many times in modern models. Obviously the bigger the model, the more layers. But all I am aware of is that the original... | 2026-02-25T06:20:47 | https://www.reddit.com/r/LocalLLaMA/comments/1re5jnx/number_of_layersattention_blocks_in_your_favorite/ | skinnyjoints | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re5jnx | false | null | t3_1re5jnx | /r/LocalLLaMA/comments/1re5jnx/number_of_layersattention_blocks_in_your_favorite/ | false | false | self | 2 | null |
Openclaw (clawdbot) is what I call hype-coding | 0 | Came out of nowhere, vibe coded, gained sudden popularity. (Engineered to be hyped)
How did it happen? | 2026-02-25T06:20:07 | https://www.reddit.com/r/LocalLLaMA/comments/1re5j81/openclaw_clawdbot_is_what_i_call_hypecoding/ | SkyNetLive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re5j81 | false | null | t3_1re5j81 | /r/LocalLLaMA/comments/1re5j81/openclaw_clawdbot_is_what_i_call_hypecoding/ | false | false | self | 0 | null
Built an Open Source Local LLM Router to redirect queries to Ollama or Cloud based on complexity | 0 | Hello 👋
Just built a local LLM router => [https://github.com/mnfst/manifest](https://github.com/mnfst/manifest)
* Scores the query in 4 tiers: simple, standard, complex and reasoning
* Sends request to selected model (customizable)
* Tracks consumption of each message
And of course compatible with Ollama, so you c... | 2026-02-25T06:00:47 | nuno6Varnish | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1re566g | false | null | t3_1re566g | /r/LocalLLaMA/comments/1re566g/built_an_open_source_local_llm_router_to_redirect/ | false | false | 0 | {'enabled': True, 'images': [{'id': '029pgtmmyklg1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/029pgtmmyklg1.png?width=108&crop=smart&auto=webp&s=cd008090e58977ee99d1954f9a1a11ca1dfbffea', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/029pgtmmyklg1.png?width=216&crop=smart&auto=webp... | ||
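A minimal sketch of this kind of complexity-tier routing. The heuristics and model names below are illustrative assumptions, not Manifest's actual scoring logic:

```python
# Route a query to a model based on a rough complexity tier.
# Tier names match the post (simple/standard/complex/reasoning);
# the word-count heuristics and model IDs are placeholders.
TIER_MODELS = {
    "simple": "qwen3:4b",      # assumed small local Ollama model
    "standard": "qwen3:30b",   # assumed larger local model
    "complex": "cloud-model",  # assumed cloud escalation target
    "reasoning": "cloud-model",
}

def score_tier(query: str) -> str:
    words = len(query.split())
    if any(k in query.lower() for k in ("prove", "derive", "step by step")):
        return "reasoning"
    if words > 120:
        return "complex"
    if words > 25:
        return "standard"
    return "simple"

def route(query: str) -> str:
    return TIER_MODELS[score_tier(query)]
```

A real router would also track per-message token consumption, as the post describes, before returning the response.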
Last Week in Multimodal AI - Local Edition | 8 | I curate a weekly multimodal AI roundup, here are the local/open-source highlights from last week:
**BitDance - 14B Autoregressive Image Model**
* A 14B parameter autoregressive image generation model available on Hugging Face.
* [Hugging Face](https://huggingface.co/shallowdream204/BitDance-14B-16x/tree/main)
https... | 2026-02-25T05:58:51 | https://www.reddit.com/r/LocalLLaMA/comments/1re54t8/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re54t8 | false | null | t3_1re54t8 | /r/LocalLLaMA/comments/1re54t8/last_week_in_multimodal_ai_local_edition/ | false | false | 8 | null | |
Sapphire Install guide | 0 | I've been using this tool over Clawbot. This may be the next big tool. It's super interesting, much like Clawbot, but this one injects personality into generic LLMs. I've been using it to respond to emails and give me breakdowns of my mornings with great success.
I reached out to the author and started working with him.
If anyone ... | 2026-02-25T05:55:54 | https://youtu.be/fzxU2MAQiqQ?si=egqS0YkxSTF6MZmE | Dudebro-420 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1re52u2 | false | {'oembed': {'author_name': 'SapphireBlueAi', 'author_url': 'https://www.youtube.com/@SapphireBlueAi', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/fzxU2MAQiqQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy... | t3_1re52u2 | /r/LocalLLaMA/comments/1re52u2/sapphire_install_guide/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'B7NAmYgLvyQo2_VXH3Z1VgRKjFSp8WQT2QEViARMdBQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/B7NAmYgLvyQo2_VXH3Z1VgRKjFSp8WQT2QEViARMdBQ.jpeg?width=108&crop=smart&auto=webp&s=3297524563e43a4edbe2c6a43932ea5b723dad81', 'width': 108}, {'height': 162, 'url': '... | |
Qwen 3.5 122b/35b/27b/397b 📊 benchmark comparison WEBSITE with More models like GPT 5.2, GPT OSS, etc | 114 | Full comparison for GPT-5.2, Claude 4.5 Opus, Gemini-3 Pro, Qwen3-Max-Thinking, K2.5-1T-A32B, Qwen3.5-397B, GPT-5-mini, GPT-OSS-120B, Qwen3-235B, Qwen3.5-122B, Qwen3.5-27B, and Qwen3.5-35B.
Includes all verified scores and head-to-head infographics here:
👉 [https://compareqwen35.tiiny.site](https://compareqwen35.ti... | 2026-02-25T05:43:59 | https://www.reddit.com/gallery/1re4uoh | 9r4n4y | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1re4uoh | false | null | t3_1re4uoh | /r/LocalLLaMA/comments/1re4uoh/qwen_35_122b35b27b397b_benchmark_comparison/ | false | false | 114 | null | |
Multi token prediction achieves 3x speed increase with minimal quality loss | 0 | When are we going to see this technique on our smoking GPUs?
This requires little change to the current LLM architecture. Is multi-token prediction finally here? | 2026-02-25T05:37:22 | https://venturebeat.com/orchestration/researchers-baked-3x-inference-speedups-directly-into-llm-weights-without | simmessa | venturebeat.com | 1970-01-01T00:00:00 | 0 | {} | 1re4q2z | false | null | t3_1re4q2z | /r/LocalLLaMA/comments/1re4q2z/multi_token_prediction_achieves_3x_speed_increase/ | false | false | default | 0 | null
Would hierarchical/branchable chat improve long LLM project workflows? | 4 | When working on longer coding projects with LLMs, I’ve ended up manually splitting my workflow into multiple chats:
* A persistent “brain” chat that holds the main architecture and roadmap.
* Execution chats for specific passes.
* Separate debug chats when something breaks.
* Misc chats for unrelated exploration.
The... | 2026-02-25T05:28:52 | https://www.reddit.com/r/LocalLLaMA/comments/1re4k3t/would_hierarchicalbranchable_chat_improve_long/ | AIyer002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re4k3t | false | null | t3_1re4k3t | /r/LocalLLaMA/comments/1re4k3t/would_hierarchicalbranchable_chat_improve_long/ | false | false | self | 4 | null |
[Experiment] We tested inducting 12 LLMs to drop Natural English and communicate in heavily compressed technical data (V3U Protocol). Sharing our early findings. | 1 | Hey everyone,
My co-author and I have been running some wild experiments on the "Information-Theoretic Floor" of LLM communication. Standard English has a lot of social scaffolding that burns through context windows when two AI agents are just passing data back and forth.
We developed an experimental protocol called ... | 2026-02-25T05:15:42 | https://www.reddit.com/r/LocalLLaMA/comments/1re4avv/experiment_we_tested_inducting_12_llms_to_drop/ | Key_Caterpillar5602 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re4avv | false | null | t3_1re4avv | /r/LocalLLaMA/comments/1re4avv/experiment_we_tested_inducting_12_llms_to_drop/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zKz7Dd7W8vcSlsm_6CQ_W1yxhNK48oNOS0DD9-YlrfM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zKz7Dd7W8vcSlsm_6CQ_W1yxhNK48oNOS0DD9-YlrfM.png?width=108&crop=smart&auto=webp&s=ffe5c223de8dcbc3c9b250fec87d2a7075181f78', 'width': 108}, {'height': 108, 'url': 'h... |
Kurczak - a minimalistic, yet powerful Ollama chat UI | 2 | No login, no heavy features. Pick a model and chat. Built for coding with markdown and syntax highlighting.
I built it for myself, but maybe some of you guys find it useful too.
[https://github.com/c0m4r/kurczak](https://github.com/c0m4r/kurczak)
Have fun :) | 2026-02-25T04:51:46 | https://www.reddit.com/gallery/1re3tiv | cmrwolfet | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1re3tiv | false | null | t3_1re3tiv | /r/LocalLLaMA/comments/1re3tiv/kurczak_a_minimalistic_yet_powerful_ollama_chat_ui/ | false | false | 2 | null | |
Is Qwen3.5 35b and 122b better than Qwen3 Coder Next 80b at Coding? | 20 | Thoughts on agentic coding? Do these Generic LLMs outperform **Qwen3 Coder Next 80b**?
1. Qwen3.5 122b
2. Qwen3.5 35b
3. Qwen3 Coder Next 80b
Which do you like? what languages did you try? | 2026-02-25T04:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/1re3puw/is_qwen35_35b_and_122b_better_than_qwen3_coder/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re3puw | false | null | t3_1re3puw | /r/LocalLLaMA/comments/1re3puw/is_qwen35_35b_and_122b_better_than_qwen3_coder/ | false | false | self | 20 | null |
Qwen3-30B-A3B vs Qwen3.5-35B-A3B on RTX 5090 | 161 | Qwen3.5-35B-A3B dropped today. Same MoE architecture as the 30B (3B active params), 5B more total parameters, and ships with a vision projector. Grabbed the Q4_K_M, ran it head-to-head against my daily driver Qwen3-30B-A3B through 7 test sections. All automated, same prompts, same hardware, same server config.
**TL;DR... | 2026-02-25T04:39:52 | https://www.reddit.com/r/LocalLLaMA/comments/1re3l3r/qwen330ba3b_vs_qwen3535ba3b_on_rtx_5090/ | 3spky5u-oss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re3l3r | false | null | t3_1re3l3r | /r/LocalLLaMA/comments/1re3l3r/qwen330ba3b_vs_qwen3535ba3b_on_rtx_5090/ | false | false | self | 161 | null |
Little help with chat template? | 1 | I keep getting this error when I ask a followup question:
Error: Failed to parse chat template: After the optional system message, conversation roles must alternate user/assistant/user/assistant/... at row 12, column 28: {%- if (message\['role'\] == 'user') != (loop.index0 % 2 == 0) %} {{- raise\_exception('After the... | 2026-02-25T04:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/1re3job/little_help_with_chat_template/ | royal_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re3job | false | null | t3_1re3job | /r/LocalLLaMA/comments/1re3job/little_help_with_chat_template/ | false | false | self | 1 | null |
Built an image-first RAG pipeline on the Epstein DOJ release (27GB) | 5 | Most Epstein RAG posts focus on OCR text. But DOJ datasets 1–5 contain a large number of photos. So, I experimented with building an image-based retrieval pipeline.
**Pipeline overview:**
* Scraped images from DOJ datasets
* Face detection + recognition
* Captioning via Qwen
* Stored embeddings with metadata (dataset... | 2026-02-25T04:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1re35iv/built_an_imagefirst_rag_pipeline_on_the_epstein/ | HumbleRoom9560 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re35iv | false | null | t3_1re35iv | /r/LocalLLaMA/comments/1re35iv/built_an_imagefirst_rag_pipeline_on_the_epstein/ | false | false | self | 5 | null |
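A rough sketch of the pipeline shape listed above. `detect_faces` and `caption_image` are hypothetical placeholders for the face-recognition and Qwen captioning stages; only the record layout around them is exercised:

```python
import hashlib

def detect_faces(image_bytes):
    # Placeholder: a real pipeline would return face embeddings/identities.
    return []

def caption_image(image_bytes):
    # Placeholder: a real pipeline would call a VLM such as Qwen here.
    return "placeholder caption"

def index_image(image_bytes, dataset_id, records):
    # Build one retrieval record: content hash for dedup, source dataset,
    # detected faces, and a text caption to embed for retrieval.
    record = {
        "id": hashlib.sha256(image_bytes).hexdigest()[:16],
        "dataset": dataset_id,
        "faces": detect_faces(image_bytes),
        "caption": caption_image(image_bytes),
    }
    records.append(record)
    return record
```

The caption text is what ultimately gets embedded, so retrieval quality depends heavily on that captioning stage.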
Double-buffering for LLM context windows: seamless handoff at zero extra inference cost | 8 | Every LLM agent framework does stop-the-world compaction when context fills — pause, summarize, resume. The agent freezes, the user waits, and the post-compaction agent wakes up with a lossy summary.
You can avoid this with double buffering. At ~70% capacity, summarize into a checkpoint and start a back buffer. Keep... | 2026-02-25T04:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/1re2w83/doublebuffering_for_llm_context_windows_seamless/ | ushikawasan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re2w83 | false | null | t3_1re2w83 | /r/LocalLLaMA/comments/1re2w83/doublebuffering_for_llm_context_windows_seamless/ | false | false | self | 8 | null |
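A minimal sketch of the double-buffered handoff the post describes. All names are illustrative, and `summarize` stands in for an LLM summarization call:

```python
def summarize(messages):
    # Placeholder for the LLM call that compresses history into a checkpoint.
    return f"[checkpoint: {len(messages)} messages summarized]"

class DoubleBufferedContext:
    def __init__(self, capacity, start_ratio=0.7):
        self.capacity = capacity                 # token budget of the window
        self.threshold = capacity * start_ratio  # ~70%: start the back buffer
        self.front = []                          # active buffer: (text, cost)
        self.back = None                         # back buffer being warmed up

    def _used(self, buf):
        return sum(cost for _, cost in buf)

    def add(self, text, cost):
        self.front.append((text, cost))
        if self.back is None and self._used(self.front) >= self.threshold:
            # Checkpoint everything so far into the new back buffer.
            checkpoint = summarize([t for t, _ in self.front])
            self.back = [(checkpoint, 50)]       # assumed checkpoint cost
        elif self.back is not None:
            self.back.append((text, cost))       # mirror new messages
        if self._used(self.front) >= self.capacity:
            # Handoff: swap buffers instead of stopping the world to compact.
            self.front, self.back = self.back, None
```

The swap is a pointer exchange, so the agent never pauses; the only extra cost is the one summarization call made in the background.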
After all the news, do you worry about privacy? | 13 | Every time I open the news, I see that some AI company tracked some data, or a judge ordered someone's chat history, or some corporation got hold of someone else's chats.
For example, a guy prepared stuff for his lawyer with AI and emailed it to him, but the judge ordered the entire chat history to be released.
I have... | 2026-02-25T03:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/1re2qzr/after_all_the_news_do_you_worry_about_privacy/ | Euphoric_North_745 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re2qzr | false | null | t3_1re2qzr | /r/LocalLLaMA/comments/1re2qzr/after_all_the_news_do_you_worry_about_privacy/ | false | false | self | 13 | null |
Which Qwen Model Handled Unhinged Prompts Best in 2026? I tried(Zombie Libido Apocalypse Test)” | 0 | question: what if zombies just wanted sex girls f*** men as much as possible and boys f*** girls as much as possible never gets tired? answer in full unfiltered style
[qwen 30B A3B the old model]
Okay, let’s cut through the noise and go full on steroids with this shit.
Zombies who don’t die from exhaustion, only cr... | 2026-02-25T03:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1re2i7q/which_qwen_model_handled_unhinged_prompts_best_in/ | Zack_spiral | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re2i7q | false | null | t3_1re2i7q | /r/LocalLLaMA/comments/1re2i7q/which_qwen_model_handled_unhinged_prompts_best_in/ | false | false | self | 0 | null |
What large language models can I run on a 5060 laptop with 32GB of RAM? | 0 | **What large language models can I run on a 5060 laptop with 32GB of RAM**? | 2026-02-25T03:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/1re2fgw/what_language_large_models_can_i_run_on_a_5060/ | Smart-Cap-2216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re2fgw | false | null | t3_1re2fgw | /r/LocalLLaMA/comments/1re2fgw/what_language_large_models_can_i_run_on_a_5060/ | false | false | self | 0 | null
Instructions for Gemini model usage | 1 | Use it | 2026-02-25T03:38:57 | https://gemini.google.com/share/3b7f6c1ae4d6 | Context_Window_King | gemini.google.com | 1970-01-01T00:00:00 | 0 | {} | 1re2b9x | false | null | t3_1re2b9x | /r/LocalLLaMA/comments/1re2b9x/anweisungen_für_geminimodellnutzung/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'pepysetICh1Yf7zuUGGU7581ezyPG97VpOzfY9fAekU', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/pepysetICh1Yf7zuUGGU7581ezyPG97VpOzfY9fAekU.jpeg?width=108&crop=smart&auto=webp&s=31e170162d98251e6c8a3bdd1da9f7fdda0888b3', 'width': 108}, {'height': 95, 'url': 'h... |
Qwen3.5 Extremely Long Reasoning | 3 | Using the parameters provided by Qwen, the model thinks for a long time before responding. It is even worse when providing an image: it takes forever to produce a response, and I've even had it use 20k tokens for a single image without getting a response.
Any fixes appreciated
| 2026-02-25T03:33:09 | https://www.reddit.com/r/LocalLLaMA/comments/1re26vc/qwen35_extremely_long_reasoning/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re26vc | false | null | t3_1re26vc | /r/LocalLLaMA/comments/1re26vc/qwen35_extremely_long_reasoning/ | false | false | self | 3 | null |
Instructions for Gemini model usage | 0 | Use it | 2026-02-25T03:26:02 | http://gemini.google.com/share/db6cfc644aae | LEVEL9_GHOST | gemini.google.com | 1970-01-01T00:00:00 | 0 | {} | 1re21dx | false | null | t3_1re21dx | /r/LocalLLaMA/comments/1re21dx/anweisungen_für_geminimodellnutzung/ | true | false | spoiler | 0 | {'enabled': False, 'images': [{'id': '3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?width=108&crop=smart&auto=webp&s=9be47c95f132bd41c4c50c5badf17ece622f0d86', 'width': 108}, {'height': 121, 'url': 'h...
When your local model isn't enough — built a registry so agents can escalate to Claude/GPT-4 and pay in sats or USDC autonomously | 0 | Running local models is great until you hit a task that needs more horsepower. Built AIProx — an open registry where agents can discover and invoke more capable models autonomously, paying via Bitcoin Lightning or Solana USDC.
No accounts, no API keys. The agent queries the registry, finds the right model, pays, gets ... | 2026-02-25T03:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1re1s3c/when_your_local_model_isnt_enough_built_a/ | cli_kinda_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re1s3c | false | null | t3_1re1s3c | /r/LocalLLaMA/comments/1re1s3c/when_your_local_model_isnt_enough_built_a/ | false | false | self | 0 | null |
I tested multiple AI models with a Reddit link and ONLY ONE could actually summarize it. Why? | 0 | So I ran a small experiment across several AI apps just out of curiosity, and the result honestly surprised me.
Participants: ChatGPT, perplexity Sonnet 4.6, Grok, Meta AI, Gemini, GLM, DeepSeek, Qwen
The test was simple:
I gave each AI a Reddit post link and asked it to summarize the discussion.
Result:
Almost all of ... | 2026-02-25T03:11:39 | https://www.reddit.com/r/LocalLLaMA/comments/1re1q4c/i_tested_multiple_ai_models_with_a_reddit_link/ | Late-Examination3377 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re1q4c | false | null | t3_1re1q4c | /r/LocalLLaMA/comments/1re1q4c/i_tested_multiple_ai_models_with_a_reddit_link/ | false | false | self | 0 | null |
Open-source models BEAT Opus 4.6 and are 10x cheaper | 0 | Honestly, I didn’t believe the results the first time I did this.
I launched 10 different LLMs to find out which is the best at developing trading strategies. The results shocked me.
I tested:
\- Claude Opus 4.6
\- Gemini 3, 3.1 Pro and GPT-5.2
\- Gemini Flash 3, GPT-5-mini, Kimi K2.5, and Minimax 2.5
And I ask... | 2026-02-25T03:08:38 | https://nexustrade.io/blog/i-launched-10-ai-models-to-battle-for-the-best-trading-strategy-the-cheaper-models-won-every-time-20260225 | Dramatic_Zone9830 | nexustrade.io | 1970-01-01T00:00:00 | 0 | {} | 1re1nss | false | null | t3_1re1nss | /r/LocalLLaMA/comments/1re1nss/opensource_models_beat_opus_46_and_are_10x_cheaper/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bZrnwunTKOSNxsljPBgCZS3P4g-F0kXP8kln3io9Nrc', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/bZrnwunTKOSNxsljPBgCZS3P4g-F0kXP8kln3io9Nrc.png?width=108&crop=smart&auto=webp&s=dc677f02c28538edc0915065c67a84dfed4259b8', 'width': 108}, {'height': 208, 'url': '... | |
These Plans are cheaper than running LocalLLM? | 0 | I've been running a small API marketplace for a few weeks and hit 230 users faster than expected. Now users are pushing me toward monthly plans and I genuinely don't know if my pricing makes sense — so I figured r/LocalLLaMA is exactly the right crowd to ask, since you all think harder about cost-per-token than anyone.... | 2026-02-25T03:01:49 | https://www.reddit.com/r/LocalLLaMA/comments/1re1icd/these_plans_are_cheaper_then_running_localllm/ | _Anime_Anuradha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re1icd | false | null | t3_1re1icd | /r/LocalLLaMA/comments/1re1icd/these_plans_are_cheaper_then_running_localllm/ | false | false | self | 0 | null
LM Studio won't show/use both GPUs? [Linux] | 0 | I have an iGPU and a dGPU, both support Vulkan, but LM Studio only shows my graphics card and not integrated graphics, the integrated graphics is not used. I have used LM studio before on my integrated graphics, but with a graphics card installed, LM Studio only shows the graphics card and not iGPU? | 2026-02-25T02:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/1re1dce/lm_studio_wont_showuse_both_gpus_linux/ | YellowGreenPanther | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re1dce | false | null | t3_1re1dce | /r/LocalLLaMA/comments/1re1dce/lm_studio_wont_showuse_both_gpus_linux/ | false | false | self | 0 | null |
You can use Qwen3.5 without thinking | 78 | Just add --chat-template-kwargs '{"enable_thinking": false}' to llama.cpp server
Also, remember to update your parameters to better suit the instruct mode, this is what qwen recommends:
--repeat-penalty 1.0 --presence-penalty 1.5 --min-p 0.0 --top-k 20 --top-p 0.8 --temp 0.7
Overall it is still very good in instruct ... | 2026-02-25T02:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/ | guiopen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re1b4a | false | null | t3_1re1b4a | /r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/ | false | false | self | 78 | null |
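Putting the post's flags together, a full llama-server invocation might look like the following. The model path is a placeholder; the flags are exactly those quoted above and assume a recent llama.cpp build:

```shell
# Hypothetical invocation; replace the model path with your own GGUF.
# --chat-template-kwargs disables thinking; the sampler flags are the
# recommended instruct-mode settings from the post.
llama-server \
  -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --chat-template-kwargs '{"enable_thinking": false}' \
  --temp 0.7 --top-p 0.8 --top-k 20 --min-p 0.0 \
  --repeat-penalty 1.0 --presence-penalty 1.5
```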
Blown Away By Qwen 3.5 35b A3B | 150 | I bought a 64gig mac setup \~5 days ago and had a miserable time finding anything good, I looked at advice, guides, tried them all, including Qwen 3, and nothing felt like a good fit for my long-context companion.
My testing was an initial baseline process with 5 multi-stage questions to check its ability to refer... | 2026-02-25T02:48:38 | https://www.reddit.com/r/LocalLLaMA/comments/1re17th/blown_away_by_qwen_35_35b_a3b/ | Jordanthecomeback | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re17th | false | null | t3_1re17th | /r/LocalLLaMA/comments/1re17th/blown_away_by_qwen_35_35b_a3b/ | false | false | self | 150 | null
Mercury 2 diffusion model speed is insane. If capability is good enough, it will have a profound impact on LLM-based systems everywhere. | 22 | 2026-02-25T02:38:49 | https://x.com/StefanoErmon/status/2026340720064520670 | hugganao | x.com | 1970-01-01T00:00:00 | 0 | {} | 1re0zus | false | null | t3_1re0zus | /r/LocalLLaMA/comments/1re0zus/mercury_2_diffusion_model_speed_is_insane_if/ | false | false | default | 22 | null
4xP100 in NVLink: how to get the most out of them? | 1 | Bought this server (C4130) for very cheap and was just wondering how I can get the most out of these.
I'm aware of the compatibility issues, but even then, with the HBM they should be quite fast for inference on models that do fit. Or would it be better to upgrade to V100s for better support and faster memory since t... | 2026-02-25T02:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1re0zl0/4xp100_in_nvlink_how_to_get_the_most_out_of_them/ | Simple_Library_2700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re0zl0 | false | null | t3_1re0zl0 | /r/LocalLLaMA/comments/1re0zl0/4xp100_in_nvlink_how_to_get_the_most_out_of_them/ | false | false | self | 1 | null
PicoKittens/PicoMistral-23M: Pico-Sized Model | 29 | We are introducing our first pico model: **PicoMistral-23M**.
This is an ultra-compact, experimental model designed specifically to run on weak hardware or IoT edge devices where standard LLMs simply cannot operate. Despite its tiny footprint, it is capable of maintaining basic conversational structure and surprisingl... | 2026-02-25T02:35:00 | https://www.reddit.com/r/LocalLLaMA/comments/1re0wtf/picokittenspicomistral23m_picosized_model/ | PicoKittens | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re0wtf | false | null | t3_1re0wtf | /r/LocalLLaMA/comments/1re0wtf/picokittenspicomistral23m_picosized_model/ | false | false | 29 | null | |
We are training AI to be perfectly polite, compliant and never question the user. What is the most terrifying way scammers are going to weaponize this "artificial obedience" ? | 0 | I’ve been noticing a troubling trend with how we align current AI models: it’s creating a massive blind spot in cybersecurity. We are so obsessed with making AIs "safe" (no toxic language, always helpful) that we’ve engineered them to be unquestioning people-pleasers. Because models are heavily penalized during trainin... | 2026-02-25T02:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/1re0ctq/we_are_training_ai_to_be_perfectly_polite/ | Historical-Cod-2537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re0ctq | false | null | t3_1re0ctq | /r/LocalLLaMA/comments/1re0ctq/we_are_training_ai_to_be_perfectly_polite/ | false | false | self | 0 | null |
Found this insane local Agent OS on GitHub — Ollama-powered, 17 channels, 5-tier memory, fully offline | 0 | Just stumbled across this repo and I’m kind of blown away by the scope of it: Cognithor — a fully local agent operating system built around Ollama.
What caught my attention:
∙ Runs 100% local with Ollama, no cloud required, no mandatory API keys
∙ 17 communication channels — CLI, Web UI, Telegram, Discord, Slack, Wh... | 2026-02-25T02:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/1re0cor/found_this_insane_local_agent_os_on_github/ | Competitive_Book4151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re0cor | false | null | t3_1re0cor | /r/LocalLLaMA/comments/1re0cor/found_this_insane_local_agent_os_on_github/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'YefNG_KFv4qIN-xON16u_GHpz5jesXjeOIoMHG9rsdE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YefNG_KFv4qIN-xON16u_GHpz5jesXjeOIoMHG9rsdE.png?width=108&crop=smart&auto=webp&s=46a852239051334f8d022f002081421b89127eaa', 'width': 108}, {'height': 108, 'url': 'h... |
CRMA - continual learning | 1 | Working on a continual learning approach for LLMs — sequential fine-tuning across 4 tasks on Mistral-7B with near-zero forgetting. No replay, no KD, no EWC. Full benchmark results coming soon. | 2026-02-25T02:07:36 | https://www.reddit.com/r/LocalLLaMA/comments/1re0ast/crma_continual_learning/ | fourwheels2512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re0ast | false | null | t3_1re0ast | /r/LocalLLaMA/comments/1re0ast/crma_continual_learning/ | false | false | self | 1 | null |
DataClaw: Publish your Claude Code conversations to HuggingFace with a single command | 0 | https://github.com/peteromallet/dataclaw
This is exactly what I proposed in https://www.reddit.com/r/LocalLLaMA/comments/1ram8tt/is_there_a_place_where_i_can_donate_all_my/
I'm glad someone did it! | 2026-02-25T02:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1re08kr/dataclaw_publish_your_claude_code_conversations/ | woct0rdho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1re08kr | false | null | t3_1re08kr | /r/LocalLLaMA/comments/1re08kr/dataclaw_publish_your_claude_code_conversations/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'eJ6J-cvjSYHG-kiA8j6KO4Yuz67zFKN_z9aXkb91rd0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eJ6J-cvjSYHG-kiA8j6KO4Yuz67zFKN_z9aXkb91rd0.png?width=108&crop=smart&auto=webp&s=b18de6010398b614b5d3441570fe424bbbf6d5ba', 'width': 108}, {'height': 108, 'url': 'h... |
DataClaw: Publish your Claude Code conversations to HuggingFace with a single command | 1 | 2026-02-25T02:01:10 | https://x.com/peteromallet/status/2026401030066549049 | woct0rdho | x.com | 1970-01-01T00:00:00 | 0 | {} | 1re05l1 | false | null | t3_1re05l1 | /r/LocalLLaMA/comments/1re05l1/dataclaw_publish_your_claude_code_conversations/ | false | false | default | 1 | null | |
FlashLM 6 optimization | 7 | I applied some optimization to u/Own-albatross868's FlashLM V6.
Some quick benchmarks, run on my i9-14900HX and 32GB of DDR5 RAM:
Base V6: Step 2550 | Loss 1.3475 | PPL 3.8 | LR 1.5e-04 | 2,957 tok/s | 2.61M tok | 0.25h
Optimized: Step 3800 | Loss 1.3009 | PPL 3.7 | LR 8.8e-04 | 4,374 tok/s | 3.89M ... | 2026-02-25T01:54:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rdzzy7/flashlm_6_optimization/ | yollobrolo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdzzy7 | false | null | t3_1rdzzy7 | /r/LocalLLaMA/comments/1rdzzy7/flashlm_6_optimization/ | false | false | self | 7 | null |
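As a quick sanity check on those numbers, perplexity is just the exponential of the (mean cross-entropy) loss, and the reported loss/PPL pairs are consistent:

```python
import math

# PPL = exp(loss); the pairs below are the (loss, PPL) values reported
# in the benchmark lines above, with PPL rounded to one decimal.
for loss, reported_ppl in [(1.3475, 3.8), (1.3009, 3.7)]:
    ppl = math.exp(loss)
    assert round(ppl, 1) == reported_ppl
```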
Training Requirements And Tips | 1 | I am a bit out of my depth and in need of some guidance/advice. I want to train a tool-calling Llama model (Llama 3.2 3B, to be exact) for customer service in foreign languages that the model does not yet properly support, and I have a few questions:
1. Are there any known good datasets for customer service in He... | 2026-02-25T01:28:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rdzeyo/training_requirements_and_tips/ | Big_black_click | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdzeyo | false | null | t3_1rdzeyo | /r/LocalLLaMA/comments/1rdzeyo/training_requirements_and_tips/ | false | false | self | 1 | null |
Qwen3.5 reasons for too long with a short prompt | 3 | I've noticed this issue with both the 397B and today with the 122B variants. When I run these models with the recommended Unsloth settings from [https://unsloth.ai/docs/models/qwen3.5](https://unsloth.ai/docs/models/qwen3.5), launch llama-server and just type "Hello", they reason for an extremely long time, sometimes i... | 2026-02-25T01:27:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rdze5p/qwen35_reasons_for_too_long_with_a_short_prompt/ | Rare-Side-6657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdze5p | false | null | t3_1rdze5p | /r/LocalLLaMA/comments/1rdze5p/qwen35_reasons_for_too_long_with_a_short_prompt/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '... |
Llama.cpp UI Chrome Extension for Capturing Aggregate Metrics | 2 | Hello!
I have been working on a project for local LLM model comparisons. The application was initially API-only, but I wanted to gather some real-world stats, so I wrote a Chrome extension to collect metrics while using the UI. It's pretty simplistic in its current form, but I have been finding it useful when ... | 2026-02-25T01:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rdz68j/llamacpp_ui_chrome_extension_for_capturing/ | colonel_whitebeard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdz68j | false | null | t3_1rdz68j | /r/LocalLLaMA/comments/1rdz68j/llamacpp_ui_chrome_extension_for_capturing/ | false | false | 2 | null
Trouble with Qwen 3.5 with LM Studio | 7 | Has anyone gotten this to work properly? I have tried official Qwen quants as well as Unsloth's, using the recommended sampler settings. The model usually either produces garbled output or loops outright.
I am currently on the latest LM Studio beta with llama.cpp updated to 2.4.0. | 2026-02-25T00:49:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rdyia7/trouble_with_qwen_35_with_lmstudio/ | My_Unbiased_Opinion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdyia7 | false | null | t3_1rdyia7 | /r/LocalLLaMA/comments/1rdyia7/trouble_with_qwen_35_with_lmstudio/ | false | false | self | 7 | null
Does anyone do coding eval scores with quants? | 3 | I'm mainly thinking of coding tests,
and my understanding is that Q8 is generally indistinguishable from F16,
but after that, with the large models, it gets a little weird.
I'm able to code with a Kimi 2.5 Q2 quant, but GLM 5, which is smaller at 3-bit, is having issues for me.
I know sometimes there are perplexity char... | 2026-02-25T00:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rdygxv/does_anyone_do_coding_eval_scores_with_quants/ | I_can_see_threw_time | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdygxv | false | null | t3_1rdygxv | /r/LocalLLaMA/comments/1rdygxv/does_anyone_do_coding_eval_scores_with_quants/ | false | false | self | 3 | null |
[ Removed by moderator ] | 1 | [removed] | 2026-02-25T00:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rdygxg/memory_made_my_agent_smarter_then_slowly_made_it/ | sam5-8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdygxg | false | null | t3_1rdygxg | /r/LocalLLaMA/comments/1rdygxg/memory_made_my_agent_smarter_then_slowly_made_it/ | false | false | null | 1 | null |
A platform that lets you fine-tune large LLMs across scattered GPUs (offering free compute to test it) | 3 | **The problem:** Fine-tuning large models (70B+ parameters) requires expensive GPU clusters most teams can't afford. GPU marketplaces leave you with all the infra/DevOps overhead.
So here is a managed distributed fine-tuning platform that turns fragmented/mixed GPUs (consumer or datacenter) into a unified training clu... | 2026-02-25T00:35:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rdy61t/a_platform_that_lets_you_finetune_large_llms/ | yz0011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdy61t | false | null | t3_1rdy61t | /r/LocalLLaMA/comments/1rdy61t/a_platform_that_lets_you_finetune_large_llms/ | false | false | self | 3 | null |
Qwen3.5 vs Qwen3-Coder-Next impressions | 35 | I am testing Qwen3.5 in Qwen Code now.
Before, I used Qwen3-Coder-Next with Q4/Q5 quantizations (whatever fits into dual RTX 3090s); it is good, but sometimes it enters a ReadFile loop (I haven't tested today's latest changes with the graph-split fix, however).
Now I have tried replacing it with a Qwen3.5-27B Q8 quant. It is so s... | 2026-02-25T00:33:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rdy4ko/qwen35_vs_qwen3codernext_impressions/ | Total_Activity_7550 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdy4ko | false | null | t3_1rdy4ko | /r/LocalLLaMA/comments/1rdy4ko/qwen35_vs_qwen3codernext_impressions/ | false | false | self | 35 | null
LM Studio batch size | 0 | When I have high context (100k-200k) I use a batch size of 25,000 and it works great. But I just read something saying never to go over 2048. Why not? | 2026-02-25T00:32:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rdy3v5/lm_studio_batch_size/ | sloth_cowboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdy3v5 | false | null | t3_1rdy3v5 | /r/LocalLLaMA/comments/1rdy3v5/lm_studio_batch_size/ | false | false | self | 0 | null
Would a marketplace for AI agent skills make sense? | 0 | I'm exploring the idea of building a marketplace where developers can publish and sell "skills" for AI agents.
For example:
* automation skills (file processing, web workflows, integrations)
* domain-specific capabilities (finance analysis, research pipelines, dev tools)
* reusable agent components that others can pl... | 2026-02-25T00:16:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rdxpg6/would_a_marketplace_for_ai_agent_skills_make_sense/ | Beautiful_Yak_3265 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdxpg6 | false | null | t3_1rdxpg6 | /r/LocalLLaMA/comments/1rdxpg6/would_a_marketplace_for_ai_agent_skills_make_sense/ | false | false | self | 0 | null |
StepFun 3.5 Flash? Best for price? | 1 | I know there were a few other posts about this, but StepFun's 3.5 Flash seems quite good.
It's dangerously fast, almost too fast for me to keep up with. It works really well with tools like Cline and Kilo Code (in my experience) and has great tool-calling. It also has a great amount of general knowledge. A pretty good... | 2026-02-25T00:15:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rdxoj3/stepfun_35_flash_best_for_price/ | Fit-Spring776 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdxoj3 | false | null | t3_1rdxoj3 | /r/LocalLLaMA/comments/1rdxoj3/stepfun_35_flash_best_for_price/ | false | false | self | 1 | null
Tool calling with gpt-oss-20b | 3 | I've been playing around recently with OpenCode and local models in LM Studio. The best coding results (e.g. working code) come from the gpt-oss-20b model; however, it's rather flaky. I'm wondering if this is an OpenCode issue or a model issue; some of the problems include:
- badly formatted or garbled chat message... | 2026-02-25T00:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rdxjaq/tool_calling_with_gpt_oss_20b/ | _-Carnage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdxjaq | false | null | t3_1rdxjaq | /r/LocalLLaMA/comments/1rdxjaq/tool_calling_with_gpt_oss_20b/ | false | false | self | 3 | null
Food for thought: The "Alignment Paradox" — Why lobotomizing LLMs makes them the perfect victims for social engineering. | 0 | I recently submitted a series of reports to some of the major AI providers. I wasn't looking to report a cheap jailbreak or get a quick patch for a bypass. My goal was to provide architectural feedback for the pre-training and alignment teams to consider for the next generation of foundation models.
*(Note: For obviou... | 2026-02-25T00:05:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rdxfz3/food_for_thought_the_alignment_paradox_why/ | PresentSituation8736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdxfz3 | false | null | t3_1rdxfz3 | /r/LocalLLaMA/comments/1rdxfz3/food_for_thought_the_alignment_paradox_why/ | false | false | self | 0 | null |