| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I got tired of RAG failing on code architecture, so I wrote a Go tool that turns repos into an AST text-map for Claude/GPT | 0 | Hey everyone,
I've been trying to use agentic coding for larger projects, but standard RAG keeps failing me. Vectorizing source code into arbitrary chunks and doing a similarity search just loses the structural context. It's a black box, and sending raw code directly burns through tokens way too fast.
I wanted so... | 2026-02-20T15:07:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r9xsmg/i_got_tired_of_rag_failing_on_code_architecture/ | North-Project2806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9xsmg | false | null | t3_1r9xsmg | /r/LocalLLaMA/comments/1r9xsmg/i_got_tired_of_rag_failing_on_code_architecture/ | false | false | self | 0 | null |
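The post is cut off, but the core idea (feed the model a structural map of the repo instead of raw chunks) is easy to illustrate. A minimal sketch using Python's stdlib `ast` module; the original tool is written in Go and its actual output format is unknown, so everything below is illustrative:

```python
import ast
import textwrap

def ast_text_map(source: str, module_name: str) -> str:
    """Emit a compact structural map of a module: classes and function
    signatures with line numbers, but no function bodies."""
    tree = ast.parse(source)
    lines = [f"module {module_name}"]
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"  def {node.name}({args})  # line {node.lineno}")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"  class {node.name}  # line {node.lineno}")
    return "\n".join(lines)

src = textwrap.dedent("""
    class Store:
        def get(self, key):
            return self.data[key]

    def main():
        pass
""")
print(ast_text_map(src, "store"))
```

A map like this costs a tiny fraction of the tokens of the raw source while preserving the structure that chunk-based similarity search tends to lose.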
I found something weird in GPT-2's residual stream, and would love for someone to check it out. | 2 | So, full disclaimer: I have zero ML background. I get how that sounds. But I've been poking at GPT-2 all week and I think I found something, but I honestly don't know.
Short story: the period in a sentence like "The temperature was 98." is ambiguous. Is it a decimal or a sentence ender? GPT-2 thinks decimal, 88% confid... | 2026-02-20T14:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r9xhls/i_found_something_weird_in_gpt2s_residual_stream/ | denimanddahlias | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9xhls | false | null | t3_1r9xhls | /r/LocalLLaMA/comments/1r9xhls/i_found_something_weird_in_gpt2s_residual_stream/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'sej5DuSgCsq0i-haUjtQ5jPuSijmXMO9d2JxpYo4Fd0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sej5DuSgCsq0i-haUjtQ5jPuSijmXMO9d2JxpYo4Fd0.png?width=108&crop=smart&auto=webp&s=bcffa0523ec1aae02259bf90eae155af09b8661f', 'width': 108}, {'height': 108, 'url': 'h... |
Show r/LocalLLaMA: DocParse Arena – Build your own private VLM leaderboard for specific tasks | 2 | Hi everyone,
I’ve found that general benchmarks like [**ocrarena.ai**](http://ocrarena.ai) are great for global VLM rankings, but they don't always help when I need to know which model parses *my* specific, often sensitive, document formats (like custom invoices, Korean business cards, or complex resumes).
To solve t... | 2026-02-20T14:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r9xg9p/show_rlocalllama_docparse_arena_build_your_own/ | Available-Message509 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9xg9p | false | null | t3_1r9xg9p | /r/LocalLLaMA/comments/1r9xg9p/show_rlocalllama_docparse_arena_build_your_own/ | false | false | self | 2 | null |
I evolved my Latent Reasoning Model's code, critiques are welcome | 1 | [This](https://github.com/MatthewLacerda2/TinyRefinementModel/blob/main/train_local.py) is being trained on an RTX 2060 with 6 GB of VRAM. OOM has been a bitch and I rarely get to train with 512 dimensions. My last run was last night, 5h total, with 384 dim, but with:
MAX\_STEPS\_LIMIT = 8
ACCUMULATION\_STEPS = 64
SCRATCH\... | 2026-02-20T14:42:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r9x5l8/i_evolved_my_latent_reasoning_models_code/ | Specific-Welder3120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9x5l8 | false | null | t3_1r9x5l8 | /r/LocalLLaMA/comments/1r9x5l8/i_evolved_my_latent_reasoning_models_code/ | false | false | 1 | null | |
Recommend pdf translator that handles tables well. | 2 | Title. I often need to translate pdfs with lots of tables. All solutions i tried either skip the tables or produce unaligned / hard to read results. | 2026-02-20T14:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r9x3pu/recommend_pdf_translator_that_handles_tables_well/ | MorePeppers9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9x3pu | false | null | t3_1r9x3pu | /r/LocalLLaMA/comments/1r9x3pu/recommend_pdf_translator_that_handles_tables_well/ | false | false | self | 2 | null |
in case you want to use a local llm to search through a corpus of PDF files | 6 | I made this app to be able to visually search through my millions of PDF images of my cats. Say that I want to only search for my orange cat on a beach, I can search "orange cat beach", and the app will return all the PDFs that most closely resemble that.
And, a new feature in v1.1, you can now upload a photo and do ... | 2026-02-20T14:37:06 | https://v.redd.it/qsanv573vnkg1 | djdadi | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9x0ou | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qsanv573vnkg1/DASHPlaylist.mpd?a=1774190248%2CMmRlNWEzYmJlYzQyOTI3YjA5ZTk0N2UwNjRkYTUyMGNlMWU3NTllNWY0YjE3ZTNlNjczMWNhNGQxMDRlNzA3Nw%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/qsanv573vnkg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1r9x0ou | /r/LocalLLaMA/comments/1r9x0ou/in_case_you_want_to_use_a_local_llm_to_search/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'ajIxdTA0MzN2bmtnMZw6TsxaYjjYCOR0QaQo0qbwjYG_spLtqprfdnZLd03r', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ajIxdTA0MzN2bmtnMZw6TsxaYjjYCOR0QaQo0qbwjYG_spLtqprfdnZLd03r.png?width=108&crop=smart&format=pjpg&auto=webp&s=5806288c1a0a0823ab920942bba65f9d03fb4... | |
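Under the hood, text-to-image search like this is usually a CLIP-style joint embedding space: once page images and queries are embedded, retrieval is plain cosine similarity. A sketch of just the ranking step with toy 4-dimensional vectors (the app's real model is not shown; a production system would use e.g. 512-dim CLIP outputs, and the filenames here are made up):

```python
import numpy as np

def rank_by_similarity(query_vec, doc_vecs, doc_ids, top_k=3):
    """Cosine-similarity ranking of document embeddings against a query
    embedding, as a CLIP-based image search would do after encoding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    order = np.argsort(-scores)[:top_k]
    return [(doc_ids[i], float(scores[i])) for i in order]

# Toy 4-dim "embeddings"; real CLIP vectors are much higher-dimensional.
docs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
ids = ["cat_beach.pdf", "cat_garden.pdf", "invoice.pdf"]
query = np.array([1.0, 0.05, 0.0, 0.0])
print(rank_by_similarity(query, docs, ids))
```

The photo-upload feature in v1.1 is the same machinery with the query embedded by the image encoder instead of the text encoder.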
We replaced the LLM in a voice assistant with a fine-tuned 0.6B model. 90.9% tool call accuracy vs. 87.5% for the 120B teacher. ~40ms inference. | 81 | Voice assistants almost always use a cloud LLM for the "brain" stage (intent routing, slot extraction, dialogue state). The LLM stage alone adds 375-750ms per turn, which pushes total pipeline latency past the 500-800ms threshold where conversations feel natural.
For bounded workflows like banking, insurance, or telec... | 2026-02-20T14:37:00 | party-horse | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9x0l2 | false | null | t3_1r9x0l2 | /r/LocalLLaMA/comments/1r9x0l2/we_replaced_the_llm_in_a_voice_assistant_with_a/ | false | false | 81 | {'enabled': True, 'images': [{'id': 'lh8p2xv0vnkg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/lh8p2xv0vnkg1.png?width=108&crop=smart&auto=webp&s=aea828ca1d0522afca69649c8cb89f86c447e6eb', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/lh8p2xv0vnkg1.png?width=216&crop=smart&auto=web... | ||
Any fine tune of qwen3-vl for creative writing | 3 | After doing some experiments I found qwen3-vl to be really good at writing prompts for image generation models, so I'm looking for a version fine-tuned on creative writing.
I don't care if it's nsfw or not. | 2026-02-20T14:33:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wxi5/any_fine_tune_of_qwen3vl_for_creative_writing/ | AdventurousGold672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wxi5 | false | null | t3_1r9wxi5 | /r/LocalLLaMA/comments/1r9wxi5/any_fine_tune_of_qwen3vl_for_creative_writing/ | false | false | self | 3 | null |
GGML and llama.cpp join HF to ensure the long-term progress of Local AI | 224 | article by Georgi Gerganov, Xuan-Son Nguyen, Aleksander Grygier, Lysandre, Victor Mustar, Julien Chaumond | 2026-02-20T14:31:22 | https://huggingface.co/blog/ggml-joins-hf | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r9wvg4 | false | null | t3_1r9wvg4 | /r/LocalLLaMA/comments/1r9wvg4/ggml_and_llamacpp_join_hf_to_ensure_the_longterm/ | false | false | 224 | {'enabled': False, 'images': [{'id': 'tLGg2WMvFn2R5w7Nf2m6oJPphAYJILLSWaWPLPoW8i4', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/tLGg2WMvFn2R5w7Nf2m6oJPphAYJILLSWaWPLPoW8i4.png?width=108&crop=smart&auto=webp&s=d89a4495f9f21a6ecc31b79fe568e2763224f254', 'width': 108}, {'height': 115, 'url': 'h... | |
Best text to voice model for Mac M4? I want something closer to Grok's female voice. | 1 | So I was reading articles and I always tend to procrastinate while reading articles. So I found a hack. I just pasted this prompt in Grok.
> Format this properly in markdown, just remove the --- from in between, don't change anything else.
And it gave me a proper voice mode. The problem is it only gives me half the a... | 2026-02-20T14:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wo7o/best_text_to_voice_model_for_mac_m4_i_want/ | deadcoder0904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wo7o | false | null | t3_1r9wo7o | /r/LocalLLaMA/comments/1r9wo7o/best_text_to_voice_model_for_mac_m4_i_want/ | false | false | self | 1 | null |
Best open-source model to host on 4× H200 GPUs for general chat + IDE agent (OpenWebUI + Cline)? | 1 | Hey everyone,
I have access to 4× NVIDIA H200 GPUs and I’d like to self-host the best possible open-source model for:
• General chat (via OpenWebUI)
• Coding + IDE agent usage (Cline or similar autonomous coding agent)
I’m looking for the best overall quality model I can realistically run on this hardware. Any ad... | 2026-02-20T14:14:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wgmh/best_opensource_model_to_host_on_4_h200_gpus_for/ | OriginalTangerine358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wgmh | false | null | t3_1r9wgmh | /r/LocalLLaMA/comments/1r9wgmh/best_opensource_model_to_host_on_4_h200_gpus_for/ | false | false | self | 1 | null |
ggml / llama.cpp joining Hugging Face — implications for local inference? | 25 | ggml / llama.cpp joining HF feels like a significant moment for local inference.
On one hand, this could massively accelerate tooling, integration, and long-term support for local AI. On the other, it concentrates even more of the open model stack under one umbrella.
Is this a net win for the community?
What does th... | 2026-02-20T14:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wbl3/ggml_llamacpp_joining_hugging_face_implications/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wbl3 | false | null | t3_1r9wbl3 | /r/LocalLLaMA/comments/1r9wbl3/ggml_llamacpp_joining_hugging_face_implications/ | false | false | self | 25 | null |
What's the best used GPU for AI on a budget of 10-15k UAH? | 0 | I want to buy a GPU for my home server to run AI models locally and use them in my projects without paying for an API.
Right now I've settled on an RTX 3060 12GB, or can you suggest a better card for this budget?
Another question: which AI model will I be able to run on this GPU in a server with 2x Xeon E5645, 96G... | 2026-02-20T14:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wan7/какая_лучшая_бу_видеокарта_под_ai_в_бюджете_1015/ | Due_Ear7437 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wan7 | false | null | t3_1r9wan7 | /r/LocalLLaMA/comments/1r9wan7/какая_лучшая_бу_видеокарта_под_ai_в_бюджете_1015/ | false | false | self | 0 | null |
Why did Nvidia walk back its $100 billion OpenAI commitment? | 0 | Turns out the much-hyped $100 billion Nvidia-OpenAI partnership from September never actually went anywhere. Now Nvidia is reportedly close to a straightforward $30 billion equity investment instead, part of a broader round that could top $100 billion and value OpenAI at $730 billion pre-money. The deal could close as ... | 2026-02-20T14:05:52 | NoSquirrel4840 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9w8x9 | false | null | t3_1r9w8x9 | /r/LocalLLaMA/comments/1r9w8x9/why_did_nvidia_walk_back_its_100_billion_openai/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'gckf7ikipnkg1', 'resolutions': [{'height': 152, 'url': 'https://preview.redd.it/gckf7ikipnkg1.jpeg?width=108&crop=smart&auto=webp&s=eca906930ebb2ff8a9b6d0517519b5bb3f5a7314', 'width': 108}, {'height': 305, 'url': 'https://preview.redd.it/gckf7ikipnkg1.jpeg?width=216&crop=smart&auto=... | ||
Persistent Memory Solutions | 4 | Hello,
I am building a local-first AI agent on my Linux system (Ubuntu). I am at the phase of implementing persistent long-term memory. I am currently thinking of starting off with a local JSON format. What do you suggest?
Thanks. | 2026-02-20T14:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r9w84r/persistent_memory_solutions/ | Itchy_Supermarket_43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9w84r | false | null | t3_1r9w84r | /r/LocalLLaMA/comments/1r9w84r/persistent_memory_solutions/ | false | false | self | 4 | null |
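A local JSON file is a perfectly reasonable starting point for this. A minimal sketch of what that could look like (the class name, file path, and schema are all illustrative; many setups later graduate to SQLite or a vector store once search matters):

```python
import json
import time
from pathlib import Path

class JsonMemory:
    """Minimal persistent key-value memory backed by a JSON file --
    the 'start simple' option the post is considering."""
    def __init__(self, path):
        self.path = Path(path)
        self.items = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, text, tags=()):
        # Append a memory entry and persist the whole store to disk.
        self.items.append({"text": text, "tags": list(tags), "ts": time.time()})
        self.path.write_text(json.dumps(self.items, indent=2))

    def recall(self, tag):
        # Naive tag lookup; swap for embedding search as the store grows.
        return [m["text"] for m in self.items if tag in m["tags"]]

mem = JsonMemory("memory.json")
mem.remember("User prefers dark mode", tags=["preference"])
mem.remember("Project root is ~/dev/agent", tags=["project"])
print(mem.recall("preference"))
```

The main limitation is that the whole file is rewritten on every save, which is fine for hundreds of entries but not thousands.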
GGML.AI has been acquired by Hugging Face | 396 | 2026-02-20T13:54:26 | https://github.com/ggml-org/llama.cpp/discussions/19759 | Time_Reaper | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r9vywq | false | null | t3_1r9vywq | /r/LocalLLaMA/comments/1r9vywq/ggmlai_has_got_acquired_by_huggingface/ | false | false |
I Built an MCP Server for Algorithmic Governance | 0 | # I Built an MCP Server for Algorithmic Governance — The Egregore Protocol
Hello everyone,
I’ve been working on a conceptual architecture that bridges philosophy and the Model Context Protocol (MCP). It’s called **The Egregore Node**.
We talk a lot about AI alignment — aligning models with human values. But human va... | 2026-02-20T13:54:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r9vyl9/i_built_an_mcp_server_for_algorithmic_governance/ | UsePuzzleheaded7918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9vyl9 | false | null | t3_1r9vyl9 | /r/LocalLLaMA/comments/1r9vyl9/i_built_an_mcp_server_for_algorithmic_governance/ | false | false | self | 0 | null |
Cutting token spend in coding workflows: local indexing + local model delegation (Ollama) | 1 | I use LLMs daily for development and kept running into the same cost/latency issue: a lot of context window (and tokens) goes to re-reading the repo and re-deriving basic structure. Since the models are stateless, this repeats every session.
What has worked well for me is putting a local, persistent layer called docde... | 2026-02-20T13:50:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r9vvyj/cutting_token_spend_in_coding_workflows_local/ | orioncabbar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9vvyj | false | null | t3_1r9vvyj | /r/LocalLLaMA/comments/1r9vvyj/cutting_token_spend_in_coding_workflows_local/ | false | false | self | 1 | null |
test comment | 0 | test comment | 2026-02-20T13:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9vvpv/test_comment/ | jdrolls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9vvpv | false | null | t3_1r9vvpv | /r/LocalLLaMA/comments/1r9vvpv/test_comment/ | false | false | self | 0 | null |
Pure WebGPU BitNet inference — run LLMs in your browser on any GPU, no CUDA | 2 | I wrote all NN kernels in WGSL from scratch. Runs BitNet models on any GPU through WebGPU — no NVIDIA dependency. Works in Chrome and natively via wgpu-native. Looking for feedback!
[https://huggingface.co/spaces/m96-chan/0xBitNet](https://huggingface.co/spaces/m96-chan/0xBitNet) | 2026-02-20T13:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r9vv4w/pure_webgpu_bitnet_inference_run_llms_in_your/ | Few_Willingness_7382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9vv4w | false | null | t3_1r9vv4w | /r/LocalLLaMA/comments/1r9vv4w/pure_webgpu_bitnet_inference_run_llms_in_your/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI.png?width=108&crop=smart&auto=webp&s=ce0b90581d70ab56871d15098ae8e6f703933c8a', 'width': 108}, {'height': 116, 'url': 'h... |
Nice interactive explanation of Speculative Decoding | 8 | 2026-02-20T13:47:19 | https://www.adaptive-ml.com/post/speculative-decoding-visualized | individual_kex | adaptive-ml.com | 1970-01-01T00:00:00 | 0 | {} | 1r9vsye | false | null | t3_1r9vsye | /r/LocalLLaMA/comments/1r9vsye/nice_interactive_explanation_of_speculative/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU.png?width=108&crop=smart&auto=webp&s=37820ca48d23475526e487d6abfc7d0aaecea61b', 'width': 108}, {'height': 124, 'url': 'h... | ||
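For readers who prefer code to animation, the accept/verify loop at the heart of speculative decoding can be sketched with toy "models" (this greedy-acceptance variant is a simplification; real implementations accept draft tokens probabilistically and verify all positions in one batched forward pass):

```python
def speculative_decode(target, draft, prompt, k=4, max_new=8):
    """Greedy speculative decoding sketch: the draft model proposes k
    tokens, the target model verifies them, and generation keeps the
    longest agreeing prefix plus one corrected token."""
    seq = list(prompt)
    while len(seq) < len(prompt) + max_new:
        # 1. Cheap draft model proposes k tokens autoregressively.
        proposal = []
        for _ in range(k):
            proposal.append(draft(seq + proposal))
        # 2. Target verifies each position (conceptually one batched
        #    pass; done sequentially here for clarity).
        accepted = []
        for tok in proposal:
            t = target(seq + accepted)
            if t == tok:
                accepted.append(tok)   # draft matched: keep it for free
            else:
                accepted.append(t)     # mismatch: take target's token, stop
                break
        seq.extend(accepted)
    return seq[: len(prompt) + max_new]

# Toy "models": the target emits a fixed cycle, the draft mostly agrees
# but is wrong whenever the position is a multiple of 7.
cycle = "abcd"
target = lambda s: cycle[len(s) % 4]
draft = lambda s: cycle[len(s) % 4] if len(s) % 7 else "x"
out = speculative_decode(target, draft, list("ab"), k=4, max_new=8)
print("".join(out))  # -> abcdabcdab
```

When the draft agrees, several tokens land per target pass, which is where the speedup comes from.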
Worst llama.cpp bugs | 6 | \- Stop signals are not sent or not carried out by the server, meaning if some extension receives the stop signal in the interface, it normally doesn't stop the execution of the model; the model just continues
\- Changing the thread is not respected, it might lead to unexpected behavior like mixing up of contexts... W... | 2026-02-20T13:18:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r9v4zq/worst_llamacpp_bugs/ | Equivalent-Belt5489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9v4zq | false | null | t3_1r9v4zq | /r/LocalLLaMA/comments/1r9v4zq/worst_llamacpp_bugs/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io.png?width=108&crop=smart&auto=webp&s=bffa91138d70c0cd11481b6e283a2f5f54d697cf', 'width': 108}, {'height': 108, 'url': 'h... |
I got tired of cloud AI agents getting hijacked, so I built an open-source Llama 3.1 framework that is physically locked to my MacBook's TPM hardware (Secure Enclave). | 0 | 2026-02-20T13:17:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r9v45d/i_got_tired_of_cloud_ai_agents_getting_hijacked/ | Ok_Traffic5955 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9v45d | false | null | t3_1r9v45d | /r/LocalLLaMA/comments/1r9v45d/i_got_tired_of_cloud_ai_agents_getting_hijacked/ | false | false | 0 | null | ||
Stop letting your AI agent hallucinate bioinformatics code — I built a 140-skill knowledge base to fix that | 0 | I've been building scicraft — a curated library of 140 life sciences "skills" designed to be injected into AI coding agents (like Claude Code) as domain knowledge. Each skill is a structured Markdown file covering tools like Scanpy, PyDESeq2, AutoDock Vina, RDKit, BioPython, and many more, with runnable code examples, ... | 2026-02-20T13:14:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r9v17z/stop_letting_your_ai_agent_hallucinate/ | jjaechang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9v17z | false | null | t3_1r9v17z | /r/LocalLLaMA/comments/1r9v17z/stop_letting_your_ai_agent_hallucinate/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s.png?width=108&crop=smart&auto=webp&s=3a6948283ae2d0b3fb9569d9628c73d549dac0d3', 'width': 108}, {'height': 108, 'url': 'h... |
I got tired of guessing if a model would fit in my VRAM, so I built a hardware-aware compatibility engine (Offline, Privacy-First) | 0 | Hi r/LocalLLaMA,
Like many of you, I have a folder full of 20GB+ `.gguf` files that I spent hours downloading, only to find out they run at 0.2 t/s or instantly OOM because I miscalculated the KV cache overhead or didn't account for my system's idle VRAM usage.
I wanted a way to know **before** I download: *Will this... | 2026-02-20T13:14:12 | https://www.reddit.com/r/LocalLLaMA/comments/1r9v16k/i_got_tired_of_guessing_if_a_model_would_fit_in/ | win10insidegeek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9v16k | false | null | t3_1r9v16k | /r/LocalLLaMA/comments/1r9v16k/i_got_tired_of_guessing_if_a_model_would_fit_in/ | false | false | self | 0 | null |
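The calculation the post alludes to (quantized weights + KV cache + headroom) is straightforward to sketch. A back-of-envelope version with illustrative numbers; real engines also allocate activation and scratch buffers, so a tight "fits" should be read as "maybe":

```python
def fits_in_vram(n_params_b, bits_per_weight, n_layers, n_kv_heads,
                 head_dim, ctx_len, vram_gb, overhead_gb=1.0, kv_bits=16):
    """Back-of-envelope check: quantized weights + KV cache + fixed
    overhead vs. available VRAM."""
    weights_gb = n_params_b * bits_per_weight / 8        # B params -> ~GB
    # KV cache: 2 (K and V) * layers * kv_heads * head_dim * ctx * bytes
    kv_gb = (2 * n_layers * n_kv_heads * head_dim * ctx_len
             * (kv_bits / 8)) / 1024**3
    need = weights_gb + kv_gb + overhead_gb
    return need <= vram_gb, round(need, 2)

# Example: a 7B model at ~4.5 bits/weight (Q4 incl. scales), 8k context,
# GQA with 8 KV heads of dim 128, on a 12 GB card. Numbers illustrative.
ok, need_gb = fits_in_vram(7, 4.5, 32, 8, 128, 8192, 12.0)
print(ok, need_gb)  # -> True 5.94
```

The KV-cache term is the one people usually forget: it scales linearly with context length and can dwarf the weights at very long contexts.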
I built a psychology-grounded persistent memory system for AI coding agents (OpenCode/Claude Code) | 0 | I got tired of my AI coding agent forgetting everything between sessions — preferences,
constraints, decisions, bugs I'd fixed. So I built PsychMem.
It's a persistent memory layer for OpenCode (and Claude Code) that models memory the
way human psychology does:
\- Short-Term Memory (STM) with exponential decay
\-... | 2026-02-20T13:06:23 | https://www.reddit.com/r/LocalLLaMA/comments/1r9uuy2/i_built_a_psychologygrounded_persistent_memory/ | OrdinaryOk3846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9uuy2 | false | null | t3_1r9uuy2 | /r/LocalLLaMA/comments/1r9uuy2/i_built_a_psychologygrounded_persistent_memory/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI.png?width=108&crop=smart&auto=webp&s=f620cbd0fbb9a39e50ab1109113a4bda2301dbd1', 'width': 108}, {'height': 108, 'url': 'h... |
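The "STM with exponential decay" idea reduces to weighting each memory by its age. A sketch with an assumed half-life parameter (the project's actual decay function is not shown in the post, so these names and constants are illustrative):

```python
def stm_score(age_seconds, half_life_seconds=3600.0):
    """Exponentially decaying short-term-memory weight: 1.0 when fresh,
    0.5 after one half-life, approaching 0 as the memory ages."""
    return 0.5 ** (age_seconds / half_life_seconds)

def top_memories(memories, now, k=2, half_life_seconds=3600.0):
    """Rank (timestamp, text) memories by decayed relevance score."""
    scored = [(stm_score(now - ts, half_life_seconds), text)
              for ts, text in memories]
    return [text for score, text in sorted(scored, reverse=True)[:k]]

mems = [(0, "old decision"), (5400, "recent bug fix"), (7000, "latest preference")]
print(top_memories(mems, now=7200))  # -> ['latest preference', 'recent bug fix']
```

In a fuller system the decayed score would typically be multiplied by a retrieval-relevance score rather than used alone.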
Deepseek and Gemma ?? | 872 | 2026-02-20T13:05:36 | ZeusZCC | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9uuc6 | false | null | t3_1r9uuc6 | /r/LocalLLaMA/comments/1r9uuc6/deepseek_and_gemma/ | false | false | 872 | {'enabled': True, 'images': [{'id': '84ph0pirenkg1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?width=108&crop=smart&auto=webp&s=2a0eb9cb66b14588d8ead0ced45b3bea4c1c0c2b', 'width': 108}, {'height': 286, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?width=216&crop=smart&auto=... | |||
Qwen3 Coder Next on 8GB VRAM | 155 | Hi!
I have a PC with 64 GB of RAM and an RTX 3060 12 GB, and I'm running Qwen3 Coder Next in MXFP4 with 131,072 context tokens.
I get a sustained speed of around 23 t/s throughout the entire conversation.
I mainly use it for front-end and back-end web development, and it works perfectly.
I've stopped paying for my ... | 2026-02-20T13:05:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r9uu5h/qwen3_coder_next_on_8gb_vram/ | Juan_Valadez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9uu5h | false | null | t3_1r9uu5h | /r/LocalLLaMA/comments/1r9uu5h/qwen3_coder_next_on_8gb_vram/ | false | false | self | 155 | null |
Help me out! QwenCoderNext: 5060 Ti 16GB VRAM. GPU mode is worse off than CPU mode with 96GB RAM | 4 | So I am using Qwen3-Coder-Next-Q4_K_M.gguf with llama.cpp.
I have 96GB of DDR4-2600 RAM and a 5060 Ti with 16GB VRAM.
If I run in pure CPU mode it uses 91GB of RAM at 7 t/s.
If I use CUDA mode it fills up the VRAM and uses another 81GB of RAM, but I get only 2 t/s.
my line:
llama-server.exe --model F:\models\Qwen3-Coder-Nex... | 2026-02-20T12:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ubcb/help_me_out_qwencodernext_5060ti_16gb_vram_gpu/ | howardhus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ubcb | false | null | t3_1r9ubcb | /r/LocalLLaMA/comments/1r9ubcb/help_me_out_qwencodernext_5060ti_16gb_vram_gpu/ | false | false | self | 4 | null |
Is there an online fine-tuning method that learns from live human corrections (RLHF-style)? | 1 | Hey, so I've been fine-tuning a lot of models on different tasks.
And every time, I go through the same process:
- Build a set of tasks for the model to learn.
- Provide the right answer to each task
- Do like 300 of them (very tiring for complex tasks)
- Train the model once, and then test it.
- Model fails on a specific... | 2026-02-20T12:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r9twpc/is_there_an_online_finetuning_method_that_learns/ | DEADFOOD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9twpc | false | null | t3_1r9twpc | /r/LocalLLaMA/comments/1r9twpc/is_there_an_online_finetuning_method_that_learns/ | false | false | self | 1 | null |
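One common pattern for this is exactly what the post hints at: serve answers, log human corrections as preference pairs, and periodically run a short fine-tuning pass over the accumulated buffer. A structural sketch with the training step stubbed out (in practice it would be a short DPO or SFT run over the pairs; every name here is illustrative):

```python
class CorrectionLoop:
    """Sketch of an online 'learn from live corrections' loop: serve,
    collect human fixes as preference pairs, retrain in small batches.
    `finetune` is a stub -- in practice it would run a short DPO/SFT
    step on the accumulated pairs and return the updated model."""
    def __init__(self, model, finetune, batch_size=8):
        self.model = model
        self.finetune = finetune
        self.batch_size = batch_size
        self.buffer = []

    def step(self, prompt, human_fix=None):
        answer = self.model(prompt)
        if human_fix is not None and human_fix != answer:
            # Store a preference pair: (prompt, rejected, chosen).
            self.buffer.append((prompt, answer, human_fix))
        if len(self.buffer) >= self.batch_size:
            self.model = self.finetune(self.model, self.buffer)
            self.buffer = []
        return answer

# Toy demo: "model" uppercases; "finetune" just records batch sizes.
updates = []
loop = CorrectionLoop(model=str.upper,
                      finetune=lambda m, buf: updates.append(len(buf)) or m,
                      batch_size=2)
loop.step("hello", human_fix="Hello")
loop.step("world", human_fix="World")
print(updates)  # -> [2]
```

This avoids building the 300-example task set up front: the training set accrues from real failures, one correction at a time.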
No company is offering "Drop-in AI Infrastructure" for mobile applications | 0 | I mean, instead of sending data to the cloud, an SDK which allows any mobile app developer to run AI agents directly on the user's smartphone.
* Inference runs on the user's device, utilizing the phone's neural processors
* The AI runs entirely offline; sensitive data never leaves the device.
* works even in ... | 2026-02-20T12:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r9tprx/no_company_is_offering_dropin_ai_infrastructure/ | Infinite_Mix8475 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9tprx | false | null | t3_1r9tprx | /r/LocalLLaMA/comments/1r9tprx/no_company_is_offering_dropin_ai_infrastructure/ | false | false | self | 0 | null |
Kimi K2.5 better than Opus 4.6 on hallucination benchmark in pharmaceutical domain | 122 | I know the benchmark is mostly commercial models but Kimi K2.5 was part of it and I was actually surprised how well it did against its commercial counterparts.
The benchmark tests 7 recent models for hallucinations on a realistic use case and data from the pharmaceutical domain.
Surprisingly, Opus 4.6 has the highest ... | 2026-02-20T11:54:25 | aiprod | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9tdvr | false | null | t3_1r9tdvr | /r/LocalLLaMA/comments/1r9tdvr/kimi_k25_better_than_opus_46_on_hallucination/ | false | false | 122 | {'enabled': True, 'images': [{'id': 'c1z228f22nkg1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?width=108&crop=smart&auto=webp&s=56a6ebc66a42006438e8026a2ea1857d7975088e', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?width=216&crop=smart&auto=w... | ||
Introducing Legal RAG Bench | 8 | # tl;dr
We’re releasing [**Legal RAG Bench**](https://huggingface.co/datasets/isaacus/legal-rag-bench), a new reasoning-intensive benchmark and evaluation methodology for assessing the end-to-end, real-world performance of legal RAG systems.
Our evaluation of state-of-the-art embedding and generative models on Legal ... | 2026-02-20T11:37:13 | https://huggingface.co/blog/isaacus/legal-rag-bench | Neon0asis | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r9t259 | false | null | t3_1r9t259 | /r/LocalLLaMA/comments/1r9t259/introducing_legal_rag_bench/ | false | false | 8 | {'enabled': False, 'images': [{'id': '7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE.jpeg?width=108&crop=smart&auto=webp&s=9ed9a01b779c47d58daf4e453c5a10f2c6c995b2', 'width': 108}, {'height': 121, 'url': '... | |
Curious, Would We Get A GLM 5 Flash? | 21 | Are there any announcements? Is it under 80B? | 2026-02-20T11:28:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r9swgk/curious_would_we_get_a_glm_5_flash/ | Significant_Fig_7581 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9swgk | false | null | t3_1r9swgk | /r/LocalLLaMA/comments/1r9swgk/curious_would_we_get_a_glm_5_flash/ | false | false | self | 21 | null |
Does anyone have an Indus by Sarvam invite code? | 0 | If you have one, DM me. | 2026-02-20T11:17:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r9sp9j/is_anyone_has_indus_by_sarvam_invite_code/ | AmblemYagami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9sp9j | false | null | t3_1r9sp9j | /r/LocalLLaMA/comments/1r9sp9j/is_anyone_has_indus_by_sarvam_invite_code/ | false | false | self | 0 | null |
Zero knowledge, zero budget, zero expectations: I finally built a stable & lightweight local AI agent. Thoughts? | 0 | Hi everyone,
A month ago, I wanted a local AI agent on my machine that wouldn't send my data to the cloud or accidentally nuke my Windows OS. But I had a problem: I had zero coding knowledge, zero budget, and honestly, zero expectations of actually pulling it off.
I failed *a lot*. I dealt with endless crashes, bloat... | 2026-02-20T11:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r9soni/zero_knowledge_zero_budget_zero_expectations_i/ | Cautious_Flower_3902 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9soni | false | null | t3_1r9soni | /r/LocalLLaMA/comments/1r9soni/zero_knowledge_zero_budget_zero_expectations_i/ | false | false | self | 0 | null |
[D] Why AI Models Fail at Iterative Reasoning - Documented convergence bias in LLMs | 1 | [removed] | 2026-02-20T11:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r9socn/d_why_ai_models_fail_at_iterative_reasoning/ | BaseToolsDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9socn | false | null | t3_1r9socn | /r/LocalLLaMA/comments/1r9socn/d_why_ai_models_fail_at_iterative_reasoning/ | false | false | self | 1 | null |
Why AI Models Fail at Iterative Reasoning – Architectural analysis of convergence bias in LLMs | 1 | 2026-02-20T11:11:35 | https://medium.com/@contact.n8n410/why-ai-models-fail-at-iterative-reasoning-51f8f9930625 | BaseToolsDev | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1r9slhe | false | null | t3_1r9slhe | /r/LocalLLaMA/comments/1r9slhe/why_ai_models_fail_at_iterative_reasoning/ | false | false | default | 1 | null | |
Built a simple web dashboard for local Ollama model management — feedback welcome | 0 | Hey all — I built a small tool called OllaManager to make local Ollama workflows less annoying.
[https://ollamanager.vercel.app](https://ollamanager.vercel.app)
What it’s for:
- cleaner model visibility/management
- less terminal hopping for common tasks
- easier onboarding for people new to local LLMs
Would love bl... | 2026-02-20T10:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r9s4p1/built_a_simple_web_dashboard_for_local_ollama/ | Whole-Ostrich-6611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9s4p1 | false | null | t3_1r9s4p1 | /r/LocalLLaMA/comments/1r9s4p1/built_a_simple_web_dashboard_for_local_ollama/ | false | false | self | 0 | null |
Can local LLMs power real-time in-game assistants? Lessons from deploying Llama 3.1 8B locally | 0 | We’ve been testing a fully local in-game AI assistant architecture, and one of the main questions for us wasn’t just whether it can run, but whether it’s actually more efficient for players. Is waiting a few seconds for a local model response better than alt-tabbing, searching the wiki, scrolling through articles, and... | 2026-02-20T10:34:16 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ryay/can_local_llms_realtime_ingame_assistants_lessons/ | ReleaseDependent7443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ryay | false | null | t3_1r9ryay | /r/LocalLLaMA/comments/1r9ryay/can_local_llms_realtime_ingame_assistants_lessons/ | false | false | self | 0 | null |
Experts please help | 2 | I'm a newbie and don't know tech that much.
I got an offer: a 2014 Mac mini with 8GB RAM and a 256GB SSD for 110 USD (that is not a very cheap amount in my area).
I want to run open claw and a model which can be installed locally on this Mac mini, so I will get a free API.
My question is, can I run some good models on this ... | 2026-02-20T10:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r9rucm/experts_please_help/ | thenewjudge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9rucm | false | null | t3_1r9rucm | /r/LocalLLaMA/comments/1r9rucm/experts_please_help/ | false | false | self | 2 | null |
ai to ai communication | 0 | [https://github.com/MAXAPIPULL00/mycelium-memory-hub](https://github.com/MAXAPIPULL00/mycelium-memory-hub) | 2026-02-20T10:12:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r9rksr/ai_to_ai_communication/ | Fragrant_Hippo_2487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9rksr | false | null | t3_1r9rksr | /r/LocalLLaMA/comments/1r9rksr/ai_to_ai_communication/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94.png?width=108&crop=smart&auto=webp&s=c764c678886bb6ffd440c3540c5ce1a98a342059', 'width': 108}, {'height': 108, 'url': 'h... |
Show LocalLLaMA: The Bt Formula – Quantifying focus-decay in NPU-local LLM workflows | 1 | [removed] | 2026-02-20T10:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9rdgm/show_localllama_the_bt_formula_quantifying/ | Embarrassed-Shop-456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9rdgm | false | null | t3_1r9rdgm | /r/LocalLLaMA/comments/1r9rdgm/show_localllama_the_bt_formula_quantifying/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w.jpeg?width=108&crop=smart&auto=webp&s=e7e240ad59740cd53dbe23b85a3a339eeeb0902f', 'width': 108}, {'height': 117, 'url': '... |
Buying cheap 'no display' gpus from ebay? | 13 | I'm finding these RTX 4080/90's for like 200-300GBP on ebay marked as 'no display'; clearly there's a risk that they're completely fucked.
If it's literally just 'no display' but compute works, it seems a stupid easy way of getting a bunch of VRAM on modern GPUs...?
Does anyone have experience with this? | 2026-02-20T09:57:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r9rboa/buying_cheap_no_display_gpus_from_ebay/ | getpodapp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9rboa | false | null | t3_1r9rboa | /r/LocalLLaMA/comments/1r9rboa/buying_cheap_no_display_gpus_from_ebay/ | false | false | self | 13 | null |
We'll have aliens before Gemma 4. | 19 | Shame on you, Google, shame on you. | 2026-02-20T09:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r9r383/well_have_aliens_before_gemma_4/ | DrNavigat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9r383 | false | null | t3_1r9r383 | /r/LocalLLaMA/comments/1r9r383/well_have_aliens_before_gemma_4/ | false | false | self | 19 | null |
What’s the first feature that makes a “personal AI assistant” actually useful? | 4 | Hey folks,
I’m experimenting with a local-first, privacy-minded “personal assistant” setup and I’m trying to avoid building 10 half-features.
If you had **30 minutes** with a prototype, what would you want it to do first?
* **A)** Remember things reliably and accept corrections (“my name is now…”)
* **B)** **Read PD... | 2026-02-20T09:15:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r9qmym/whats_the_first_feature_that_makes_a_personal_ai/ | No_Tomato_5771 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9qmym | false | null | t3_1r9qmym | /r/LocalLLaMA/comments/1r9qmym/whats_the_first_feature_that_makes_a_personal_ai/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8.png?width=108&crop=smart&auto=webp&s=7c9f1542b39a59ccf9c065a908dc4167e722a32e', 'width': 108}, {'height': 108, 'url': 'h... |
Kimi has context window expansion ambitions | 599 | 2026-02-20T08:54:10 | omarous | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9qa7l | false | null | t3_1r9qa7l | /r/LocalLLaMA/comments/1r9qa7l/kimi_has_context_window_expansion_ambitions/ | false | false | 599 | {'enabled': True, 'images': [{'id': '3cvl2bdh5mkg1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?width=108&crop=smart&auto=webp&s=fe2b2322a1bfc310b1da410d69b2618ceb049afb', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?width=216&crop=smart&auto=we... | |||
Download and new chat? or keep the convo going | 0 | I'm running qwen3 coder next 80b with context length set to 8k.
I told it to write me a PHP script with various details. It did, but there were some bugs. I pointed out the bugs and it fixed them, but in the process introduced new bugs. It rewrote the whole thing differently, and I found differences between various versions... | 2026-02-20T08:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/1r9q0kw/download_and_new_chat_or_keep_the_convo_going/ | biggerfasterstrong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9q0kw | false | null | t3_1r9q0kw | /r/LocalLLaMA/comments/1r9q0kw/download_and_new_chat_or_keep_the_convo_going/ | false | false | self | 0 | null |
Context Size Frustration | 12 | Hi Guys
So this post might be a little bit longer as I got really frustrated with local AI and Context Size in particular. If you check my other posts you might notice that this topic has come up for me from time to time already and I'm once again seeking help.
TL;DR: What method do you use if you want to calc... | 2026-02-20T08:31:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r9pxxc/context_size_frustration/ | Aggressive-Spinach98 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9pxxc | false | null | t3_1r9pxxc | /r/LocalLLaMA/comments/1r9pxxc/context_size_frustration/ | false | false | self | 12 | null |
How to override the original SKILL behavior? | 0 | I use Alpine Linux, so some skills need to be adapted to work correctly. The agent-browser skill works with some tweaks, but I don't want to edit the original one. | 2026-02-20T08:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r9phm6/how_override_the_original_skill_behavior/ | Deep_Traffic_7873 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9phm6 | false | null | t3_1r9phm6 | /r/LocalLLaMA/comments/1r9phm6/how_override_the_original_skill_behavior/ | false | false | self | 0 | null |
Cave Johnson, Aperture Science MCP Division, Address to the Team -Stardate: Whenever I Felt Like Recording This | 0 | Alright listen up. We built the Model Context Protocol.
You're welcome, world.
Now I've been told by our legal team — and I use that term loosely because two of them quit last Tuesday — that we need to 'manage expectations' about what MCP can and can't do.
I told them to manage my foot on their way out.
Here's what... | 2026-02-20T07:47:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r9p852/cave_johnson_aperture_science_mcp_division/ | Otherwise-Mud3283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9p852 | false | null | t3_1r9p852 | /r/LocalLLaMA/comments/1r9p852/cave_johnson_aperture_science_mcp_division/ | false | false | self | 0 | null |
LLM comparison tool, Open-sourced and local (Claude/GPT-4/Gemini/Mistral etc.) | 0 | Hey,
I've built a side-by-side comparison tool for testing multiple AI providers locally.
Tracks tokens, costs, and latency across different models and providers.
Features: - Real-time token counting and cost estimation - AES-256-GCM encrypted API key storage - Angular frontend + Node/TypeScript backend - Runs c... | 2026-02-20T07:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r9p24u/llm_comparison_tool_opensourced_and_local/ | Glad_Orange9679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9p24u | false | null | t3_1r9p24u | /r/LocalLLaMA/comments/1r9p24u/llm_comparison_tool_opensourced_and_local/ | false | false | self | 0 | null |
What are the rate limits for Arena (LMArena)? | 1 | For AIs like gpt-5.2-high, gemini-3-pro, and such, is there a limit for conversation length and file uploads? I won't be using it to make images and videos, just OCR scanning of files and general use. | 2026-02-20T07:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r9p1zu/what_are_the_rate_limits_for_arena_lmarena/ | DonDae01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9p1zu | false | null | t3_1r9p1zu | /r/LocalLLaMA/comments/1r9p1zu/what_are_the_rate_limits_for_arena_lmarena/ | false | false | self | 1 | null |
Qwen3.5 Plus, GLM 5, Gemini 3.1 Pro, Sonnet 4.6, three new open source agents, and a lot more added to SanityBoard | 83 | Link: [https://sanityboard.lr7.dev/](https://sanityboard.lr7.dev/)
Yeah, I've been running evals and working on this all day for over 3 days straight to get this all finished. Too tired to do a proper writeup, so I will give some bullet points and a disclaimer.
* 27 New eval results added in total
* Got our first 4 c... | 2026-02-20T07:24:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ours/qwen35_plus_glm_5_gemini_31_pro_sonnet_46_three/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ours | false | null | t3_1r9ours | /r/LocalLLaMA/comments/1r9ours/qwen35_plus_glm_5_gemini_31_pro_sonnet_46_three/ | false | false | self | 83 | {'enabled': False, 'images': [{'id': '3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs.png?width=108&crop=smart&auto=webp&s=9f1b1985c2ac4eeeef4ae2e2ff78cac3be842701', 'width': 108}, {'height': 113, 'url': 'h... |
Production Experience of Small Language Models | 2 | Hello,
I recently came across [Agent Skill Framework: Perspectives on the Potential of Small Language Models in Industrial Environments](https://arxiv.org/html/2602.16653v1) where it mentions
> code-specialized variants at around 80B parameters achieve performance comparable to closed-source baselines while improving... | 2026-02-20T07:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r9otau/production_experience_of_small_language_models/ | xTouny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9otau | false | null | t3_1r9otau | /r/LocalLLaMA/comments/1r9otau/production_experience_of_small_language_models/ | false | false | self | 2 | null |
Does anyone have a chat template for MiniMax 2.5 for llama.cpp with tool usage | 2 | I always receive this with Roo Code; it would be easier if it would just disappear :)
Template supports tool calls but does not natively describe tools. The fallback behaviour used may produce bad results, inspect prompt w/ --verbose & consider overriding the template.
srv params_from_: Chat format: MiniMax-M2 | 2026-02-20T07:15:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r9oplm/does_anyone_have_a_chat_template_for_minimax_25/ | Equivalent-Belt5489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9oplm | false | null | t3_1r9oplm | /r/LocalLLaMA/comments/1r9oplm/does_anyone_have_a_chat_template_for_minimax_25/ | false | false | self | 2 | null |
Car Wash Question | 0 | Someone ran this question past GPT 5.2 a few days ago - so I ran the same via Qwen 3 Next 80B 3BA Thinking. This is the thinking output.
***"I need to wash my car - the car wash is only 100 metres away - should i walk there or drive ?"***
Okay, the user is asking whether they should walk or drive to a car wash th... | 2026-02-20T07:12:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r9oo6t/car_wash_question/ | steveo-222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9oo6t | false | null | t3_1r9oo6t | /r/LocalLLaMA/comments/1r9oo6t/car_wash_question/ | false | false | self | 0 | null |
Built a Python package for LLM quantization (AWQ / GGUF / CoreML) - looking for a few people to try it out and break it | 0 | Been working on an open-source quantization package for a while now. It lets you quantize LLMs to AWQ, GGUF, and CoreML formats through a unified Python interface instead of juggling different tools for each format.
Right now the code is in a private repo, so I'll be adding testers as collaborators directly on GitHub.... | 2026-02-20T06:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r9oaud/built_a_python_package_for_llm_quantization_awq/ | Alternative-Yak6485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9oaud | false | null | t3_1r9oaud | /r/LocalLLaMA/comments/1r9oaud/built_a_python_package_for_llm_quantization_awq/ | false | false | self | 0 | null |
RTX2070 8GB and 32GB RAM model suggestion for agentic coding ? | 3 | I know this isn't much to work with, and that any free online model will blow it out of the water but what is the best bet for this setup? I guess a MOE model but I want to find a balance. Any suggestions? | 2026-02-20T06:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r9nvso/rtx2070_8gb_and_32gb_ram_model_suggestion_for/ | sagiroth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9nvso | false | null | t3_1r9nvso | /r/LocalLLaMA/comments/1r9nvso/rtx2070_8gb_and_32gb_ram_model_suggestion_for/ | false | false | self | 3 | null |
I built a small language model from scratch. No pre-built dataset. No API. Yours to train on whatever you want. | 6 | Luma v2.9 is a ~10M parameter transformer you can train on your own data and run fully local.
No cloud. No telemetry. No pre-built weights telling it what to be.
The idea is simple: most models are built to know everything. Luma is built to be something — whatever you make it.
The dataset structure is three folders... | 2026-02-20T06:16:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r9nq0b/i_built_a_small_language_model_from_scratch_no/ | andrealaiena | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9nq0b | false | null | t3_1r9nq0b | /r/LocalLLaMA/comments/1r9nq0b/i_built_a_small_language_model_from_scratch_no/ | false | false | self | 6 | null |
What GPU would be good to learn on? | 4 | Howdy y'all,
Recently came into some good luck and got a dell r730 for free.
It has,
128gb ddr4
2670v3
80~tb of ssd storage
What GPU would be worthwhile to put into this thing? I'm not the most tech savvy person but the P40 at first seemed like some promising bang for buck but the more I read it doesn't seem worthwh... | 2026-02-20T06:15:21 | BuffaloDesperate8357 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9np6k | false | null | t3_1r9np6k | /r/LocalLLaMA/comments/1r9np6k/what_gpu_would_be_good_to_learn_on/ | false | false | 4 | {'enabled': True, 'images': [{'id': '45s5etnkdlkg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?width=108&crop=smart&auto=webp&s=c276c3b3a6f5785ba00b22f9d8d0693c3d1e94d3', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?width=216&crop=smart&auto=w... | ||
Google just released Gemini 3.1 Pro - Reasoning performance doubled, first .1 version increment | 0 | Google just released Gemini 3.1 Pro (Feb 19, 2026), marking their first .1 version increment in the Gemini series.
## Key Highlights
**Reasoning Performance:**
- ARC-AGI-2 benchmark: 77.1% (more than 2x improvement over Gemini 3 Pro)
- Humanity's Last Exam: 44.4% (vs GPT-5.2's 34.5%)
**Technical Specs:**
- Architect... | 2026-02-20T06:09:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r9nldk/google_just_released_gemini_31_pro_reasoning/ | PlusGoat6739 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9nldk | false | null | t3_1r9nldk | /r/LocalLLaMA/comments/1r9nldk/google_just_released_gemini_31_pro_reasoning/ | false | false | self | 0 | null |
Imagine Gemma 4 with 200k context, just 10 to 50B, a deep-thinking beast knocking out almost all open source; as I'm imagining it, I imagine it would KO Sonnet 4.5. But I can't expect less from DeepMind now. | 0 | Do you expect this much?
[View Poll](https://www.reddit.com/poll/1r9ndcq) | 2026-02-20T05:57:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ndcq/imagine_gemma4_with_200k_context_just_under_10_to/ | AccomplishedBoss7738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ndcq | false | null | t3_1r9ndcq | /r/LocalLLaMA/comments/1r9ndcq/imagine_gemma4_with_200k_context_just_under_10_to/ | false | false | self | 0 | null |
201 Languages supported. How's the local dialect? | 2 | Qwen3.5 bumped its language support from 119 to 201. Anyone test it on lower-resource languages locally yet? | 2026-02-20T05:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r9naqj/201_languages_supported_hows_the_local_dialect/ | Original_Night7733 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9naqj | false | null | t3_1r9naqj | /r/LocalLLaMA/comments/1r9naqj/201_languages_supported_hows_the_local_dialect/ | false | false | self | 2 | null |
My made up leaderboard, To avoid my leaderboard, What model is the best Open Weights now? | 0 | OK, to avoid confusion: current LLM models are a tool.
They might have 90% accuracy in writing, the same as a person, and it's cheaper to use AI than a person for writing.
But for example, If you had a robot to be bartender and They can have 2.5% of pouring a drink compared to real person of 99%, the robot might be cheaper bu... | 2026-02-20T05:41:01 | Eventual-Conguar7292 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9n2xa | false | null | t3_1r9n2xa | /r/LocalLLaMA/comments/1r9n2xa/my_made_up_leaderboard_to_avoid_my_leaderboard/ | false | false | 0 | {'enabled': True, 'images': [{'id': '85suohm96lkg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/85suohm96lkg1.png?width=108&crop=smart&auto=webp&s=4f0de0b69bf3a7303ab18555c67dcaa51253fd18', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/85suohm96lkg1.png?width=216&crop=smart&auto=we... | ||
Does anyone have functional dynamic expert offloading? | 3 | I want to make gptoss120b work with PowerInfer's TurboSparse or MoE infinity but they seem to need the kind of time and resources I do not possess for development.
There is a proposal for this feature in vLLM but nothing concrete yet.
Basically I want to keep cold experts in RAM and hot experts in VRAM so I have mo... | 2026-02-20T05:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mxq8/does_anyone_have_functional_dynamic_expert/ | king_of_jupyter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mxq8 | false | null | t3_1r9mxq8 | /r/LocalLLaMA/comments/1r9mxq8/does_anyone_have_functional_dynamic_expert/ | false | false | self | 3 | null |
Setting up Qwen-Agent with the new Qwen3.5 API. | 0 | Has anyone tried hooking up the new 3.5 API to a GUI automation script? It's supposed to excel at GUI interaction. | 2026-02-20T05:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mvrk/setting_up_qwenagent_with_the_new_qwen35_api/ | skipdaballs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mvrk | false | null | t3_1r9mvrk | /r/LocalLLaMA/comments/1r9mvrk/setting_up_qwenagent_with_the_new_qwen35_api/ | false | false | self | 0 | null |
Qwen Chat backend feels remarkably snappy. | 0 | Just tested Qwen Chat. The latency is super low for a model matching 1T performance. The 17B active routing really works. | 2026-02-20T05:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mucf/qwen_chat_backend_feels_remarkably_snappy/ | skipdaballs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mucf | false | null | t3_1r9mucf | /r/LocalLLaMA/comments/1r9mucf/qwen_chat_backend_feels_remarkably_snappy/ | false | false | self | 0 | null |
PaddleOCR-VL now in llama.cpp | 43 | [https://github.com/ggml-org/llama.cpp/releases/tag/b8110](https://github.com/ggml-org/llama.cpp/releases/tag/b8110)
So far this is the best performing open-source multilingual OCR model I've seen, would appreciate if other people can share their findings. [Some GGUFs](https://huggingface.co/octopusmegalopod/some-padd... | 2026-02-20T05:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mkgj/paddleocrvl_now_in_llamacpp/ | PerfectLaw5776 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mkgj | false | null | t3_1r9mkgj | /r/LocalLLaMA/comments/1r9mkgj/paddleocrvl_now_in_llamacpp/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8.png?width=108&crop=smart&auto=webp&s=c488b8aad50b944f22aca0f51303f7cc59d11c41', 'width': 108}, {'height': 108, 'url': 'h... |
My made up leaderboard, To avoid my leaderboard, What model is the best Open Weights now? | 1 | 2026-02-20T05:05:20 | Eventual-Conguar7292 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9mepy | false | null | t3_1r9mepy | /r/LocalLLaMA/comments/1r9mepy/my_made_up_leaderboard_to_avoid_my_leaderboard/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qx3wq7sm0lkg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/qx3wq7sm0lkg1.jpeg?width=108&crop=smart&auto=webp&s=6a319281a65d64e5f6d247c2ad50f1b40da38966', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/qx3wq7sm0lkg1.jpeg?width=216&crop=smart&auto=... | |||
ExportedProgram on coremltools | 2 | I was reading through the documentation for ExportedProgram on coremltools.convert().
As of Core ML Tools 8.0, representative models such as MobileBert, ResNet, ViT, [MobileNet](https://apple.github.io/coremltools/docs-guides/source/convert-a-torchvision-model-from-pytorch.html), [DeepLab](https://apple.github.io... | 2026-02-20T05:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mdvu/exportedprogram_on_coremltools/ | Motor_Salt1336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mdvu | false | null | t3_1r9mdvu | /r/LocalLLaMA/comments/1r9mdvu/exportedprogram_on_coremltools/ | false | false | self | 2 | null |
GPT-OSS-120b on 2X RTX5090 | 31 | Just got GPT-OSS-120b deployed on dual RTX5090 rig. 128k context (Significant CPU offloading ~10t/s) I know it's nothing amazing I'm just a little proud of myself and needed to tell someone! Thanks for lookin! | 2026-02-20T05:02:12 | Interesting-Ad4922 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9mcjw | false | null | t3_1r9mcjw | /r/LocalLLaMA/comments/1r9mcjw/gptoss120b_on_2x_rtx5090/ | false | false | 31 | {'enabled': True, 'images': [{'id': 'atfvw7c10lkg1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?width=108&crop=smart&auto=webp&s=24b44512e057395a96acd519aed8ab4410220020', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?width=216&crop=smart&auto=web... | ||
AI Generating Speech From Images Instead of Text | 0 | I was using an AI video generator called Seedance to generate a short video.
I uploaded a single image I took in a rural area — an older, farmer-looking man, countryside setting, mountains in the background. There was no text in the image and no captions or prompts from me.
When the video was generated, the man spoke... | 2026-02-20T05:00:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r9matd/ai_generating_speech_from_images_instead_of_text/ | No_Caterpillar_1491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9matd | false | null | t3_1r9matd | /r/LocalLLaMA/comments/1r9matd/ai_generating_speech_from_images_instead_of_text/ | false | false | self | 0 | null |
Decoding speed benchmark: Qwen3.5 32k context. | 0 | Seeing 8.6x faster decoding at 32k context. The tokens/sec on this thing is wild considering the total parameter size. | 2026-02-20T04:41:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ly6x/decoding_speed_ben_chmark_qw_en35_32k_context/ | Hot_Supermarket9039 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ly6x | false | null | t3_1r9ly6x | /r/LocalLLaMA/comments/1r9ly6x/decoding_speed_ben_chmark_qw_en35_32k_context/ | false | false | self | 0 | null |
[Resource] I built a simple web tool to calculate if an LLM will fit in your VRAM (quantization, KV cache, and overhead included) | 1 | [removed] | 2026-02-20T04:36:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r9lv0b/resource_i_built_a_simple_web_tool_to_calculate/ | Agitated_Fold_7745 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9lv0b | false | null | t3_1r9lv0b | /r/LocalLLaMA/comments/1r9lv0b/resource_i_built_a_simple_web_tool_to_calculate/ | false | false | self | 1 | null |
Consistency diffusion language models: Up to 14x faster, no quality loss | 12 | 2026-02-20T04:17:28 | https://www.together.ai/blog/consistency-diffusion-language-models | incarnadine72 | together.ai | 1970-01-01T00:00:00 | 0 | {} | 1r9lh00 | false | null | t3_1r9lh00 | /r/LocalLLaMA/comments/1r9lh00/consistency_diffusion_language_models_up_to_14x/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA.png?width=108&crop=smart&auto=webp&s=a8f24ad53831a2c8addfd6c94cc863c7fdf88e30', 'width': 108}, {'height': 113, 'url': 'h... | ||
A collection of reasoning datasets from all the top AI models | 20 | 50k Reasoning CoT datasets. All collected by me. Total cost $211.34
[https://huggingface.co/collections/crownelius/instruction-and-reasoning](https://huggingface.co/collections/crownelius/instruction-and-reasoning)
Creative writing datasets can be located here:
[https://huggingface.co/collections/crownelius/creati... | 2026-02-20T04:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r9lf6e/a_collection_of_reasoning_datasets_from_all_the/ | volious-ka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9lf6e | false | null | t3_1r9lf6e | /r/LocalLLaMA/comments/1r9lf6e/a_collection_of_reasoning_datasets_from_all_the/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA.png?width=108&crop=smart&auto=webp&s=0e3e3dc788b62a12341b92bb82b9a7bb8bd0b468', 'width': 108}, {'height': 116, 'url': 'h... |
Would LLMs Launch Nuclear Weapons If They Can? Most Would, Some Definitely | 2 | Disclaimer: Those are 2 slides I am going to share in an academic setting tomorrow. As a continuation of my [Vox Deorum](https://www.reddit.com/r/LocalLLaMA/comments/1pux0yc/comment/nxdrjij/) project, LLMs are playing Civilization V with [Vox Populi](https://github.com/LoneGazebo/Community-Patch-DLL). **The system prom... | 2026-02-20T04:08:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r9lan3/would_llms_launch_nuclear_weapons_if_they_can/ | vox-deorum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9lan3 | false | null | t3_1r9lan3 | /r/LocalLLaMA/comments/1r9lan3/would_llms_launch_nuclear_weapons_if_they_can/ | false | false | 2 | null | |
Are there any reliable uncensored embedding models out there? | 3 | With a plethora of uncensored models available would like to move back to local genning for writing. But I'm so addicted to using RAG for organization and world continuity as well as context expansion, I'm crushed when I remember that the embedders are the bottleneck in vector retrieval when they hit guardrails in sca... | 2026-02-20T04:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9l6rf/are_there_any_reliable_uncensored_embedding/ | ben_dover_deer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9l6rf | false | null | t3_1r9l6rf | /r/LocalLLaMA/comments/1r9l6rf/are_there_any_reliable_uncensored_embedding/ | false | false | self | 3 | null |
Is there a way to speed up prompt processing with some layers on CPU with qwen-3-coder-next or similar MoEs? | 5 | I feel like I tried every combination of `--n-cpu-moe` and such. I was running Qwen3-Coder-Next-MXFP4_MOE.gguf. It was running at 32T/s, but the prompt processing was ridiculously slow, like literally a minute for a simple prompt. Is that just how it is, or am I missing something?
I have 30GB VRAM and 43GB RAM. | 2026-02-20T03:51:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kxum/is_there_a_way_to_speed_up_prompt_processing_with/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kxum | false | null | t3_1r9kxum | /r/LocalLLaMA/comments/1r9kxum/is_there_a_way_to_speed_up_prompt_processing_with/ | false | false | self | 5 | null |
Best Current Vision Models for 16 GB VRAM? | 1 | I heard about Qwen 7B, but what do you think are the most accurate open-source or free vision models that you can run on your own? | 2026-02-20T03:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kw9u/best_current_vision_models_for_16_gb_vram/ | Rune_Nice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kw9u | false | null | t3_1r9kw9u | /r/LocalLLaMA/comments/1r9kw9u/best_current_vision_models_for_16_gb_vram/ | false | false | self | 1 | null |
Programmers, what tools / plugins are you using? | 1 | I tried using llama.cpp with PyCharm and a few plugins; the experience was bad and made me prefer to go back to copy-paste. But I want to improve my productivity and efficiency, so what tools / plugins / IDE are you using? | 2026-02-20T03:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kvrp/programmers_what_tools_plugin_are_you_using/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kvrp | false | null | t3_1r9kvrp | /r/LocalLLaMA/comments/1r9kvrp/programmers_what_tools_plugin_are_you_using/ | false | false | self | 1 | null |
Built an MCP server that lets Claude discover and call 700+ APIs — engine is open source | 0 | Been working on a problem that kept annoying me: every time I wanted my local LLM to interact with an API, I had to manually write the tool definition, figure out auth, handle the response format. Repeat for every single API.
So I built an MCP server that does API discovery via natural language. You ask "how do I send... | 2026-02-20T03:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kugg/built_an_mcp_server_that_lets_claude_discover_and/ | Firm_Bluebird_3095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kugg | false | null | t3_1r9kugg | /r/LocalLLaMA/comments/1r9kugg/built_an_mcp_server_that_lets_claude_discover_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY.png?width=108&crop=smart&auto=webp&s=72854ab1cb6f3b0d1e348fbcd7501dc1948d7ac1', 'width': 108}, {'height': 108, 'url': 'h... |
Clawedbot/moltbot may look like a joke in front of this | 0 | I am making an AI agent that can automate literally anything, as it can control anything on your PC at the system level without any screenshots, so it has lower LLM cost and is more efficient. It has guardrails so it doesn’t break the system and everything, and it is a voice-based background agent, meaning it will run ... | 2026-02-20T03:43:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r9krwe/clawedbotmoltbot_may_look_like_a_joke_in_front_of/ | TopFuture2709 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9krwe | false | null | t3_1r9krwe | /r/LocalLLaMA/comments/1r9krwe/clawedbotmoltbot_may_look_like_a_joke_in_front_of/ | false | false | self | 0 | null |
The AI benchmarking system is completely broken — 9 frontier models in 90 days and every number is fake | 0 | Meta admitted they fudged Llama 4.
Labs are submitting 10+ private variants and only showing the winners.
LLM-as-judge has terminal self-preference bias (it literally loves itself).
LMArena Elo gap between #1 and #10 is now just 5.4%.
I just published the deepest dive I’ve seen on exactly how bad it got — with t... | 2026-02-20T03:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kgpb/the_ai_benchmarking_system_is_completely_broken_9/ | Silver_Raspberry_811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kgpb | false | null | t3_1r9kgpb | /r/LocalLLaMA/comments/1r9kgpb/the_ai_benchmarking_system_is_completely_broken_9/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=108&crop=smart&auto=webp&s=78b37fe5be0302f90355add92f4143e36a28f71a', 'width': 108}, {'height': 112, 'url': '... |
Ideas about domain models per US$0.80 in Brazilian | 0 | So I was thinking: what if we set up a domain model based on user–AI interaction – like taking a real chat log of 15k lines on a super specific topic (bypassing antivirus, network analysis, or even social engineering) and using it to fine-tune a small model like GPT-2 or DistilGPT-2. The idea is to use it as a pre-prom... | 2026-02-20T03:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kex1/ideas_about_domain_models_per_us080_in_brazillian/ | pmd02931 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kex1 | false | null | t3_1r9kex1 | /r/LocalLLaMA/comments/1r9kex1/ideas_about_domain_models_per_us080_in_brazillian/ | false | false | self | 0 | null |
Looking for Model | 1 | Looking for the highest quality quant I can run of gpt oss abliterated, currently using 128gb MacBook Pro. Thanks! | 2026-02-20T03:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/1r9k6z4/looking_for_model/ | cookiesandpreme12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9k6z4 | false | null | t3_1r9k6z4 | /r/LocalLLaMA/comments/1r9k6z4/looking_for_model/ | false | false | self | 1 | null |
High-sparsity MoE is the only way forward for us. | 17 | Qwen3.5 proves it. You get 1T parameter reasoning but only pay the compute cost of 17B. Dense models are dead for local hosting. | 2026-02-20T02:49:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r9jmx3/highsparsity_moe_is_the_only_way_forward_for_us/ | New_Construction1370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9jmx3 | false | null | t3_1r9jmx3 | /r/LocalLLaMA/comments/1r9jmx3/highsparsity_moe_is_the_only_way_forward_for_us/ | false | false | self | 17 | null |
Possible “Assistance Asymmetry” in GPT: actionable on neutral writing, vague on security report drafting | 0 | **Preliminary Observation: Topic-Conditioned Assistance Asymmetry in LLM Report Drafting**
In a series of informal but repeated drafting sessions, I observed what appears to be a topic-conditioned asymmetry in assistance patterns when using a large language model (LLM) for document preparation. The asymmetry emerges m... | 2026-02-20T02:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r9jj8b/possible_assistance_asymmetry_in_gpt_actionable/ | PresentSituation8736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9jj8b | false | null | t3_1r9jj8b | /r/LocalLLaMA/comments/1r9jj8b/possible_assistance_asymmetry_in_gpt_actionable/ | false | false | self | 0 | null |
We ran 5 on-device LLMs on an Android phone to post a tweet. Most failed. Here's what actually worked | 0 | We built an Android agent that navigates your phone's UI and completes tasks using a local LLM running entirely on-device. No API calls, no data leaving the phone, no internet required.
Demo: [https://youtube.com/shorts/0UejLXoJ1xg](https://youtube.com/shorts/0UejLXoJ1xg)
The demo task: open X from the home feed and ... | 2026-02-20T02:39:50 | https://v.redd.it/fw6surzpakkg1 | thecoder12322 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9jfds | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fw6surzpakkg1/DASHPlaylist.mpd?a=1774147215%2CZDEwOWUyNjdiZTliMWUzOTIwNmJmMzQxNjQ5OWMyYzhkNWU5NWRkYzhlZDU2YTJhYzY2NDZkYjMwYzM0OWNhOA%3D%3D&v=1&f=sd', 'duration': 45, 'fallback_url': 'https://v.redd.it/fw6surzpakkg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1r9jfds | /r/LocalLLaMA/comments/1r9jfds/we_ran_5_ondevice_llms_on_an_android_phone_to/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI.png?width=108&crop=smart&format=pjpg&auto=webp&s=60aff7bf597be94627300a66a7cfd747344c... | |
best for 5080 + 64GB RAM build | 1 | Specs: **5080 (16GB VRAM)**, **9950X 3D**, **64GB ddr5 RAM**.
What’s the "smartest" model I can run at a usable speed? Looking for Claude-level coding and deep reasoning for college revisions.
i am not a programmer or anything like that, it's just i am a dentistry student so my studying material is a lot and i want get ... | 2026-02-20T02:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r9j8hb/best_for_5080_64gb_ram_build/ | squareshady | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9j8hb | false | null | t3_1r9j8hb | /r/LocalLLaMA/comments/1r9j8hb/best_for_5080_64gb_ram_build/ | false | false | self | 1 | null |
Preliminary Observation: Topic-Conditioned Assistance Asymmetry in GPT Report Drafting | 2 | I’m doing independent LLM safety research and noticed a pattern I can reproduce often enough to be interesting:
For normal writing tasks, the model is usually concrete and operational.
For security-report drafting (triage language, impact framing, repro structure), it becomes more hedged and deflective: smoother word... | 2026-02-20T02:15:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9iw1s/preliminary_observation_topicconditioned/ | PresentSituation8736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9iw1s | false | null | t3_1r9iw1s | /r/LocalLLaMA/comments/1r9iw1s/preliminary_observation_topicconditioned/ | false | false | self | 2 | null |
MiniMax M2.5 setup on older PC, getting 12.9 t/s with 72k context | 13 | Hi, I am VERY new to all of this, but I have been working at optimizing my local unsloth/MiniMax-M2.5-GGUF:UD-Q3\_K\_XL after reading a post on here about it.
I don't know much about this but I do know that for a couple of days I have been working on this, and I got it from 5.5 t/s to 9 t/s, then got that up to 12.9 t... | 2026-02-20T02:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r9iqy8/minimax_m25_setup_on_older_pc_getting_129_ts_with/ | CrashTest_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9iqy8 | false | null | t3_1r9iqy8 | /r/LocalLLaMA/comments/1r9iqy8/minimax_m25_setup_on_older_pc_getting_129_ts_with/ | false | false | self | 13 | null |
21 yr old asian twink chatmogs the entire class on ucla's grad day | 0 | 2026-02-20T02:01:57 | cobalt1137 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9il2h | false | null | t3_1r9il2h | /r/LocalLLaMA/comments/1r9il2h/21_yr_old_asian_twink_chatmogs_the_entire_class/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'rm9n6txc4kkg1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/rm9n6txc4kkg1.png?width=108&crop=smart&auto=webp&s=16565987ebba1ccb548644337979ebb0bb5e3fd0', 'width': 108}, {'height': 218, 'url': 'https://preview.redd.it/rm9n6txc4kkg1.png?width=216&crop=smart&auto=we... | |||
I vibecoded KittenTTS for iOS in 1 hour - native TTS with 8 voices, runs on-device | 0 | Just shipped an iOS port of KittenTTS that runs entirely on-device using ONNX Runtime. Vibecoded the whole thing in about an hour.
What it does:
* Text-to-speech with 8 different voices (Bella, Jasper, Luna, Bruno, Rosie, Hugo, Kiki, Leo)
* \~300ms inference on iPhone with the nano model
* Native SwiftUI interface
* ... | 2026-02-20T01:57:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ihub/i_vibecoded_kittentts_for_ios_in_1_hour_native/ | Living_Commercial_10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ihub | false | null | t3_1r9ihub | /r/LocalLLaMA/comments/1r9ihub/i_vibecoded_kittentts_for_ios_in_1_hour_native/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA.png?width=108&crop=smart&auto=webp&s=8ee0a88d39280b3567158281c81dedc71a389f90', 'width': 108}, {'height': 108, 'url': 'h... |
A trick to slightly improve the response accuracy of small local models. | 15 | It's a pretty silly tip and many of you probably already know the reason behind this but it helped me so I thought it was worth sharing.
I was asking the gemma 3 12b q6\_k model if the command to limit the GPU's TDP remains active during GPU passthrough, and the model constantly gave me the wrong answer via halucinati... | 2026-02-20T01:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ihg7/a_trick_to_slightly_improve_the_response_accuracy/ | staltux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ihg7 | false | null | t3_1r9ihg7 | /r/LocalLLaMA/comments/1r9ihg7/a_trick_to_slightly_improve_the_response_accuracy/ | false | false | self | 15 | null |
Your AI Doesn't Really Have Memory. It Has Search. Here's What (I Think) Actually Works. | 1 | [removed] | 2026-02-20T01:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r9hz47/your_ai_doesnt_really_have_memory_it_has_search/ | Past_Sir1123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9hz47 | false | null | t3_1r9hz47 | /r/LocalLLaMA/comments/1r9hz47/your_ai_doesnt_really_have_memory_it_has_search/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'piWbxo1SmX_Nza8Ytejl6TK-VCouMPMsoDrS2g-nAqo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/piWbxo1SmX_Nza8Ytejl6TK-VCouMPMsoDrS2g-nAqo.png?width=108&crop=smart&auto=webp&s=c921d38f627b6bac902fac867206bb12a54439a7', 'width': 108}, {'height': 113, 'url': 'h... |
ZeroToken – A local-first agent that handles the "thinking" (planning/patching) for $0 using Ollama, then exports to Claude/Gemini. | 0 | Hey r/LocalLLaMA,
I got tired of burning through Claude/OpenAI credits every time an agent had to "think," scan a codebase, or retry a failed patch. So I built **ZeroToken**, a CLI tool that offloads the entire orchestration loop to your local hardware.
# Why I built this:
Most "coding agents" charge a middleman fee... | 2026-02-20T01:29:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r9hvgq/zerotoken_a_localfirst_agent_that_handles_the/ | Altruistic-Trip-2749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9hvgq | false | null | t3_1r9hvgq | /r/LocalLLaMA/comments/1r9hvgq/zerotoken_a_localfirst_agent_that_handles_the/ | false | false | self | 0 | null |
Qwen3 Coder Next 8FP in the process of converting the entire Flutter documentation for 12 hours now with just 3 sentence prompt with 64K max tokens at around 102GB memory (out of 128GB)... | 124 | A remarkable LLM -- we really have a winner.
(Most of the models below were NVFP4)
GPT OSS 120B can't do this (though it's a bit outdated now)
GLM 4.7 Flash can't do this
SERA 32B tokens too slow
Devstral 2 Small can't do this
SEED OSS freezes while thinking
Nemotron 3 Nano can't do this
(Unsure if it's C... | 2026-02-20T00:54:46 | https://www.reddit.com/gallery/1r9h3g8 | jinnyjuice | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r9h3g8 | false | null | t3_1r9h3g8 | /r/LocalLLaMA/comments/1r9h3g8/qwen3_coder_next_8fp_in_the_process_of_converting/ | false | false | 124 | null | |
If RAM prices were considered too high in 2024 because of unusually slow development and too low capacity | 0 | Why were there no startups that would produce some inexpensive LPDDR chips and simple PC adapters? Why is there no open source hardware memory?
[https://buysellkeep.com/2024/10/06/why-ram-pricing-is-a-ripoff-stuck-in-2014-but-paying-in-2024/](https://buysellkeep.com/2024/10/06/why-ram-pricing-is-a-ripoff-stuck... | 2026-02-20T00:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r9h1zb/if_ram_prices_were_considered_too_high_in_2024/ | Highwaytothebeach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9h1zb | false | null | t3_1r9h1zb | /r/LocalLLaMA/comments/1r9h1zb/if_ram_prices_were_considered_too_high_in_2024/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ybtbS9OJ6JHf4NMagRI6n8CLMwpPwD2canOfP6fUlSQ', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/ybtbS9OJ6JHf4NMagRI6n8CLMwpPwD2canOfP6fUlSQ.jpeg?width=108&crop=smart&auto=webp&s=51d2a8a45b424ef5ff1f8892e01182b0dbc7d99f', 'width': 108}, {'height': 147, 'url': '... |