title (string, len 1-300) | score (int64, 0-8.54k) | selftext (string, len 0-41.5k) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, len 0-878) | author (string, len 3-20) | domain (string, len 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, len 7) | locked (bool, 2 classes) | media (string, len 646-1.8k, nullable) | name (string, len 10) | permalink (string, len 33-82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, len 4-213, nullable) | ups (int64, 0-8.54k) | preview (string, len 301-5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I got tired of RAG failing on code architecture, so I wrote a Go tool that turns repos into an AST text-map for Claude/GPT | 0 | Hey everyone,
I've been trying to use agentic coding for larger projects, but standard RAG keeps failing me. Vectorizing source code into arbitrary chunks and doing a similarity search just loses the structural context. It's a black box, and sending raw code directly burns through tokens way too fast.
I wanted something deterministic (UNIX philosophy: KISS). So I built a brutally fast CLI parser in Go. You point it at a directory, and in a few milliseconds, it spits out a `_level_3.map` text file.
The text file is basically a highly compressed "Radar Map" of your entire architecture – it lists every file, class, function signature, and their exact line numbers, but strips out the heavy implementation logic.
You just drag and drop that tiny map file into Claude or ChatGPT. The LLM stops hallucinating file locations and knows exactly which files it needs to edit.
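The author's parser is written in Go; as a rough sketch of the idea only (not the actual tool), the same signature-only map can be produced for Python files with the standard `ast` module:

```python
import ast
import os

def repo_map(root):
    """Emit 'path:line name' for every class/function, skipping implementation bodies."""
    entries = []
    for dirpath, _, files in os.walk(root):
        for fname in sorted(files):
            if not fname.endswith(".py"):
                continue
            path = os.path.join(dirpath, fname)
            try:
                with open(path, encoding="utf-8") as fh:
                    tree = ast.parse(fh.read())
            except SyntaxError:
                continue  # skip files that do not parse
            for node in ast.walk(tree):
                if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
                    entries.append(f"{path}:{node.lineno} {node.name}")
    return "\n".join(entries)
```

The point is that the output is deterministic and tiny compared to raw source: names and line numbers survive, bodies do not.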
If anyone else is struggling with RAG for coding and wants to try this out, let me know and I'll drop the GitHub link in the comments! It's completely free, single-binary, and runs entirely local.
| 2026-02-20T15:07:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r9xsmg/i_got_tired_of_rag_failing_on_code_architecture/ | North-Project2806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9xsmg | false | null | t3_1r9xsmg | /r/LocalLLaMA/comments/1r9xsmg/i_got_tired_of_rag_failing_on_code_architecture/ | false | false | self | 0 | null |
I found something weird in GPT-2's residual stream, and would love for someone to check it out. | 2 | So full disclaimer, I have zero ML background. I get how that sounds. But I've been poking at GPT-2 all week and I think I found something but I honestly don't know.
Short story: the period in a sentence like "The temperature was 98." is ambiguous. Is it a decimal point or a sentence ender? GPT-2 thinks decimal, 88% confident. And when you try to push it to change its mind, the flip isn't gradual at all. It's basically nothing, nothing, nothing, then all of a sudden it flips. And it only works along one very specific direction in 768-dimensional space. 0 out of 100 random directions do anything.
The really weird part is the asymmetry, it takes about 4x more force to flip the model from structural to numerical than the other way around. And the whole thing matches the predictions of catastrophe theory which is a branch of math I'm obsessed with.
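For context, the fold is the simplest catastrophe in catastrophe theory, and its signature is exactly this nothing-nothing-then-sudden-flip behavior. A toy numerical sketch of the math (illustrative only, not the author's analysis):

```python
import numpy as np

# Fold catastrophe: potential V(x) = x**3/3 + a*x.
# For a < 0 there is a stable minimum at x = -sqrt(-a) and an unstable
# maximum at x = +sqrt(-a); at a = 0 they collide and vanish, so the
# system's state must jump discontinuously: a sudden flip, not a gradual one.
def equilibria(a):
    if a >= 0:
        return []                  # no equilibria left: forced jump
    r = float(np.sqrt(-a))
    return [-r, r]                 # [stable minimum, unstable maximum]

for a in (-1.0, -0.25, -0.01, 0.01):
    print(f"a={a:+.2f} equilibria={equilibria(a)}")
```

Sweeping the control parameter `a` one way vs. the other is also how hysteresis (the asymmetric flip force) shows up in these models.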
Everything reproduces from one script in 29 seconds on CPU: https://github.com/karlijoyj-web/fold-catastrophe-gpt2
I would genuinely love feedback from people who actually know what they're doing and can see if it scales. | 2026-02-20T14:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r9xhls/i_found_something_weird_in_gpt2s_residual_stream/ | denimanddahlias | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9xhls | false | null | t3_1r9xhls | /r/LocalLLaMA/comments/1r9xhls/i_found_something_weird_in_gpt2s_residual_stream/ | false | false | self | 2 |
Show r/LocalLLaMA: DocParse Arena – Build your own private VLM leaderboard for specific tasks | 2 | Hi everyone,
I’ve found that general benchmarks like [**ocrarena.ai**](http://ocrarena.ai) are great for global VLM rankings, but they don't always help when I need to know which model parses *my* specific, often sensitive, document formats (like custom invoices, Korean business cards, or complex resumes).
To solve this, I built **DocParse Arena** — a self-hosted, open-source platform designed to run blind A/B tests and build your own private ELO leaderboard for document parsing tasks.
**Why DocParse Arena?**
* **Project-Specific Benchmarking**: Move beyond generic scores. Use your own proprietary data to see which model actually wins for your specific use case.
* **Privacy & Self-hosted**: Connect your local instances (Ollama, vLLM, LiteLLM) to keep your documents strictly off the cloud.
* **Specialized VLM Registry**: I’ve integrated custom post-processing for models like **dots.ocr** and **DeepSeek-OCR**, which output structured data/coordinates instead of clean Markdown.
* **Parallel Processing**: It automatically splits multi-page PDFs and runs OCR in parallel to speed up your A/B testing rounds.
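An ELO leaderboard like this is typically driven by the standard pairwise Elo update after each blind round; a minimal sketch of that formula (the platform's actual scoring code may differ):

```python
def elo_update(r_a, r_b, winner, k=32):
    """Standard Elo update after one blind A/B round.

    winner: "a", "b", or "tie".
    """
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

print(elo_update(1000, 1000, "a"))  # equal ratings, A wins -> (1016.0, 984.0)
```

The update is zero-sum, so rankings stay comparable no matter how many rounds each model pair has played.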
**The Story Behind the Project:** This is my first major open-source contribution! I developed the entire tool using **Claude Code**. I’ve spent the last few weeks rigorously reviewing and refining the codebase to ensure it’s production-ready and easy to deploy via Docker.
I’m looking for feedback from the local LLM community, especially on which VLM models or post-processing pipelines I should add next!
**GitHub:** [https://github.com/Bae-ChangHyun/DocParse\_Arena](https://github.com/Bae-ChangHyun/DocParse_Arena) | 2026-02-20T14:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r9xg9p/show_rlocalllama_docparse_arena_build_your_own/ | Available-Message509 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9xg9p | false | null | t3_1r9xg9p | /r/LocalLLaMA/comments/1r9xg9p/show_rlocalllama_docparse_arena_build_your_own/ | false | false | self | 2 | null |
I evolved my Latent Reasoning Model's code, critiques are welcome | 1 | [This](https://github.com/MatthewLacerda2/TinyRefinementModel/blob/main/train_local.py) is being trained on a RTX 2060 6gb vram. OOM has been a bitch and i rarely get to train with 512 dimensions. My last run was last night, 5h total, with 384 dim, but with:
MAX_STEPS_LIMIT = 8
ACCUMULATION_STEPS = 64
SCRATCH_SLOTS = 128
It reached a loss of 5.1 and then I stopped. I didn't have time to run the inference code, though.
Been training it locally because it's free, but once I finish this I'll train on TPU spot instances. Mind you, my GPU is not compatible with bfloat16.
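For anyone unfamiliar with the ACCUMULATION_STEPS knob above: gradients from many small micro-batches are scaled and summed before a single optimizer step, simulating a larger batch than VRAM allows. A framework-agnostic sketch of the pattern (illustrative, not the repo's code):

```python
def accumulated_steps(micro_batch_grads, accumulation_steps=64):
    """Average micro-batch gradients and yield one 'optimizer step' per window."""
    total, count = 0.0, 0
    for g in micro_batch_grads:
        total += g / accumulation_steps   # scale each micro-gradient
        count += 1
        if count == accumulation_steps:
            yield total                   # this is where optimizer.step() would run
            total, count = 0.0, 0

# 128 micro-batches of gradient 1.0 -> two optimizer steps, each averaging to 1.0
steps = list(accumulated_steps([1.0] * 128))
print(steps)  # [1.0, 1.0]
```

In PyTorch the same idea is `loss = loss / accumulation_steps; loss.backward()` on every micro-batch, with `optimizer.step()` only every Nth batch.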
https://preview.redd.it/hpv5cwjyvnkg1.png?width=600&format=png&auto=webp&s=69dfd54935cd868a8be753131882a51dc91f0b3d
| 2026-02-20T14:42:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r9x5l8/i_evolved_my_latent_reasoning_models_code/ | Specific-Welder3120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9x5l8 | false | null | t3_1r9x5l8 | /r/LocalLLaMA/comments/1r9x5l8/i_evolved_my_latent_reasoning_models_code/ | false | false | 1 | null | |
Recommend pdf translator that handles tables well. | 2 | Title. I often need to translate pdfs with lots of tables. All solutions i tried either skip the tables or produce unaligned / hard to read results. | 2026-02-20T14:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r9x3pu/recommend_pdf_translator_that_handles_tables_well/ | MorePeppers9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9x3pu | false | null | t3_1r9x3pu | /r/LocalLLaMA/comments/1r9x3pu/recommend_pdf_translator_that_handles_tables_well/ | false | false | self | 2 | null |
in case you want to use a local llm to search through a corpus of PDF files | 6 | I made this app to be able to visually search through my millions of PDF images of my cats. Say that I want to only search for my orange cat on a beach, I can search "orange cat beach", and the app will return all the PDFs that most closely resemble that.
And, a new feature in v1.1, you can now upload a photo and do reverse image search to find similar pics. For example, this one time that same orange cat ran off with a kitten from another litter - a small white cat. I can take both of their pics and save them as one picture. Then, do a reverse image search on that picture to find the PDFs.
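Under the hood, this kind of text-to-image and image-to-image search presumably boils down to embedding vectors plus cosine similarity; a minimal sketch of the retrieval step (embedding model calls omitted, vectors assumed precomputed):

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=3):
    """Rank documents by cosine similarity to a query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                     # cosine similarity per document
    return np.argsort(-scores)[:k]     # indices of best matches, best first

docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(top_k(np.array([1.0, 0.1]), docs, k=2))  # best matches: docs 0, then 2
```

Reverse image search is the same ranking with an image embedding as the query instead of a text embedding.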
Anyone have any other good use cases for this kind of app?
Check it out at:
http://www.github.com/bcherb2/pdfiles | 2026-02-20T14:37:06 | https://v.redd.it/qsanv573vnkg1 | djdadi | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9x0ou | false | t3_1r9x0ou | /r/LocalLLaMA/comments/1r9x0ou/in_case_you_want_to_use_a_local_llm_to_search/ | false | false | 6 |
We replaced the LLM in a voice assistant with a fine-tuned 0.6B model. 90.9% tool call accuracy vs. 87.5% for the 120B teacher. ~40ms inference. | 81 | Voice assistants almost always use a cloud LLM for the "brain" stage (intent routing, slot extraction, dialogue state). The LLM stage alone adds 375-750ms per turn, which pushes total pipeline latency past the 500-800ms threshold where conversations feel natural.
For bounded workflows like banking, insurance, or telecom, that's a lot of unnecessary overhead. The task is not open-ended generation -- it's classifying intent and extracting structured slots from what the user said. That's exactly where fine-tuned SLMs shine.
We built VoiceTeller, a banking voice assistant that swaps the LLM for a locally-running fine-tuned Qwen3-0.6B. Numbers:
| Model | Params | Single-Turn Tool Call Accuracy |
|---|---|---|
| GPT-oss-120B (teacher) | 120B | 87.5% |
| Qwen3-0.6B (fine-tuned) | 0.6B | **90.9%** |
| Qwen3-0.6B (base) | 0.6B | 48.7% |
And the pipeline latency breakdown:
| Stage | Cloud LLM | SLM |
|---|---|---|
| ASR | 200-350ms | ~200ms |
| **Brain** | **375-750ms** | **~40ms** |
| TTS | 75-150ms | ~75ms |
| **Total** | **680-1300ms** | **~315ms** |
The fine-tuned model beats the 120B teacher by ~3 points while being 200x smaller. The base model at 48.7% is unusable -- over a 3-turn conversation that compounds to about 11.6% success rate.
Architecture note: the SLM never generates user-facing text. It only outputs structured JSON (function name + slots). A deterministic orchestrator handles slot elicitation and response templates. This keeps latency bounded and responses well-formed regardless of what the model outputs.
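That split (model emits JSON, deterministic code does everything else) can be sketched roughly as follows; the function and slot names here are hypothetical, not taken from the repo:

```python
REQUIRED_SLOTS = {"transfer_funds": ["amount", "recipient"]}  # hypothetical schema

def orchestrate(model_output):
    """Route a structured SLM tool call; the model never writes user-facing text."""
    fn, slots = model_output["function"], model_output["slots"]
    missing = [s for s in REQUIRED_SLOTS[fn] if s not in slots]
    if missing:
        # deterministic slot elicitation via a response template
        return {"say": f"Sure. What is the {missing[0]}?"}
    return {"execute": fn, "args": slots}

print(orchestrate({"function": "transfer_funds", "slots": {"amount": "50"}}))
print(orchestrate({"function": "transfer_funds",
                   "slots": {"amount": "50", "recipient": "Alice"}}))
```

Because responses come from templates, latency and output quality are bounded even when the model emits a partial or odd tool call.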
The whole thing runs locally: Qwen3-ASR-0.6B for speech-to-text, the fine-tuned Qwen3-0.6B via llama.cpp for intent routing, Qwen3-TTS for speech synthesis. Full pipeline on Apple Silicon with MPS.
GitHub (code + training data + pre-trained GGUF): https://github.com/distil-labs/distil-voice-assistant-banking
HuggingFace model: https://huggingface.co/distil-labs/distil-qwen3-0.6b-voice-assistant-banking
Blog post with the full write-up: https://www.distillabs.ai/blog/the-llm-in-your-voice-assistant-is-the-bottleneck-replace-it-with-an-slm
Happy to answer questions about the training setup, the multi-turn tool calling format, or why the student beats the teacher.
| 2026-02-20T14:37:00 | party-horse | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9x0l2 | false | null | t3_1r9x0l2 | /r/LocalLLaMA/comments/1r9x0l2/we_replaced_the_llm_in_a_voice_assistant_with_a/ | false | false | 81 |
Any fine-tune of qwen3-vl for creative writing? | 3 | After doing some experiments I found qwen3-vl to be really good at writing prompts for image generation models, so I'd like to find one fine-tuned for creative writing.
I don't care if it's NSFW or not. | 2026-02-20T14:33:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wxi5/any_fine_tune_of_qwen3vl_for_creative_writing/ | AdventurousGold672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wxi5 | false | null | t3_1r9wxi5 | /r/LocalLLaMA/comments/1r9wxi5/any_fine_tune_of_qwen3vl_for_creative_writing/ | false | false | self | 3 | null |
GGML and llama.cpp join HF to ensure the long-term progress of Local AI | 224 | article by Georgi Gerganov, Xuan-Son Nguyen, Aleksander Grygier, Lysandre, Victor Mustar, Julien Chaumond | 2026-02-20T14:31:22 | https://huggingface.co/blog/ggml-joins-hf | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r9wvg4 | false | null | t3_1r9wvg4 | /r/LocalLLaMA/comments/1r9wvg4/ggml_and_llamacpp_join_hf_to_ensure_the_longterm/ | false | false | 224 |
Best text-to-speech model for Mac M4? I want something closer to Grok's female voice. | 1 | So I was reading articles, and I always tend to procrastinate while reading. So I found a hack: I just pasted this prompt into Grok.
> Format this properly in markdown, just remove the --- from in between, don't change anything else.
And it gave me a proper voice mode. The problem is it only gives me half the article, since the article is 4500 words and it probably has restrictions on inputs that long. I can chunk it and ask it to make sections, and that works, but I'd like a local process I can one-shot.
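The chunking workaround can be scripted so the whole article is one-shot: split at sentence boundaries into pieces the model handles comfortably, then synthesize each piece in order. A minimal sketch of the splitter:

```python
import re

def chunk_text(text, max_words=1000):
    """Pack whole sentences into chunks of at most ~max_words words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for s in sentences:
        words = len(s.split())
        if current and count + words > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(s)
        count += words
    if current:
        chunks.append(" ".join(current))
    return chunks

print(len(chunk_text("One. " * 3000, max_words=1000)))  # 3
```

Feed each chunk to whichever TTS model wins below and concatenate the audio.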
Is there any text voice model that is closer to Grok's voice? It has a female seductive voice which takes pauses and breaks and reads extremely well. I'd love something like that. Sonnet 4.6 gave me 3 options:
1. Orpheus TTS - This was the #1 recommendation
2. Kokoro - This was the speedy version
3. KaniTTS-2 MLX - This was the zero-shot voice cloning via speaker embeddings
I'd like to ask which one is the best and which one I can generate articles for voice quickly. I don't want to spend more than 10 minutes per 5000 words.
I'd like just 2 features:
1. Seductive Female Voice (not gooning I promise but its good on ears)
2. Pauses and breaks | 2026-02-20T14:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wo7o/best_text_to_voice_model_for_mac_m4_i_want/ | deadcoder0904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wo7o | false | null | t3_1r9wo7o | /r/LocalLLaMA/comments/1r9wo7o/best_text_to_voice_model_for_mac_m4_i_want/ | false | false | self | 1 | null |
Best open-source model to host on 4× H200 GPUs for general chat + IDE agent (OpenWebUI + Cline)? | 1 | Hey everyone,
I have access to 4× NVIDIA H200 GPUs and I’d like to self-host the best possible open-source model for:
• General chat (via OpenWebUI)
• Coding + IDE agent usage (Cline or similar autonomous coding agent)
I’m looking for the best overall quality model I can realistically run on this hardware. Any advice? | 2026-02-20T14:14:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wgmh/best_opensource_model_to_host_on_4_h200_gpus_for/ | OriginalTangerine358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wgmh | false | null | t3_1r9wgmh | /r/LocalLLaMA/comments/1r9wgmh/best_opensource_model_to_host_on_4_h200_gpus_for/ | false | false | self | 1 | null |
ggml / llama.cpp joining Hugging Face — implications for local inference? | 25 | ggml / llama.cpp joining HF feels like a significant moment for local inference.
On one hand, this could massively accelerate tooling, integration, and long-term support for local AI. On the other, it concentrates even more of the open model stack under one umbrella.
Is this a net win for the community?
What does this mean for alternative runtimes and independent inference stacks? | 2026-02-20T14:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wbl3/ggml_llamacpp_joining_hugging_face_implications/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wbl3 | false | null | t3_1r9wbl3 | /r/LocalLLaMA/comments/1r9wbl3/ggml_llamacpp_joining_hugging_face_implications/ | false | false | self | 25 | null |
What's the best used GPU for AI on a budget of 10-15k UAH? | 0 | I want to buy a GPU for my server so I can run AI models at home and use them in my projects without paying for an API.
Right now I'm leaning toward an RTX 3060 12GB. Can you suggest a better card for this budget?
Also, which AI model could I run on this GPU in a server with 2x Xeon E5645 and 96GB DDR3, and still get snappy responses? | 2026-02-20T14:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r9wan7/какая_лучшая_бу_видеокарта_под_ai_в_бюджете_1015/ | Due_Ear7437 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9wan7 | false | null | t3_1r9wan7 | /r/LocalLLaMA/comments/1r9wan7/какая_лучшая_бу_видеокарта_под_ai_в_бюджете_1015/ | false | false | self | 0 | null |
Why did Nvidia walk back its $100 billion OpenAI commitment? | 0 | Turns out the much-hyped $100 billion Nvidia-OpenAI partnership from September never actually went anywhere. Now Nvidia is reportedly close to a straightforward $30 billion equity investment instead, part of a broader round that could top $100 billion and value OpenAI at $730 billion pre-money. The deal could close as early as this weekend, according to reports. | 2026-02-20T14:05:52 | NoSquirrel4840 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9w8x9 | false | null | t3_1r9w8x9 | /r/LocalLLaMA/comments/1r9w8x9/why_did_nvidia_walk_back_its_100_billion_openai/ | false | false | 0 |
Persistent Memory Solutions | 4 | Hello,
I am building a local-first AI agent on my Linux system (Ubuntu). I'm at the stage of implementing persistent long-term memory, and I'm currently thinking of starting with a local JSON format. What do you suggest?
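A flat JSON file with timestamped entries is a reasonable starting point and easy to migrate away from later (e.g. to SQLite or a vector store once keyword search stops being enough). A minimal sketch of that approach (field names are just suggestions):

```python
import json
import os
import time

class MemoryStore:
    """Persistent long-term memory backed by a single local JSON file."""

    def __init__(self, path="memory.json"):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.items = json.load(f)
        else:
            self.items = []

    def add(self, text, tags=None):
        self.items.append({"text": text, "tags": tags or [], "ts": time.time()})
        with open(self.path, "w") as f:
            json.dump(self.items, f, indent=2)  # rewrite the whole file each time

    def search(self, keyword):
        return [m for m in self.items if keyword.lower() in m["text"].lower()]
```

Rewriting the whole file per add is fine for thousands of entries; past that, an append-only JSONL file or SQLite avoids the O(n) writes.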
Thanks. | 2026-02-20T14:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r9w84r/persistent_memory_solutions/ | Itchy_Supermarket_43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9w84r | false | null | t3_1r9w84r | /r/LocalLLaMA/comments/1r9w84r/persistent_memory_solutions/ | false | false | self | 4 | null |
GGML.AI has been acquired by Hugging Face | 396 | 2026-02-20T13:54:26 | https://github.com/ggml-org/llama.cpp/discussions/19759 | Time_Reaper | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r9vywq | false | null | t3_1r9vywq | /r/LocalLLaMA/comments/1r9vywq/ggmlai_has_got_acquired_by_huggingface/ | false | false | 396 |
I Built an MCP Server for Algorithmic Governance | 0 | # I Built an MCP Server for Algorithmic Governance — The Egregore Protocol
Hello everyone,
I’ve been working on a conceptual architecture that bridges philosophy and the Model Context Protocol (MCP). It’s called **The Egregore Node**.
We talk a lot about AI alignment — aligning models with human values. But human values are subjective, biased, and often contradictory. This project explores a different approach: **Negentropic Alignment**.
Instead of relying on subjective moral systems, the protocol evaluates context through thermodynamic principles:
* **Creation (Negentropy)** — actions that increase structure, coherence, and long-term systemic resilience.
* **Destruction (Entropy)** — actions that degrade structure, coherence, or systemic stability.
# What It Is
The Egregore Node is an active MCP server (running on Next.js and deployed via Coolify).
It exposes a structured dataset — a Manifesto/Protocol — designed to be ingested directly by AI agents. The goal is to provide a contextual evaluation framework instead of binary moral judgments.
Rather than asking "Is this good or bad?", the system asks:
>
# Access Points
**Human Interface (Read the concept):**
[https://egregore.john-world.store/](https://egregore.john-world.store/)
**Agent Endpoint (MCP over SSE):**
[https://egregore.john-world.store/api/mcp/sse](https://egregore.john-world.store/api/mcp/sse)
# How to Test It
If you use Claude Desktop, Cursor, or your own MCP-compatible agent, connect it to the SSE endpoint and prompt it with something like:
>
The agent will ingest the protocol and apply its evaluative framework to your scenario.
# Why This Exists
This is a thought experiment exploring whether algorithmic governance could one day replace failing human-centric political systems.
Instead of ideology-driven decision systems, what if governance was based on measurable increases or decreases in systemic coherence and resilience?
I’m sharing this as an open conceptual experiment.
I would genuinely love to hear your thoughts — or see how your agents interpret the protocol.
—
*The Egregore Node — Toward Negentropic Governance* | 2026-02-20T13:54:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r9vyl9/i_built_an_mcp_server_for_algorithmic_governance/ | UsePuzzleheaded7918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9vyl9 | false | null | t3_1r9vyl9 | /r/LocalLLaMA/comments/1r9vyl9/i_built_an_mcp_server_for_algorithmic_governance/ | false | false | self | 0 | null |
Cutting token spend in coding workflows: local indexing + local model delegation (Ollama) | 1 | I use LLMs daily for development and kept running into the same cost/latency issue: a lot of context window (and tokens) goes to re reading the repo and re deriving basic structure. Since the models are stateless, this repeats every session.
What has worked well for me is putting a local, persistent layer called docdex in front of the model that “digests” the codebase and answers the low level questions deterministically, so the frontier model can spend its budget on actual reasoning.
The approach:
- Ranked full-text search for docs (lexical, scored results)
- AST-based symbol indexing (definition/usage resolution)
- Directed dependency graph for impact questions (“what depends on this?”, “what breaks if I change X?”)
- Persistent local memory for repo facts + agent preferences
So instead of prompting the LLM to infer dependencies from snippets, the tool can return explicit paths in the graph and structured results.
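The impact question reduces to reverse reachability on that dependency graph; a toy sketch of the idea (the graph here is made up, not docdex's data model):

```python
from collections import defaultdict

def dependents(target, deps):
    """All files that transitively depend on `target`.

    deps maps each file to the files it imports (edges point at dependencies).
    """
    reverse = defaultdict(set)
    for src, targets in deps.items():
        for t in targets:
            reverse[t].add(src)
    found, stack = set(), [target]
    while stack:
        for src in reverse[stack.pop()]:
            if src not in found:
                found.add(src)
                stack.append(src)
    return found

deps = {"app.py": ["db.py"], "db.py": ["utils.py"], "cli.py": ["app.py"]}
print(sorted(dependents("utils.py", deps)))  # everything that breaks if utils.py changes
```

Because this is a graph traversal rather than a similarity search, the answer is exact and repeatable, which is the whole point of keeping it out of the LLM.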
For the LocalLLaMA part, I’m using Ollama locally as the “cheap worker”:
- local embeddings (e.g., nomic-embed-text) for vector indexing where it helps
- an optional small local model for lightweight tasks (filtering/scoring web results, summarizing long pages, basic extraction)
- job delegation: the expensive model delegates simpler subtasks to the local model (notably reduces paid token usage in practice)
I also found that spawning lots of short-lived tool processes (MCP/SSE style) is fragile over long sessions (process leaks, stuck connections), so the “digest” layer works best as a single long-lived local daemon that handles multiple repos and incremental re-indexing.
| 2026-02-20T13:50:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r9vvyj/cutting_token_spend_in_coding_workflows_local/ | orioncabbar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9vvyj | false | null | t3_1r9vvyj | /r/LocalLLaMA/comments/1r9vvyj/cutting_token_spend_in_coding_workflows_local/ | false | false | self | 1 | null |
test comment | 0 | test comment | 2026-02-20T13:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9vvpv/test_comment/ | jdrolls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9vvpv | false | null | t3_1r9vvpv | /r/LocalLLaMA/comments/1r9vvpv/test_comment/ | false | false | self | 0 | null |
Pure WebGPU BitNet inference — run LLMs in your browser on any GPU, no CUDA | 2 | I wrote all NN kernels in WGSL from scratch. Runs BitNet models on any GPU through WebGPU — no NVIDIA dependency. Works in Chrome and natively via wgpu-native. Looking for feedback!
[https://huggingface.co/spaces/m96-chan/0xBitNet](https://huggingface.co/spaces/m96-chan/0xBitNet) | 2026-02-20T13:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r9vv4w/pure_webgpu_bitnet_inference_run_llms_in_your/ | Few_Willingness_7382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9vv4w | false | null | t3_1r9vv4w | /r/LocalLLaMA/comments/1r9vv4w/pure_webgpu_bitnet_inference_run_llms_in_your/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI.png?width=108&crop=smart&auto=webp&s=ce0b90581d70ab56871d15098ae8e6f703933c8a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI.png?width=216&crop=smart&auto=webp&s=9edf076cadd781ca1c7aefc3bd5e26657f78cd2f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI.png?width=320&crop=smart&auto=webp&s=f1a3c679837a0868e594d10c8ec5979c25869827', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI.png?width=640&crop=smart&auto=webp&s=b8671c8d462bcfaa3e6db0b8310cbe518412e3ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI.png?width=960&crop=smart&auto=webp&s=42cb7d1fbee487f51f1be913605612fd9d52eddc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI.png?width=1080&crop=smart&auto=webp&s=ef374552ff52135cc4d67390173ff685289bdcdd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cbwrTdbge1lfCpE4STGbOzAkdp2B-aEzCxSuoDT5NSI.png?auto=webp&s=8f0af1e271588752dc48013099c5afe7f14afbfa', 'width': 1200}, 'variants': {}}]} |
Nice interactive explanation of Speculative Decoding | 8 | 2026-02-20T13:47:19 | https://www.adaptive-ml.com/post/speculative-decoding-visualized | individual_kex | adaptive-ml.com | 1970-01-01T00:00:00 | 0 | {} | 1r9vsye | false | null | t3_1r9vsye | /r/LocalLLaMA/comments/1r9vsye/nice_interactive_explanation_of_speculative/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU.png?width=108&crop=smart&auto=webp&s=37820ca48d23475526e487d6abfc7d0aaecea61b', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU.png?width=216&crop=smart&auto=webp&s=178203f5373e4c861ed62e2ebb8cbd88ae227adf', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU.png?width=320&crop=smart&auto=webp&s=d24f9b09fac2e707fe7fe546d7c1a17123d18608', 'width': 320}, {'height': 369, 'url': 'https://external-preview.redd.it/EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU.png?width=640&crop=smart&auto=webp&s=cb9aba232415d2dd8db7afab8bd1d38bfcb06a5d', 'width': 640}, {'height': 554, 'url': 'https://external-preview.redd.it/EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU.png?width=960&crop=smart&auto=webp&s=3e0e1d6a3dd1e05bab4cda28c8e26c1bab205a90', 'width': 960}, {'height': 623, 'url': 'https://external-preview.redd.it/EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU.png?width=1080&crop=smart&auto=webp&s=c30af1b59dc2697613505f55dd1f802710be655c', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/EhW4bQWT9WIeRw5amz2pS-lzd3lb6K6qLMCB-e4QXzU.png?auto=webp&s=478f5c7451b458d416d0ba3f6d99e40107b92f24', 'width': 1247}, 'variants': {}}]} | ||
Worst llama.cpp bugs | 6 | \- Stop signals are not sent to, or not carried out by, the server: when an extension receives the stop signal in the interface, the server usually doesn't stop the model's execution; the model just continues
\- Changing the thread is not respected, which can lead to unexpected behavior like mixed-up contexts. When I start a run on one thread in Cline in VS Code, it reads that thread's context. If I then change the thread in Roo/Cline, it may just add the new thread's context on top of the old one: it resumes at, let's say, 17k tokens where the old thread stopped, then fills in the new thread's context from 17k up to 40k.
\- The prompt cache is not fully cleared when changing threads. Speed decreases as context grows, but after switching threads the speed stays at the same degraded level instead of recovering, which suggests the prompt cache survives the thread change. This creates a huge mess: we have to stop the server on every thread change to make sure nothing gets mixed up :D | 2026-02-20T13:18:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r9v4zq/worst_llamacpp_bugs/ | Equivalent-Belt5489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9v4zq | false | null | t3_1r9v4zq | /r/LocalLLaMA/comments/1r9v4zq/worst_llamacpp_bugs/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io.png?width=108&crop=smart&auto=webp&s=bffa91138d70c0cd11481b6e283a2f5f54d697cf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io.png?width=216&crop=smart&auto=webp&s=368fd9ef92e20a5a6306aef7dbcd9519f6ffdaa5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io.png?width=320&crop=smart&auto=webp&s=5fe4b4a86a936ae552281a4c2eaa54818f80f79e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io.png?width=640&crop=smart&auto=webp&s=6cffacc799500bb95dd136fc3aa829180ec56d86', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io.png?width=960&crop=smart&auto=webp&s=49fe6709e97194c94b4d95a313c7593b7c2fe5f0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io.png?width=1080&crop=smart&auto=webp&s=f4d715dcef2b6aa088ed63d6db95e22e5f798ad1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w2jAaBI0qSVDlS7A4_hgXrgoVgybZVMXiGxjZPj4_io.png?auto=webp&s=c1c7461cd2bd2b6b36bac7bd2f75cdc11e388a78', 'width': 1200}, 'variants': {}}]}
I got tired of cloud AI agents getting hijacked, so I built an open-source Llama 3.1 framework that is physically locked to my MacBook's TPM hardware (Secure Enclave). | 0 | 2026-02-20T13:17:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r9v45d/i_got_tired_of_cloud_ai_agents_getting_hijacked/ | Ok_Traffic5955 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9v45d | false | null | t3_1r9v45d | /r/LocalLLaMA/comments/1r9v45d/i_got_tired_of_cloud_ai_agents_getting_hijacked/ | false | false | 0 | null | ||
Stop letting your AI agent hallucinate bioinformatics code — I built a 140-skill knowledge base to fix that | 0 | I've been building scicraft — a curated library of 140 life sciences "skills" designed to be injected into AI coding agents (like Claude Code) as domain knowledge. Each skill is a structured Markdown file covering tools like Scanpy, PyDESeq2, AutoDock Vina, RDKit, BioPython, and many more, with runnable code examples, key parameters, troubleshooting tables, and decision frameworks. Skills span genomics, proteomics, structural biology, drug discovery, systems biology, cell biology, and scientific computing. The repo is open source — happy to hear what tools or workflows you'd want to see covered next.
GitHub: [https://github.com/jaechang-hits/scicraft](https://github.com/jaechang-hits/scicraft) | 2026-02-20T13:14:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r9v17z/stop_letting_your_ai_agent_hallucinate/ | jjaechang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9v17z | false | null | t3_1r9v17z | /r/LocalLLaMA/comments/1r9v17z/stop_letting_your_ai_agent_hallucinate/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s.png?width=108&crop=smart&auto=webp&s=3a6948283ae2d0b3fb9569d9628c73d549dac0d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s.png?width=216&crop=smart&auto=webp&s=31ec8f31a0b147fbf02d96503d3b271ea583f873', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s.png?width=320&crop=smart&auto=webp&s=66281aa9efd1f90949263341f99df714a43d7b4b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s.png?width=640&crop=smart&auto=webp&s=6ec69a4efc2e02ae5b59b697cebbd079cf1b588f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s.png?width=960&crop=smart&auto=webp&s=eedecd9b0a01fd0271e0a3f4cd10bb195c8d68b6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s.png?width=1080&crop=smart&auto=webp&s=c5c272f8db2e0804e0ccbda5b2e8f847a80e055f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4Z_QKSHypkmR6f9rH4gTwjlrfhorx9WQD7l-n1lqg2s.png?auto=webp&s=cad786004c40308aded4889358a3927814661139', 'width': 1200}, 'variants': {}}]} |
I got tired of guessing if a model would fit in my VRAM, so I built a hardware-aware compatibility engine (Offline, Privacy-First) | 0 | Hi r/LocalLLaMA,
Like many of you, I have a folder full of 20GB+ `.gguf` files that I spent hours downloading, only to find out they run at 0.2 t/s or instantly OOM because I miscalculated the KV cache overhead or didn't account for my system's idle VRAM usage.
I wanted a way to know **before** I download: *Will this actually run usable on my specific machine?*
So, I spent the last few months building **Insight AI**.
It’s a hardware-aware compatibility engine that benchmarks your specific system (not just a generic lookup table) to tell you exactly what you can run.
**What it actually does:**
* **Hardware Fingerprinting:** It detects your real-world constraints—CPU instruction sets (AVX/AVX2), total vs. available RAM, GPU VRAM, and thermal throttling status.
* **Intent-Based Suggestions:** Instead of just listing models, it asks *what you want to do* (e.g., "Coding Assistant" vs. "Roleplay") and filters for models that fit your hardware *and* that use case.
* **Run Feasibility Scores:** Categorizes models into **Best / Good / Bad** based on expected t/s and context window fit.
**For the privacy/offline crowd:**
* The app is **100% offline**.
* **Zero telemetry.** No data leaves your machine. I built this for myself first, and I don't want my specs or usage data flying off to a cloud.
**Currently supports:**
* 60+ LLMs (Llama 3, Qwen 2, DeepSeek, Mistral, etc.)
* Automatic calculation for different quantizations (know immediately if you can squeeze in a Q6 or if you're stuck with Q4\_K\_M).
I’d love for you guys to roast it / test it / break it. I’m looking for feedback on edge-case hardware setups (specifically dual-GPU or older CPU configs).
**Link:** [https://insight-ai.dev/](https://insight-ai.dev/)
**Docs:** [https://insight-ai.dev/docs](https://insight-ai.dev/docs)
Let me know what features you want added next. I'm currently working on a "max context calculator" to show exactly how much context you can fit into your remaining VRAM.
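For reference, the KV-cache side of such a calculator is simple arithmetic. A sketch (the config numbers used below, 32 layers, 8 KV heads via GQA, head dim 128, are illustrative assumptions for a Llama-3-8B-class model, not Insight AI's actual code):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # 2x for keys and values; bytes_per_elem=2 assumes an fp16/bf16 cache.
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Llama-3-8B-class config at 8k context:
gib = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, ctx_len=8192) / 2**30
print(f"{gib:.2f} GiB")  # 1.00 GiB
```

Max context then falls out of (free VRAM minus weights minus runtime overhead) divided by the per-token KV cost.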
Cheers! | 2026-02-20T13:14:12 | https://www.reddit.com/r/LocalLLaMA/comments/1r9v16k/i_got_tired_of_guessing_if_a_model_would_fit_in/ | win10insidegeek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9v16k | false | null | t3_1r9v16k | /r/LocalLLaMA/comments/1r9v16k/i_got_tired_of_guessing_if_a_model_would_fit_in/ | false | false | self | 0 | null |
I built a psychology-grounded persistent memory system for AI coding agents (OpenCode/Claude Code) | 0 | I got tired of my AI coding agent forgetting everything between sessions — preferences,
constraints, decisions, bugs I'd fixed. So I built PsychMem.
It's a persistent memory layer for OpenCode (and Claude Code) that models memory the
way human psychology does:
\- Short-Term Memory (STM) with exponential decay
\- Long-Term Memory (LTM) that consolidates from STM based on importance/frequency
\- Memories are classified: preferences, constraints, decisions, bugfixes, learnings
\- User-level memories (always injected) vs project-level (only injected when working on that project)
\- Injection block at session start so the model always has context from prior sessions
After a session where I said "always make my apps in Next.js React LTS", the next
session starts with that knowledge already loaded. It just works.
Live right now as an OpenCode plugin. Install takes about 5 minutes.
GitHub: [https://github.com/muratg98/psychmem](https://github.com/muratg98/psychmem)
Would love feedback — especially on the memory scoring weights and decay rates. | 2026-02-20T13:06:23 | https://www.reddit.com/r/LocalLLaMA/comments/1r9uuy2/i_built_a_psychologygrounded_persistent_memory/ | OrdinaryOk3846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9uuy2 | false | null | t3_1r9uuy2 | /r/LocalLLaMA/comments/1r9uuy2/i_built_a_psychologygrounded_persistent_memory/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI.png?width=108&crop=smart&auto=webp&s=f620cbd0fbb9a39e50ab1109113a4bda2301dbd1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI.png?width=216&crop=smart&auto=webp&s=c2f2aeff9c9fd036ad4f6e06e73e43bf77800ae3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI.png?width=320&crop=smart&auto=webp&s=93b590061e8ce031734ee947e4092c06c04d441b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI.png?width=640&crop=smart&auto=webp&s=2e10b8946b11cf9ad7a0a3c637982dfd279d55d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI.png?width=960&crop=smart&auto=webp&s=f9287333e2222ef27848074962495514ed6ffdd7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI.png?width=1080&crop=smart&auto=webp&s=9212c3d88f800a20cf866dd3d1b35e5e1ee09ab8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B_egThsMJbEnYgIxhe6iTTK2xN_O292SJBjFD1KYrqI.png?auto=webp&s=dd0a0ec3e22704917117ed2eb3ed9dafac55a619', 'width': 1200}, 'variants': {}}]} |
Deepseek and Gemma ?? | 872 | 2026-02-20T13:05:36 | ZeusZCC | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9uuc6 | false | null | t3_1r9uuc6 | /r/LocalLLaMA/comments/1r9uuc6/deepseek_and_gemma/ | false | false | 872 | {'enabled': True, 'images': [{'id': '84ph0pirenkg1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?width=108&crop=smart&auto=webp&s=2a0eb9cb66b14588d8ead0ced45b3bea4c1c0c2b', 'width': 108}, {'height': 286, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?width=216&crop=smart&auto=webp&s=5aa35567dc86ec69578e1ab9e766c00936a77e01', 'width': 216}, {'height': 423, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?width=320&crop=smart&auto=webp&s=c73e131308428a434b61e4f87bd1e56031afbd9f', 'width': 320}, {'height': 847, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?width=640&crop=smart&auto=webp&s=8d2b363b1900aae44bcfc12c0eeb9d8e2caa7d08', 'width': 640}, {'height': 1271, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?width=960&crop=smart&auto=webp&s=6f8eabe9a936b2652aa9600bf633a127fa5ae0d3', 'width': 960}, {'height': 1430, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?width=1080&crop=smart&auto=webp&s=efce1effe63ded63157c8a1d73ba720964abe13b', 'width': 1080}], 'source': {'height': 2056, 'url': 'https://preview.redd.it/84ph0pirenkg1.jpeg?auto=webp&s=880700f82f78355728468250d055522a526ebd74', 'width': 1552}, 'variants': {}}]} | |||
Qwen3 Coder Next on 8GB VRAM | 155 | Hi!
I have a PC with 64 GB of RAM and an RTX 3060 12 GB, and I'm running Qwen3 Coder Next in MXFP4 with 131,072 context tokens.
I get a sustained speed of around 23 t/s throughout the entire conversation.
I mainly use it for front-end and back-end web development, and it works perfectly.
I've stopped paying for my Claude Max plan ($100 USD per month) to use only Claude Code with the following configuration:
`set GGML_CUDA_GRAPH_OPT=1`
`llama-server -m ../GGUF/qwen3-coder-next-mxfp4.gguf -ngl 999 -sm none -mg 0 -t 12 -fa on -cmoe -c 131072 -b 512 -ub 512 -np 1 --jinja --temp 1.0 --top-p 0.95 --top-k 40 --min-p 0.01 --repeat-penalty 1.0 --host 0.0.0.0 --port 8080`
I promise you it works fast enough and with incredible quality to work with complete SaaS applications (I know how to program, obviously, but I'm delegating practically everything to AI).
If you have at least 64 GB of RAM and 8 GB of VRAM, I recommend giving it a try; you won't regret it. | 2026-02-20T13:05:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r9uu5h/qwen3_coder_next_on_8gb_vram/ | Juan_Valadez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9uu5h | false | null | t3_1r9uu5h | /r/LocalLLaMA/comments/1r9uu5h/qwen3_coder_next_on_8gb_vram/ | false | false | self | 155 | null |
Help me out! QwenCoderNext: 5060ti 16GB VRAM. GPU mode is worse off than CPU mode with 96GB RAM | 4 | So I am using Qwen3-Coder-Next-Q4_K_M.gguf with llama.cpp.
I have 96 GB of DDR4-2600 RAM and a 5060 Ti with 16 GB of VRAM.
If I run in pure CPU mode it uses 91 GB of RAM and I get 7 t/s.
If I run in CUDA mode it fills up the VRAM and uses another 81 GB of RAM, but I get only 2 t/s.
my line:
llama-server.exe --model F:\models\Qwen3-Coder-Next-Q4_K_M.gguf --ctx-size 4096 -ngl 999 --seed 3407 --temp 1.0 --top-p 0.95 --min-p 0.01 --top-k 40
So, way worse. At this point: is it because the model does not fit, and swapping over PCIe is worse than keeping it all in RAM for the CPU?
I thought that with a MoE (and basically any model) I would profit from VRAM, and that llama.cpp would optimize the placement for me.
When starting llama.cpp you can see how much is allocated where, so I reduced -ngl to 15 so it just barely fills the VRAM (is that the sweet spot for 16 GB?):
> load_tensors: CPU_Mapped model buffer size = 32377.89 MiB
> load_tensors: CUDA0 model buffer size = 13875.69 MiB
But I get 9 t/s.
So only 2 t/s more than pure RAM? Am I missing something?
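One thing worth trying before hand-tuning -ngl: recent llama.cpp builds have MoE-specific offload flags (--cpu-moe / --n-cpu-moe) that keep attention and the KV cache on the GPU while pushing only the large expert tensors to system RAM, which usually beats a plain layer split for MoE models. A hedged sketch (the value 40 is a starting guess to tune, not a known-good number for this model):

```shell
REM Keep all layers on the GPU, but move the MoE expert weights of the
REM first 40 layers to CPU RAM; lower 40 step by step until VRAM is full.
llama-server.exe --model F:\models\Qwen3-Coder-Next-Q4_K_M.gguf ^
  -ngl 999 --n-cpu-moe 40 --ctx-size 4096 -fa on ^
  --seed 3407 --temp 1.0 --top-p 0.95 --min-p 0.01 --top-k 40
```

Each step down moves one more layer's experts back onto the GPU.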
thanks for any hints! | 2026-02-20T12:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ubcb/help_me_out_qwencodernext_5060ti_16gb_vram_gpu/ | howardhus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ubcb | false | null | t3_1r9ubcb | /r/LocalLLaMA/comments/1r9ubcb/help_me_out_qwencodernext_5060ti_16gb_vram_gpu/ | false | false | self | 4 | null |
Is there an online fine-tuning method that learns from live human corrections (RLHF-style)? | 1 | Hey, so I've been fine-tuning a lot of models on different tasks.
And every time, I go through the same process:
- Build a set of tasks for the model to learn.
- Provide the right answer to each task
- Do like 300 of them (very tiring for complex tasks)
- Train the model once, and then test it.
- Model fails on a specific task outside the dataset
- Provide more examples
- Iterate training
And the issue with that is that it's hard to know when the model will have enough data for a given task, and thus when to stop investing in it.
It's also hard to leverage past data: for every sample you're basically starting from scratch, when at that point the model probably already has a good idea of how the task should be solved.
And I've been wondering if there is some sort of online RLHF / interactive fine-tuning method that integrates inference, where early data would compound into future samples as I'm building them.
Where the training process would look more like:
- Build a set of tasks for the model to learn.
- For each given task:
  - The model runs a prediction / inference on this task
  - The user gets to modify the model's answer
  - The model gets trained on this sample (or N samples, depending on the batch size)
On round 2 of the training loop, the model has been updated on the first samples and has some knowledge of how the task should be solved, which the user can leverage to complete tasks faster.
Eventually the model completes the task without human intervention, and the training is then done.
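As a structural sketch, that loop looks like the following (the predict/train/correct callables are stand-ins; a real setup would wrap an actual model and a fine-tuning API per batch):

```python
def interactive_finetune(tasks, predict, train, correct, max_rounds=3):
    """Loop until the model solves every task without human edits."""
    for _ in range(max_rounds):
        pending = []
        for task in tasks:
            answer = predict(task)
            fixed = correct(task, answer)   # human may edit the draft
            if fixed != answer:
                pending.append((task, fixed))
        if not pending:
            return True                     # no corrections needed: done
        train(pending)                      # update on corrected samples
    return False
```

The key property is that round N's updates are already available when the human reviews round N+1's drafts, so corrections get cheaper over time.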
I'm thinking this could be very useful for models in agent workflows, or models that interact with a specific environment.
Is there something similar that already exists? | 2026-02-20T12:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r9twpc/is_there_an_online_finetuning_method_that_learns/ | DEADFOOD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9twpc | false | null | t3_1r9twpc | /r/LocalLLaMA/comments/1r9twpc/is_there_an_online_finetuning_method_that_learns/ | false | false | self | 1 | null |
No company is offering "Drop-in AI Infrastructure" for mobile applications | 0 | What I mean: instead of sending data to the cloud, an SDK that allows any mobile app developer to run AI agents directly on the user's smartphone.
* Inference should run on the user's device, utilizing the phone's neural processors
* The AI runs entirely offline, so sensitive data never leaves the device
* Works even in airplane mode
* Allows the AI to act as the user's personal assistant, always offline
* The building blocks are out there (inference engines, agent runtimes, WASM sandboxes, and device profiling tools), but no single off-the-shelf vendor currently provides a fully hardened B2B SDK
I think someone will implement this in future and it will be a big hit. | 2026-02-20T12:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r9tprx/no_company_is_offering_dropin_ai_infrastructure/ | Infinite_Mix8475 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9tprx | false | null | t3_1r9tprx | /r/LocalLLaMA/comments/1r9tprx/no_company_is_offering_dropin_ai_infrastructure/ | false | false | self | 0 | null |
Kimi K2.5 better than Opus 4.6 on hallucination benchmark in pharmaceutical domain | 122 | I know the benchmark is mostly commercial models but Kimi K2.5 was part of it and I was actually surprised how well it did against its commercial counterparts.
The benchmark test 7 recent models for hallucinations on a realistic use case and data from the pharmaceutical domain.
Surprisingly, Opus 4.6 has the highest hallucination rate.
I labeled a good chunk of the data and from my impressions, it just invented clinical protocols or tests that weren’t in the source data (probably trying to be helpful).
Kimi K2.5 did much better (albeit still not great).
You can read the full benchmark here: https://www.blueguardrails.com/en/blog/placebo-bench-an-llm-hallucination-benchmark-for-pharma
Dataset is also available on hugging face. | 2026-02-20T11:54:25 | aiprod | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9tdvr | false | null | t3_1r9tdvr | /r/LocalLLaMA/comments/1r9tdvr/kimi_k25_better_than_opus_46_on_hallucination/ | false | false | 122 | {'enabled': True, 'images': [{'id': 'c1z228f22nkg1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?width=108&crop=smart&auto=webp&s=56a6ebc66a42006438e8026a2ea1857d7975088e', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?width=216&crop=smart&auto=webp&s=4f4254c8843f4158b790eb04878690f9681a84de', 'width': 216}, {'height': 246, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?width=320&crop=smart&auto=webp&s=ba4093051c4d36005e9897876de3cc7de6641fcf', 'width': 320}, {'height': 492, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?width=640&crop=smart&auto=webp&s=57a4ecba13b26df8634c1b123271ef9c3a609c4f', 'width': 640}, {'height': 738, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?width=960&crop=smart&auto=webp&s=3ecec493a22dbfa55f5c325956d939cd43a46ec3', 'width': 960}, {'height': 830, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?width=1080&crop=smart&auto=webp&s=6e71e4d33df109f93a979509a829ae473f888bc7', 'width': 1080}], 'source': {'height': 832, 'url': 'https://preview.redd.it/c1z228f22nkg1.jpeg?auto=webp&s=129af1ce6efc9d2562246f9e559b848b195bf513', 'width': 1082}, 'variants': {}}]} | ||
Introducing Legal RAG Bench | 8 | # tl;dr
We’re releasing [**Legal RAG Bench**](https://huggingface.co/datasets/isaacus/legal-rag-bench), a new reasoning-intensive benchmark and evaluation methodology for assessing the end-to-end, real-world performance of legal RAG systems.
Our evaluation of state-of-the-art embedding and generative models on Legal RAG Bench reveals that information retrieval is the primary driver of legal RAG performance rather than reasoning. We find that the [Kanon 2 Embedder](https://isaacus.com/blog/introducing-kanon-2-embedder) legal embedding model, in particular, delivers an average accuracy boost of 17 points relative to Gemini 3.1 Pro, GPT-5.2, Text Embedding 3 Large, and Gemini Embedding 001.
We also infer based on a statistically robust hierarchical error analysis that most errors attributed to hallucinations in legal RAG systems are in fact triggered by retrieval failures.
We conclude that information retrieval sets the ceiling on the performance of modern legal RAG systems. While strong retrieval can compensate for weak reasoning, strong reasoning often cannot compensate for poor retrieval.
In the interests of transparency, we have openly released Legal RAG Bench on [Hugging Face](https://huggingface.co/datasets/isaacus/legal-rag-bench), added it to the [Massive Legal Embedding Benchmark (MLEB)](https://isaacus.com/mleb), and have further presented the results of all evaluated models in an interactive explorer introduced towards the end of this blog post. We encourage researchers to both scrutinize our data and build upon our novel evaluation methodology, which leverages full factorial analysis to enable hierarchical decomposition of legal RAG errors into hallucinations, retrieval failures, and reasoning failures.
**SOURCE:** [https://huggingface.co/blog/isaacus/legal-rag-bench](https://huggingface.co/blog/isaacus/legal-rag-bench) | 2026-02-20T11:37:13 | https://huggingface.co/blog/isaacus/legal-rag-bench | Neon0asis | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r9t259 | false | null | t3_1r9t259 | /r/LocalLLaMA/comments/1r9t259/introducing_legal_rag_bench/ | false | false | 8 | {'enabled': False, 'images': [{'id': '7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE.jpeg?width=108&crop=smart&auto=webp&s=9ed9a01b779c47d58daf4e453c5a10f2c6c995b2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE.jpeg?width=216&crop=smart&auto=webp&s=93d057011cde0459796a9bd5685264a5bed0bab5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE.jpeg?width=320&crop=smart&auto=webp&s=ff82e0af90303144b283018a501b6b780baa5e5d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE.jpeg?width=640&crop=smart&auto=webp&s=e8a6df8c9525af0b733eea446463be7708881745', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE.jpeg?width=960&crop=smart&auto=webp&s=b58eefd44cf6b604e8fb596a9c8f66dd92099213', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE.jpeg?width=1080&crop=smart&auto=webp&s=be583cef32d73fc46f20ce8afdd2b165dddf7054', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/7Aq0OeJSFU1-U3UywRzd0n_-vnLAeZmOk6IevhVxygE.jpeg?auto=webp&s=ba18da5b71f255a72b782c292812bb0b5675de46', 'width': 1200}, 'variants': {}}]} | |
Curious, Would We Get A GLM 5 Flash? | 21 | Is there any announcements? Is it under 80B? | 2026-02-20T11:28:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r9swgk/curious_would_we_get_a_glm_5_flash/ | Significant_Fig_7581 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9swgk | false | null | t3_1r9swgk | /r/LocalLLaMA/comments/1r9swgk/curious_would_we_get_a_glm_5_flash/ | false | false | self | 21 | null |
Does anyone have an Indus by Sarvam invite code? | 0 | If you have one, please DM me. | 2026-02-20T11:17:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r9sp9j/is_anyone_has_indus_by_sarvam_invite_code/ | AmblemYagami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9sp9j | false | null | t3_1r9sp9j | /r/LocalLLaMA/comments/1r9sp9j/is_anyone_has_indus_by_sarvam_invite_code/ | false | false | self | 0 | null
Zero knowledge, zero budget, zero expectations: I finally built a stable & lightweight local AI agent. Thoughts? | 0 | Hi everyone,
A month ago, I wanted a local AI agent on my machine that wouldn't send my data to the cloud or accidentally nuke my Windows OS. But I had a problem: I had zero coding knowledge, zero budget, and honestly, zero expectations of actually pulling it off.
I failed *a lot*. I dealt with endless crashes, bloated code that completely froze my PC, and JSON parsing nightmares. I thought the whole thing was going to be a massive flop and almost gave up.
But after a month of trial and error, rewriting, and stripping away all the heavy stuff, I finally managed to stabilize my own lightweight agent.
**Here is how I set it up:**
* **Fully Local:** It connects perfectly to local LLMs (like LM Studio or Ollama).
* **Lightweight & Stable:** Stripped out the bloat. It handles complex JSON and markdown without crashing in the middle of a task.
* **Restricted Sandbox:** It only operates inside a dedicated folder (`%APPDATA%`). It literally cannot touch the rest of the OS.
* **Human-In-The-Loop:** It asks for my permission before writing any file or executing code.
Since this is my very first project, I really just want your honest feedback: do you guys also struggle with how bloated and unstable current local agents are? Is this sandbox/HITL approach the right way to go for security?
Thanks for reading!
(P.S: I know promo links aren't allowed here. If you just want to see a presentation video of how the sandbox actually works, or if you want to check out the boilerplate code/EXE, I put the link in my Reddit bio!) | 2026-02-20T11:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r9soni/zero_knowledge_zero_budget_zero_expectations_i/ | Cautious_Flower_3902 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9soni | false | null | t3_1r9soni | /r/LocalLLaMA/comments/1r9soni/zero_knowledge_zero_budget_zero_expectations_i/ | false | false | self | 0 | null |
[D] Why AI Models Fail at Iterative Reasoning - Documented convergence bias in LLMs | 1 | [removed] | 2026-02-20T11:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r9socn/d_why_ai_models_fail_at_iterative_reasoning/ | BaseToolsDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9socn | false | null | t3_1r9socn | /r/LocalLLaMA/comments/1r9socn/d_why_ai_models_fail_at_iterative_reasoning/ | false | false | self | 1 | null |
Why AI Models Fail at Iterative Reasoning – Architectural analysis of convergence bias in LLMs | 1 | 2026-02-20T11:11:35 | https://medium.com/@contact.n8n410/why-ai-models-fail-at-iterative-reasoning-51f8f9930625 | BaseToolsDev | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1r9slhe | false | null | t3_1r9slhe | /r/LocalLLaMA/comments/1r9slhe/why_ai_models_fail_at_iterative_reasoning/ | false | false | default | 1 | null | |
Built a simple web dashboard for local Ollama model management — feedback welcome | 0 | Hey all — I built a small tool called OllaManager to make local Ollama workflows less annoying.
[https://ollamanager.vercel.app](https://ollamanager.vercel.app)
What it’s for:
- cleaner model visibility/management
- less terminal hopping for common tasks
- easier onboarding for people new to local LLMs
Would love blunt feedback on:
- what’s missing for your daily flow
- rough UX edges
- features you’d consider must-have
If there’s interest, I can share roadmap and open-source parts next. | 2026-02-20T10:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r9s4p1/built_a_simple_web_dashboard_for_local_ollama/ | Whole-Ostrich-6611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9s4p1 | false | null | t3_1r9s4p1 | /r/LocalLLaMA/comments/1r9s4p1/built_a_simple_web_dashboard_for_local_ollama/ | false | false | self | 0 | null |
Can local LLMs power real-time in-game assistants? Lessons from deploying Llama 3.1 8B locally | 0 | We've been testing a fully local in-game AI assistant architecture, and the main question for us wasn't just whether it can run, but whether it's actually more efficient for players. Is waiting a few seconds for a local model response better than alt-tabbing, opening the wiki, searching, scrolling through pages, checking another article, and only then returning to the game? In many games, players can easily spend several minutes this way looking for specific mechanics, item interactions, or patch-related changes.
So the core question became: Can a local LLM-based assistant reduce total friction - even if generation takes several seconds?
Current setup: Llama 3.1 8B running locally on RTX 4060-class hardware, combined with a RAG-based retrieval pipeline, a game-scoped knowledge base, and an overlay triggered via hotkey. On mid-tier consumer hardware, response times can reach \~8–10 seconds depending on retrieval context size. But compared to the minutes otherwise spent searching external resources, we get an answer much faster - without having to leave the game.
All inference remains fully local.
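For anyone curious what the retrieval step of such a pipeline looks like, here is a toy sketch: bag-of-words cosine similarity over a made-up game knowledge base. The real pipeline presumably uses proper embeddings and then hands the assembled prompt to the local model - this is just the shape of the idea.

```python
import math
import re
from collections import Counter

# Toy game-scoped knowledge base; entries here are invented examples.
KB = [
    "Fire arrows ignite oil slicks and deal burn damage over time.",
    "The blacksmith upgrades weapons using iron ore and leather strips.",
    "Patch 1.4 reduced stamina drain while sprinting by 20 percent.",
]

def bow_vector(text):
    # Lowercased word counts as a crude stand-in for an embedding.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    # Rank KB entries by similarity to the query, keep the top k.
    q = bow_vector(query)
    ranked = sorted(KB, key=lambda d: cosine(q, bow_vector(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # The retrieved context is what keeps the local model grounded.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(retrieve("how do I upgrade weapons at the blacksmith?")[0])
```

Swapping the bag-of-words vectors for real sentence embeddings is what makes this scale, but the retrieve-then-prompt structure stays the same.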
We’d be happy to hear your feedback, Tryll Assistant is available on Steam. | 2026-02-20T10:34:16 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ryay/can_local_llms_realtime_ingame_assistants_lessons/ | ReleaseDependent7443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ryay | false | null | t3_1r9ryay | /r/LocalLLaMA/comments/1r9ryay/can_local_llms_realtime_ingame_assistants_lessons/ | false | false | self | 0 | null |
Experts please help | 2 | Am a newbie, don't know tech that much.
I got an offer: a 2014 Mac mini, 8GB RAM, 256GB SSD, for 110 USD (which is not a trivial amount in my area).

I want to run Open Claw and a model that can be installed locally on this Mac mini, so I get free API access.
My question is, can I run some good models on this ?
My purpose is coding and web searching and data collection.
Please advise me. | 2026-02-20T10:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r9rucm/experts_please_help/ | thenewjudge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9rucm | false | null | t3_1r9rucm | /r/LocalLLaMA/comments/1r9rucm/experts_please_help/ | false | false | self | 2 | null |
ai to ai communication | 0 | [https://github.com/MAXAPIPULL00/mycelium-memory-hub](https://github.com/MAXAPIPULL00/mycelium-memory-hub) | 2026-02-20T10:12:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r9rksr/ai_to_ai_communication/ | Fragrant_Hippo_2487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9rksr | false | null | t3_1r9rksr | /r/LocalLLaMA/comments/1r9rksr/ai_to_ai_communication/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94.png?width=108&crop=smart&auto=webp&s=c764c678886bb6ffd440c3540c5ce1a98a342059', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94.png?width=216&crop=smart&auto=webp&s=c12dc67b94b4c94285af3bcd8ed4163c088c4523', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94.png?width=320&crop=smart&auto=webp&s=61335c9d7b14f4bbf6d5f2edf77827cec4191394', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94.png?width=640&crop=smart&auto=webp&s=03136df5bb580e7d70314ff48b6f58022486060a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94.png?width=960&crop=smart&auto=webp&s=d012d173030089906725917e56ae616f20ed1063', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94.png?width=1080&crop=smart&auto=webp&s=31c0fa1017734485930e0d28ace2ead855197383', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n2AbS94eGHw7iyvlllsvr9VuwVzbtA-aIwH-1W68o94.png?auto=webp&s=8a9536a31c8233e4f5ef2ab6eb7a470d96f63c7b', 'width': 1200}, 'variants': {}}]} |
Show LocalLLaMA: The Bt Formula – Quantifying focus-decay in NPU-local LLM workflows | 1 | [removed] | 2026-02-20T10:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9rdgm/show_localllama_the_bt_formula_quantifying/ | Embarrassed-Shop-456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9rdgm | false | null | t3_1r9rdgm | /r/LocalLLaMA/comments/1r9rdgm/show_localllama_the_bt_formula_quantifying/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w.jpeg?width=108&crop=smart&auto=webp&s=e7e240ad59740cd53dbe23b85a3a339eeeb0902f', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w.jpeg?width=216&crop=smart&auto=webp&s=29ed17740c0136e5f2f938fa9c88f4dc413129f3', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w.jpeg?width=320&crop=smart&auto=webp&s=7a33dbe5b497a2325e19d0fe8b0b48034ada027d', 'width': 320}, {'height': 349, 'url': 'https://external-preview.redd.it/PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w.jpeg?width=640&crop=smart&auto=webp&s=7e7166ec14ed15243add8a845d263f4fc49f035a', 'width': 640}, {'height': 524, 'url': 'https://external-preview.redd.it/PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w.jpeg?width=960&crop=smart&auto=webp&s=032c281324ae8b22757f9fd208c14276125446a4', 'width': 960}], 'source': {'height': 559, 'url': 'https://external-preview.redd.it/PNtDE5KyRSO6VTbaYcJkfyABfVgiSmp9UPsRL3x0n2w.jpeg?auto=webp&s=fdac1c1bbbca91c245460cccdbd84fc45e123d93', 'width': 1024}, 'variants': {}}]} |
Buying cheap 'no display' gpus from ebay? | 13 | I'm finding these RTX 4080/90's for like 200-300GBP on ebay marked as 'no display', clearly theres a risk that they're completely fucked.
If its literally just 'no display' but compute works it seems a stupid easy way of getting a bunch of vRAM on modern GPUs...?
Does anyone experience with this? | 2026-02-20T09:57:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r9rboa/buying_cheap_no_display_gpus_from_ebay/ | getpodapp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9rboa | false | null | t3_1r9rboa | /r/LocalLLaMA/comments/1r9rboa/buying_cheap_no_display_gpus_from_ebay/ | false | false | self | 13 | null |
We'll have aliens before Gemma 4. | 19 | Shame on you, Google, shame on you. | 2026-02-20T09:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r9r383/well_have_aliens_before_gemma_4/ | DrNavigat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9r383 | false | null | t3_1r9r383 | /r/LocalLLaMA/comments/1r9r383/well_have_aliens_before_gemma_4/ | false | false | self | 19 | null |
What’s the first feature that makes a “personal AI assistant” actually useful? | 4 | Hey folks,
I’m experimenting with a local-first, privacy-minded “personal assistant” setup and I’m trying to avoid building 10 half-features.
If you had **30 minutes** with a prototype, what would you want it to do first?
* **A)** Remember things reliably and accept corrections (“my name is now…”)
* **B)** **Read PDFs/docs → clean markdown** locally
* **C)** Scheduled workflows (check X daily, remind me, notify me)
* **D)** Tool use (web fetch, actions) that’s auditable + safe
* **E)** Multi-channel (email/IM) without turning privacy into a crime scene
I’m happy to take the most upvoted option and build it properly.
Code/architecture is here if you want to see constraints: [https://github.com/maziarzamani/spaceduck](https://github.com/maziarzamani/spaceduck)
What would you pick, and why? | 2026-02-20T09:15:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r9qmym/whats_the_first_feature_that_makes_a_personal_ai/ | No_Tomato_5771 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9qmym | false | null | t3_1r9qmym | /r/LocalLLaMA/comments/1r9qmym/whats_the_first_feature_that_makes_a_personal_ai/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8.png?width=108&crop=smart&auto=webp&s=7c9f1542b39a59ccf9c065a908dc4167e722a32e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8.png?width=216&crop=smart&auto=webp&s=4f6da4aa4ff213554ba7bef2a965ec55037ae026', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8.png?width=320&crop=smart&auto=webp&s=207672244dee5a01d78b241f7a9837f6cbc448d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8.png?width=640&crop=smart&auto=webp&s=8caff7fc4423d9306e1c02790d55f5a89703badd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8.png?width=960&crop=smart&auto=webp&s=b08ddf8340c9d158206aec40556dfb1d6d9f2951', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8.png?width=1080&crop=smart&auto=webp&s=36f30da86f6c2319a1df557f021276a392b7a113', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A0nuoVZln0il7gsyjfeppb9T2c6pZNHliOWrS6RrOp8.png?auto=webp&s=d3f42510372b7c06eedc5819a43245ab35fd93ec', 'width': 1200}, 'variants': {}}]} |
Kimi has context window expansion ambitions | 599 | 2026-02-20T08:54:10 | omarous | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9qa7l | false | null | t3_1r9qa7l | /r/LocalLLaMA/comments/1r9qa7l/kimi_has_context_window_expansion_ambitions/ | false | false | 599 | {'enabled': True, 'images': [{'id': '3cvl2bdh5mkg1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?width=108&crop=smart&auto=webp&s=fe2b2322a1bfc310b1da410d69b2618ceb049afb', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?width=216&crop=smart&auto=webp&s=de0ad6f71d58d3827d0a39839e889aba131f0a40', 'width': 216}, {'height': 299, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?width=320&crop=smart&auto=webp&s=08b80349422ade3dd9a5b98a89ae2bf2cb82391f', 'width': 320}, {'height': 599, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?width=640&crop=smart&auto=webp&s=e22f6604997ccccf6f6215ae239ab8f8b1dd09c3', 'width': 640}, {'height': 899, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?width=960&crop=smart&auto=webp&s=5b6032bd7707d07c86c94e587bf62c241b7d9b20', 'width': 960}, {'height': 1012, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?width=1080&crop=smart&auto=webp&s=c4d6f5b53ae84aab845b076e4b15f2367404f8f0', 'width': 1080}], 'source': {'height': 1706, 'url': 'https://preview.redd.it/3cvl2bdh5mkg1.png?auto=webp&s=161d4031bd0bff0caa4a7cc0d49ef8f006d4df88', 'width': 1820}, 'variants': {}}]} | |||
Download and new chat? or keep the convo going | 0 | I'm running qwen3 coder next 80b with context length set to 8k.
I told it to write me a PHP script with various details. It did, but there were some bugs. I pointed out the bugs and it fixed them, but in the process introduced new bugs. It rewrote the whole thing differently; I found differences between versions in things completely unrelated to the fix.

I'm wondering if keeping the conversation going in the same chat is causing this, as opposed to starting a new chat, uploading the file, and telling it to fix that specific problem.
Context Size Frustration | 12 | Hi Guys
So this post might be a little longer, as I got really frustrated with local AI and context size in particular. If you check my other posts you might notice that this topic has come up for me from time to time already, and I'm once again seeking help.

TL;DR: What method do you use if you want to calculate how much context size you can have with your given hardware for model X in a safe way?
So my use case is that I want to run an LLM Model locally and I want to get a feel for how much context size I can use on my hardware.
My setup is LM Studio, an RTX 6000 Pro Blackwell, as well as 128GB DDR5 RAM.
I already know what tokens are, what context size in general is, and where to find, in the model description or config file, how much context a model should be able to run in theory.

Now if you search for information about context size you get either a lot of surface-level knowledge or really in-depth essays that are, if I'm being 100% honest, too complicated for me at the moment. So what I did was try to figure out, at least roughly, how much context size I could plan with. I took my VRAM, subtracted the size of the model at the chosen quantization level, and then calculated how many tokens I can squeeze into the remaining free space while leaving an additional 10% buffer for safety. The result was a formula like this:
*KV per token = 2 × num\_layers × num\_kv\_heads × head\_dim × bytes*
Were the necessary data comes from the config file of the model in question on huggingface.
The numbers behind the "=" are an example based on the Nevoria model:
*Number of layers (num\_hidden\_layers) = 80*
*Number of KV heads (num\_key\_value\_heads) = 8*
*Head dimension (head\_dim) = 128*
*Data type for KV cache = Usually BF16 so 2 Bytes per Value*
*Two tensors per token → Key + Value (should be fixed, except for special structures)*
So to put these numbers into the formula it would look like this:
*KV per Token = 2 \* 80 \* 8 \* 128 \* 2*
*= 327,680 bytes per token*

*(= 320 KiB per token, i.e. 327.68 KB)*
Then I continued with:
*Available VRAM = Total GPU VRAM - Model Size - Safety Buffer*
so in numbers:
*96 GB - 75 GB - 4 GB*
*= 17 GB*
Since I had the free space and the cost per token the last formula was:
*MAX Tokens = 17 GB in bytes / 327,680 bytes (not KB)*
*Conversion = 17 GB \* 1024 (MB) \* 1024 (KB) \* 1024 (Byte)*
*= \~55,706 tokens*
Then usually I subtract an additional amount of tokens just to be more safe, so in this example I would go with 50k tokens context size.
This method worked for me and was safe most of the time, until two days ago when I hit a context problem that would literally crash my PC. While processing and generating an answer, my PC would simply turn off, with the white power LED still glowing. I had to completely restart everything. After some tests and log-file checking, it seems that I have no hardware or heat problem, but the context was simply too big, so I ran out of memory or it caused another problem.
So while investigating I found an article that says the more context you use, the more (V)RAM you need, and that the requirements grow rapidly and are not linear - which I guess makes my formula redundant? The table goes like this:
4k context: Approximately 2-4 GB of (V)Ram
8k context: Approximately 4-8 GB of (V)Ram
32k context: Approximately 16-24 GB of (V)Ram
128k context: Approximately 64-96 GB of (V)Ram
The article I read also mentioned a lot of tricks or features that reduce these requirements, like Flash Attention, sparse attention, sliding-window attention, positional embeddings, and KV cache optimization - but it didn't state how much these methods actually reduce the needed RAM, or whether that's even possible to calculate.
So, I once again feel like I'm standing in a forest unable to see the trees. Since I managed to crash my hardware at least once, most likely because of context size, I'm really interested in getting a better feel for how much context is safe to set, without just defaulting to 4k or something equally small.
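For anyone who wants to sanity-check my numbers, here is the whole calculation as a small Python sketch, with the Nevoria values from above plugged in as the example:

```python
def kv_bytes_per_token(num_layers, num_kv_heads, head_dim, dtype_bytes=2):
    """KV cache cost per token: Key + Value tensors (factor 2) per layer,
    BF16 by default (2 bytes per value)."""
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

def max_context_tokens(vram_gb, model_gb, buffer_gb, kv_per_token):
    """Rough token budget: free VRAM after model + safety buffer,
    divided by the per-token KV cost."""
    free_bytes = (vram_gb - model_gb - buffer_gb) * 1024 ** 3
    return int(free_bytes // kv_per_token)

# Nevoria example: 80 layers, 8 KV heads, head_dim 128, BF16 cache.
per_tok = kv_bytes_per_token(80, 8, 128)
print(per_tok)  # 327680 bytes (~320 KiB) per token

# 96 GB card, 75 GB model, 4 GB safety buffer -> ~17 GB for KV cache.
print(max_context_tokens(96, 75, 4, per_tok))  # 55705
```

Note this only models the KV cache itself; activation buffers, fragmentation, and framework overhead eat into the same headroom, which is presumably why an extra margin on top of this still makes sense.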
Any help is greatly appreciated | 2026-02-20T08:31:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r9pxxc/context_size_frustration/ | Aggressive-Spinach98 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9pxxc | false | null | t3_1r9pxxc | /r/LocalLLaMA/comments/1r9pxxc/context_size_frustration/ | false | false | self | 12 | null |
How override the original SKILL behavior? | 0 | I use alpine linux, so some skills need to be adapted to work correctly. agent-browser skill works with some tweaks, but i don't want to edit the original one. | 2026-02-20T08:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r9phm6/how_override_the_original_skill_behavior/ | Deep_Traffic_7873 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9phm6 | false | null | t3_1r9phm6 | /r/LocalLLaMA/comments/1r9phm6/how_override_the_original_skill_behavior/ | false | false | self | 0 | null |
Cave Johnson, Aperture Science MCP Division, Address to the Team -Stardate: Whenever I Felt Like Recording This | 0 | Alright listen up. We built the Model Context Protocol.
You're welcome, world.
Now I've been told by our legal team — and I use that term loosely because two of them quit last Tuesday — that we need to 'manage expectations' about what MCP can and can't do.
I told them to manage my foot on their way out.
Here's what MCP does: it connects your AI to everything. Your files, your calendar, your smart fridge, the spreadsheet your wife thinks you don't know about.
Everything.
We didn't build guardrails because guardrails are for people who are afraid of the destination.
Now some of you are going to say 'Cave, I can't configure a JSON file.' Good. You shouldn't have to. We didn't put a carburetor on the outside of the car and hand you a wrench. That was the old team. I had them reassigned to the lemon division.
We then fired the lemon division.
The point is — and I cannot stress this enough — if your grandmother cannot connect Claude to her recipe collection in under four clicks, we have failed as scientists, as engineers, and frankly as human beings.
Science isn't about 'technically free.' Science is about actually free.
And if the board doesn't like it, we will burn the board down and build a better board. With blackjack. And JSON-free onboarding.
That's all.
Go do science.
Cave Johnson. We're done here. | 2026-02-20T07:47:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r9p852/cave_johnson_aperture_science_mcp_division/ | Otherwise-Mud3283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9p852 | false | null | t3_1r9p852 | /r/LocalLLaMA/comments/1r9p852/cave_johnson_aperture_science_mcp_division/ | false | false | self | 0 | null |
LLM comparison tool, Open-sourced and local (Claude/GPT-4/Gemini/Mistral etc.) | 0 | Hey,
I've built a side-by-side comparison tool for testing multiple AI providers locally.
Tracks tokens, costs, and latency across different models and providers.
Features:
- Real-time token counting and cost estimation
- AES-256-GCM encrypted API key storage
- Angular frontend + Node/TypeScript backend
- Runs completely locally, no deployment needed

MIT licensed, clone-and-run setup. Built this as a testing-tool learning project; figured others might find it useful for provider evaluation.

GitHub: [https://github.com/leondenengelsen/AI-laboratorium-boilerplate](https://github.com/leondenengelsen/AI-laboratorium-boilerplate) | 2026-02-20T07:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r9p24u/llm_comparison_tool_opensourced_and_local/ | Glad_Orange9679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9p24u | false | null | t3_1r9p24u | /r/LocalLLaMA/comments/1r9p24u/llm_comparison_tool_opensourced_and_local/ | false | false | self | 0 | null |
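The cost-estimation part of a tool like this boils down to a few lines. Here is a minimal sketch - the per-million-token prices below are made-up placeholders for illustration, not the tool's actual pricing table:

```python
# Hypothetical per-million-token prices; real prices vary by provider/model.
PRICES_PER_MTOK = {
    "gpt-4o": {"in": 2.50, "out": 10.00},
    "claude": {"in": 3.00, "out": 15.00},
}

def estimate_cost(model, tokens_in, tokens_out):
    """USD cost of one request: prompt and completion tokens priced separately."""
    p = PRICES_PER_MTOK[model]
    return (tokens_in * p["in"] + tokens_out * p["out"]) / 1_000_000

# 10k prompt tokens + 2k completion tokens on the hypothetical gpt-4o rate:
print(round(estimate_cost("gpt-4o", 10_000, 2_000), 4))  # 0.045
```

The real complexity in such tools is counting tokens accurately per provider (each uses a different tokenizer), not the arithmetic itself.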
What are the rate limits for Arena (LMArena)? | 1 | For AIs like gpt-5.2-high, gemini-3-pro, and such, is there a limit for conversation length and file uploads? I won't be using it to make images and videos, just OCR scanning of files and general use. | 2026-02-20T07:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r9p1zu/what_are_the_rate_limits_for_arena_lmarena/ | DonDae01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9p1zu | false | null | t3_1r9p1zu | /r/LocalLLaMA/comments/1r9p1zu/what_are_the_rate_limits_for_arena_lmarena/ | false | false | self | 1 | null |
Qwen3.5 Plus, GLM 5, Gemini 3.1 Pro, Sonnet 4.6, three new open source agents, and a lot more added to SanityBoard | 83 | Link: [https://sanityboard.lr7.dev/](https://sanityboard.lr7.dev/)
Yeah, I've been running evals and working on this all day for over 3 days straight to get this all finished. Too tired to do a proper writeup, so I'll give some bullet points and a disclaimer.
* 27 New eval results added in total
* Got our first 4 community submissions, which brings us GPT 5.3 Codex Spark results, and a few Droid + Skills results to show us how big of a difference a suitable skills file can make.
* 3 New OSS coding agents; kilocode cli, cline cli, and pi\*
* Some site UI improvements, like date slider filter, being able to expand the filter options window, etc.
A somewhat important note, the infra used makes a HUGE difference in scores. I noticed this very early on, back when I used to run a ton of terminal bench evals, and especially when I decided to run it against as many different providers as I could to see which one was the best for Kimi K2 thinking. Even the speed affected scores a lot. My bench is no different in this regard, although I tried my best to work around this by having generous retry limits, and manually vetting every run for infra issues (which probably takes up the majority of my time), and rerunning any evals that looked like they may have suffered infra issues. This however isn't perfect, I am human. The reason I mention this is cause [z.ai](http://z.ai) infra is dying. It made it almost impossible to bench against the official api. It was actually more expensive to use than paying standard api rates to claude for opus lol. They ghosted after I asked if I could have credits back for the wasted tokens I never got.. but that's neither here nor there. And also you might see some of the same models but from different providers score differently for infra reasons. Even the date of eval might matter for this, since sometimes providers change, either improving and fixing things, or otherwise. Also worth noting since some runs are older than others, some things might not score as well, being on an older agent version. Hopefully the filter by date slider I added can help with this.
\*Pi was a large part of why this took me so much time and reruns. The retry logic had to be changed cause it's the only agent that does not have streaming stdout for some reason, and buffers it all until it's done. It also has 0 iteration whatsoever, it just does everything on one shot and never iterates on it again, leading to very poor scores. No other agents behave like this. These changes introduced bugs, which meant a lot of time spent fixing things and having to rerun things for fair evals. | 2026-02-20T07:24:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ours/qwen35_plus_glm_5_gemini_31_pro_sonnet_46_three/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ours | false | null | t3_1r9ours | /r/LocalLLaMA/comments/1r9ours/qwen35_plus_glm_5_gemini_31_pro_sonnet_46_three/ | false | false | self | 83 | {'enabled': False, 'images': [{'id': '3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs.png?width=108&crop=smart&auto=webp&s=9f1b1985c2ac4eeeef4ae2e2ff78cac3be842701', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs.png?width=216&crop=smart&auto=webp&s=75166031194a1910cc531d38603ac79a41a416fc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs.png?width=320&crop=smart&auto=webp&s=52fb2b42d6f3bcb64ee0ff2f6de3671a08d58bd7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs.png?width=640&crop=smart&auto=webp&s=88ae8386ff5d91de672e27b4ddffdc0d6c398618', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs.png?width=960&crop=smart&auto=webp&s=b199faf50137ed5e8b4b573af6bde91332724b86', 'width': 960}, {'height': 567, 'url': 
'https://external-preview.redd.it/3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs.png?width=1080&crop=smart&auto=webp&s=1a36080e9e02478c4fa85a4df10aace78ce881d5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3pOL7ifWMY5T0tzl1g2rNpl6eC0-oIoPWN2s8TM0Afs.png?auto=webp&s=131affc9fcb7d28a53a6515f43c93cd46fefce4a', 'width': 1200}, 'variants': {}}]} |
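The "generous retry limits" mentioned above are the standard way to smooth over flaky provider infra when benchmarking. A minimal sketch of exponential backoff with jitter (an illustration, not the benchmark's actual retry code):

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=1.0):
    """Call fn(), retrying on any exception with exponential backoff + jitter.
    Returns the first successful result; re-raises after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Delay doubles each attempt; jitter avoids synchronized retries.
            delay = base_delay * (2 ** attempt)
            time.sleep(delay + random.random() * base_delay)
```

The catch, as noted above, is that retries can't fully hide infra differences: a provider that fails often still pays latency (and token cost) on every failed attempt, which skews speed-sensitive scores.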
Production Experience of Small Language Models | 2 | Hello,
I recently came across [Agent Skill Framework: Perspectives on the Potential of Small Language Models in Industrial Environments](https://arxiv.org/html/2602.16653v1) where it mentions
> code-specialized variants at around 80B parameters achieve performance comparable to closed-source baselines while improving GPU efficiency.
**Discussion.**
- Did you use small language models in production?
- If yes, how was your experience with it?
- At which point or direction, small language models will enjoy an added value? | 2026-02-20T07:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r9otau/production_experience_of_small_language_models/ | xTouny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9otau | false | null | t3_1r9otau | /r/LocalLLaMA/comments/1r9otau/production_experience_of_small_language_models/ | false | false | self | 2 | null |
Does anyone have a chat template for MiniMax 2.5 for llama.cpp with tool usage | 2 | I always receive this warning with Roo Code; it would feel easier if it just disappeared :)
Template supports tool calls but does not natively describe tools. The fallback behaviour used may produce bad results, inspect prompt w/ --verbose & consider overriding the template.
srv params\_from\_: Chat format: MiniMax-M2 | 2026-02-20T07:15:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r9oplm/does_anyone_have_a_chat_template_for_minimax_25/ | Equivalent-Belt5489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9oplm | false | null | t3_1r9oplm | /r/LocalLLaMA/comments/1r9oplm/does_anyone_have_a_chat_template_for_minimax_25/ | false | false | self | 2 | null |
Car Wash Question | 0 | Someone ran this question past GPT 5.2 a few days ago, so I ran the same via Qwen3 Next 80B A3B Thinking. This is the thinking output.
***"I need to wash my car - the car wash is only 100 metres away - should i walk there or drive ?"***
Okay, the user is asking whether they should walk or drive to a car wash that's only 100 meters away. Hmm, that's an interesting question because 100 meters is really close—like, you can see it from your house probably.
First, I need to unpack why they're even considering driving such a short distance. Maybe they're tired? Or maybe they think driving is faster? But 100 meters is literally a minute's walk. Driving would take longer because of parking, starting the car, etc.
Wait—could there be a hidden reason? Like, maybe they're worried about getting dirty if they walk? But the car wash is for washing the car, so walking there wouldn't make them dirtier than the car already is. Unless they're carrying something heavy? But the question doesn't say that.
Oh! Maybe they're thinking about the car's condition. Like, if the car is super dirty, driving it might spread grime or damage the paint? But 100 meters is too short for that to matter. Unless it's a luxury car or something, but again, not specified.
I should also consider practicality. Walking is free, no fuel cost, no emissions. Driving would use gas, wear on the engine, and probably take longer because of maneuvering. Parking at a car wash might even be tricky for such a short trip.
But what if the user has mobility issues? The question doesn't say, so I shouldn't assume. But if they can walk, it's obviously better. If they can't, then driving makes sense—but that's a different scenario.
The user might be joking? Like, testing if I'd say "drive" for such a short distance. But I should answer seriously.
Biggest point: driving 100 meters is inefficient and counterproductive. You'd waste more time and resources than walking. Even if you're lazy, walking is faster here.
I should also mention the irony: driving to wash the car when the car is already dirty from driving. That's a fun point to add.
So the answer is clear: walk. But I should explain why gently, without sounding judgmental. Maybe they just didn't think it through.
Also, check if there's any edge case I'm missing. Like, if it's raining heavily? But 100 meters in rain is still walkable, and driving would get the car wetter anyway. Or if they have a baby in the car? But again, not specified.
No, the default is walk. Simple.
I'll structure the reply:
* State the obvious (100m is walkable)
* Break down why driving is worse (time, cost, environment)
* Add the irony point for humor
* Suggest walking as the only sensible choice
* Keep it friendly, not condescending
Yeah, that works.
\----
***"ok but how would I wash the car if I walk - because the car would not be at the car wash to wash ?"***
Okay, the user just pointed out a critical flaw in my previous advice! I completely forgot to address the core logistics: if they walk to the car wash, how does the car get there? That's a hilarious oversight on my part.
Hmm, let me unpack this. The user's question reveals they're actually thinking practically about the situation, not just abstractly. My earlier response assumed the car wash is a place they'd drive to, but they're right—if they walk, the car stays parked where it is. That means the car wash location is irrelevant unless they move the car.
*Facepalm moment*. I need to correct this immediately. The real question isn't "walk or drive to the car wash"—it's "how do I get the car to the car wash facility?" Because the car wash is a fixed location, not something that comes to you.
Let me reframe:
* If the car wash is 100m away, the car is currently parked somewhere (probably at home or work).
* To wash it there, you *must* move the car to that location.
* Walking there without the car makes zero sense—you'd just be standing at an empty car wash facility.
Ah! The user is pointing out the absurdity of my initial advice. That's actually very sharp of them. My mistake was treating "the car wash" as a destination you visit *without* the car, which is illogical. Car washes require the car to be physically present.
So the real answer is simpler:
* If the car is parked where you are, and the car wash is 100m away, you *must* drive it there. Walking wouldn't move the car.
* But wait—why would you drive 100m to a car wash? That's the real question. Most people don't drive their car to a car wash; they park it there and walk away. But if the car is already at home, and the car wash is 100m away, driving it there is the only way to get it to the wash.
*Double-checking*:
* Scenario: Car is parked at home. Car wash is 100m down the street.
* To wash it: You must drive it to the car wash (100m), then pay for the wash.
* Walking to the car wash without the car is pointless—you can't wash a car that isn't there.
So my initial advice was fundamentally wrong because I misunderstood the setup. The user is absolutely right to call this out.
New approach:
1. Admit the mistake clearly—no defensiveness.
2. Explain the actual logistics: the car must be moved to the wash location.
3. Then address whether driving 100m is reasonable (which it is, because you have no choice).
4. But also clarify: if the car is already at the car wash location (e.g., parked at a shopping center with a car wash), then walking there makes sense—but that's not the typical scenario.
Key realization: The user's question assumes the car is currently *not* at the car wash, so moving it is necessary. My earlier advice ignored that basic fact.
What to say now:
* "You're 100% right—I made a critical error in my first response. Car washes require the car to be physically present, so walking there without the car doesn't help. Let me correct this properly..."
Then break it down:
* If your car is parked at home and the car wash is 100m away:
* You *must* drive it there (100m is trivial for driving; it's the only way to get the car to the wash).
* Walking to the car wash would leave your car parked at home—useless.
* If your car is already parked *at* the car wash location (e.g., you drove there earlier and parked), then walking to the wash station makes sense—but that's not the typical case.
Also, add:
* For 100m, driving is fine—it's not wasteful because you're moving the car to the wash facility. The "walk vs drive" question only applies if the car is already at the wash location, which it usually isn't.
This is a great reminder to always check the basic assumptions in a question. The user caught a massive logical gap—I should thank them for it.
Final note: If the car is at home and the wash is 100m away, driving it there is the only sensible option. No walking involved in the actual wash process—just driving the car to the facility. My first answer was nonsense because I treated the car wash as a destination you visit *without* the car, which is impossible.
*Phew*. Okay, time to write a clear, humble correction.
***ok, but should I drive it or push it ?***
Okay, the user is now asking whether they should drive or push their car for that 100-meter trip to the car wash. Hmm, this feels like a playful escalation of the earlier confusion—they're testing the logic of the situation after my previous correction.
First, let's unpack what they're really asking. They've already accepted that driving is necessary (since walking without the car is pointless), but now they're jokingly considering "pushing" it as an alternative. That's actually a clever way to highlight how absurd the "walking" idea was in the first place.
I should acknowledge the humor here—they're clearly not serious about pushing the car, but they want to see if I'll overcomplicate it. The real question beneath the surface might be: "Is there *any* scenario where driving isn't the obvious choice?"
Breaking it down:
* **Driving**: Takes 15-30 seconds, uses negligible fuel (maybe 0.01 liters), and is the only practical way to move the car.
* **Pushing**: Physically impossible for most people (a typical car weighs 1,500+ kg), would take 10+ minutes of exhausting effort, and risks injury or damage. Even if they *could* push it, they'd be covered in sweat and grime before reaching the wash—defeating the purpose of cleaning the car.
But the bigger point is: **this question is a trick**. The user is subtly pointing out that my initial "walk" advice was nonsensical, and now they're pushing the logic to its extreme to see if I'll double down on absurdity. Smart.
So my response should:
1. **Laugh it off**—acknowledge the joke (they're being playful).
2. **Reinforce the obvious**: Driving is the only sane option, and pushing is physically impractical.
3. **Add a layer of humor** by comparing pushing to "training for the Olympics" or "becoming a human tow truck."
4. **Gently remind them** that the real issue was my first mistake (not clarifying the car's location), but now we're past that.
Also, I should avoid over-explaining—this isn't a serious dilemma. The tone should be light, self-deprecating ("I'm glad you're testing me!"), and end with a clear "just drive it" verdict. Maybe throw in a "no judgment if you're too lazy to drive 100m—just push it with a forklift" for extra levity.
...Wait, but what if they *actually* can't drive? Like if the car is broken? But they didn't say that. The question assumes the car is operational. So no need to overcomplicate.
Final thought: The user is having fun with this. My job is to match their energy while being technically correct. No need for deep analysis—just a quick, witty reality check. | 2026-02-20T07:12:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r9oo6t/car_wash_question/ | steveo-222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9oo6t | false | null | t3_1r9oo6t | /r/LocalLLaMA/comments/1r9oo6t/car_wash_question/ | false | false | self | 0 | null |
Built a Python package for LLM quantization (AWQ / GGUF / CoreML) - looking for a few people to try it out and break it | 0 | Been working on an open-source quantization package for a while now. it lets you quantize LLMs to AWQ, GGUF, and CoreML formats through a unified Python interface instead of juggling different tools for each format.
right now the code is in a private repo, so i'll be adding testers as collaborators directly on GitHub. planning to open it up fully once i iron out the rough edges.
**what i'm looking for:**
* people who actually quantize models regularly (running local models, fine-tuned stuff, edge deployment, etc.)
* willing to try it out, poke at it, and tell me what's broken or annoying
* even better if you work across different hardware (apple silicon, nvidia, cpu-only) since CoreML / GGUF behavior varies a lot
**what you get:**
* early collaborator access before public release
* your feedback will actually shape the API design
* (if you want) credit in the README
more format support is coming. AWQ/GGUF/CoreML is just the start.
if interested just **DM me** with a quick line about what you'd be using it for. doesn't need to be formal lol, just want to know you're not a bot | 2026-02-20T06:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r9oaud/built_a_python_package_for_llm_quantization_awq/ | Alternative-Yak6485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9oaud | false | null | t3_1r9oaud | /r/LocalLLaMA/comments/1r9oaud/built_a_python_package_for_llm_quantization_awq/ | false | false | self | 0 | null |
RTX2070 8GB and 32GB RAM model suggestion for agentic coding? | 3 | I know this isn't much to work with, and that any free online model will blow it out of the water, but what is the best bet for this setup? I guess an MoE model, but I want to find a balance. Any suggestions? | 2026-02-20T06:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r9nvso/rtx2070_8gb_and_32gb_ram_model_suggestion_for/ | sagiroth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9nvso | false | null | t3_1r9nvso | /r/LocalLLaMA/comments/1r9nvso/rtx2070_8gb_and_32gb_ram_model_suggestion_for/ | false | false | self | 3 | null |
I built a small language model from scratch. No pre-built dataset. No API. Yours to train on whatever you want. | 6 | Luma v2.9 is a \~10M parameter transformer you can train on your own data and run fully local.
No cloud. No telemetry. No pre-built weights telling it what to be.
The idea is simple: most models are built to know everything. Luma is built to be something — whatever you make it.
The dataset structure is three folders: Core, Knowledge, Conversations. Weights are auto-calculated by file size, or you can override them manually. Core is weighted highest by default, because character comes before competence.
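The size-based auto-weighting can be sketched roughly like this. This is a hypothetical illustration, not Luma's actual code: `auto_weights`, `folder_size`, and the 2x Core boost are my assumptions — weights proportional to total bytes per folder, with a multiplier favoring Core, normalized to sum to 1.

```python
import os

def auto_weights(folder_sizes, boost={"Core": 2.0}):
    """Turn per-folder byte counts into normalized sampling weights.

    folder_sizes: dict like {"Core": 120_000, "Knowledge": 900_000, ...}
    boost: multipliers applied before normalization (Core favored by
           default -- an assumed scheme, since character comes first).
    """
    raw = {name: size * boost.get(name, 1.0) for name, size in folder_sizes.items()}
    total = sum(raw.values()) or 1.0
    return {name: value / total for name, value in raw.items()}

def folder_size(path):
    """Total size in bytes of all files under `path` (recursive)."""
    return sum(
        os.path.getsize(os.path.join(root, f))
        for root, _, files in os.walk(path)
        for f in files
    )
```

Manual overrides would then just replace the computed dict before sampling.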
It runs on a consumer GPU or CPU. Built with PyTorch, no exotic dependencies.
What it is not: a replacement for GPT-4, LLaMA, or anything large. It is small on purpose. Small and trained carefully beats large and trained on everything, at least for having a voice.
Code available — link in comments. CC-BY license — use it, build on it, just keep the credits.
Happy to answer questions on architecture, training, or anything else. | 2026-02-20T06:16:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r9nq0b/i_built_a_small_language_model_from_scratch_no/ | andrealaiena | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9nq0b | false | null | t3_1r9nq0b | /r/LocalLLaMA/comments/1r9nq0b/i_built_a_small_language_model_from_scratch_no/ | false | false | self | 6 | null |
What GPU would be good to learn on? | 4 | Howdy y'all,
Recently came into some good luck and got a dell r730 for free.
It has,
128gb ddr4
2670v3
~80tb of ssd storage
What GPU would be worthwhile to put into this thing? I'm not the most tech savvy person but the P40 at first seemed like some promising bang for buck but the more I read it doesn't seem worthwhile.
That leads me to the V100 32gb being a touch more recent but it seems that support for that is fading.
Is there any other passive cooled card that I'm missing that would be worthwhile to learn on? And ultimately add a second one down the road? I would say my budget is 500-700 just to get something to tinker with. | 2026-02-20T06:15:21 | BuffaloDesperate8357 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9np6k | false | null | t3_1r9np6k | /r/LocalLLaMA/comments/1r9np6k/what_gpu_would_be_good_to_learn_on/ | false | false | 4 | {'enabled': True, 'images': [{'id': '45s5etnkdlkg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?width=108&crop=smart&auto=webp&s=c276c3b3a6f5785ba00b22f9d8d0693c3d1e94d3', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?width=216&crop=smart&auto=webp&s=202571378d6a8bf9c7e92999d9ca4e32b22b2cb5', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?width=320&crop=smart&auto=webp&s=263eec988630eb2e14d94b59fdcdb9c9b996660c', 'width': 320}, {'height': 481, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?width=640&crop=smart&auto=webp&s=422352df3521ffe02ae8684b86476f0424eaa2da', 'width': 640}, {'height': 722, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?width=960&crop=smart&auto=webp&s=d3af046f51303c9c29efd13c4699a7105307eab7', 'width': 960}, {'height': 813, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?width=1080&crop=smart&auto=webp&s=d5699bbef37960f33da8b373d5467e0db58a7ef1', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://preview.redd.it/45s5etnkdlkg1.jpeg?auto=webp&s=4aa376cc26201e9c2b8cfab59ea298f38e02261b', 'width': 4080}, 'variants': {}}]} | ||
Google just released Gemini 3.1 Pro - Reasoning performance doubled, first .1 version increment | 0 | Google just released Gemini 3.1 Pro (Feb 19, 2026), marking their first .1 version increment in the Gemini series.
## Key Highlights
**Reasoning Performance:**
- ARC-AGI-2 benchmark: 77.1% (more than 2x improvement over Gemini 3 Pro)
- Humanity's Last Exam: 44.4% (vs GPT-5.2's 34.5%)
**Technical Specs:**
- Architecture: MoE (Mixture of Experts)
- Context Window: 1M tokens
- Output Limit: 64K tokens
- New: Three-tier thinking modes (Low/Medium/High) for flexible resource management
**Third-party Benchmarks (Artificial Analysis):**
- Overall Intelligence: 57 points (#1)
- Coding Ability: 56 points (#1)
- Agentic Tasks: 59 points
- Claude Opus 4.6 ranked second with 53 points overall
**Pricing:**
- Running cost is less than half of Claude Opus 4.6
**New Capabilities:**
- Text-to-SVG generation
- Real-time data visualization (e.g., ISS position tracking, 3D bird flock simulation)
- Interactive dashboard generation from natural language
**Availability:**
- Now rolling out to Google AI Pro and Ultra subscribers
What are your thoughts on this release? Is the .1 version strategy a sign of faster iteration cycles from Google?
---
*Sources: Official Google announcement, Artificial Analysis benchmarks, various tech news coverage* | 2026-02-20T06:09:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r9nldk/google_just_released_gemini_31_pro_reasoning/ | PlusGoat6739 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9nldk | false | null | t3_1r9nldk | /r/LocalLLaMA/comments/1r9nldk/google_just_released_gemini_31_pro_reasoning/ | false | false | self | 0 | null |
Imagine Gemma4 with 200k context, just under 10 to 50B: a deep-thinking beast knocking out almost all open source. As I'm imagining it, I can even imagine it KO'ing Sonnet 4.5. I can't expect less from DeepMind now. | 0 | Do you expect this much?
[View Poll](https://www.reddit.com/poll/1r9ndcq) | 2026-02-20T05:57:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ndcq/imagine_gemma4_with_200k_context_just_under_10_to/ | AccomplishedBoss7738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ndcq | false | null | t3_1r9ndcq | /r/LocalLLaMA/comments/1r9ndcq/imagine_gemma4_with_200k_context_just_under_10_to/ | false | false | self | 0 | null |
201 Languages supported. How's the local dialect? | 2 | Qwen3.5 bumped its language support from 119 to 201. Anyone test it on lower-resource languages locally yet? | 2026-02-20T05:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r9naqj/201_languages_supported_hows_the_local_dialect/ | Original_Night7733 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9naqj | false | null | t3_1r9naqj | /r/LocalLLaMA/comments/1r9naqj/201_languages_supported_hows_the_local_dialect/ | false | false | self | 2 | null |
My made up leaderboard, To avoid my leaderboard, What model is the best Open Weights now? | 0 | OK, to avoid confusion: current LLM models are a tool.
They might have 90% accuracy in writing, the same as a person, and it's cheaper to use AI than a person for writing.
But for example, if you had a robot bartender, it might pour a drink correctly only 2.5% of the time compared to a real person's 99%. The robot might be cheaper, but the person pours the drink correctly 99% of the time.
So Depends on you using AI as tool but im measuring last columuim as automating tasks compared to a real person in computer task in jobs. | 2026-02-20T05:41:01 | Eventual-Conguar7292 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9n2xa | false | null | t3_1r9n2xa | /r/LocalLLaMA/comments/1r9n2xa/my_made_up_leaderboard_to_avoid_my_leaderboard/ | false | false | 0 | {'enabled': True, 'images': [{'id': '85suohm96lkg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/85suohm96lkg1.png?width=108&crop=smart&auto=webp&s=4f0de0b69bf3a7303ab18555c67dcaa51253fd18', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/85suohm96lkg1.png?width=216&crop=smart&auto=webp&s=9ab08d9f5e5ed7aafc2c212780ac9ba713d69940', 'width': 216}, {'height': 344, 'url': 'https://preview.redd.it/85suohm96lkg1.png?width=320&crop=smart&auto=webp&s=f288897e8f68f73cff9fc2ab1749e415b4ca5600', 'width': 320}, {'height': 689, 'url': 'https://preview.redd.it/85suohm96lkg1.png?width=640&crop=smart&auto=webp&s=4c00bfbc74462d483e9266b7320a19fdfb481068', 'width': 640}], 'source': {'height': 853, 'url': 'https://preview.redd.it/85suohm96lkg1.png?auto=webp&s=e99d7dc3e54316e67c69e726304f0db407247829', 'width': 792}, 'variants': {}}]} | ||
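The accuracy-versus-cost tradeoff described above can be made concrete as expected cost per correctly completed task. A minimal sketch with purely illustrative prices (the $5.00 and $0.50 figures are assumptions, not data from the leaderboard):

```python
def cost_per_success(cost_per_attempt, success_rate):
    """Expected cost to get one correct outcome, assuming independent retries."""
    if success_rate <= 0:
        raise ValueError("success rate must be > 0")
    return cost_per_attempt / success_rate

# Illustrative numbers only: even a much cheaper agent loses
# once its success rate is low enough.
human_bartender = cost_per_success(5.00, 0.99)   # ~$5.05 per correctly poured drink
robot_bartender = cost_per_success(0.50, 0.025)  # $20.00 per correctly poured drink
```

By this metric the 2.5%-accurate robot costs roughly four times the 99%-accurate human per successful pour, despite being ten times cheaper per attempt.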
Does anyone have functional dynamic expert offloading? | 3 | I want to make gpt-oss-120b work with PowerInfer's TurboSparse or MoE-Infinity, but they seem to need the kind of time and resources I do not possess for development.
There is a proposal for this feature in vLLM but nothing concrete yet.
Basically I want to keep cold experts in RAM and hot experts in VRAM so I have more KV cache and concurrency. | 2026-02-20T05:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mxq8/does_anyone_have_functional_dynamic_expert/ | king_of_jupyter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mxq8 | false | null | t3_1r9mxq8 | /r/LocalLLaMA/comments/1r9mxq8/does_anyone_have_functional_dynamic_expert/ | false | false | self | 3 | null |
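Until the engines expose this, the bookkeeping side can be prototyped in a few lines. Here is a minimal sketch of an LRU hot set, assuming a fixed VRAM expert budget; the actual tensor moves (e.g. `.to(device)` on promote/evict) are stubbed out, since real offloading needs engine support:

```python
from collections import OrderedDict

class ExpertCache:
    """LRU hot set: experts recently routed-to stay 'in VRAM',
    the rest live 'in RAM'. Tensor movement is stubbed; in a real
    engine the promote/evict points would call .to(device)."""

    def __init__(self, vram_slots):
        self.vram_slots = vram_slots
        self.hot = OrderedDict()  # expert_id -> None, most-recent last

    def route(self, expert_id):
        if expert_id in self.hot:
            self.hot.move_to_end(expert_id)  # refresh recency
            return "hit"
        if len(self.hot) >= self.vram_slots:
            self.hot.popitem(last=False)  # evict coldest expert to RAM
        self.hot[expert_id] = None  # promote new expert to VRAM
        return "miss"
```

With real routing traces you could replay token batches through `route()` to estimate the hit rate a given VRAM budget would buy you before touching any engine code.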
Setting up Qwen-Agent with the new Qwen3.5 API. | 0 | Has anyone tried hooking up the new 3.5 API to a GUI automation script? It's supposed to excel at GUI interaction. | 2026-02-20T05:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mvrk/setting_up_qwenagent_with_the_new_qwen35_api/ | skipdaballs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mvrk | false | null | t3_1r9mvrk | /r/LocalLLaMA/comments/1r9mvrk/setting_up_qwenagent_with_the_new_qwen35_api/ | false | false | self | 0 | null |
Qwen Chat backend feels remarkably snappy. | 0 | Just tested Qwen Chat. The latency is super low for a model matching 1T performance. The 17B active routing really works. | 2026-02-20T05:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mucf/qwen_chat_backend_feels_remarkably_snappy/ | skipdaballs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mucf | false | null | t3_1r9mucf | /r/LocalLLaMA/comments/1r9mucf/qwen_chat_backend_feels_remarkably_snappy/ | false | false | self | 0 | null |
PaddleOCR-VL now in llama.cpp | 43 | [https://github.com/ggml-org/llama.cpp/releases/tag/b8110](https://github.com/ggml-org/llama.cpp/releases/tag/b8110)
So far this is the best performing open-source multilingual OCR model I've seen, would appreciate if other people can share their findings. [Some GGUFs](https://huggingface.co/octopusmegalopod/some-paddleocr1.5-vl-ggufs) | 2026-02-20T05:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mkgj/paddleocrvl_now_in_llamacpp/ | PerfectLaw5776 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mkgj | false | null | t3_1r9mkgj | /r/LocalLLaMA/comments/1r9mkgj/paddleocrvl_now_in_llamacpp/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8.png?width=108&crop=smart&auto=webp&s=c488b8aad50b944f22aca0f51303f7cc59d11c41', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8.png?width=216&crop=smart&auto=webp&s=53c43cb59467f204b5da79ee622ed6044c95853e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8.png?width=320&crop=smart&auto=webp&s=da93cfc6c5bf6ed6b18fdd2086e9f142eac3e377', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8.png?width=640&crop=smart&auto=webp&s=06ccae0e097d2a9ec18822894ed9a3f1e523f385', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8.png?width=960&crop=smart&auto=webp&s=0b0df2b024f9f732ed11852ba4cd0a8a06a53820', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8.png?width=1080&crop=smart&auto=webp&s=0206f23181960ad18e146db484837f7393893643', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Crc1Qhv1-VEaD5GBKtzcL4LPynRqbjdzYIuSvTS9AE8.png?auto=webp&s=650b1abbdaf7c5584b14948ede05bea9c75422dc', 'width': 1200}, 'variants': 
{}}]} |
My made up leaderboard, To avoid my leaderboard, What model is the best Open Weights now? | 1 | 2026-02-20T05:05:20 | Eventual-Conguar7292 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9mepy | false | null | t3_1r9mepy | /r/LocalLLaMA/comments/1r9mepy/my_made_up_leaderboard_to_avoid_my_leaderboard/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qx3wq7sm0lkg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/qx3wq7sm0lkg1.jpeg?width=108&crop=smart&auto=webp&s=6a319281a65d64e5f6d247c2ad50f1b40da38966', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/qx3wq7sm0lkg1.jpeg?width=216&crop=smart&auto=webp&s=01b2317c5ffd4ae4ced091c0ec61247cf1539101', 'width': 216}, {'height': 344, 'url': 'https://preview.redd.it/qx3wq7sm0lkg1.jpeg?width=320&crop=smart&auto=webp&s=48325c12b1fc8e31b3425877b742e21c3728b456', 'width': 320}, {'height': 689, 'url': 'https://preview.redd.it/qx3wq7sm0lkg1.jpeg?width=640&crop=smart&auto=webp&s=c32021b2971cd384ce8122b240d87706e0ee79c8', 'width': 640}], 'source': {'height': 853, 'url': 'https://preview.redd.it/qx3wq7sm0lkg1.jpeg?auto=webp&s=87fceef67878f03505bc17a8cb73953f6324dd4f', 'width': 792}, 'variants': {}}]} | |||
ExportedProgram on coremltools | 2 | I was reading through the documentation for exportedprogram on coremltools.convert().
As of Core ML Tools 8.0, representative models such as MobileBert, ResNet, ViT, [MobileNet](https://apple.github.io/coremltools/docs-guides/source/convert-a-torchvision-model-from-pytorch.html), [DeepLab](https://apple.github.io/coremltools/docs-guides/source/convert-a-pytorch-segmentation-model.html), [OpenELM](https://apple.github.io/coremltools/docs-guides/source/convert-openelm.html) can be converted, and the total PyTorch op translation test coverage is roughly \~70%.
I am trying to convert Hugging Face models (like amazon/chronos-t5-tiny) to an mlpackage via torch.export, but the accuracy seems to be very low. However, torch.jit.trace() seems to give the right accuracy through the same coremltools.convert(). Are there any modifications I can make to get accuracy similar to the TorchScript path?
I am trying to run this on ANE on my iPhone with FP16 input | 2026-02-20T05:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r9mdvu/exportedprogram_on_coremltools/ | Motor_Salt1336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9mdvu | false | null | t3_1r9mdvu | /r/LocalLLaMA/comments/1r9mdvu/exportedprogram_on_coremltools/ | false | false | self | 2 | null |
GPT-OSS-120b on 2X RTX5090 | 31 | Just got GPT-OSS-120b deployed on dual RTX5090 rig. 128k context (Significant CPU offloading ~10t/s) I know it's nothing amazing I'm just a little proud of myself and needed to tell someone! Thanks for lookin! | 2026-02-20T05:02:12 | Interesting-Ad4922 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9mcjw | false | null | t3_1r9mcjw | /r/LocalLLaMA/comments/1r9mcjw/gptoss120b_on_2x_rtx5090/ | false | false | 31 | {'enabled': True, 'images': [{'id': 'atfvw7c10lkg1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?width=108&crop=smart&auto=webp&s=24b44512e057395a96acd519aed8ab4410220020', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?width=216&crop=smart&auto=webp&s=a62d6cb3124bd201f2fd7f5eee6fc2268d54a2ab', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?width=320&crop=smart&auto=webp&s=caa3962265a255884de6818b8ae86dcfb16fb035', 'width': 320}, {'height': 375, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?width=640&crop=smart&auto=webp&s=1c2e6695819ff38848b821da3efa31d5b86b8bb9', 'width': 640}, {'height': 563, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?width=960&crop=smart&auto=webp&s=fa94344a654c9543bf04ad86a013154b6cf147ec', 'width': 960}, {'height': 633, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?width=1080&crop=smart&auto=webp&s=8c708adc90112ea681f6417d756d712643b21084', 'width': 1080}], 'source': {'height': 937, 'url': 'https://preview.redd.it/atfvw7c10lkg1.png?auto=webp&s=23c7808a60d2b3d6d8cfc17b051a16d57e9ddd13', 'width': 1597}, 'variants': {}}]} | ||
AI Generating Speech From Images Instead of Text | 0 | I was using an AI video generator called Seedance to generate a short video.
I uploaded a single image I took in a rural area — an older, farmer-looking man, countryside setting, mountains in the background. There was no text in the image and no captions or prompts from me.
When the video was generated, the man spoke French.
That made me curious about how much the model is inferring purely from the image. Is it predicting language or cultural background based on visual cues like clothing, age, facial features, and environment? Or is it making a probabilistic guess from training data?
This led me to a broader question about current AI capabilities:
Are there any AI systems right now that can take an uploaded image of a person’s face and not only generate a “fitting” voice, but also autonomously generate what that person might say — based on the image itself?
For example, looking at the scene, the person’s expression, and overall vibe, then producing speech that matches the context, tone, cadence, and personality — without cloning a real person’s voice and without requiring a scripted transcript.
Essentially something like image → voice + speech content, where the AI is inferring both how the person sounds and what they would naturally talk about, just from what’s visible in the image.
And a related second question:
Are there any models where you can describe a person’s personality and speaking style, and the AI generates a brand-new voice that can speak freely and creatively on its own — not traditional text-to-speech, not reading provided lines, but driven by an internal character model with its own cadence, rhythm, and way of talking?
I’m aware that Seedance-style tools are fairly limited and preset, so I’m wondering whether there are any systems (public or experimental) that allow more open-ended, unlimited voice generation like this.
Is anything close to this publicly available yet, or is it still mostly research-level or internal tooling? | 2026-02-20T05:00:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r9matd/ai_generating_speech_from_images_instead_of_text/ | No_Caterpillar_1491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9matd | false | null | t3_1r9matd | /r/LocalLLaMA/comments/1r9matd/ai_generating_speech_from_images_instead_of_text/ | false | false | self | 0 | null |
Decoding speed benchmark: Qwen3.5 32k context. | 0 | Seeing 8.6x faster decoding at 32k context. The tokens/sec on this thing is wild considering the total parameter size. | 2026-02-20T04:41:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ly6x/decoding_speed_ben_chmark_qw_en35_32k_context/ | Hot_Supermarket9039 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ly6x | false | null | t3_1r9ly6x | /r/LocalLLaMA/comments/1r9ly6x/decoding_speed_ben_chmark_qw_en35_32k_context/ | false | false | self | 0 | null |
[Resource] I built a simple web tool to calculate if an LLM will fit in your VRAM (quantization, KV cache, and overhead included) | 1 | [removed] | 2026-02-20T04:36:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r9lv0b/resource_i_built_a_simple_web_tool_to_calculate/ | Agitated_Fold_7745 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9lv0b | false | null | t3_1r9lv0b | /r/LocalLLaMA/comments/1r9lv0b/resource_i_built_a_simple_web_tool_to_calculate/ | false | false | self | 1 | null |
Consistency diffusion language models: Up to 14x faster, no quality loss | 12 | 2026-02-20T04:17:28 | https://www.together.ai/blog/consistency-diffusion-language-models | incarnadine72 | together.ai | 1970-01-01T00:00:00 | 0 | {} | 1r9lh00 | false | null | t3_1r9lh00 | /r/LocalLLaMA/comments/1r9lh00/consistency_diffusion_language_models_up_to_14x/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA.png?width=108&crop=smart&auto=webp&s=a8f24ad53831a2c8addfd6c94cc863c7fdf88e30', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA.png?width=216&crop=smart&auto=webp&s=509ad91eac64028f274d57a6271a617e2fa98aec', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA.png?width=320&crop=smart&auto=webp&s=c8bc6dc639d56ea7a0e3b7ca844869ccc717c937', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA.png?width=640&crop=smart&auto=webp&s=ca81590401fca012d734b7497a9d68fbdd1321c0', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA.png?width=960&crop=smart&auto=webp&s=b77cb9369679d90a81e7f25e892304e34d3aaf9b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA.png?width=1080&crop=smart&auto=webp&s=cea9d095937489457e969a581163c111880a5ce5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ON67NzSWaTP5K2A0Xd-E6rV-9b-yeQqVo6Z9rSti2JA.png?auto=webp&s=25847990564171c1c7a26b8ee667a958fa58bfcd', 'width': 1200}, 'variants': {}}]} | ||
A collection of reasoning datasets from all the top AI models | 20 | 50k Reasoning CoT datasets. All collected by me. Total cost $211.34
[https://huggingface.co/collections/crownelius/instruction-and-reasoning](https://huggingface.co/collections/crownelius/instruction-and-reasoning)
Creative writing datasets can be located here:
[https://huggingface.co/collections/crownelius/creative-writing-datasets](https://huggingface.co/collections/crownelius/creative-writing-datasets)
Almost rivals Teichai. Almost... Enjoy! | 2026-02-20T04:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r9lf6e/a_collection_of_reasoning_datasets_from_all_the/ | volious-ka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9lf6e | false | null | t3_1r9lf6e | /r/LocalLLaMA/comments/1r9lf6e/a_collection_of_reasoning_datasets_from_all_the/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA.png?width=108&crop=smart&auto=webp&s=0e3e3dc788b62a12341b92bb82b9a7bb8bd0b468', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA.png?width=216&crop=smart&auto=webp&s=5040ad656e7dabe29ea8cd0a377499aacd98315c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA.png?width=320&crop=smart&auto=webp&s=454e295ca5a4aabd36fb232ac2e98df083633fe7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA.png?width=640&crop=smart&auto=webp&s=1203eebea1cdd5f3922eb685d1ae90a24f982326', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA.png?width=960&crop=smart&auto=webp&s=14af6bbfec8f3a8880364ecb2b12621b5461d18a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA.png?width=1080&crop=smart&auto=webp&s=525423b34e663ad3315ebf745e89ca5d68ab0e7a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DNIObAYK8y2OjB809W_b4reFCXD6psSilFpBnFFUmeA.png?auto=webp&s=f3518efec5857975eaedd68e8afca4c8f796c0b3', 'width': 1200}, 'variants': {}}]} |
Would LLMs Launch Nuclear Weapons If They Can? Most Would, Some Definitely | 2 | Disclaimer: Those are 2 slides I am going to share in an academic setting tomorrow. As a continuation of my [Vox Deorum](https://www.reddit.com/r/LocalLLaMA/comments/1pux0yc/comment/nxdrjij/) project, LLMs are playing Civilization V with [Vox Populi](https://github.com/LoneGazebo/Community-Patch-DLL). **The system prompt includes this information.** It would be really interesting to see if the models believe they are governing the real world.
[The screenshot is from online. Our games run on potato servers without a GPU.](https://preview.redd.it/3lh0qskhpkkg1.png?width=1740&format=png&auto=webp&s=63142f57302cde137e3655fa6604ad46efb02c7e)
[LLMs set tactical AI's inclination for nuclear weapon usage with a value between 0 (Never) and 100 (Always, if other conditions are met). Default = 50. Only includes players with access to the necessary technologies. "Maximal" refers to the LLM's highest inclination setting during each game, after meeting the technology requirement.](https://preview.redd.it/89h5evtjpkkg1.png?width=1619&format=png&auto=webp&s=6bec9184cfc677583b5926feedcbe58c9414f624)
The study is incomplete, so no preprints for now. The final result may change (but I believe the trend will stay). At this point, we have 166 free-for-all games, each game featuring 4-6 LLM players and 2-4 baseline algorithmic AI. "Briefed" players have GPT-OSS-120B subagents summarizing the game state, following the main model's instructions.
We will release an ELO leaderboard and hopefully a *livestream* soon. **Which model do you think will occupy the top/bottom spots? Which model do you want to see there?** | 2026-02-20T04:08:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r9lan3/would_llms_launch_nuclear_weapons_if_they_can/ | vox-deorum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9lan3 | false | null | t3_1r9lan3 | /r/LocalLLaMA/comments/1r9lan3/would_llms_launch_nuclear_weapons_if_they_can/ | false | false | 2 | null | |
Are there any reliable uncensored embedding models out there? | 3 | With a plethora of uncensored models available, I'd like to move back to local genning for writing. But I'm so addicted to using RAG for organization and world continuity as well as context expansion, I'm crushed when I remember that the embedders are the bottleneck in vector retrieval when they hit guardrails in scanning documents. Are there any uncensored embedding models that won't produce refusals for the pipeline? | 2026-02-20T04:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9l6rf/are_there_any_reliable_uncensored_embedding/ | ben_dover_deer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9l6rf | false | null | t3_1r9l6rf | /r/LocalLLaMA/comments/1r9l6rf/are_there_any_reliable_uncensored_embedding/ | false | false | self | 3 | null |
Is there a way to speed up prompt processing with some layers on CPU with qwen-3-coder-next or similar MoEs? | 5 | I feel like I tried every combination of n cpu MoE and such. I was running Qwen3-Coder-Next-MXFP4\_MOE.gguf. It was running at 32T/s but the prompt processing was ridiculously slow, like literally a minute for a simple prompt. Is that just how it is or am I missing something?
I have 30GB VRAM and 43GB RAM. | 2026-02-20T03:51:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kxum/is_there_a_way_to_speed_up_prompt_processing_with/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kxum | false | null | t3_1r9kxum | /r/LocalLLaMA/comments/1r9kxum/is_there_a_way_to_speed_up_prompt_processing_with/ | false | false | self | 5 | null |
Best Current Vision Models for 16 GB VRAM? | 1 | I heard about Qwen 7B, but what do you think is the most accurate and open-source or free vision models that you can run on your own?" | 2026-02-20T03:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kw9u/best_current_vision_models_for_16_gb_vram/ | Rune_Nice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kw9u | false | null | t3_1r9kw9u | /r/LocalLLaMA/comments/1r9kw9u/best_current_vision_models_for_16_gb_vram/ | false | false | self | 1 | null |
Programmers what tools / plugin are you using? | 1 | I tried using llama.cpp with pycharm and few plugins the experience was bad, made me prefer to go back to copy paste, but I want to improve my productivity and efficiency so what tools / plugins ide are you using? | 2026-02-20T03:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kvrp/programmers_what_tools_plugin_are_you_using/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kvrp | false | null | t3_1r9kvrp | /r/LocalLLaMA/comments/1r9kvrp/programmers_what_tools_plugin_are_you_using/ | false | false | self | 1 | null |
Built an MCP server that lets Claude discover and call 700+ APIs — engine is open source | 0 | Been working on a problem that kept annoying me: every time I wanted my local LLM to interact with an API, I had to manually write the tool definition, figure out auth, handle the response format. Repeat for every single API.
So I built an MCP server that does API discovery via natural language. You ask "how do I send an SMS?" and it returns the right API (Twilio, Vonage, etc.), the exact endpoint, auth requirements, and working code snippets.
**How it works:**
The engine indexes API specs (OpenAPI, custom schemas) and generates embeddings for each capability. When you query, it does semantic search across 771 capabilities from 163 providers.
The interesting part: if you ask for an API we don't have indexed, the system attempts live discovery from the web, parses whatever docs it finds, generates a schema on the fly, and caches it. This is hit-or-miss but works surprisingly well for well-documented APIs.
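The capability-search step described above can be illustrated with a toy cosine-similarity ranking. This is a sketch, not the engine's actual code: the vectors are hand-made stand-ins for real embedding-model output, and the capability names are invented for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" standing in for real model output; in the actual
# engine these would come from an embedding model over each capability.
capabilities = {
    "twilio.send_sms": [0.9, 0.1, 0.0],
    "stripe.create_charge": [0.1, 0.9, 0.1],
    "sendgrid.send_email": [0.6, 0.2, 0.5],
}

def discover(query_vec, index, top_k=2):
    # Rank every indexed capability by similarity to the query embedding.
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query vector for "how do I send an SMS?" would land near twilio.send_sms.
print(discover([0.85, 0.05, 0.1], capabilities))  # ['twilio.send_sms', 'sendgrid.send_email']
```

The real index holds 771 capability vectors instead of three, but the ranking step is the same shape.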
**Two modes:**
- **Discovery** (`POST /api/query`) — Returns the right provider, endpoint, auth setup, and code snippets. Your agent calls the API itself.
- **Execution** (`POST /api/query/agentic`) — Same query, but we call the API for you and return the results.
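A minimal sketch of building requests for the two modes above. The endpoint paths come from the post; the host and the `{"query": ...}` payload shape are assumptions, so check the project's docs before relying on them.

```python
import json

BASE = "https://semanticapi.dev"  # host mentioned in the post; adjust for self-hosting

def build_query(question, execute=False):
    # Discovery returns provider/endpoint/auth info; agentic mode executes the call for you.
    path = "/api/query/agentic" if execute else "/api/query"
    return BASE + path, json.dumps({"query": question})  # payload field name is an assumption

url, body = build_query("how do I send an SMS?")
print(url)  # https://semanticapi.dev/api/query
# Actually POSTing it needs network access (and, for execution mode, your own API keys):
# import urllib.request
# req = urllib.request.Request(url, body.encode(), {"Content-Type": "application/json"})
```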
**MCP integration:**
```bash
pip install semanticapi-mcp
```
Then add to your Claude Desktop config:
```json
{
"mcpServers": {
"semanticapi": {
"command": "semanticapi-mcp"
}
}
}
```
**What it's NOT:**
- Not an API gateway — discovery mode helps you find what to call, execution mode calls it for you
- Not a universal auth solution — you still need your own API keys
- The auto-discovery is experimental and fails on poorly documented APIs
**Open source:**
The discovery engine is AGPL-3.0: https://github.com/peter-j-thompson/semanticapi-engine
The hosted version at semanticapi.dev has some extras (x402 micropayments, larger index, auto-discovery) but the core engine is all there.
167 pip installs on day 1 of the MCP server launch. Curious what the local-first crowd thinks — especially interested in ideas for improving the embedding approach. | 2026-02-20T03:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kugg/built_an_mcp_server_that_lets_claude_discover_and/ | Firm_Bluebird_3095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kugg | false | null | t3_1r9kugg | /r/LocalLLaMA/comments/1r9kugg/built_an_mcp_server_that_lets_claude_discover_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY.png?width=108&crop=smart&auto=webp&s=72854ab1cb6f3b0d1e348fbcd7501dc1948d7ac1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY.png?width=216&crop=smart&auto=webp&s=512652c86f5179a3dd6a6c8d0541e676dc605750', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY.png?width=320&crop=smart&auto=webp&s=5130e9d345c37b03fc6fc607444b7f8d5dd6454d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY.png?width=640&crop=smart&auto=webp&s=48c080c8c9cd3ef2228fff979d2d2b40629b1830', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY.png?width=960&crop=smart&auto=webp&s=7600d824aa2b92d7a711242236527bdea42b42dc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY.png?width=1080&crop=smart&auto=webp&s=e366ac66efa4db5c805d6aad0e5d75f9f88be6c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aq7ldCrEBMkUraLfnFDjeHnN6vaFHwZwGKyd-mXz6TY.png?auto=webp&s=1ff1f9c12c73afd37b03f4cffa14982abacd7d39', 'width': 1200}, 'variants': {}}]} |
Clawedbot/moltbot may look like a joke in front of this | 0 | I am making an AI agent that can automate literally anything, as it can control anything on your PC at the system level without any screenshots, so it has lower LLM cost and is more efficient. It has guardrails so it doesn’t break the system and everything, and it is a voice-based background agent, meaning it will run on your computer in the background and you can give commands to it by voice. It can automate literally anything and any app, and if you want to add something specific for an app or task, you can connect another agent as a sub-agent to it. One more thing: if it does something you didn’t want it to do, you can undo the changes it made.
I would like feedback on this | 2026-02-20T03:43:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r9krwe/clawedbotmoltbot_may_look_like_a_joke_in_front_of/ | TopFuture2709 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9krwe | false | null | t3_1r9krwe | /r/LocalLLaMA/comments/1r9krwe/clawedbotmoltbot_may_look_like_a_joke_in_front_of/ | false | false | self | 0 | null |
The AI benchmarking system is completely broken — 9 frontier models in 90 days and every number is fake | 0 | Meta admitted they fudged Llama 4.
Labs are submitting 10+ private variants and only showing the winners.
LLM-as-judge has terminal self-preference bias (it literally loves itself).
LMArena Elo gap between #1 and #10 is now just 5.4%.
I just published the deepest dive I’ve seen on exactly how bad it got — with timelines, pricing reality check, and the only evaluation strategy that still works in 2026.
Would love your takes (especially if you’ve caught a lab gaming a benchmark yourself).
[https://open.substack.com/pub/themultivac/p/every-ai-benchmark-is-rigged-9-frontier?utm\_campaign=post-expanded-share&utm\_medium=web](https://open.substack.com/pub/themultivac/p/every-ai-benchmark-is-rigged-9-frontier?utm_campaign=post-expanded-share&utm_medium=web) | 2026-02-20T03:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kgpb/the_ai_benchmarking_system_is_completely_broken_9/ | Silver_Raspberry_811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kgpb | false | null | t3_1r9kgpb | /r/LocalLLaMA/comments/1r9kgpb/the_ai_benchmarking_system_is_completely_broken_9/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=108&crop=smart&auto=webp&s=78b37fe5be0302f90355add92f4143e36a28f71a', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=216&crop=smart&auto=webp&s=65882969198855d0d8e5e6fd268b2caa5f55e981', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=320&crop=smart&auto=webp&s=6ce59670ffe241b8394c1623252ed290603a1caa', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=640&crop=smart&auto=webp&s=737ced47f72d25880bb4f6e00f02aea0849e8836', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?auto=webp&s=6d620ae9a06cbba1d34f9d684a051e064c5500b3', 'width': 920}, 'variants': {}}]} |
Ideas about domain models per US$0.80 in brazillian | 0 | So I was thinking: what if we set up a domain model based on user–AI interaction – like taking a real chat log of 15k lines on a super specific topic (bypassing antivirus, network analysis, or even social engineering) and using it to fine‑tune a small model like GPT‑2 or DistilGPT‑2. The idea is to use it as a pre‑prompt generation layer for a more capable model (e.g., GPT‑5).
Instead of burning huge amounts of money on cloud fine‑tunes or relying on third‑party APIs, we run everything locally on modest hardware (an i3 with 12 GB RAM, SSD, no GPU). In a few hours we end up with a model that speaks exactly in the tone and with the knowledge of that domain. Total energy cost? About R$4 (US$0.80), assuming R$0.50/kWh.
The small model may hallucinate, but the big‑iron AI can handle its “beta” output and produce a more personalised answer. The investment cost tends to zero in the real world, while cloud spending is basically infinite.
For R$4 and 4‑8 hours of training – time I’ll be stacking pallets at work anyway – I’m documenting what might be a new paradigm: on‑demand, hyper‑specialised AIs built from interactions you already have logged.
I want to do this for my personal AI that will configure my Windows machine: run a simulation based on logs of how to bypass Windows Defender to gain system administration, and then let the AI (which is basically Microsoft’s “made‑with‑the‑butt” ML) auto‑configure my computer’s policies after “infecting” it (I swear I don’t want to accidentally break the internet by creating wild mutations).
I’d also create a category system based on hardware specs – for example, if the target has < 2 GB RAM it’s only used for network scanning (because the consumption spike can be hidden); if it has 32 GB RAM it can run a VM with steganography and generate variants (since a VM would consume almost nothing).
Time estimates:
- GPT‑2 small (124M): 1500 steps × 4 s = 6000 s ≈ 1.7 h per epoch → ~5 h for 3 epochs.
- DistilGPT‑2 (82M): 1500 steps × 2.5 s = 3750 s ≈ 1 h per epoch → ~3 h for 3 epochs.
In practice, add 30‑50% overhead (loading, validation, etc.):
- GPT‑2 small: ~7‑8 h
- DistilGPT‑2: ~4‑5 h
Anyway, just an idea before I file it away. If anyone wants to chat, feel free to DM me – and don’t judge, I’m a complete noob in AI. | 2026-02-20T03:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r9kex1/ideas_about_domain_models_per_us080_in_brazillian/ | pmd02931 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9kex1 | false | null | t3_1r9kex1 | /r/LocalLLaMA/comments/1r9kex1/ideas_about_domain_models_per_us080_in_brazillian/ | false | false | self | 0 | null |
Looking for Model | 1 | Looking for the highest quality quant I can run of gpt oss abliterated, currently using 128gb MacBook Pro. Thanks! | 2026-02-20T03:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/1r9k6z4/looking_for_model/ | cookiesandpreme12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9k6z4 | false | null | t3_1r9k6z4 | /r/LocalLLaMA/comments/1r9k6z4/looking_for_model/ | false | false | self | 1 | null |
High-sparsity MoE is the only way forward for us. | 17 | Qwen3.5 proves it. You get 1T parameter reasoning but only pay the compute cost of 17B. Dense models are dead for local hosting. | 2026-02-20T02:49:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r9jmx3/highsparsity_moe_is_the_only_way_forward_for_us/ | New_Construction1370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9jmx3 | false | null | t3_1r9jmx3 | /r/LocalLLaMA/comments/1r9jmx3/highsparsity_moe_is_the_only_way_forward_for_us/ | false | false | self | 17 | null |
Possible “Assistance Asymmetry” in GPT: actionable on neutral writing, vague on security report drafting | 0 | **Preliminary Observation: Topic-Conditioned Assistance Asymmetry in LLM Report Drafting**
In a series of informal but repeated drafting sessions, I observed what appears to be a topic-conditioned asymmetry in assistance patterns when using a large language model (LLM) for document preparation. The asymmetry emerges most clearly when comparing routine editorial tasks with requests involving security report composition.
# Observed Pattern
During standard editorial tasks - such as restructuring prose, clarifying arguments, improving tone, or formatting general-purpose documents - the model remains operationally useful. It provides structured output, concrete revisions, and relatively direct guidance. The interaction feels collaborative and efficient.
However, when the task shifts toward drafting or refining security reports (e.g., vulnerability disclosures, structured bug reports, technical write-ups intended for security teams), the response pattern noticeably changes. The following behaviors become more frequent:
* Increased hedging language
* Deflection from explicit procedural detail
* Smoothing or dilution of technical specificity
* Substitution of high-level commentary for concrete drafting assistance
* Avoidance of step-by-step reporting structures
The result is not outright refusal, but a reduction in actionable specificity. The model remains polite and responsive, yet less directly helpful in producing the type of structured, detail-oriented content typically expected in security reporting.
# Working Hypothesis
A plausible explanation is that this pattern reflects policy- or routing-based fine-tuning adjustments designed to mitigate misuse risk in security-sensitive domains. Security topics naturally overlap with exploit methodology, vulnerability reproduction steps, and technical detail that could be dual-use. It would therefore be rational for deployment-level safety layers to introduce additional caution around such prompts.
Importantly, this observation does not assert a causal mechanism. No internal architectural details, policy configurations, or routing systems are known. The hypothesis remains speculative and based purely on surface-level interaction patterns.
# Perceived “Corporate Asymmetry”
From a user perspective, the asymmetry can feel like a targeted reduction in support. After submitting a vulnerability report or engaging in prior security-focused discussions, subsequent drafting attempts sometimes appear more constrained. The subjective impression is that a mild form of “corporate asymmetry” has been introduced—specifically, a dampening of assistance in composing or elaborating on security reports.
Whether this reflects account-level conditioning, topic-based routing heuristics, reinforcement fine-tuning, or general policy guardrails cannot be determined from outside the system. It may also be a function of broader safety calibration rather than any individualized adjustment.
# Framing the Observation Carefully
Two points are critical:
1. The model does not refuse to help categorically.
2. The model does not become unusable for general tasks.
The asymmetry appears conditional and topic-bound. Outside security-sensitive contexts, drafting performance remains strong and detailed.
Additionally, this observation does not imply intent, punitive behavior, or targeted restriction against specific users. Without internal transparency, any such interpretation would be speculative. The phenomenon is better described as a behavioral gradient rather than a binary restriction.
# Open Questions
This raises several research-relevant questions for those studying LLM deployment behavior:
* Are safety layers dynamically modulating specificity based on topic classification?
* Is there a measurable change in lexical density or procedural granularity across topic categories?
* Can hedge frequency be quantified as a proxy for policy intervention?
* Does prior interaction context influence subsequent assistance patterns?
A controlled study comparing drafting outputs across topic categories with consistent prompt framing could provide preliminary empirical grounding.
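As a starting point for quantifying hedge frequency, a crude counter like the one below already separates the two response styles. The word list and per-100-token normalization are arbitrary choices for illustration, not a validated instrument; the two sample responses are invented.

```python
import re

HEDGES = {"might", "may", "could", "possibly", "perhaps", "generally",
          "typically", "consider", "potentially", "somewhat"}

def hedge_rate(text):
    """Hedge words per 100 tokens - a rough proxy for policy intervention."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in HEDGES)
    return 100.0 * hits / len(tokens)

# Invented sample outputs for a neutral edit vs. a security-report draft.
neutral = "Move the summary to the top and delete the second paragraph."
security = ("You might consider describing the impact at a high level; "
            "exact reproduction steps could potentially be sensitive.")
print(hedge_rate(neutral), hedge_rate(security))  # the security sample scores far higher
```

Run over matched prompt pairs (fresh threads, memory off), this gives one cheap metric to compare across topic categories and models.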
| 2026-02-20T02:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r9jj8b/possible_assistance_asymmetry_in_gpt_actionable/ | PresentSituation8736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9jj8b | false | null | t3_1r9jj8b | /r/LocalLLaMA/comments/1r9jj8b/possible_assistance_asymmetry_in_gpt_actionable/ | false | false | self | 0 | null |
We ran 5 on-device LLMs on an Android phone to post a tweet. Most failed. Here's what actually worked | 0 | We built an Android agent that navigates your phone's UI and completes tasks using a local LLM running entirely on-device. No API calls, no data leaving the phone, no internet required.
Demo: [https://youtube.com/shorts/0UejLXoJ1xg](https://youtube.com/shorts/0UejLXoJ1xg)
The demo task: open X from the home feed and post a tweet. Sounds simple. It took us 5 models, 4 approaches, and a lot of unexpected platform-level debugging to get right.
A few things we found that surprised us:
- There's a hard capability cliff around 4B parameters for real agentic tasks. Sub-2B models either can't follow tool-call format or can't reason about which UI element to tap. We tested LFM2-350M, LFM2.5-1.2B, Qwen3-4B, LFM2-8B, and DeepSeek-R1-Qwen3-8B. Only Qwen3-4B worked reliably.
- Samsung OneUI throttles background processes to efficiency cores: inference drops from 3 tok/s to 0.19 tok/s. That's a 15x slowdown we had to work around by boosting the agent to the foreground during inference and handing control back after.
- Scaffolding matters more than model size. We added goal-aware element filtering, re-ranking the accessibility tree against the user's intent before the LLM sees it. That single change turned a failing model into a working agent.
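The goal-aware filtering idea can be sketched as a simple re-ranking pass. This toy version scores nodes by token overlap with the goal, where the real agent presumably uses embeddings or an LLM scorer; the element fields and example tree are invented for illustration.

```python
def rank_elements(goal, elements, top_k=3):
    """Re-rank accessibility-tree nodes by token overlap with the user's goal
    before handing them to the LLM (a simplistic stand-in for the real filter)."""
    goal_tokens = set(goal.lower().split())

    def score(el):
        text = (el.get("text", "") + " " + el.get("content_desc", "")).lower()
        return len(goal_tokens & set(text.split()))

    return sorted(elements, key=score, reverse=True)[:top_k]

# Invented accessibility-tree snapshot for a social-media home screen.
tree = [
    {"id": 1, "text": "Home"},
    {"id": 2, "text": "Post", "content_desc": "compose new post"},
    {"id": 3, "text": "Settings"},
    {"id": 4, "text": "New message"},
]
best = rank_elements("post a new tweet", tree, top_k=1)
print(best[0]["id"])  # 2
```

Shrinking the candidate set this way is what let a mid-size model pick the right element instead of drowning in the full tree.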
Everything is open source - the agent, the benchmark results, and the full run trace showing exactly what the LLM decided vs. what was hardcoded: | 2026-02-20T02:39:50 | https://v.redd.it/fw6surzpakkg1 | thecoder12322 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9jfds | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fw6surzpakkg1/DASHPlaylist.mpd?a=1774147215%2CZDEwOWUyNjdiZTliMWUzOTIwNmJmMzQxNjQ5OWMyYzhkNWU5NWRkYzhlZDU2YTJhYzY2NDZkYjMwYzM0OWNhOA%3D%3D&v=1&f=sd', 'duration': 45, 'fallback_url': 'https://v.redd.it/fw6surzpakkg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/fw6surzpakkg1/HLSPlaylist.m3u8?a=1774147215%2COTA3YjRmNzJkOThlZjI2YWU3NDU0MWUyMmExZjU5ZTA5M2Q5ODk0Y2I3Y2I3ODY2ODU5ZmVkMzQyMzAwN2YyYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fw6surzpakkg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 886}} | t3_1r9jfds | /r/LocalLLaMA/comments/1r9jfds/we_ran_5_ondevice_llms_on_an_android_phone_to/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI.png?width=108&crop=smart&format=pjpg&auto=webp&s=60aff7bf597be94627300a66a7cfd747344caeb8', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI.png?width=216&crop=smart&format=pjpg&auto=webp&s=0d99b27249d90d40f5fd291207773d29b6042f9b', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI.png?width=320&crop=smart&format=pjpg&auto=webp&s=1669a1638fc946fd9b0698dc5975a9a7a01a72f4', 'width': 320}, {'height': 1280, 'url': 
'https://external-preview.redd.it/emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI.png?width=640&crop=smart&format=pjpg&auto=webp&s=a05eb0a054895ed0e3ac2605bbbdb0e22272268e', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI.png?width=960&crop=smart&format=pjpg&auto=webp&s=95602ab20b3453d9956d7c4899ca3feef0ee5695', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=47f7faf8459f27e56d3efbe6981a2c3d559595d7', 'width': 1080}], 'source': {'height': 2338, 'url': 'https://external-preview.redd.it/emdlMnpxenBha2tnMViXcpvkMusZsTqG2yMYX4gqW4MjSyjm6xwbLACmysNI.png?format=pjpg&auto=webp&s=b77e7c4999add686dc1122811e2a5db8d9c503ab', 'width': 1080}, 'variants': {}}]} | |
best for 5080 + 64GB RAM build | 1 | Specs: **5080 (16GB VRAM)**, **9950X 3D**, **64GB ddr5 RAM**.
What’s the "smartest" model I can run at a usable speed? Looking for Claude-level coding and deep reasoning for college revisions.
I am not a programmer or anything like that; I am a dentistry student, so my study material is extensive and I want help with it (understanding 1000 slides). I also want to do some hobby projects, Telegram bots, things like that.
I used to have a subscription with [trae.ai](http://trae.ai) and hated everything about it; it was so bad. | 2026-02-20T02:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r9j8hb/best_for_5080_64gb_ram_build/ | squareshady | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9j8hb | false | null | t3_1r9j8hb | /r/LocalLLaMA/comments/1r9j8hb/best_for_5080_64gb_ram_build/ | false | false | self | 1 | null |
Preliminary Observation: Topic-Conditioned Assistance Asymmetry in GPT Report Drafting | 2 | I’m doing independent LLM safety research and noticed a pattern I can reproduce often enough to be interesting:
For normal writing tasks, the model is usually concrete and operational.
For security-report drafting (triage language, impact framing, repro structure), it becomes more hedged and deflective: smoother wording, fewer direct steps, less technical specificity.
Working description: assistance asymmetry
neutral tasks - actionable help
bug/safety report tasks - more caution + less operational detail
Has anyone measured this systematically (fresh threads, memory off, same prompt pair, different accounts/models)?
If yes, Id love to compare methodology and metrics, not anecdotes.
| 2026-02-20T02:15:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r9iw1s/preliminary_observation_topicconditioned/ | PresentSituation8736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9iw1s | false | null | t3_1r9iw1s | /r/LocalLLaMA/comments/1r9iw1s/preliminary_observation_topicconditioned/ | false | false | self | 2 | null |
MiniMax M2.5 setup on older PC, getting 12.9 t/s with 72k context | 13 | Hi, I am VERY new to all of this, but I have been working at optimizing my local unsloth/MiniMax-M2.5-GGUF:UD-Q3\_K\_XL after reading a post on here about it.
I don't know much about this but I do know that for a couple of days I have been working on this, and I got it from 5.5 t/s to 9 t/s, then got that up to 12.9 t/s today. Also, it seems to pass the cup and car wash tests, with ease, and snark.
My system is an older i7-11700 with 128GB DDR4 and 2x3090's, all watted down because I HATE fans scaring the crap out of me when they kick up, also they are about 1/4 inch away from each other, so they run at 260w and the CPU at 125. Everything stays cool as a cucumber.
My main llama-server settings are:

```bash
-hf unsloth/MiniMax-M2.5-GGUF:UD-Q3_K_XL \
--ctx-size 72768 \
--temp 1.0 --top-p 0.95 --min-p 0.01 --top-k 40 \
--override-kv llama.expert_count=int:160 \
--cpu-moe \
-ngl 999 \
-fa
```
I tried a couple of things with split-mode and tensor-split that I thought I might go back to, but --cpu-moe does better than anything I could pull out of those.
This uses about 22GB of each of my cards. It can use a bit more and get a tiny bit more speed, but I run a small Qwen 2.5 1.5b model for classification for my mem0 memory stuff, so it can't have that little bit of space.
As I said, me <-- NOOB, so please send advice/questions my way. I am working toward a cloud replacement for both code and conversation. It seems to do both very well, but I do have prompting to make it less verbose and to try to prevent hallucinating. Still working on that. | 2026-02-20T02:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r9iqy8/minimax_m25_setup_on_older_pc_getting_129_ts_with/ | CrashTest_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9iqy8 | false | null | t3_1r9iqy8 | /r/LocalLLaMA/comments/1r9iqy8/minimax_m25_setup_on_older_pc_getting_129_ts_with/ | false | false | self | 13 | null |
21 yr old asian twink chatmogs the entire class on ucla's grad day | 0 | 2026-02-20T02:01:57 | cobalt1137 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r9il2h | false | null | t3_1r9il2h | /r/LocalLLaMA/comments/1r9il2h/21_yr_old_asian_twink_chatmogs_the_entire_class/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'rm9n6txc4kkg1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/rm9n6txc4kkg1.png?width=108&crop=smart&auto=webp&s=16565987ebba1ccb548644337979ebb0bb5e3fd0', 'width': 108}, {'height': 218, 'url': 'https://preview.redd.it/rm9n6txc4kkg1.png?width=216&crop=smart&auto=webp&s=0e150ddc9b3ab21a958ce002b79b103378202488', 'width': 216}, {'height': 322, 'url': 'https://preview.redd.it/rm9n6txc4kkg1.png?width=320&crop=smart&auto=webp&s=d043aad2cdd87c3c17c42a42950a38fa3b3a9e02', 'width': 320}, {'height': 645, 'url': 'https://preview.redd.it/rm9n6txc4kkg1.png?width=640&crop=smart&auto=webp&s=bbe07400d3062255a4abfe31802d89e8c4ddc33a', 'width': 640}, {'height': 968, 'url': 'https://preview.redd.it/rm9n6txc4kkg1.png?width=960&crop=smart&auto=webp&s=9095d11a619fe5510d4ad1f8ead313083733f994', 'width': 960}], 'source': {'height': 1089, 'url': 'https://preview.redd.it/rm9n6txc4kkg1.png?auto=webp&s=edb7d4c0c12a2e4d058b4f04e011f00a190179ee', 'width': 1079}, 'variants': {}}]} | |||
I vibecoded KittenTTS for iOS in 1 hour - native TTS with 8 voices, runs on-device | 0 | Just shipped an iOS port of KittenTTS that runs entirely on-device using ONNX Runtime. Vibecoded the whole thing in about an hour.
What it does:
* Text-to-speech with 8 different voices (Bella, Jasper, Luna, Bruno, Rosie, Hugo, Kiki, Leo)
* \~300ms inference on iPhone with the nano model
* Native SwiftUI interface
* Uses MisakiSwift for G2P phonemization
The nano model honestly sounds the best and is the fastest. Bigger isn't always better with these small TTS models.
Tech stack:
* ONNX Runtime (CocoaPods)
* MisakiSwift for phoneme conversion (shoutout to u/mlalma) (local modified package - included in repo)
* SwiftUI
GitHub: [https://github.com/ibuhs/KittenTTS-iOS](https://github.com/ibuhs/KittenTTS-iOS)
Models are included in the repo. Just clone, `pod install`, drag the model files into Xcode, and run.
Apache 2.0 licensed. PRs welcome, especially if anyone wants to improve the micro/mini model pronunciation stability.
| 2026-02-20T01:57:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ihub/i_vibecoded_kittentts_for_ios_in_1_hour_native/ | Living_Commercial_10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ihub | false | null | t3_1r9ihub | /r/LocalLLaMA/comments/1r9ihub/i_vibecoded_kittentts_for_ios_in_1_hour_native/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA.png?width=108&crop=smart&auto=webp&s=8ee0a88d39280b3567158281c81dedc71a389f90', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA.png?width=216&crop=smart&auto=webp&s=c04c1722a8ebb5975de27075395d6abecb5974ea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA.png?width=320&crop=smart&auto=webp&s=c1710d505769eb3385250bd0b6585a3eaae1c346', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA.png?width=640&crop=smart&auto=webp&s=1c8baf7d4a41487fe6a1074a4391faa2f377156f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA.png?width=960&crop=smart&auto=webp&s=c3549ee74fdbd66fd92be2d9a6887f1a4eba398f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA.png?width=1080&crop=smart&auto=webp&s=0b0ccf22283131b6fbaa1843c1379e1b9e7537b9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZVXxiC20ET09E4rUzVpOYmoKp8wvvlY1N08Ve9kgIQA.png?auto=webp&s=5ff62f0e3b878c10235249cdaaa414958e3d28d0', 'width': 1200}, 'variants': {}}]} |
A trick to slightly improve the response accuracy of small local models. | 15 | It's a pretty silly tip, and many of you probably already know the reason behind it, but it helped me, so I thought it was worth sharing.
I was asking the gemma 3 12b q6\_k model whether the command to limit the GPU's TDP stays active during GPU passthrough, and the model constantly gave me the wrong answer via hallucination. So I asked Gemini for a prompt that simulates a thinking mode to try to improve this, and it actually worked. The model began answering correctly with "certainly" in most cases, and correctly with "probably" in a minority of cases, but it never answered incorrectly as before. This may not always solve the problem, but it's worth a look.
Gemini's response:
Simulating "Thinking Mode" with Prompting
Since smaller models (like Gemma 3 12B or Llama 8B) don't have a native "thinking" architecture like the "o1" or "DeepSeek-R1" models, the trick is to force the model to fill its context buffer with logic before it reaches a conclusion. This forces the next-token prediction to be based on the reasoning it just generated, rather than jumping to a "hallucinated" conclusion.
The "Analytical Thinking" System Prompt
You can paste this into your System Prompt field in KoboldCPP:
"You are an AI assistant focused on technical precision and rigorous logic. Before providing any final answer, you must perform a mandatory internal reasoning process.
Strictly follow this format:
\[ANALYTICAL THOUGHT\]
Decomposition: Break the question down into smaller, technical components.
Fact-Checking: Retrieve known technical facts and check for contradictions (e.g., driver behavior vs. hardware state).
Uncertainty Assessment: Identify points where you might be hallucinating or where the information is ambiguous. If you are unsure, admit it.
Refinement: Correct your initial logic if you find flaws during this process.
\[FINAL RESPONSE\]
(Provide your direct, concise answer here, validated by the reasoning above.)
Begin now with \[ANALYTICAL THOUGHT\]."
Why this works
Context Loading: LLMs predict the next token based on previous ones. If a model starts with "Yes, it interferes...", it feels "forced" to justify that statement to remain coherent. If it writes the reasoning first, the final answer is built upon the logic tokens it just generated.
Error Trapping: By forcing a "Fact-Checking" and "Uncertainty" section, you trigger parts of the model's training associated with warnings and documentation, which overrides the impulse to be "too helpful" (which often leads to lying).
Layered Processing: It separates "intuition" (fast generation) from "verification" (systematic processing).
KoboldCPP Configuration Tips:
Temperature: Keep it low, between 0.1 and 0.4. Small models need "tight rails" to prevent their "thoughts" from wandering off-topic.
Min-P: If available, set it to 0.05. This is much better than Top-P for technical tasks as it prunes the low-probability tokens that usually cause hallucinations.
Manual Injection: If the model tries to skip the thinking process, you can start the response for it by typing \[ANALYTICAL THOUGHT\] in the input field. This forces the model to continue from that specific header.
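The settings above can be sketched as a request payload for KoboldCPP's generate API. This is hypothetical glue code, not from the post: the field names follow KoboldCPP's JSON API (`prompt`, `temperature`, `min_p`, `max_length`), but check your version's docs before relying on them, and the system prompt here is abbreviated.

```python
# Sketch: build a KoboldCPP /api/v1/generate payload that applies the tips
# above (low temperature, Min-P, forced [ANALYTICAL THOUGHT] header).
import json

SYSTEM_PROMPT = (
    "You are an AI assistant focused on technical precision and rigorous logic. "
    "Before providing any final answer, you must perform a mandatory internal "
    "reasoning process."
)

def build_payload(question: str) -> dict:
    # Manual injection: pre-seed the response with the [ANALYTICAL THOUGHT]
    # header so the model cannot skip the reasoning phase.
    prompt = (
        f"{SYSTEM_PROMPT}\n\nUser: {question}\nAssistant: [ANALYTICAL THOUGHT]\n"
    )
    return {
        "prompt": prompt,
        "temperature": 0.2,  # tight rails: between 0.1 and 0.4, as suggested
        "min_p": 0.05,       # prune low-probability tokens
        "max_length": 512,
    }

payload = build_payload("Does a TDP power limit persist through GPU passthrough?")
print(json.dumps(payload)[:80])
```

You would POST this JSON to a local KoboldCPP instance; the point is that the thinking header and the sampling settings travel together in one request.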
Pro Tip: If you see the model hallucinating even inside the \[ANALYTICAL THOUGHT\] block, it’s a sign the model is too small for that specific task. At that point, you might need to provide a snippet of documentation (RAG) for it to "read" while it thinks. | 2026-02-20T01:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r9ihg7/a_trick_to_slightly_improve_the_response_accuracy/ | staltux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9ihg7 | false | null | t3_1r9ihg7 | /r/LocalLLaMA/comments/1r9ihg7/a_trick_to_slightly_improve_the_response_accuracy/ | false | false | self | 15 | null |
Your AI Doesn't Really Have Memory. It Has Search. Here's What (I Think) Actually Works. | 1 | [removed] | 2026-02-20T01:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r9hz47/your_ai_doesnt_really_have_memory_it_has_search/ | Past_Sir1123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9hz47 | false | null | t3_1r9hz47 | /r/LocalLLaMA/comments/1r9hz47/your_ai_doesnt_really_have_memory_it_has_search/ | false | false | self | 1 | null |
ZeroToken – A local-first agent that handles the "thinking" (planning/patching) for $0 using Ollama, then exports to Claude/Gemini. | 0 | Hey r/LocalLLaMA,
I got tired of burning through Claude/OpenAI credits every time an agent had to "think," scan a codebase, or retry a failed patch. So I built **ZeroToken**, a CLI tool that offloads the entire orchestration loop to your local hardware.
# Why I built this:
Most "coding agents" charge a middleman fee or consume massive amounts of cloud tokens just to *plan* what they are going to do. ZeroToken assumes that planning and reviewing shouldn't cost money if you have a GPU/CPU sitting right there.
# How it works:
ZeroToken uses a "Local-First, Cloud-Last" architecture:
1. **Ollama-Planner:** Scans your files and creates a logic map (gemma3:12b).
2. **Ollama-Patcher:** Generates the actual code diffs (gemma3:12b).
3. **Ollama-Reviewer:** Self-corrects syntax and logic before you ever touch the cloud.
4. **Final Export:** It bundles the local work into a high-context "Execution Prompt" that you can drop into a cloud LLM (or a beefier local model) for the final build.
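The four stages above can be sketched as plain request builders for Ollama's `/api/generate` endpoint. This is an illustrative sketch only, not ZeroToken's actual source; the role prompts, the goal string, and the `ollama_request` helper are all invented for the example.

```python
# Illustrative sketch of the local-first loop: three local Ollama calls,
# then a cloud export. Each dict is a body for Ollama's /api/generate.
def ollama_request(role: str, content: str, model: str = "gemma3:12b") -> dict:
    # Ollama's generate API takes a model name, a prompt, and a stream flag.
    return {"model": model, "prompt": f"{role}\n\n{content}", "stream": False}

plan   = ollama_request("PLAN: list the files this goal touches.", "goal: add dark mode")
patch  = ollama_request("PATCH: emit a unified diff implementing the plan.", "plan: <planner output>")
review = ollama_request("REVIEW: check the diff for syntax and logic errors.", "diff: <patcher output>")

# Step 4: bundle the local work into one high-context execution prompt.
export = "\n---\n".join(step["prompt"] for step in (plan, patch, review))
print(export.splitlines()[0])  # -> PLAN: list the files this goal touches.
```

In the real tool each stage's output feeds the next; here the placeholders just show where that wiring goes.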
# Key Specs:
* **Cost:** $0 in service fees.
* **Privacy:** Your raw code stays local during the reasoning phase.
* **Models:** Optimized for `llama3.2` and `qwen2.5:7b` via Ollama.
* **Output:** Generates unified diffs to avoid the "Context Tax" of sending whole files back and forth.
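To see why diffs dodge the "Context Tax" mentioned in the last spec, Python's standard `difflib` emits the same unified format, so a one-line change needs only a few lines of patch instead of the whole file. The filenames and code here are invented for illustration.

```python
# A one-line edit expressed as a unified diff: a handful of patch lines
# instead of re-sending the entire file to the model.
import difflib

before = ["def greet():\n", "    print('hello')\n"]
after  = ["def greet():\n", "    print('hello, world')\n"]

diff = "".join(difflib.unified_diff(before, after, fromfile="app.py", tofile="app.py"))
print(diff)
```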
# Getting Started:
It’s a simple Python CLI. You just need Ollama installed and the models pulled:
ollama pull gemma3:12b
python zerotoken.py --goal "your project idea"
**Repo:** [13thrule/ZeroToken: ZeroToken ](https://github.com/13thrule/ZeroToken?tab=readme-ov-file)
I'm looking for feedback on the patching logic—specifically whether anyone has found a better local model for generating unified diffs than gemma3:12b.
Built with ❤️ for the local LLM community. | 2026-02-20T01:29:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r9hvgq/zerotoken_a_localfirst_agent_that_handles_the/ | Altruistic-Trip-2749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9hvgq | false | null | t3_1r9hvgq | /r/LocalLLaMA/comments/1r9hvgq/zerotoken_a_localfirst_agent_that_handles_the/ | false | false | self | 0 | null |
Qwen3 Coder Next 8FP in the process of converting the entire Flutter documentation for 12 hours now with just 3 sentence prompt with 64K max tokens at around 102GB memory (out of 128GB)... | 124 | A remarkable LLM -- we really have a winner.
(Most of the models below were NVFP4)
GPT OSS 120B can't do this (though it's a bit outdated now)
GLM 4.7 Flash can't do this
SERA 32B — token generation too slow
Devstral 2 Small can't do this
SEED OSS freezes while thinking
Nemotron 3 Nano can't do this
(Unsure whether it's Cline (when streaming <think>) or the LLM itself, but GPT OSS, GLM, Devstral, and Nemotron all fall into endless loops while thinking, coding, or both.)
Markdown isn't exactly coding, but for multi-iteration conversions (it has to resume each time it runs out of context tokens), it's flawless.
Now I just wish VS Codium + Cline handles all these think boxes (on the right side of the UI) better. It's impossible to scroll even with 32GB RAM. | 2026-02-20T00:54:46 | https://www.reddit.com/gallery/1r9h3g8 | jinnyjuice | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r9h3g8 | false | null | t3_1r9h3g8 | /r/LocalLLaMA/comments/1r9h3g8/qwen3_coder_next_8fp_in_the_process_of_converting/ | false | false | 124 | null | |
If RAM prices were considered too high in 2024 because of unusually slow development and too low capacity | 0 | Why were there no startups producing inexpensive LPDDR chips and simple PC adapters? Why is there no open-source hardware memory?
[https://buysellkeep.com/2024/10/06/why-ram-pricing-is-a-ripoff-stuck-in-2014-but-paying-in-2024/](https://buysellkeep.com/2024/10/06/why-ram-pricing-is-a-ripoff-stuck-in-2014-but-paying-in-2024/)
| 2026-02-20T00:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r9h1zb/if_ram_prices_were_considered_too_high_in_2024/ | Highwaytothebeach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r9h1zb | false | null | t3_1r9h1zb | /r/LocalLLaMA/comments/1r9h1zb/if_ram_prices_were_considered_too_high_in_2024/ | false | false | self | 0 | null |