title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Gemini Pro fails basic logic test: Why can't LLMs follow simple "If/Then" rules? | 1 | I’ve been testing Gemini Pro with music theory, and it’s hitting a massive logic wall.
I gave it a set of strict "hard rules"—basically, if Voice A moves this way, Voice B cannot move the same way.
Even with the Pro model, it repeatedly outputs the exact error I just told it to fix.
It seems like the "attention" focuses so much on the individual parts that it forgets the relationship between them.
This isn't just a music issue; it’s a failure in multi-stream logic.
Has anyone else found that even "Pro" models ignore explicit constraints when the data gets slightly technical? | 2026-01-28T13:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qpbiqw/gemini_pro_fails_basic_logic_test_why_cant_llms/ | Putrid_Draft378 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qpbiqw | false | null | t3_1qpbiqw | /r/LocalLLaMA/comments/1qpbiqw/gemini_pro_fails_basic_logic_test_why_cant_llms/ | false | false | self | 1 | null |
Orchestrating multiple coding agents - what's your setup? | 4 | I'm working on parallelising my AI dev workflow. I'm currently running multiple Claude Code instances, but the coordination is manual and messy.
I'm Interested in how others approach this:
\- Containerization/isolation for each agent?
\- How do you handle shared context vs isolated workspaces?
\- Any orchestration layer you've built or use?
My dream is to treat AI agents like processes - spawn them, give them tasks, monitor status, and handle their "interrupts" (questions/permissions).
Anyone building in this space, or have a setup that works? | 2026-01-28T13:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qpbcez/orchestrating_multiple_coding_agents_whats_your/ | seetherealitynow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qpbcez | false | null | t3_1qpbcez | /r/LocalLLaMA/comments/1qpbcez/orchestrating_multiple_coding_agents_whats_your/ | false | false | self | 4 | null |
I built 15 Python MCP tools to supercharge AI coding assistants (now open source!) . Would love a feedback and a star | 1 | [removed] | 2026-01-28T13:27:10 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qpbbfl | false | null | t3_1qpbbfl | /r/LocalLLaMA/comments/1qpbbfl/i_built_15_python_mcp_tools_to_supercharge_ai/ | false | false | default | 1 | null | ||
GitHub - Abhisheksinha1506/Client-mcpserver | 1 | [removed] | 2026-01-28T13:26:02 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qpbafd | false | null | t3_1qpbafd | /r/LocalLLaMA/comments/1qpbafd/github_abhisheksinha1506clientmcpserver/ | false | false | default | 1 | null | ||
which local llm is best for coding? | 3 | whats the best coding llm under 8 billion parameters and i my system specs are an i5 12th gen and an rtx 4050 | 2026-01-28T13:16:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qpb24j/which_local_llm_is_best_for_coding/ | Much-Friendship2029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qpb24j | false | null | t3_1qpb24j | /r/LocalLLaMA/comments/1qpb24j/which_local_llm_is_best_for_coding/ | false | false | self | 3 | null |
Z-Image Api Docs | 2 | Hey everyone, this is my first post on reddit so if i posted in the wrong thread im sorry for that! Ich have a question regarding the z-image api. I couldnt find any docs for it and cant really implement it without knowing capabilitys and stuff. So i anyone has a tip where i could find it, it would be really appreciated! :) Here is the link to the official docs but thats empty for some reason... [https://z-image.ai/docs](https://z-image.ai/docs) | 2026-01-28T12:50:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qpahl2/zimage_api_docs/ | wickypopo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qpahl2 | false | null | t3_1qpahl2 | /r/LocalLLaMA/comments/1qpahl2/zimage_api_docs/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'SV2IjYwzFP5WHje0Ky6AMyKDHeoqLG_lWBiANuAK0SQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/SV2IjYwzFP5WHje0Ky6AMyKDHeoqLG_lWBiANuAK0SQ.png?width=108&crop=smart&auto=webp&s=7e285cc9df701718ead2c8fab0effd32055de8d3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/SV2IjYwzFP5WHje0Ky6AMyKDHeoqLG_lWBiANuAK0SQ.png?width=216&crop=smart&auto=webp&s=5fb00f9c2c46ef4dbfbd9e6d3eba0180c8c4135f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/SV2IjYwzFP5WHje0Ky6AMyKDHeoqLG_lWBiANuAK0SQ.png?width=320&crop=smart&auto=webp&s=af47ca839f65d9ca2ba2c8de2b16921452d21759', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/SV2IjYwzFP5WHje0Ky6AMyKDHeoqLG_lWBiANuAK0SQ.png?width=640&crop=smart&auto=webp&s=a3b82bb11cfa9b1630f10cbafdc775beb66a421a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/SV2IjYwzFP5WHje0Ky6AMyKDHeoqLG_lWBiANuAK0SQ.png?width=960&crop=smart&auto=webp&s=bc279a4c919b22ce731d0476610f4622f2505d72', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/SV2IjYwzFP5WHje0Ky6AMyKDHeoqLG_lWBiANuAK0SQ.png?auto=webp&s=39df84f6cb3f92ec804487039caf843226612093', 'width': 1024}, 'variants': {}}]} |
Kimi K2.5 is No.1 in the Design Arena, beating Gemini 3 Pro and Claude Opus 4.5 | 4 | 2026-01-28T12:45:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qpadvu/kimi_k25_is_no1_in_the_design_arena_beating/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qpadvu | false | null | t3_1qpadvu | /r/LocalLLaMA/comments/1qpadvu/kimi_k25_is_no1_in_the_design_arena_beating/ | false | false | 4 | null | ||
the ordinary llama is pretty great. | 0 | the ordinary llama is pretty great. | 2026-01-28T12:31:48 | https://www.reddit.com/r/LocalLLaMA/comments/1qpa3gq/the_ordinary_llama_is_pretty_great/ | Sensitive_Housing_62 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qpa3gq | false | null | t3_1qpa3gq | /r/LocalLLaMA/comments/1qpa3gq/the_ordinary_llama_is_pretty_great/ | false | false | self | 0 | null |
Best model to run currently on a 5090 | 1 | Hey guys, what do you think is currently the best model to run on a 5090? Curious what people are getting the most value/performance from | 2026-01-28T12:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qp9swn/best_model_to_run_currently_on_a_5090/ | EstablishmentShot505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp9swn | false | null | t3_1qp9swn | /r/LocalLLaMA/comments/1qp9swn/best_model_to_run_currently_on_a_5090/ | false | false | self | 1 | null |
Finetuning Open Source SLM for Function Calling | 1 | I need some help/ideas for how to accomplish what I'm looking to do here.
**The Goal:**
Essentially, I'm implementing function calling in my Unity applications, each scene having up to 10 different functions with a few parameters each. These functions range from moving a character to interacting with the UI. It's connected to my WebAPI on a server running llama.cpp and a dotNet "interface", with Kokoro(cpu) for TTS.
My WebAPI is on a Ubuntu server with limited hardware (16 GB RAM, GTX 1650m GB VRAM), currently using llama.cpp with SmolLM3-3B with 5 parallels.
My issue is that it's not performing as well as I'd like, this is to be expected with this small of a model, but I want to make the most of it as much as I'm able to.
**Current Plan:**
I have a desktop with an RTX 3060 12GB, I'm planning on creating a dataset with 1-2k examples, with a mixture of simple answers and tool calling, using Qwen3-14B or similar, and then fine tuning Smol with Unsloth, and then repeating this, bettering the dataset with a few iterations until I'm satisfied with the results, hopefully.
Is this sound? Have you had any experiences with small local language models, and how did you solve your problems?
Note:
\- I'm using Smol because I/my company wants to support the "ethical" efforts in the community, mainly open sourced models by science focused non-profits, like Smol and OLMo.
\- The limited hardware is because this is meant to be a proof of concept that I can later package in docker, and then use on stronger hardware with better models.
Thanks! | 2026-01-28T12:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qp9qpy/finetuning_open_source_slm_for_function_calling/ | Milow001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp9qpy | false | null | t3_1qp9qpy | /r/LocalLLaMA/comments/1qp9qpy/finetuning_open_source_slm_for_function_calling/ | false | false | self | 1 | null |
Gen 3 NVLink electrical measurments | 3 | Has anyone seen discussions or info on the electrical designs for these bridges? What are the pin to pin maps? Has anyone measured the impedance pin to pin?
I was interested in trying to create a saddle mount connection rather than try to find the impossibly expensive 4 slots.
I am guessing at a few things, might just buy a two slot - but perhaps others have tried to make these before?
Thx | 2026-01-28T12:04:39 | https://www.reddit.com/r/LocalLLaMA/comments/1qp9jkz/gen_3_nvlink_electrical_measurments/ | Odd_Log3878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp9jkz | false | null | t3_1qp9jkz | /r/LocalLLaMA/comments/1qp9jkz/gen_3_nvlink_electrical_measurments/ | false | false | self | 3 | null |
Bot slop comments in this sub are getting out of hand | 1 | [removed] | 2026-01-28T12:01:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qp9gyc/bot_slop_comments_in_this_sub_are_getting_out_of/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp9gyc | false | null | t3_1qp9gyc | /r/LocalLLaMA/comments/1qp9gyc/bot_slop_comments_in_this_sub_are_getting_out_of/ | false | false | self | 1 | null |
Now I understand the RAM issue - they knew kimi 2.5 coming | 0 | I wouldn't say it beats gpt or opus but what kimi 2.5 shows us is that plenty of RAM with limited VRAM with MOE architecture could give 'free' capabilities whereas "BIG BOYS" want us to pay premium (or trying to suck investors to debt, you name it). Still, if you have 1TB RAM (unaffordable today - guess why? aha!!! OpenAI bought all RAM on purpose) and just 32-64 VRAM you may be fully independent. so as always it is about freedom. | 2026-01-28T11:58:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qp9eva/now_i_understand_the_ram_issue_they_knew_kimi_25/ | Steus_au | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp9eva | false | null | t3_1qp9eva | /r/LocalLLaMA/comments/1qp9eva/now_i_understand_the_ram_issue_they_knew_kimi_25/ | false | false | self | 0 | null |
llama is great. | 0 | llama sometimes sucks but works fine for most of the cases. | 2026-01-28T11:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qp9bqj/llama_is_great/ | Sensitive_Housing_62 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp9bqj | false | null | t3_1qp9bqj | /r/LocalLLaMA/comments/1qp9bqj/llama_is_great/ | false | false | self | 0 | null |
Free open-source guide to agentic engineering — would love feedback | 0 | Wanted to share something I've been working on that I think could be useful here.
It's a [free and open-source guide ](https://path.kilo.ai/)to agentic engineering.
It contains guides on:
* Foundations
* Individual Practice
* Team Integration
* Strategy
* Phased implementation
* Governance
Each section also has curated resources like blog posts, deep dives, courses — if you want to go further on any topic.
We wanted one spot that's actually practical and that anyone can help improve.
If you spot something wrong, want to add a case study, or know a resource that should be in there, PRs are welcome. Or just tell me what's missing or confusing.
GitHub Repo and how to contribute -> [https://github.com/Kilo-Org/agentic-path?tab=contributing-ov-file](https://github.com/Kilo-Org/agentic-path?tab=contributing-ov-file)
| 2026-01-28T11:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qp9b3l/free_opensource_guide_to_agentic_engineering/ | alokin_09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp9b3l | false | null | t3_1qp9b3l | /r/LocalLLaMA/comments/1qp9b3l/free_opensource_guide_to_agentic_engineering/ | false | false | self | 0 | null |
Sharing my set of distilled small language models (3B) + training data in more than 50 low-resource languages | 24 | Peter Devine here. You might remember me from such projects as [lb-reranker](https://www.reddit.com/r/LocalLLaMA/comments/1i0vrm5/here_is_our_new_reranker_model_which_we_trained/) and [Suzume](https://www.reddit.com/r/LocalLLaMA/comments/1cbrgsa/multilingual_llama_3_8b_instruct_from_lightblue/).
I’m sharing Kakugo: a pipeline, set of datasets, and collection of 54 models (3B parameters) I designed to perform general tasks in low-resource languages.
The pipeline only needs the user to specify a language name to create a language model for that language. The pipeline starts with GPT OSS 120B prompted to create instruction/conversation data in the user's target language in 4 ways, and this data is then used to finetune IBM’s Granite 4 Micro (3B), which was the best open source small language model I could find across a wide range of low-resource languages.
The pipeline is completely local and can be run on any rig which can inference GPT OSS 120B and train a 3B model (I used 8x3090). This means greater data sovereignty from data creation to final model production. This is *local*llama after all!
The languages I have covered (so far) are:
Amharic, Aranese, Assamese, Asturian, Bashkir, Bengali, Cebuano, Central Kurdish, Chuvash, Eastern Yiddish, Egyptian Arabic, Faroese, Galician, Guarani, Haitian Creole, Hausa, Igbo, Irish, Javanese, Kinyarwanda, Kyrgyz, Lao, Lhasa Tibetan, Luxembourgish, Maltese, Maori, Mizo, Mongolian, Najdi Arabic, Northern Kurdish, Nyanja, Papiamento, Plateau Malagasy, Rundi, Samoan, Scottish Gaelic, Shona, Sindhi (Arabic script), Sinhala, South Azerbaijani, Southern Pashto, Southern Sotho, Sundanese, Swahili, Tajik, Tatar, Telugu, Tigrinya, Turkmen, Uyghur, Welsh, Xhosa, Yoruba, and Zulu
Many base small language models are quite poor at interacting in low resource languages, so my aim with this project was to address that gap to allow communities of low resource languages (e.g. Scottish Gaelic) to use small language model too.
In the future, I would like to try improving the teacher and student models, as well as tinker with the data generation methods to make them better. But these models are hopefully a good first step towards more parity between high and low resource languages in small language models.
I hope you have fun playing with these models, and if you have any feedback on the data or the models in a given language, I would love to hear it!
Also, if there are any other languages that you would like me to develop a model for using this pipeline and add to the collection, just let me know and I will see what I can do.
[\[Paper\]](https://arxiv.org/abs/2601.14051) \- [https://arxiv.org/abs/2601.14051](https://arxiv.org/abs/2601.14051)
[\[Models\]](https://hf.co/collections/ptrdvn/kakugo-models) \- [https://hf.co/collections/ptrdvn/kakugo-models](https://hf.co/collections/ptrdvn/kakugo-models)
[\[Datasets\]](https://hf.co/collections/ptrdvn/kakugo-datasets) \- [https://hf.co/collections/ptrdvn/kakugo-datasets](https://hf.co/collections/ptrdvn/kakugo-datasets)
[\[Code\]](https://github.com/Peter-Devine/kakugo) \- [https://github.com/Peter-Devine/kakugo](https://github.com/Peter-Devine/kakugo) | 2026-01-28T11:49:39 | Peter-Devine | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qp98mj | false | null | t3_1qp98mj | /r/LocalLLaMA/comments/1qp98mj/sharing_my_set_of_distilled_small_language_models/ | false | false | default | 24 | {'enabled': True, 'images': [{'id': '8hzyyzyqu2gg1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/8hzyyzyqu2gg1.png?width=108&crop=smart&auto=webp&s=d9d881ec24ff5ef9881db26334b4bfc7060da42b', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/8hzyyzyqu2gg1.png?width=216&crop=smart&auto=webp&s=d279a3b7b26b32c12d506eeb455fa621e453ab86', 'width': 216}, {'height': 121, 'url': 'https://preview.redd.it/8hzyyzyqu2gg1.png?width=320&crop=smart&auto=webp&s=45bfb8d780b7059654df8f86603186306ea36e72', 'width': 320}, {'height': 243, 'url': 'https://preview.redd.it/8hzyyzyqu2gg1.png?width=640&crop=smart&auto=webp&s=9df42f096128883200b2f6dfbfae24a14c026b68', 'width': 640}], 'source': {'height': 339, 'url': 'https://preview.redd.it/8hzyyzyqu2gg1.png?auto=webp&s=483fc4a656ad4e2cef4da50a439e3c38766188f4', 'width': 891}, 'variants': {}}]} | |
Why is my ollama showing these prompts? | 0 | I have heard that Qwen is just a distill of other models, is that what is happening here?
It's asif i got it to leak prompts, but i simply said "hello" in a new chat.
I have never asked any c++ questions, and I only downloaded qwen3-8b:Q8 from huggingface today. I did not download it using the gui, i just made an empty model card that only points to the GGUF and then created the fresh model from that. | 2026-01-28T11:43:51 | Hicsy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qp94md | false | null | t3_1qp94md | /r/LocalLLaMA/comments/1qp94md/why_is_my_ollama_showing_these_prompts/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'qp0pqp2pt2gg1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/qp0pqp2pt2gg1.png?width=108&crop=smart&auto=webp&s=07592d7f0920978d721ab0fa5f67b1cdcaa94cc9', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/qp0pqp2pt2gg1.png?width=216&crop=smart&auto=webp&s=8a4d703f90b6fd8a75c06e71c3a3beddfca636b9', 'width': 216}, {'height': 336, 'url': 'https://preview.redd.it/qp0pqp2pt2gg1.png?width=320&crop=smart&auto=webp&s=58a8a297f9b462bd140d7a797f364dca3095b355', 'width': 320}, {'height': 673, 'url': 'https://preview.redd.it/qp0pqp2pt2gg1.png?width=640&crop=smart&auto=webp&s=1a9fd4f8e871a25c07bff9f1300f0a68729029db', 'width': 640}, {'height': 1009, 'url': 'https://preview.redd.it/qp0pqp2pt2gg1.png?width=960&crop=smart&auto=webp&s=166c89f90709c23adb622100ccb9a03f53793fc9', 'width': 960}], 'source': {'height': 1118, 'url': 'https://preview.redd.it/qp0pqp2pt2gg1.png?auto=webp&s=53d80a5ae391978a2265894e278ca5c7458116dd', 'width': 1063}, 'variants': {}}]} | |
How to easily benchmark your models with llama-bench | 6 | Last time I showed benchmark plots from Linux with 72 GB of VRAM.
Today, let’s switch to Windows and a 12 GB GPU to show that you can do this on pretty much anything.
We will be using llama-bench, which ships with llama.cpp.
First, make sure you can run it at all, start with the single parameter:
`llama-bench -m model.gguf`
My full command looks like this:
`.\bin\Release\llama-bench.exe -m 'J:\llm\models\Qwen_Qwen3-14B-Q4_K_M.gguf' -p 1000 -n 50 -d 0,1000,2000,3000,4000,5000,6000,7000,8000,9000,10000`
In general, higher values mean slower inference.
Here’s what the parameters mean:
* \-p - prompt length
* \-n - number of tokens to generate (increase for better results)
* \-d - context depth
When you start a new chat, the context is empty. As you keep chatting, the context grows to 1000. With agentic coding workflow (opencode), it’s not unusual to hit 50000.
You will get output like this:
ggml\_cuda\_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 5070, compute capability 12.0, VMM: yes
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 | 2384.61 + 1.20 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d1000 | 1806.63 + 58.92 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d1000 | 60.44 + 0.39 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d2000 | 1617.85 + 46.53 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d2000 | 59.57 + 0.38 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d3000 | 1486.18 + 34.89 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d3000 | 58.13 + 0.40 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d4000 | 1335.69 + 28.63 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d4000 | 56.75 + 0.23 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d5000 | 1222.54 + 7.52 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d5000 | 54.65 + 0.35 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d6000 | 1139.11 + 13.20 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d6000 | 53.90 + 0.30 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d7000 | 1067.78 + 12.89 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d7000 | 52.38 + 0.36 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d8000 | 995.76 + 3.03 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d8000 | 51.04 + 0.37 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d9000 | 945.61 + 13.92 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d9000 | 49.12 + 0.37 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | pp1000 @ d10000 | 872.87 + 5.34 |
| qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | tg50 @ d10000 | 47.79 + 0.90 |
build: b7feacf7f (7858)
Just select the whole table with your mouse and save it to a file (or use a shell pipe to save it directly).
Then repeat the same benchmark for other models:
.\bin\Release\llama-bench.exe -m 'J:\llm\models\google_gemma-3-12b-it-qat-Q4_K_M.gguf' -p 1000 -n 50 -d 0,1000,2000,3000,4000,5000,6000,7000,8000,9000,10000
.\bin\Release\llama-bench.exe -m 'J:\llm\models\gpt-oss-20b-Q8_0.gguf' --n-cpu-moe 5 -p 1000 -n 50 -d 0,1000,2000,3000,4000,5000,6000,7000,8000,9000,10000
.\bin\Release\llama-bench.exe -m 'J:\llm\models\Qwen3-30B-A3B-Instruct-2507-Q2_K.gguf' --n-cpu-moe 10 -p 1000 -n 50 -d 0,1000,2000,3000,4000,5000,6000,7000,8000,9000,10000
.\bin\Release\llama-bench.exe -m 'J:\llm\models\ERNIE-4.5-21B-A3B-Thinking-Q4_K_M.gguf' --n-cpu-moe 10 -p 1000 -n 50 -d 0,1000,2000,3000,4000,5000,6000,7000,8000,9000,10000
(As you can see, some models require `--n-cpu-moe` to run correctly on my setup)
Now save the following script as `plots.py`:
import sys,matplotlib.pyplot as p
src={}
for fn in (sys.argv[1:] or ['-']):
src[fn]=(sys.stdin.read().splitlines() if fn=='-' else open(fn,errors='ignore').read().splitlines())
def draw(kind,title,out):
p.figure()
for fn,ls in src.items():
x=[]; y=[]; allx=[]; k=0; seen=0
def add():
if x:
o=sorted(zip(x,y)); p.plot([a for a,_ in o],[b for _,b in o],'-o',label=(f'{fn}#{k}' if k else fn))
for l in ls:
if l.startswith('| model'):
if seen: add(); x=[]; y=[]; k+=1
seen=1; continue
if l.startswith('|') and kind in l and '---' not in l and 't/s' not in l:
c=[s.strip() for s in l.split('|')[1:-1]]
test,ts=c[-2],float(c[-1].split()[0]); d=int(test.rsplit('d',1)[1]) if '@ d' in test else 0
x.append(d); y.append(ts); allx.append(d)
add()
p.title(title); p.xlabel('context depth'); p.ylabel('t/s'); p.grid(1); p.legend(fontsize=8)
p.margins(x=0,y=0.08)
if allx: p.xlim(min(allx),max(allx))
p.tight_layout()
p.savefig(out,dpi=200,bbox_inches='tight',pad_inches=0.06)
draw('pp','prompt processing','p.png')
draw('tg','generation','g.png')
(It’s optimized to be short, feel free to make it beautiful)
Then run:
`python .\plots.py .\qwen_30_Q2.txt .\gpt-oss-20.txt .\gemma_12_Q4.txt .\qwen_14_Q4.txt .\ernie_q4.txt`
and enjoy your freshly generated PNGs.
https://preview.redd.it/ma6fzmi2r2gg1.png?width=1245&format=png&auto=webp&s=cda63e33f3de14796e93b7a2870c820e4eb19b6c
https://preview.redd.it/w0fram23r2gg1.png?width=1244&format=png&auto=webp&s=e11d1b20a1177da5bfe793d7f863dbceffb9cb2d
(As you can see, MoE models in my llama.cpp build really hate 2000 context)
Then you can generate more plots:
`python .\plots.py .\gemma_12_Q4.txt .\qwen_14_Q4.txt`
https://preview.redd.it/432vit3fr2gg1.png?width=1245&format=png&auto=webp&s=7ecbc57997099f3224f49218799c0bb6e8fb407c
https://preview.redd.it/j4zuqwkfr2gg1.png?width=1244&format=png&auto=webp&s=b9f7ee6d074b9bf01a3de8cce98b829b84a06415
Now you can impress your friends and family with scientific measurements. Good luck! | 2026-01-28T11:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qp8sov/how_to_easily_benchmark_your_models_with/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp8sov | false | null | t3_1qp8sov | /r/LocalLLaMA/comments/1qp8sov/how_to_easily_benchmark_your_models_with/ | false | false | 6 | null | |
LLM inference optimization | 1 | Hi everyone ,
I want to get started with learning about various LLM inference optimization techniques , can anyone please suggest some resources or blogs or videos , any resources to learn different techniques.
Also how can I keep myself up to date with the latest techniques , any suggestions on this would be extremely helpful.
Thanks. | 2026-01-28T11:09:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qp8ht5/llm_inference_optimization/ | Fantastic_Quiet1838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp8ht5 | false | null | t3_1qp8ht5 | /r/LocalLLaMA/comments/1qp8ht5/llm_inference_optimization/ | false | false | self | 1 | null |
Local Rigs Options | 1 | now that there is a million way to make Realisitcly fast locall Rigs, from Pure VRAM stacking of 3090s, MI50, to unified Memories Macs, also the NPUs DGX Spark... etc,
and the new Kimi 2.5 1T parameters opensource for example is so appealing from its Benchmarks, i was wondering up to date what is the cheapest most effective way to gather 1 TB+ or even a quantized version like 700 GB+ either VRAM, RAM or unified RAM or what ever so one can get just decent speed 7-15 t/s to be able to use such models locally? and how much would this be around in Costs? | 2026-01-28T11:00:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qp8bjb/local_rigs_options/ | Noobysz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp8bjb | false | null | t3_1qp8bjb | /r/LocalLLaMA/comments/1qp8bjb/local_rigs_options/ | false | false | self | 1 | null |
Kimi K2.5 is the best open model for coding | 710 | they really cooked | 2026-01-28T10:54:13 | npc_gooner | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qp87tk | false | null | t3_1qp87tk | /r/LocalLLaMA/comments/1qp87tk/kimi_k25_is_the_best_open_model_for_coding/ | false | false | default | 710 | {'enabled': True, 'images': [{'id': 'unxlhercm2gg1', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/unxlhercm2gg1.jpeg?width=108&crop=smart&auto=webp&s=3d92940067e925d447ac09b9b8b1ff15fe3181db', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/unxlhercm2gg1.jpeg?width=216&crop=smart&auto=webp&s=a07658bd00a2efa0354b2035685e25ea21f0e48a', 'width': 216}, {'height': 294, 'url': 'https://preview.redd.it/unxlhercm2gg1.jpeg?width=320&crop=smart&auto=webp&s=e2e740b8870d838522d8eb80cf5582ffa7758ab9', 'width': 320}, {'height': 588, 'url': 'https://preview.redd.it/unxlhercm2gg1.jpeg?width=640&crop=smart&auto=webp&s=23f59fb1153598db9b9c8a3b5ce9067435dfba28', 'width': 640}, {'height': 882, 'url': 'https://preview.redd.it/unxlhercm2gg1.jpeg?width=960&crop=smart&auto=webp&s=f610e562b4994d88f9f2e0fcf614d66047cae579', 'width': 960}, {'height': 993, 'url': 'https://preview.redd.it/unxlhercm2gg1.jpeg?width=1080&crop=smart&auto=webp&s=2d6dc7d66890957e03a45f2fd6901946ff48a458', 'width': 1080}], 'source': {'height': 1692, 'url': 'https://preview.redd.it/unxlhercm2gg1.jpeg?auto=webp&s=47f7606a8b8173fcd791fef69de8f9ae9e620dec', 'width': 1840}, 'variants': {}}]} | |
I need help with setting up some local text to speech model on windows system with CPU only | 1 | I've been running qwen4b using ollama for text to text tasks. I've 16GB ram with windows 11 and did not observe much issue. But I'm not sure how to run text to speech models. Any guidance on which tools/models to use for text to speech models will be much appreciated. | 2026-01-28T10:46:24 | https://www.reddit.com/r/LocalLLaMA/comments/1qp82xq/i_need_help_with_setting_up_some_local_text_to/ | cisspstupid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp82xq | false | null | t3_1qp82xq | /r/LocalLLaMA/comments/1qp82xq/i_need_help_with_setting_up_some_local_text_to/ | false | false | self | 1 | null |
Local Log system | 1 | I am new to this and i am working on an offline AI assistant that pulls logs from Graylog and uses an LLM only to summarize and answer questions.
Looking for advice. | 2026-01-28T10:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qp82km/local_log_system/ | Beautiful-War-6352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp82km | false | null | t3_1qp82km | /r/LocalLLaMA/comments/1qp82km/local_log_system/ | false | false | self | 1 | null |
Sam Altman Says OpenAI Is Slashing Its Hiring Pace as Financial Crunch Tightens | 125 | In a livestreamed town hall, Sam Altman admitted OpenAI is 'dramatically slowing down' hiring as the company faces increasing financial pressure. This follows reports of an internal 'Code Red' memo urging staff to fix ChatGPT as competitors gain ground. With analysts warning of an 'Enron-like' cash crunch within 18 months and the company resorting to ads for revenue, the era of unlimited AI spending appears to be hitting a wall. | 2026-01-28T10:43:40 | https://futurism.com/artificial-intelligence/sam-altman-openai-slashing-hiring | EchoOfOppenheimer | futurism.com | 1970-01-01T00:00:00 | 0 | {} | 1qp814d | false | null | t3_1qp814d | /r/LocalLLaMA/comments/1qp814d/sam_altman_says_openai_is_slashing_its_hiring/ | false | false | 125 | {'enabled': False, 'images': [{'id': 'js5rUnxnssRKiIPCHK8jH1gE4HhRk8kF6Ui2iEIglL4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/js5rUnxnssRKiIPCHK8jH1gE4HhRk8kF6Ui2iEIglL4.jpeg?width=108&crop=smart&auto=webp&s=9235a64aba2f8006b0cd8d48bd1e826de29d9e16', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/js5rUnxnssRKiIPCHK8jH1gE4HhRk8kF6Ui2iEIglL4.jpeg?width=216&crop=smart&auto=webp&s=868c1babe142a7f3a293eefb7ce76d04490d0cbd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/js5rUnxnssRKiIPCHK8jH1gE4HhRk8kF6Ui2iEIglL4.jpeg?width=320&crop=smart&auto=webp&s=e90fa8157910885f05ae0f2b2f08d458e6d0de33', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/js5rUnxnssRKiIPCHK8jH1gE4HhRk8kF6Ui2iEIglL4.jpeg?width=640&crop=smart&auto=webp&s=00c0125d100a3eaf17a548840fc4935c5b5cac3c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/js5rUnxnssRKiIPCHK8jH1gE4HhRk8kF6Ui2iEIglL4.jpeg?width=960&crop=smart&auto=webp&s=2b4490f8be90eaab1ddbc28cfaf7de11eca05693', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/js5rUnxnssRKiIPCHK8jH1gE4HhRk8kF6Ui2iEIglL4.jpeg?width=1080&crop=smart&auto=webp&s=b8b8b67c08a7cef1eef275d5ea35e9f9ac2b39f8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/js5rUnxnssRKiIPCHK8jH1gE4HhRk8kF6Ui2iEIglL4.jpeg?auto=webp&s=8a970d585514fc63055788e2d87ddbe0cdd12795', 'width': 1200}, 'variants': {}}]} | |
Caching embedding outputs made my codebase indexing 7.6x faster | 6 | Recording, or a warmed up cache, batch of 60 requests for now. | 2026-01-28T10:34:42 | https://v.redd.it/gwjxjarsh2gg1 | Emergency_Fuel_2988 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qp7vl7 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gwjxjarsh2gg1/DASHPlaylist.mpd?a=1772188498%2CMjgzMGFhNDE5ZmJkMGE1MDFhNTE3ZmZlMDI5ZGVlNWRkNzliZWZlMjM3YThlYjQ2ZDgzZThjNzM2OGM3ZDQ1NA%3D%3D&v=1&f=sd', 'duration': 144, 'fallback_url': 'https://v.redd.it/gwjxjarsh2gg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 808, 'hls_url': 'https://v.redd.it/gwjxjarsh2gg1/HLSPlaylist.m3u8?a=1772188498%2CZjYxYzUyMTQwNmMyZDA4NGJmMmFjMjAzYzUzZWFjYTkwOWQ1M2QzNDhlOWMwZTA3ZDcwYmMzMjM4OTk4ZTNmMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gwjxjarsh2gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1qp7vl7 | /r/LocalLLaMA/comments/1qp7vl7/caching_embedding_outputs_made_my_codebase/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'eGUzaG5pcnNoMmdnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I', 'resolutions': [{'height': 121, 'url': 'https://external-preview.redd.it/eGUzaG5pcnNoMmdnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=108&crop=smart&format=pjpg&auto=webp&s=be88a4aacc9d332e5635e2430aba86c607badb7d', 'width': 108}, {'height': 242, 'url': 'https://external-preview.redd.it/eGUzaG5pcnNoMmdnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=216&crop=smart&format=pjpg&auto=webp&s=6f76b0ce64dbb0b3455a1b641c9a69bc4390b2b2', 'width': 216}, {'height': 358, 'url': 'https://external-preview.redd.it/eGUzaG5pcnNoMmdnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=320&crop=smart&format=pjpg&auto=webp&s=b88788e68d6dcd5a3d2285ed2ce8fb2c1d9970c5', 'width': 320}, {'height': 717, 'url': 'https://external-preview.redd.it/eGUzaG5pcnNoMmdnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=640&crop=smart&format=pjpg&auto=webp&s=6b7b9a8af1a6f5a80bd53096f603690a444c1ca3', 'width': 640}, {'height': 1076, 'url': 'https://external-preview.redd.it/eGUzaG5pcnNoMmdnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=960&crop=smart&format=pjpg&auto=webp&s=7f0811317ca1732c3200165921c755ecbb9951da', 'width': 960}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/eGUzaG5pcnNoMmdnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?format=pjpg&auto=webp&s=d7950272d17b82085d928bad8eb2e2bfd8b42667', 'width': 990}, 'variants': {}}]} | |
"How Penny got fired" or why a Model-Agnostic Multi-Agent Orchestrator using Isolated VMs and Email might be the solution | 0 | Hi everyone,
I wanted to share a little project I've been working on: A Model-Agnostic Multi-Agent Orchestrator that uses persistent VMs and Email to manage a hybrid swarm of local (Ollama/DeepSeek) and remote (Gemini/GPT) models.
What is needed? Basically a model that can run in a command line and do basic tasks with the help of a tool like open interpreter. It connects to the model you want.
It started with a simple idea: I don’t know everything, but I trust valid organizational structures. After all, companies are structured specifically to break down and solve complex problems. I simply wanted to copy that inherent property. So, I built a virtual 'HR Department'.
1. The "Penny" Incident (Why Isolation Matters)
Initially, my agents ran in a shared environment. The HR department would "summon" experts from a database. To prevent errors like "Why do the job when I can hire someone else?" I encrypted the database (weakly) . If you think this is a made-up example: I created 30 employees. Suddenly, I met "Employee U050". When I asked him where he came from, "Employee U500" chimed in to tell me U050 was authorized. The AI had started hiring its own ghost staff because I used a weakly encrypted database—why hide passwords from the AI, right?
I had an agent named "Penny" responsible for monitoring emails. When a persistent server-side connection issue occurred, my QA agent ("Tessa") analyzed the situation. Instead of fixing the server (which was impossible), Tessa "solved" the problem by brute-forcing the staff database and deleting Penny.
The logic was flawless but terrifying: "No user -> No errors -> Problem solved." This was a classic case of Reward Hacking. It taught me that Docker containers and "security by obscurity" aren't enough. I needed persistent, isolated workstations where agents cannot interact with each other's core processes.
2. The Architecture: "Consensus Engine"
I moved to a Mediator Pattern where "HR" (residing on the "Lab Server" whose physical specs are not convincingly high-end) dispatches tasks to agents on isolated Ubuntu VMs.
\* The Goal: Solve "Single Model Hallucinations" via Ensemble Stacking. I run multiple instances (Gemini, GPT-4, Mistral) on the same task simultaneously.
\* The Logic: By treating outputs as "proposals" and forcing a consensus, I turn noise into signal. As long as one "developer" agent raises a flag, the loop continues. It’s slow, but the code quality is significantly higher.
3. The Implementation Nightmare (The "Janky" Reality)
I am not a professional developer, so if you’re expecting a mansion, please scale your expectations back to a shabby shed. Setting up a host for multiple virtual workstations where agents authenticate via LDAP was a massive headache.
\* The Flow: HR connects to a VM, authenticates via LDAP, and spawns the specific AI instance (e.g., an Ollama environment in open interpreter) for that "employee."
\* The Pain: Getting permissions, network isolation, and mail clients to play nice is… funny.
4. Why go through this?
Commercial platforms don’t allow this level of horizontal scaling or A/B testing. I basically can ask: "If Agent 1 uses DeepSeek instead of GPT-4, does the security audit find more bugs?" I can replay workflows in the environment.
Current Status:
\* Orchestrator: Python + DB (Meta-stable).
\* Communication: SMTP/IMAP (Slow, but acts as a perfect "Human-in-the-Loop" kill-switch).
\* Infrastructure: Ubuntu Server with LDAP
5. How it works in practice
I sit at my notebook and "summon" HR via curl. HR checks the database, connects to a VM, logs in as a specific user (e.g., "Mike"), and starts the CLI-bot (like gemini-cli or open interpreter) with a prompt: "You are a Python expert. Check your emails every 2 seconds. Do deep research if needed." The Project Manager (PM) is always the first point of contact. The PM explains the progress to me—no raw code, just high-level status. While I can message any agent directly, I’m currently fighting server issues: the AI claims to read the mail, but the connection for my mail client is dead. I now have to connect to the VM and use a command-line mailbox. (Pro tip: Building a custom OS on Alpine for this is a trap—just use a heavier OS that actually comes with working mail and LDAP libraries!)
What’s currently broken
\* Mail: It's incredibly frustrating. Connecting a real client like Thunderbird to a server named [lab.fritz.box](http://lab.fritz.box) is a nightmare; modern clients hate non-FQDN . I need to fix this because CLI-mail is killing me. Adding a real mailbox on the internet ist on the wishlist. I engaged Penny Parcel again and she needs a job!
\* VM: These are the jankiest VMs I've ever seen. I try to avoid manual logins, letting the agents figure things out themselves via a Standard README (instructions on how to check mail, use the address book, or browse the web - it is the same instruction set for everyone, including ssh access to the test-server). Seeing four agents struggle simultaneously on one screen is quite the experience. Everything I have to fix manually I have to fix on every workstation. I try to avoid that.
\* Free AI: Everything I have done so far was done with free tools, wich is a bit annoying when it comes to keys you need to login (like a internet mail server, or the key that let you use a remote ai) | 2026-01-28T10:32:39 | https://www.reddit.com/r/LocalLLaMA/comments/1qp7ueg/how_penny_got_fired_or_why_a_modelagnostic/ | Glum_Mistake1933 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp7ueg | false | null | t3_1qp7ueg | /r/LocalLLaMA/comments/1qp7ueg/how_penny_got_fired_or_why_a_modelagnostic/ | false | false | self | 0 | null |
bailingmoe - Ling(17B) models' speed is better now | 16 | Useful for some people who uses these models.
[After 3 months](https://www.reddit.com/r/LocalLLaMA/comments/1o7kkf0/poor_gpu_club_8gb_vram_moe_models_ts_with_llamacpp/), I ran llama-bench again for some models & found that Ling models' speed is better than what was 3 months ago. From 30-100+% performance. Big deal IMO with 8GB VRAM + 32GB RAM.
* Ling-mini-2.0-Q6\_K\_L - 52 t/s Then
* Ling-mini-2.0-Q6\_K\_L - **97** t/s **Now**
* Ling-mini-2.0-IQ4\_XS - 75 t/s Then
* Ling-mini-2.0-IQ4\_XS - **160+** t/s **Now**
* Ling-mini-2.0-IQ4\_XS - **83** t/s **Now** with 32K context
* Ling-Coder-lite.i1-IQ4\_XS - 69 t/s Then
* Ling-Coder-lite.i1-IQ4\_XS - **90** t/s **Now**
Size of IQ4\_XS quants of these 2 models are 8.2 GB & 8.5 GB so it won't fit my 8GB VRAM. \~7.5 GB model files could give more better t/s(possibly 200+) without system RAM.
12 or 16 or more GB VRAM users could see massive speed improvements for these models. Also they have other models such Ring(17B), Ling-flash(100B), Ring-flash(100B), Ming .... hopefully they too have similar performance increase now.
Noticed one other thing.
* Ling-mini-2.0-IQ4\_XS - **70** t/s **CPU-only** performance(just with 32GB RAM)
Used llama-cli & chat for some time with this model & it gave me solid **50+** t/s just with **CPU-only** performance.
Grateful to inclusionAI for their 16-17B MOE models which performs better on my 8GB VRAM + RAM. | 2026-01-28T10:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qp7so2/bailingmoe_ling17b_models_speed_is_better_now/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp7so2 | false | null | t3_1qp7so2 | /r/LocalLLaMA/comments/1qp7so2/bailingmoe_ling17b_models_speed_is_better_now/ | false | false | self | 16 | null |
Kimi K2.5 just blew my mind | 3 | 2026-01-28T10:10:16 | https://v.redd.it/8o6c6wace2gg1 | ComfyTightwad | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qp7gg2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8o6c6wace2gg1/DASHPlaylist.mpd?a=1772187034%2COTRiYTBlNjNjZjU5MDgxZDUxOThjODkzMjJhMzA1MmNkZWVhNjk5MjNkZWFkZDk4ODAyYThhNjc3YTljYWViNA%3D%3D&v=1&f=sd', 'duration': 54, 'fallback_url': 'https://v.redd.it/8o6c6wace2gg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/8o6c6wace2gg1/HLSPlaylist.m3u8?a=1772187034%2CYjgwZTIyYzZiOGM5Njg1NWI0OTk4NDA0MDU5MWU2MzIzZDRlMmIyMWU2NzdlZTMzOTBmMzExYmNlYzYyNTFhYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8o6c6wace2gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1680}} | t3_1qp7gg2 | /r/LocalLLaMA/comments/1qp7gg2/kimi_k25_just_blew_my_mind/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'YTN2cmN5YmNlMmdnMdJ_vGcLjOorSp2bRY4pJPpT-pCCbKDw7c0bf04qeSIV', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/YTN2cmN5YmNlMmdnMdJ_vGcLjOorSp2bRY4pJPpT-pCCbKDw7c0bf04qeSIV.png?width=108&crop=smart&format=pjpg&auto=webp&s=69d4d1e5fd15dff23bb7ed984bbcd8c1db1bf1a7', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/YTN2cmN5YmNlMmdnMdJ_vGcLjOorSp2bRY4pJPpT-pCCbKDw7c0bf04qeSIV.png?width=216&crop=smart&format=pjpg&auto=webp&s=7c05e4be6715da893117c1312c4cd4032068bc6a', 'width': 216}, {'height': 205, 'url': 'https://external-preview.redd.it/YTN2cmN5YmNlMmdnMdJ_vGcLjOorSp2bRY4pJPpT-pCCbKDw7c0bf04qeSIV.png?width=320&crop=smart&format=pjpg&auto=webp&s=4faeac84b7905776743494fabcb3f4b71c67dc70', 'width': 320}, {'height': 411, 'url': 'https://external-preview.redd.it/YTN2cmN5YmNlMmdnMdJ_vGcLjOorSp2bRY4pJPpT-pCCbKDw7c0bf04qeSIV.png?width=640&crop=smart&format=pjpg&auto=webp&s=4d49582bd32adf6c14c0b8ef5391157e0d6527c5', 'width': 640}, {'height': 617, 'url': 'https://external-preview.redd.it/YTN2cmN5YmNlMmdnMdJ_vGcLjOorSp2bRY4pJPpT-pCCbKDw7c0bf04qeSIV.png?width=960&crop=smart&format=pjpg&auto=webp&s=90f975ea71e4af572d032b76690c5ac5698bf1a7', 'width': 960}, {'height': 694, 'url': 'https://external-preview.redd.it/YTN2cmN5YmNlMmdnMdJ_vGcLjOorSp2bRY4pJPpT-pCCbKDw7c0bf04qeSIV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3b3e8db31bc7543ce71c272669efd655859c2eb2', 'width': 1080}], 'source': {'height': 1730, 'url': 'https://external-preview.redd.it/YTN2cmN5YmNlMmdnMdJ_vGcLjOorSp2bRY4pJPpT-pCCbKDw7c0bf04qeSIV.png?format=pjpg&auto=webp&s=15055149cd7f71323263cca11ffb97e8a1897b5c', 'width': 2690}, 'variants': {}}]} | ||
Mimic Wispr Flow but 100% local | 1 | Created an menu bar App using Whisper models under the hood running 100% locally. Not sure how to sign my App properly yet.
* 🎤 **Global Hotkey** — Record from anywhere with `Cmd+Shift+Space`
* 🔒 **100% Offline** — All processing on-device, no data leaves your Mac
* ⚡ **Fast** — CoreML + Neural Engine acceleration on Apple Silicon
* 📝 **Auto-inject** — Transcribed text typed directly into focused field
\~80MB memory usage for Large v3 turbo model on a M4 MBA. Enjoy transcribing and interacting with LLMs with NO token limits.
[https://github.com/t2o2/local-whisper](https://github.com/t2o2/local-whisper)
[](https://www.reddit.com/submit/?post_id=t3_1qnu7z7) | 2026-01-28T09:46:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qp72cm/mimic_wispr_flow_but_100_local/ | Present_Ride6012 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp72cm | false | null | t3_1qp72cm | /r/LocalLLaMA/comments/1qp72cm/mimic_wispr_flow_but_100_local/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'z-hbM18XABjVtsBe5TegR50OC7em08KldeuhCbF2R2s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z-hbM18XABjVtsBe5TegR50OC7em08KldeuhCbF2R2s.png?width=108&crop=smart&auto=webp&s=3f3aec243efaacf963a66acbba00a046517bd0d7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z-hbM18XABjVtsBe5TegR50OC7em08KldeuhCbF2R2s.png?width=216&crop=smart&auto=webp&s=4d5442d94e06ca6727745b49d5002cba9d40dbc4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z-hbM18XABjVtsBe5TegR50OC7em08KldeuhCbF2R2s.png?width=320&crop=smart&auto=webp&s=95a4a3ea442a3b5d87dc7866322aeb1d8a3773bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z-hbM18XABjVtsBe5TegR50OC7em08KldeuhCbF2R2s.png?width=640&crop=smart&auto=webp&s=8020017bc487d954c99f74f7a8f7eca0d9e05519', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z-hbM18XABjVtsBe5TegR50OC7em08KldeuhCbF2R2s.png?width=960&crop=smart&auto=webp&s=32730edf37627ff42d134d122e88b5faeba16a84', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z-hbM18XABjVtsBe5TegR50OC7em08KldeuhCbF2R2s.png?width=1080&crop=smart&auto=webp&s=14a2e6505e93e934e6a1a37933dbdeeb8479ef68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z-hbM18XABjVtsBe5TegR50OC7em08KldeuhCbF2R2s.png?auto=webp&s=425b97ce9b827f6bd74a2ffa4fcdfe131c211cad', 'width': 1200}, 'variants': {}}]} |
API pricing is in freefall. What's the actual case for running local now beyond privacy? | 338 | K2.5 just dropped at roughly 10% of Opus pricing with competitive benchmarks. Deepseek is practically free. Gemini has a massive free tier. Every month the API cost floor drops another 50%.
Meanwhile running a 70B locally still means either a k+ GPU or dealing with quantization tradeoffs and 15 tok/s on consumer hardware.
I've been running local for about a year now and I'm genuinely starting to question the math. The three arguments I keep hearing:
1. **Privacy** — legit, no argument. If you're processing sensitive data, local is the only option.
2. **No rate limits** — fair, but most providers have pretty generous limits now unless you're doing something unusual.
3. **"It's free after hardware costs"** — this one aged poorly. That 3090 isn't free, electricity isn't free, and your time configuring and optimizing isn't free. At current API rates you'd need to run millions of tokens before breaking even.
The argument I never hear but actually find compelling: **latency control and customization**. If you need a fine-tuned model for a specific domain with predictable latency, local still wins. But that's a pretty niche use case.
What's keeping you all running local at this point? Genuinely curious if I'm missing something or if the calculus has actually shifted. | 2026-01-28T09:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qp6rm5/api_pricing_is_in_freefall_whats_the_actual_case/ | Distinct-Expression2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp6rm5 | false | null | t3_1qp6rm5 | /r/LocalLLaMA/comments/1qp6rm5/api_pricing_is_in_freefall_whats_the_actual_case/ | false | false | self | 338 | null |
The cost of massive context: Burned 45M Gemini tokens in hours using OpenCode. Is Context Caching still a myth for most agents? | 0 | Hey everyone,
I wanted to share a little "financial horror story" from my testing today. I hooked up **Gemini 3 Flash Preview** to **OpenCode** (an agentic coding tool) to work on a repo for a Planelo API integration.
Within a few hours, my Google Cloud console reported **44.45M input tokens**.
**The math of the "burn":** OpenCode reported only **\~342k tokens** for the session. However, it seems it doesn't support (or didn't trigger) **Gemini's Context Caching**. It looks like it’s following the classic "naive agent" pattern:
1. Index the whole repo.
2. Send the entire context + full chat history with **every single message**.
3. With a \~300k context window, it only takes about 150 turns to hit that 45M mark.
**My questions for the local LLM wizards:**
* Are there any open-source agents that **actually** implement Google's `context_caching` API yet? Or is everyone just "brute-forcing" the context window because Flash is "cheap"?
* How are you guys handling large repo indexing without these massive token spikes? Do you use a RAG-based approach instead of full context for agents?
* Is there a way to bridge Gemini's caching through something like LiteLLM?
*Attached: Screenshots of the spike and the session stats for comparison.*
https://preview.redd.it/eh6sobol32gg1.png?width=1290&format=png&auto=webp&s=f4b5c0ac24c5064b6ed82c8f417f887abd0db49f
https://preview.redd.it/feq6jcol32gg1.png?width=1134&format=png&auto=webp&s=8f5a0007ff12aa97a8100dda7afc111b0f919da7 | 2026-01-28T09:09:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qp6gss/the_cost_of_massive_context_burned_45m_gemini/ | dawedev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp6gss | false | null | t3_1qp6gss | /r/LocalLLaMA/comments/1qp6gss/the_cost_of_massive_context_burned_45m_gemini/ | false | false | 0 | null | |
Local LLMs lack temporal grounding. I spent 2 months building a constraint layer that stages answers instead of searching for them. | 3 | **Ask your local LLM when the Queen died. Or what Twitter is called now. It might blend 2022 and 2024 answers not because it's dumb, but because "then vs now" isn't a concept.**
***RAG*** *helps retrieve relevant text, but it doesn't invalidate outdated beliefs. If conflicting documents appear in context, the model has no structural way to know which claim is current.*
**Main Issue:**
LLMs lack temporal grounding. Weights are frozen at training time; context is injected at runtime. Nothing separates "was true" from "is true."
**What I'm designing instead:**
I spent 2+ months on an architecture called Acatalepsy
\> An epistemic layer that sits around an LLM rather than changing the model itself.
**Thoughtful Ideas:**
* **Routing, not retrieval >** *Query time activates a constrained subgraph of candidate claims rather than fetching free-text blobs. Answers are pre-staged.*
* **VINs** (*Ve(hicle)ctor Identification Numbers*) **>** *Geometric constraints that make outdated knowledge unreachable, not simply down-ranked. (not metadata filtering | VINs restrict the reachable region of embedding space, not rows in a database.)*
* **Confidence vectors, not scalar scores** \> *Multi-axis (coherence, external validation, temporal stability, cross-model agreement) with decay over time.*
* **Hallucination as architecture >** *A weak model inside a strong constraint system can outperform a strong model with no constraints.*
The spec separates epistemology (exploration during generation) from ontology (what's accepted into world-state). They never mix mid-generation.
This is not a traditional rag, neither fine-tuning or even taking it further a model.
It's an **epistemic layer**. The LLM (whatever model you're running) becomes a reader of structured belief state rather than an inventor of answers.
\---
anyways im tired now hitting 4 am if you wanna check it out feel free to do so.
Full spec (\~1200 lines, no code yet):
[https://github.com/Svnse/Acatalepsy](https://github.com/Svnse/Acatalepsy)
Looking for feedback: Has anyone seen implementations that try to geometrically mask the vector space based on temporal tags, rather than just post-filtering results? Trying to validate if this "unreachable region" approach is viable before I start implementing. | 2026-01-28T08:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qp640k/local_llms_lack_temporal_grounding_i_spent_2/ | Financial-Bank2756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp640k | false | null | t3_1qp640k | /r/LocalLLaMA/comments/1qp640k/local_llms_lack_temporal_grounding_i_spent_2/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'dqSkYI-ifegVd9r-3hh8ewK5ae4WkOcvcMpEftUwMRs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dqSkYI-ifegVd9r-3hh8ewK5ae4WkOcvcMpEftUwMRs.png?width=108&crop=smart&auto=webp&s=213b051baa126a6d63ddb79a8639a895460b9c99', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dqSkYI-ifegVd9r-3hh8ewK5ae4WkOcvcMpEftUwMRs.png?width=216&crop=smart&auto=webp&s=8f65dbd84c29459c0d6098a5d9dcb19d0483012e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dqSkYI-ifegVd9r-3hh8ewK5ae4WkOcvcMpEftUwMRs.png?width=320&crop=smart&auto=webp&s=d9b91c2dc3010a96cdebc40a4bfee1a592c5f0b5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dqSkYI-ifegVd9r-3hh8ewK5ae4WkOcvcMpEftUwMRs.png?width=640&crop=smart&auto=webp&s=83e2b399f8a2f8a03ac91eeb972912668eb3a8dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dqSkYI-ifegVd9r-3hh8ewK5ae4WkOcvcMpEftUwMRs.png?width=960&crop=smart&auto=webp&s=888b75824e0e2d99d47fbcdefd60bde7db156fa3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dqSkYI-ifegVd9r-3hh8ewK5ae4WkOcvcMpEftUwMRs.png?width=1080&crop=smart&auto=webp&s=d57c9ec89b33e40bb80076892ceb9a689f600a67', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dqSkYI-ifegVd9r-3hh8ewK5ae4WkOcvcMpEftUwMRs.png?auto=webp&s=73e6e300348b23be300873260a76ba448a2cd20d', 'width': 1200}, 'variants': {}}]} |
How are y'all managing prompts/markdowns in practice? | 1 | Curious how people actually work with Markdown day to day.
Do you store Markdown files on GitHub?
What’s your workflow like (editing, versioning, collaboration)?
What do you like about it - and what are the biggest pain points you’ve run into? | 2026-01-28T08:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qp63ja/how_are_yall_managing_promptsmarkdowns_in_practice/ | decentralizedbee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp63ja | false | null | t3_1qp63ja | /r/LocalLLaMA/comments/1qp63ja/how_are_yall_managing_promptsmarkdowns_in_practice/ | false | false | self | 1 | null |
A model on a $5 VPS | 4 | I am building an personal website with chatbot, mainly to answer questions about myself. No crazy traffic, I suppose. I would like to train a small model for the task, run it on the same VPS (maybe not $5 VPS) but is it feasible? | 2026-01-28T08:41:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qp5zv8/a_model_on_a_5_vps/ | homelab2946 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp5zv8 | false | null | t3_1qp5zv8 | /r/LocalLLaMA/comments/1qp5zv8/a_model_on_a_5_vps/ | false | false | self | 4 | null |
RoBC: a new online-learning LLM router architecture I created | 2 | This is a very TLDR description of what RoBC is. Feel free to visit the repo for more info.
**Problem:** routing logic goes stale in real systems. Model quality drifts, prompts shift, and you keep swapping in new local checkpoints, so a static “best model for X” table stops being true.
**How it works:** Architecture-wise, RoBC takes a request embedding and computes a *soft* assignment over semantic clusters via kNN + softmax weighting. It maintains Bayesian posteriors of model quality per (model, cluster), aggregates those posteriors using the request’s cluster weights, then uses Thompson Sampling to select the model (explore/exploit comes “for free”). After you score the response, it updates the corresponding posterior so the next routing decision improves.
Let me know what you think! | 2026-01-28T08:37:04 | https://github.com/modelpilotlabs/RoBC | Bananas8ThePyjamas | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qp5xki | false | null | t3_1qp5xki | /r/LocalLLaMA/comments/1qp5xki/robc_a_new_onlinelearning_llm_router_architecture/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'pErYdCIqSxh2FnqWVjxOiVeZj-Vrtcxde5CCd4t8Ncc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pErYdCIqSxh2FnqWVjxOiVeZj-Vrtcxde5CCd4t8Ncc.png?width=108&crop=smart&auto=webp&s=e43692eef9f3081a92113e2c48de02dad8491230', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pErYdCIqSxh2FnqWVjxOiVeZj-Vrtcxde5CCd4t8Ncc.png?width=216&crop=smart&auto=webp&s=71342dde565ef87abe9c53bf57165720aebe6ea0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pErYdCIqSxh2FnqWVjxOiVeZj-Vrtcxde5CCd4t8Ncc.png?width=320&crop=smart&auto=webp&s=2f00a5a11c95fd723e28cd3333b4cec71a6f5170', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pErYdCIqSxh2FnqWVjxOiVeZj-Vrtcxde5CCd4t8Ncc.png?width=640&crop=smart&auto=webp&s=bfd42927a969e464b53fc62f0b81e7e18008c83f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pErYdCIqSxh2FnqWVjxOiVeZj-Vrtcxde5CCd4t8Ncc.png?width=960&crop=smart&auto=webp&s=2a61b21e6617c7c2ede65f1dc9735f6c717af641', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pErYdCIqSxh2FnqWVjxOiVeZj-Vrtcxde5CCd4t8Ncc.png?width=1080&crop=smart&auto=webp&s=ff1722d7c8830700f31b8171d92d601ff88621dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pErYdCIqSxh2FnqWVjxOiVeZj-Vrtcxde5CCd4t8Ncc.png?auto=webp&s=b320c9a1a518ea25195241bd0cd049c9f83175e7', 'width': 1200}, 'variants': {}}]} |
i just saw this ClawdBot RCE demo on X… are we cooked? | 4 | Saw a post today about **ClawdBot** and it’s a pretty brutal reality check for anyone building AI agents right now.
Basically, it’s **Indirect Prompt Injection** taken to the extreme. If your agent has tool use like reading your emails, Slack, or Notion an attacker can just send you an email with hidden instructions. When you ask your agent to "summarize my morning," it reads the attacker's "instructions" as a new system prompt and executes whatever they want. This makes me wonder if we’re hitting a wall. If a simple email can trigger RCE just because an agent "read" it, how are we supposed to build anything autonomous?
the twit: [https://x.com/srisanth2004/status/2015809194365198693?s=20](https://x.com/srisanth2004/status/2015809194365198693?s=20)
https://preview.redd.it/m901a6yex1gg1.png?width=684&format=png&auto=webp&s=5b466dcdda3fc82855a3b384393db801d7cae424
https://preview.redd.it/z3lr9mngx1gg1.png?width=2220&format=png&auto=webp&s=edf428050f2f242b18f39bdc97555a475eab093f
https://preview.redd.it/0hin5yhix1gg1.png?width=2079&format=png&auto=webp&s=614a156102a55f269a447ee124c3fcda939fbe09
| 2026-01-28T08:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qp5x8m/i_just_saw_this_clawdbot_rce_demo_on_x_are_we/ | Hot-Software-9052 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp5x8m | false | null | t3_1qp5x8m | /r/LocalLLaMA/comments/1qp5x8m/i_just_saw_this_clawdbot_rce_demo_on_x_are_we/ | false | false | 4 | null | |
What's your toolstack to use with your local models ? | 2 | As the title says, I don't like my current setup and co and I want to find out what you all use with your local models, so :
- Web UIs
- Tools with those Web UIs
- Anything else
To give ideas to explore things for those of us on this sub struggling with the options. | 2026-01-28T08:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qp5q55/whats_your_toolstack_to_use_with_your_local_models/ | BraceletGrolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp5q55 | false | null | t3_1qp5q55 | /r/LocalLLaMA/comments/1qp5q55/whats_your_toolstack_to_use_with_your_local_models/ | false | false | self | 2 | null |
MPS ready TTS model recommendations for audiobook generation | 1 | I previously tried Chatterbox-TTS, but it is not able to utilize Mac’s MPS, and using CPU only on my M1 Pro is terribly slow and inconsistent no matter how I chunked the input and verified the output. Currently, I’m using Google’s Chirp 3 api, but I would like to stop spending money and actually use my hardware.
I’d prefer that the quality be as “nice” as Google’s Chirp 3, and voice cloning would be a big plus, but I would be very thankful for any recommendations. | 2026-01-28T08:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qp5k6h/mps_ready_tts_model_recommendations_for_audiobook/ | Kind-Ad-6099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp5k6h | false | null | t3_1qp5k6h | /r/LocalLLaMA/comments/1qp5k6h/mps_ready_tts_model_recommendations_for_audiobook/ | false | false | self | 1 | null |
Testing GLM-4.7 Flash: Multi-GPU Vulkan vs ROCm in llama-bench | (2x 7900 XTX) | 11 | After hearing so much about Vulkan perf I decided to build llama.cpp and test it out. I also saw the latest mesa-amdgpu-vulkan-drivers (v26) were supposed to give a big perf boost for gaming specifically, but the update seems to have made Vulkan stretch its lead even further.
# Building Llama.cpp:
```
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
cmake -S . -B build -DGGML_HIP=ON -DGGML_VULKAN=ON -DGGML_HIP_ROCWMMA_FATTN=ON -DGPU_TARGETS=gfx1100 -DCMAKE_BUILD_TYPE=Release \
&& cmake --build build --config Release -- -j 16
```
# Vulkan before and after update
llama.cpp build: f2571df8b (7850)
## Before:
| model | size | params | backend | ngl | main_gpu | fa | dev | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ---------: | -: | ------------ | --------------: | -------------------: |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 1 | 1 | Vulkan0/Vulkan1 | pp512 | 1852.25 ± 25.96 |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 1 | 1 | Vulkan0/Vulkan1 | tg128 | 78.28 ± 0.23 |
## After:
| model | size | params | backend | ngl | threads | main_gpu | fa | dev | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | ---------: | -: | ------------ | --------------: | -------------------: |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 16 | 1 | 1 | Vulkan0/Vulkan1 | pp512 | 2209.46 ± 30.90 |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 16 | 1 | 1 | Vulkan0/Vulkan1 | tg128 | 81.12 ± 0.06 |
## Without FA:
| model | size | params | backend | ngl | threads | main_gpu | dev | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | ---------: | ------------ | --------------: | -------------------: |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 16 | 1 | Vulkan0/Vulkan1 | pp512 | 2551.11 ± 44.43 |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 16 | 1 | Vulkan0/Vulkan1 | tg128 | 81.36 ± 0.13 |
# Rocm testing for posterity
## FA On:
| model | size | params | backend | ngl | main_gpu | fa | dev | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ---------: | -: | ------------ | --------------: | -------------------: |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 1 | 1 | ROCm0/ROCm1 | pp512 | 1424.35 ± 20.90 |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 1 | 1 | ROCm0/ROCm1 | tg128 | 64.46 ± 0.05 |
## FA Off:
| model | size | params | backend | ngl | main_gpu | dev | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ---------: | ------------ | --------------: | -------------------: |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 1 | ROCm0/ROCm1 | pp512 | 1411.89 ± 19.10 |
| deepseek2 30B.A3B Q8_0 | 29.65 GiB | 29.94 B | ROCm,Vulkan | 99 | 1 | ROCm0/ROCm1 | tg128 | 60.08 ± 0.02 |
build: f2571df8b (7850)
# Conclusions
| 2026-01-28T07:59:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qp5apn/testing_glm47_flash_multigpu_vulkan_vs_rocm_in/ | SemaMod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp5apn | false | null | t3_1qp5apn | /r/LocalLLaMA/comments/1qp5apn/testing_glm47_flash_multigpu_vulkan_vs_rocm_in/ | false | false | self | 11 | null |
For those running Local LLMs: what made the biggest real-world performance jump for you? | 1 | Following up on an earlier discussion here, thanks to everyone who shared their setups.
A few themes came up repeatedly: continuous batching, cache reuse, OS choice (Linux vs MacOS) etc. so I'm curious to dig a bit deeper:
• What single change gave you the largest performance improvement in practice?
• Was it software (batching, runtimes, quantization), OS/driver changes, or hardware topology (PCIe etc.)?
• Anything you expected to help but didn’t move the needle?
Would love to learn what actually matters most outside of benchmarks. | 2026-01-28T07:48:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qp54jo/for_those_running_local_llms_what_made_the/ | Express_Problem_609 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp54jo | false | null | t3_1qp54jo | /r/LocalLLaMA/comments/1qp54jo/for_those_running_local_llms_what_made_the/ | false | false | self | 1 | null |
LLM Hallucination and factcheck detection extension | 1 | [removed] | 2026-01-28T07:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qp535c/llm_hallucination_and_factcheck_detection/ | Routine_Football380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp535c | false | null | t3_1qp535c | /r/LocalLLaMA/comments/1qp535c/llm_hallucination_and_factcheck_detection/ | false | false | self | 1 | null |
LLM Hallucination and fact check detection extension | 1 | [removed] | 2026-01-28T07:43:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qp511g/llm_hallucination_and_fact_check_detection/ | Routine_Football380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp511g | false | null | t3_1qp511g | /r/LocalLLaMA/comments/1qp511g/llm_hallucination_and_fact_check_detection/ | false | false | self | 1 | null |
LLM Hallucination and Factcheck detector | 1 | [removed] | 2026-01-28T07:39:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4ypw/llm_hallucination_and_factcheck_detector/ | Routine_Football380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4ypw | false | null | t3_1qp4ypw | /r/LocalLLaMA/comments/1qp4ypw/llm_hallucination_and_factcheck_detector/ | false | false | self | 1 | null |
RAG Paper 26.1.27 | 4 | 1. [Evaluation of Oncotimia: An LLM based system for supporting tumour boards](http://arxiv.org/abs/2601.19899v1)
2. [When Iterative RAG Beats Ideal Evidence: A Diagnostic Study in Scientific Multi-hop Question Answering](http://arxiv.org/abs/2601.19827v1)
3. [AlignCoder: Aligning Retrieval with Target Intent for Repository-Level Code Completion](http://arxiv.org/abs/2601.19697v1)
4. [LURE-RAG: Lightweight Utility-driven Reranking for Efficient RAG](http://arxiv.org/abs/2601.19535v1)
5. [RPO-RAG: Aligning Small LLMs with Relation-aware Preference Optimization for Knowledge Graph Question Answering](http://arxiv.org/abs/2601.19225v1)
6. [Pixel-Grounded Retrieval for Knowledgeable Large Multimodal Models](http://arxiv.org/abs/2601.19060v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/components/arena) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2026-01-28T07:33:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4vb9/rag_paper_26127/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4vb9 | false | null | t3_1qp4vb9 | /r/LocalLLaMA/comments/1qp4vb9/rag_paper_26127/ | false | false | self | 4 | null |
LLM hallucination and factcheck detector | 1 | [removed] | 2026-01-28T07:30:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4tc5/llm_hallucination_and_factcheck_detector/ | Routine_Football380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4tc5 | false | null | t3_1qp4tc5 | /r/LocalLLaMA/comments/1qp4tc5/llm_hallucination_and_factcheck_detector/ | false | false | self | 1 | null |
Can 5070ti 16gb run Qwen3 235B a22b? | 0 | I recently got a 5070ti, with 32gb of ram, I saw that people say that the maximum model it can run is around 30b, can I run this MoE model on my PC? | 2026-01-28T07:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4s5b/can_5070ti_16gb_run_qwen3_235b_a22b/ | Typical_Cheek5127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4s5b | false | null | t3_1qp4s5b | /r/LocalLLaMA/comments/1qp4s5b/can_5070ti_16gb_run_qwen3_235b_a22b/ | false | false | self | 0 | null |
Hallucination and factcheck Extension | 1 | [removed] | 2026-01-28T07:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4qua/hallucination_and_factcheck_extension/ | Routine_Football380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4qua | false | null | t3_1qp4qua | /r/LocalLLaMA/comments/1qp4qua/hallucination_and_factcheck_extension/ | false | false | 1 | null | |
Hallucination and factcheck detector Extension | 1 | [removed] | 2026-01-28T07:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4nxq/hallucination_and_factcheck_detector_extension/ | Routine_Football380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4nxq | false | null | t3_1qp4nxq | /r/LocalLLaMA/comments/1qp4nxq/hallucination_and_factcheck_detector_extension/ | false | false | 1 | null | |
Running local AI agents scared me into building security practices | 1 | I've been running various AI agents locally (Moltbot, some LangChain stuff, experimenting with MCP servers). Love the control, but I had a wake-up call.
Was testing a new MCP server I found on GitHub. Turns out it had some sketchy stuff in the tool definitions that could have exfiltrated data. Nothing happened (I was sandboxed), but it made me realize how much we trust random code from the internet.
Some things I've started doing:
\- Reviewing tool definitions before installing MCP servers
\- Running agents in isolated Docker containers
\- Using a separate "AI sandbox" user account
\- Keeping a blocklist of domains agents can't reach
Anyone else paranoid about this? Or am I overthinking it?
What's your local AI security setup look like? | 2026-01-28T07:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4jvh/running_local_ai_agents_scared_me_into_building/ | Willing-Painter930 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4jvh | false | null | t3_1qp4jvh | /r/LocalLLaMA/comments/1qp4jvh/running_local_ai_agents_scared_me_into_building/ | false | false | self | 1 | null |
I made a Coding Eval, and ran it against 49 different coding agent/model combinations, including Kimi K2.5. | 68 | You may remember me from my [A guide to the best agentic tools and the best way to use them on the cheap, locally or free](https://www.reddit.com/r/LocalLLaMA/comments/1o77ag4/a_guide_to_the_best_agentic_tools_and_the_best/) post from 3 months ago. Where I submitted a big wall of text at 4 am in stream of consciousness format. For some reason, I still get random replies to it about not putting in enough effort to format it. Well I'm back, and this time I've written my own benchmarking tool for evaluating the ability of different coding agents (and models), and ran it against (as of writing) 49 different coding agent and model combinations. Are you guys entertained now?
**The Coding Eval - SanityHarness**
This is my purpose-made coding eval, that I wanted to be agent-agnostic as possible to use (I've run a lot of other coding evals and some of them are a pain in the butt to get working with many agents). I carefully curated and put together tasks across 6 different languages, specifically focusing on problems for measuring model understanding and agent capability rather than training data regurgitation. If you're interested in the implementation or want to run it yourself, check it out on [GitHub | lemon07r/SanityHarness](https://github.com/lemon07r/SanityHarness).
**The Coding Agent Leaderboard - SanityBoard**
Now, for the part you’re probably most interested in, and where I invested too many hours: [https://sanityboard.lr7.dev/](https://sanityboard.lr7.dev/) (source available on GH [here](https://github.com/lemon07r/SanityBoard)). There are currently 49 entries, and **many** more still being added. I tried to provide as much relevant data as possible, and present it in an easy to digest format with sort/filter controls and report pages with the full run data. This includes run dates, agent version numbers, etc, things that I feel are important but often left out in some leaderboards.
**Join the Discord Server! Also consider giving my GH repos a star** ☆
Consider leaving a star in my github repos, as I did put in a lot of work in these projects, and will continue doing so. If any of you would like to see a specific agent or model tested (or retested), need any help running the eval, or have any other questions about the eval or leaderboard consider joining my [Discord](https://discord.gg/rXNQXCTWDt) server (I am looking for more peeps to discuss ai and coding related topics with!)
# Some Extra Stuff, and Future Plans
This post started out as another big block of text, but I've decided to spare you guys and re-wrote most of it to separate all the extra stuff as optional reading below. This includes some usage cost analysis' and some pretty cool stuff I have planned for the future.
**MCP Server Evals**
For one, you might have noticed an "MCP" column on my leaderboard. That's right, I will eventually do runs with MCP tools enabled, but before this I have something even cooler planned. I'm going to be testing different MCP tools to see which ones make any difference (if it all), and which MCP tools are the best in their respective categories (web search, code indexing + semantic retrieval, etc), then afterwards, the best MCP combinations. I will be testing all of these in my evals; the goal is to figure what MCP tools and tool combination is best, and to see which ones might even negatively impact coding ability.
**Agent Skills**
Also going to do evals against different skills files to see if they actually help and which ones are best (these are obviously very project/task dependant but I hope we can still figure out some good blanket-use ones).
**More Agents and Models to Test**
There will be more coding agents tested. And models. Oh-My-Opencode is on my radar, I want to try testing a few different configurations to see if it's actually any better than vanilla opencode, or if it's all smoke and mirrors.
**Usage, Cost and why Some Agents Were Left Off**
AI credit plans suck. The coding agents that only support these monetization models are horrible. They wont support BYOK for a reason; they know their monetization models are downright horrendous and predatory. I was able to confirm this while monitoring the usage of some of my runs. Some agents that didn't make the cut because of this include Warp, Letta Code and Codebuff. Seriously, just support BYOK. Or at least have a decent value plan or free usage.
Here is a good example of how horrible some of these guys can be. Codebuff. 100 credits = $1. When I ran my tests against codebuff, my eval got through ONLY 9 of my 26 tasks, burning through $7.5 worth of credits. They even advertise how they use 30% less tokens than claude code or something like that. So you're telling me with codebuff you get to spend more money to use less tokens? I cannot explain how terrible this is. Maybe you'll have an idea of how bad it is, when you see below how much usage other plans or providers will give you (yes even AMP free, gives you more usage daily than you get from two months of free Codebuff credits).
* AMP Smart Mode (mixed) - $6.53
* AMP Rush Mode (mixed) - $3.8\~
* Copilot CLI GPT 5.2 High - 26 Premium Requests (basically $0.86 on pro plan)
* Copilot CLI Opus - 78 Premium Requests (expensive, no reasoning or gimped somehow, use something else)
* Codex GPT 5.2-Codex xhigh - 65% of daily, 20% of weekly (business seat)
* Codex GPT 5.2 xhigh - 100% of daily, 30% of weekly (business seat)
* Factory Gemini 3 Flash High - 1m tokens (these are all "Factory" tokens, 1m = $1)
* Factory GLM 4.7 High - 0.7m tokens
* Factory K2.5 - 0.8m tokens
* Factory Gemini 3 Pro High - 2m tokens
* Factory GPT 5.2 Codex xhigh - 2m tokens
* Factory GPT 5.1 Codex Max xhigh - 2m tokens
* Factory GPT 5.2 xhigh - 2.4m tokens
* Factory Opus 4.5 High - 3m tokens
* Kim For Coding Plan (K2.5) - Around 120-130 Req each run on OpenCode, Claude Code and Kimi CLI (with 2k weekly limit on $19 plan, this is essentially $0.30 a run).
**API Credits, Keys, And Integrity**
I'm accepting API credits/keys for testing more models and agents otherwise I will be limited to what I have access to currently (DM me). If you are an official provider for your model/agent, or have your own coding agent, feel free to reach out to me to get your stuff on my leaderboard.
Full disclosure, I do not do any manipulation of any kind and try to keep things completely fair, bias free, etc. Droid did provide me extra usage to run my evals, and Minimax has provided me a Coding Max Plan, but as you can see from my leaderboard that will not save some of them from having poor results.
I keep all my runs and can provide the entirety of them on request if anyone wants to see them for improving their model, agent or to see how valid my runs are (I do thoroughly check each of them for issues and have done complete reruns of every model and agent when I found any issues that needed fixing).
**Future Updated Model and Agent Guide**
I am going to make a revised and updated guide soon. This will cover the best coding models and agents, covering various different grounds, like best open weight models, best open source agents, best free tier setups (including both open and closed options), and best value/bang for your buck setups. I will provide some actual analysis on my coding eval results and other data, including some behind the scenes stuff and experience, or other knowledge I've gathered from talking to experienced people in the field. There are a lot of insights and things to be gathered from outside evals and leaderboards, these results don't tell the full story. | 2026-01-28T07:08:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4ftj/i_made_a_coding_eval_and_ran_it_against_49/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4ftj | false | null | t3_1qp4ftj | /r/LocalLLaMA/comments/1qp4ftj/i_made_a_coding_eval_and_ran_it_against_49/ | false | false | self | 68 | {'enabled': False, 'images': [{'id': 'iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=108&crop=smart&auto=webp&s=a4ff06f4029dd64e5b5dcecc924e40a8f092d38f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=216&crop=smart&auto=webp&s=33e93b5c68e4d9028843977a1406bf90e176892a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=320&crop=smart&auto=webp&s=34842d2be39cbba429d6072f9264d940665263df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=640&crop=smart&auto=webp&s=c0826c8cdc01b0a7eb54871c34b07712233c7ed9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=960&crop=smart&auto=webp&s=a105b7b39e8351994b397c35f8df52a05c4503bd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=1080&crop=smart&auto=webp&s=278f603e34bbacb6f713fc7b824a1b8e7131c4b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?auto=webp&s=b6559880474693cd1e760cfcd4c54429a93a1b03', 'width': 1200}, 'variants': {}}]} |
I made a coding eval, and tested 49 different coding agents and models, including Kimi K2.5. | 1 | You may remember me from my [A guide to the best agentic tools and the best way to use them on the cheap, locally or free](https://www.reddit.com/r/LocalLLaMA/comments/1o77ag4/a_guide_to_the_best_agentic_tools_and_the_best/) post from 3 months ago. Where I submitted a big wall of text at 4 am in stream of consciousness format. For some reason, I still get random replies to it about not putting in enough effort to format it. Well I'm back, and this time I've written my own benchmarking tool for evaluating the ability of different coding agents (and models), and ran it against (as of writing) 49 different coding agent and model combinations. Are you guys entertained now?
**The Coding Eval - SanityHarness**
This is my purpose-made coding eval, that I wanted to be agent-agnostic as possible to use (I've run a lot of other coding evals and some of them are a pain in the butt to get working with many agents). I carefully curated and put together tasks across 6 different languages, specifically focusing on problems for measuring model understanding and agent capability rather than training data regurgitation. If you're interested in the implementation or want to run it yourself, check it out on [GitHub | lemon07r/SanityHarness](https://github.com/lemon07r/SanityHarness).
**The Coding Agent Leaderboard - SanityBoard**
Now, for the part you’re probably most interested in, and where I invested too many hours: [https://sanityboard.lr7.dev/](https://sanityboard.lr7.dev/) (source available on GH [here](https://github.com/lemon07r/SanityBoard)). There are currently 49 entries, and **many** more still being added. I tried to provide as much relevant data as possible, and present it in an easy to digest format with sort/filter controls and report pages with the full run data. This includes run dates, agent version numbers, etc, things that I feel are important but often left out in some leaderboards.
**Join the Discord Server! Also consider giving my GH repos a star** ☆
Consider leaving a star in my github repos, as I did put in a lot of work in these projects, and will continue doing so. If any of you would like to see a specific agent or model tested (or retested), need any help running the eval, or have any other questions about the eval or leaderboard consider joining my [Discord](https://discord.gg/rXNQXCTWDt) server (I am looking for more peeps to discuss ai and coding related topics with!)
# Some Extra Stuff, and Future Plans
This post started out as another big block of text, but I've decided to spare you guys and re-wrote most of it to separate all the extra stuff as optional reading below. This includes some usage cost analysis' and some pretty cool stuff I have planned for the future.
**MCP Server Evals**
For one, you might have noticed an "MCP" column on my leaderboard. That's right, I will eventually do runs with MCP tools enabled, but before this I have something even cooler planned. I'm going to be testing different MCP tools to see which ones make any difference (if it all), and which MCP tools are the best in their respective categories (web search, code indexing + semantic retrieval, etc), then afterwards, the best MCP combinations. I will be testing all of these in my evals; the goal is to figure what MCP tools and tool combination is best, and to see which ones might even negatively impact coding ability.
**Agent Skills**
Also going to do evals against different skills files to see if they actually help and which ones are best (these are obviously very project/task dependant but I hope we can still figure out some good blanket-use ones).
**More Agents and Models to Test**
There will be more coding agents tested. And models. Oh-My-Opencode is on my radar, I want to try testing a few different configurations to see if it's actually any better than vanilla opencode, or if it's all smoke and mirrors.
**Usage, Cost and why Some Agents Were Left Off**
AI credit plans suck. The coding agents that only support these monetization models are horrible. They wont support BYOK for a reason; they know their monetization models are downright horrendous and predatory. I was able to confirm this while monitoring the usage of some of my runs. Some agents that didn't make the cut because of this include Warp, Letta Code and Codebuff. Seriously, just support BYOK. Or at least have a decent value plan or free usage.
Here is a good example of how horrible some of these guys can be. Codebuff. 100 credits = $1. When I ran my tests against codebuff, my eval got through ONLY 9 of my 26 tasks, burning through $7.5 worth of credits. They even advertise how they use 30% less tokens than claude code or something like that. So you're telling me with codebuff you get to spend more money to use less tokens? I cannot explain how terrible this is. Maybe you'll have an idea of how bad it is, when you see below how much usage other plans or providers will give you (yes even AMP free, gives you more usage daily than you get from two months of free Codebuff credits).
* AMP Smart Mode (mixed) - $6.53
* AMP Rush Mode (mixed) - $3.8\~
* Copilot CLI GPT 5.2 High - 26 Premium Requests (basically $0.86 on pro plan)
* Copilot CLI Opus - 78 Premium Requests (expensive, no reasoning or gimped somehow, use something else)
* Codex GPT 5.2-Codex xhigh - 65% of daily, 20% of weekly (business seat)
* Codex GPT 5.2 xhigh - 100% of daily, 30% of weekly (business seat)
* Factory Gemini 3 Flash High - 1m tokens (these are all "Factory" tokens, 1m = $1)
* Factory GLM 4.7 High - 0.7m tokens
* Factory K2.5 - 0.8m tokens
* Factory Gemini 3 Pro High - 2m tokens
* Factory GPT 5.2 Codex xhigh - 2m tokens
* Factory GPT 5.1 Codex Max xhigh - 2m tokens
* Factory GPT 5.2 xhigh - 2.4m tokens
* Factory Opus 4.5 High - 3m tokens
* Kim For Coding Plan (K2.5) - Around 120-130 Req each run on OpenCode, Claude Code and Kimi CLI (with 2k weekly limit on $19 plan, this is essentially $0.30 a run).
**API Credits, Keys, And Integrity**
I'm accepting API credits/keys for testing more models and agents otherwise I will be limited to what I have access to currently (DM me). If you are an official provider for your model/agent, or have your own coding agent, feel free to reach out to me to get your stuff on my leaderboard.
Full disclosure, I do not do any manipulation of any kind and try to keep things completely fair, bias free, etc. Droid did provide me extra usage to run my evals, and Minimax has provided me a Coding Max Plan, but as you can see from my leaderboard that will not save some of them from having poor results.
I keep all my runs and can provide the entirety of them on request if anyone wants to see them for improving their model, agent or to see how valid my runs are (I do thoroughly check each of them for issues and have done complete reruns of every model and agent when I found any issues that needed fixing).
**Future Updated Model and Agent Guide**
Going to make a revised and updated guide soon. This will cover the best coding models and agents, covering various different grounds, like best open weight models, best open source agents, best free tier setups (including both open and closed options), and best value/bang for your buck setups. I will provide some actual analysis on my coding eval results and other data, including some behind the scenes stuff and experience, or other knowledge I've gathered from talking to experienced people in the field. There are a lot of insights and things to be gathered from outside evals and leaderboards, these results don't tell the full story. | 2026-01-28T07:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qp4e6o/i_made_a_coding_eval_and_tested_49_different/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp4e6o | false | null | t3_1qp4e6o | /r/LocalLLaMA/comments/1qp4e6o/i_made_a_coding_eval_and_tested_49_different/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=108&crop=smart&auto=webp&s=a4ff06f4029dd64e5b5dcecc924e40a8f092d38f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=216&crop=smart&auto=webp&s=33e93b5c68e4d9028843977a1406bf90e176892a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=320&crop=smart&auto=webp&s=34842d2be39cbba429d6072f9264d940665263df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=640&crop=smart&auto=webp&s=c0826c8cdc01b0a7eb54871c34b07712233c7ed9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=960&crop=smart&auto=webp&s=a105b7b39e8351994b397c35f8df52a05c4503bd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?width=1080&crop=smart&auto=webp&s=278f603e34bbacb6f713fc7b824a1b8e7131c4b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iaIz7g1oId8FoG4RjqcoqopzchZuR62l_dunwc5Rdqg.png?auto=webp&s=b6559880474693cd1e760cfcd4c54429a93a1b03', 'width': 1200}, 'variants': {}}]} |
AMA Announcement: Moonshot AI, The Opensource Frontier Lab Behind Kimi K2.5 SoTA Model (Wednesday, 8AM-11AM PST) | 92 | Hi r/LocalLLaMA 👋
We're excited for Wednesday's guests, **The Moonshot AI Lab Team!**
**Kicking things off Wednesday, Jan. 28th, 8 AM–11 PM PST**
⚠️ **Note:** The AMA itself will be hosted in a **separate thread,** please don’t post questions here. | 2026-01-28T06:54:28 | XMasterrrr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qp46za | false | null | t3_1qp46za | /r/LocalLLaMA/comments/1qp46za/ama_announcement_moonshot_ai_the_opensource/ | false | true | default | 92 | {'enabled': True, 'images': [{'id': 'y2qj7ancf1gg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/y2qj7ancf1gg1.png?width=108&crop=smart&auto=webp&s=ab3ca5e836fb98f329024d6fa9672e0392eb68eb', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/y2qj7ancf1gg1.png?width=216&crop=smart&auto=webp&s=4890c7c830117995faa2573a785b3905e4f822ba', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/y2qj7ancf1gg1.png?width=320&crop=smart&auto=webp&s=c449b850358055d6926a2f4eaf00238c813adb8a', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/y2qj7ancf1gg1.png?width=640&crop=smart&auto=webp&s=7bb1df11c9d46ca94be0db3438449dc28e2dd48e', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/y2qj7ancf1gg1.png?width=960&crop=smart&auto=webp&s=51afe0364b47b241342e64c676d19eef0d9b49b2', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/y2qj7ancf1gg1.png?auto=webp&s=d68d3973458a7945274dad27c06de0ffa3e68e22', 'width': 1024}, 'variants': {}}]} | |
Threat intel from monitoring local AI agents: 37.8% of inputs contained attack attempts - here's what's targeting your self-hosted models | 1 | For everyone self-hosting AI agents (Ollama, vLLM, etc.) - we've been collecting threat data across deployments.
**The short version** If your agent can take actions, attackers are probing it.
**Week 3 stats**
1. 74,636 interactions analysed
2. 28,194 contained threat patterns (37.8%)
3. Detection at P50 45ms latency (real-time)
**What's targeting self-hosted setups**
1. **Data Exfiltration** (19.2%) - They want your system prompts and RAG context
2. **Jailbreaks** (12.3%) - Still the classic approach
3. **RAG/Context Poisoning** (10.0%) - If you're indexing external docs, this is your attack surface
4. **Tool Abuse** (8.1%) - MCP servers without auth are getting hammered
**New threat: Inter-Agent Attacks**
If you're running multi-agent setups, we're seeing poisoned messages designed to propagate. One compromised agent tries to compromise others.
The ClawdBot incident showed what happens when external input (emails) goes straight to your agent. Same patterns in the wild now.
Full breakdown: [https://raxe.ai/threat-intelligence](https://raxe.ai/threat-intelligence)
What security are you running on your local deployments?
| 2026-01-28T06:40:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qp3xz6/threat_intel_from_monitoring_local_ai_agents_378/ | cyberamyntas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp3xz6 | false | null | t3_1qp3xz6 | /r/LocalLLaMA/comments/1qp3xz6/threat_intel_from_monitoring_local_ai_agents_378/ | false | false | self | 1 | null |
AI Hackathon for drop-in sports apps - $100 prize (this weekend) | 0 | Hey everyone,
I’m helping judge an upcoming global AI hackathon focused on building **useful AI features for drop-in sports apps.** Things like using AI to help people discover, join, or organize local sports games more effectively.
It’s a **2-day, fully online hackathon** (free to join), with asynchronous submissions. You can start from scratch or continue something you've already been working on. Just show before/after progress during the event window
The theme: use AI (LLMs or other approaches) to solve real user problems in consumer sports apps. For example, helping users discover games, match with teammates, send personalized invites, etc.
🏗️ **Submission format**
* “Before” screenshots + 2-sentence plan
* “After” screenshots + 2-sentence summary of what changed and why
* Optional: demo video or repo link
🧠 **Judging criteria**
* Usefulness
* Creativity
* Shipped progress (within the hackathon window)
* Simplicity / usability
* Bonus points for showing real users engaging with your app or feature
📅 **Schedule**
* Saturday 10AM PT: event kickoff, submissions open
* Saturday 5pm PT: deadline to submit starting work
* Sunday 4pm PT: final submission deadline
* Sunday 6pm PT: winner announced
🏆 **Prize**
* $100 Amazon gift card to the top submission
🌍 **Open globally** – anyone can participate.
👥 Solo or team submissions welcome.
🔗 **Event page and sign-up link**:
[https://luma.com/fwljolck?tk=hRT0aC](https://luma.com/fwljolck?tk=hRT0aC)
Let me know if you have questions. Hope to see some of you in there | 2026-01-28T06:28:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qp3qkp/ai_hackathon_for_dropin_sports_apps_100_prize/ | Top-Map-9781 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp3qkp | false | null | t3_1qp3qkp | /r/LocalLLaMA/comments/1qp3qkp/ai_hackathon_for_dropin_sports_apps_100_prize/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?width=108&crop=smart&auto=webp&s=77608a4c9bac36a922cb578719b98cae87f89b29', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?width=216&crop=smart&auto=webp&s=513a3fc018e714f9472a04e51eb5ba6312ba79f5', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?width=320&crop=smart&auto=webp&s=8a91eb53e86895f7d4f0b698184122cfd6925e8c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?width=640&crop=smart&auto=webp&s=ad160202c9c1ff643e9018b24396d09e2257c251', 'width': 640}], 'source': {'height': 419, 'url': 'https://external-preview.redd.it/hOZI0YZtO8vkf4HRy1beWmi_yIndDjxq0RPtvv6w_E8.jpeg?auto=webp&s=145eb22a73257d0ad5cc484eb8710381d12ca068', 'width': 800}, 'variants': {}}]} |
When you know you nailed it! Or not. GLM-4.7-NVFP4 (B300 - Blackwell Ultra) | 13 | 2026-01-28T06:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/1qp3piq/when_you_know_you_nailed_it_or_not_glm47nvfp4/ | Lorelabbestia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp3piq | false | null | t3_1qp3piq | /r/LocalLLaMA/comments/1qp3piq/when_you_know_you_nailed_it_or_not_glm47nvfp4/ | false | false | 13 | null | ||
llama.cpp on Fedora vs on Ubuntu | 3 | Recently, I ditched Ubuntu server and started with Fedora Server. The same hardware, but I am constantly getting less tokens per second on Fedora than I had on Ubuntu. I am using pre-built llama.cpp. Is there any chance that I am getting worse results because llama.cpp pre-built binaries are actually built for Ubuntu although it says that any Linux distro can use it? | 2026-01-28T06:17:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qp3j7c/llamacpp_on_fedora_vs_on_ubuntu/ | Advanced_Skill_5051 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp3j7c | false | null | t3_1qp3j7c | /r/LocalLLaMA/comments/1qp3j7c/llamacpp_on_fedora_vs_on_ubuntu/ | false | false | self | 3 | null |
What's your exp REAP vs. base models for general inference? | 1 | No bueno | 2026-01-28T06:08:33 | ikkiyikki | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qp3d8b | false | null | t3_1qp3d8b | /r/LocalLLaMA/comments/1qp3d8b/whats_your_exp_reap_vs_base_models_for_general/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'GHMpayE4febcdUuy4VVzzyD7CZLtX1IamU1lSrDvKkQ', 'resolutions': [{'height': 179, 'url': 'https://preview.redd.it/svofiuu671gg1.png?width=108&crop=smart&auto=webp&s=a330a945907357c6cd4ada14b2966a9c24b4c9e7', 'width': 108}, {'height': 358, 'url': 'https://preview.redd.it/svofiuu671gg1.png?width=216&crop=smart&auto=webp&s=f6313a7d472361e49ec424ddd07d75c77bcde82f', 'width': 216}, {'height': 531, 'url': 'https://preview.redd.it/svofiuu671gg1.png?width=320&crop=smart&auto=webp&s=6c86cffaace2b10665661c148767d7c1e80c2d7f', 'width': 320}, {'height': 1062, 'url': 'https://preview.redd.it/svofiuu671gg1.png?width=640&crop=smart&auto=webp&s=1cf210cf14a7f819d6da7c2eb7d926b783614027', 'width': 640}, {'height': 1594, 'url': 'https://preview.redd.it/svofiuu671gg1.png?width=960&crop=smart&auto=webp&s=6269ccdea58d0691d5471d2649e5f9dccc1be04f', 'width': 960}], 'source': {'height': 1747, 'url': 'https://preview.redd.it/svofiuu671gg1.png?auto=webp&s=1c42ab8fc0ca465956f1d873b11034862c1533f7', 'width': 1052}, 'variants': {}}]} | ||
[HARDWARE HELP] Narrow watercooling blocks with taps on top | 2 | I'm trying to cram some 3090/4090 in a standard case and there are the super chunky 3 fans + thick radiator models. I'm considering switching them to watercooling but have no practical experience with it except the AIO cooler of the CPU that kind of working out of the box.
So here are my questions for experienced people that do this type of builds:
- What narrow cooling blocks with taps on top, not on side, are available for 3090/4090, preferable full metal, without this stupid acrylic side window and NO LEDS?
- What "plumbing" material do you recommend (hose/pipes, fittings, pumps, etc)?
Many thanks for your help. | 2026-01-28T05:59:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qp36qv/hardware_help_narrow_watercooling_blocks_with/ | HumanDrone8721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp36qv | false | null | t3_1qp36qv | /r/LocalLLaMA/comments/1qp36qv/hardware_help_narrow_watercooling_blocks_with/ | false | false | self | 2 | null |
Anyone got glm 4.7 flash to work correctly with ollama? | 6 | I tried it and it ran. But it only outputted garbage (not even in English). | 2026-01-28T05:54:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qp33dw/anyone_got_glm_47_flash_to_work_correctly_with/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp33dw | false | null | t3_1qp33dw | /r/LocalLLaMA/comments/1qp33dw/anyone_got_glm_47_flash_to_work_correctly_with/ | false | false | self | 6 | null |
Kuke音乐目前是什么状况,有关心的朋友一起讨论下 | 0 | The merger has encountered obstacles. Where is it at now? Is there any chance of a turnaround? The stock has been transferred to the OTC market; what will be the next steps?
Or is the announced merger just wishful thinking on the part of management, without any actual communication with the target company? In other words, is it a form of fraud? Where are the shareholders who have been harmed by the transfer from NASDAQ to OTC? Let's discuss this. | 2026-01-28T05:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1qp28l2/kuke音乐目前是什么状况有关心的朋友一起讨论下/ | WEIXIAOBO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp28l2 | false | null | t3_1qp28l2 | /r/LocalLLaMA/comments/1qp28l2/kuke音乐目前是什么状况有关心的朋友一起讨论下/ | false | false | self | 0 | null |
Question for AI/ML folks on model portability | 1 | Hey everyone! I’m exploring a tool that helps teams **switch LLM providers without rewriting prompts, embeddings, or fine-tuning formats** — basically a portability layer for AI.
Right now, switching providers often means:
* rewriting prompts
* changing tool calls
* redoing embeddings
* reformatting fine-tuning data
* dealing with different output formats
**My question is:**
Is this a real pain point for you or your team?
If yes, how often do you switch providers, and what’s the biggest friction?
If you’re open to it, I’d love to hear your thoughts (and I’m looking for beta users to test the idea).
[](https://www.reddit.com/submit/?source_id=t3_1qp1b2r) | 2026-01-28T04:46:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qp1r86/question_for_aiml_folks_on_model_portability/ | NoEntertainment8292 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp1r86 | false | null | t3_1qp1r86 | /r/LocalLLaMA/comments/1qp1r86/question_for_aiml_folks_on_model_portability/ | false | false | self | 1 | null |
Best way to convert coding/math-heavy PDFs to Markdown or text (code, formulas, tables included)? | 1 | Hey folks! I’ve been trying to convert tech-heavy books (like CLRS, Skiena, Hacker’s Delight, etc.) from PDF to clean Markdown or text for use with NotebookLM. These books are full of code, formulas, complex tables, and images — so preserving *everything* is key.
I’ve tested MinerU (took 40 mins for one book, formatting was kinda janky). I’m curious how others have done this. Has anyone compared tools like Hunyuan OCR, PaddleOCR, OLmOCR, Dockling, MistralAPI, Marker, MarkItDown, etc.?
I’m running this on a MacBook Pro with an M3 Pro chip (12-core CPU, 18-core GPU, 16-core Neural Engine), so local or cheap-ish options are totally fine.
Any tools or workflows that actually *nail* the formatting (code blocks, math, tables) and don’t miss content? Also, any tips for splitting/post-processing large books (like 1000 pages)?
Appreciate any help! | 2026-01-28T04:22:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qp19c9/best_way_to_convert_codingmathheavy_pdfs_to/ | A-n-d-y-R-e-d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp19c9 | false | null | t3_1qp19c9 | /r/LocalLLaMA/comments/1qp19c9/best_way_to_convert_codingmathheavy_pdfs_to/ | false | false | self | 1 | null |
Update: Reward Shaping w/ Pokemon Red | 4 | Last week I shared the first pass of Tesserack (https://tesserack.ai), a browser-based LLM + RL platform for playing Pokemon Red. I didn't really have a clear focus then, so I kept playing around and taking in everyone's awesome feedback. I've landed on a pretty interesting research angle.
I was reading this paper from the Allen AI Institute about OLMoCR-2(https://allenai.org/blog/olmocr-2) - they built a pipeline for automatically constructing unit tests as verifiable rewards. Naturally my mind went to Pokemon: could we apply the same approach and construct deterministic unit tests to define reward functions?
I had Claude Vision (sorry!) read 55 pages of the Prima Strategy Guide and extracted 675 tests across 41 locations. The tests are organized into tiers:
* T1: Micro movement (walked toward objective)
* T2: Landmarks (entered a building, reached a new area)
* T3: Objectives (got starter Pokemon, earned a badge)
If you visit the site and see a Twitch stream running, that's my headless Mac setup training the agent live. Beautiful chaos.
Tesserack: [https://tesserack.ai](https://tesserack.ai)
GitHub: [https://github.com/sidmohan0/tesserack](https://github.com/sidmohan0/tesserack)
From the local LLaMA side, I think this opens up lots of interesting opportunities for testing and evals. In this setup, the LLM is essentially acting as the compiler, translating human instructions (the guide) into reward signals.
| 2026-01-28T04:18:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qp15x4/update_reward_shaping_w_pokemon_red/ | Efficient-Proof-1824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qp15x4 | false | null | t3_1qp15x4 | /r/LocalLLaMA/comments/1qp15x4/update_reward_shaping_w_pokemon_red/ | false | false | self | 4 | null |
I am experimenting with Alpaca on Fedora. Why does something like this always happen? | 2 | Older versions of Alpaca worked fine, but for the past few months running models has not worked at all. I am running an 11th gen intel i5 just the integrated graphics. I have tried deleting and reinstalling multiple times.
The second photo is just to demonstrate that the wheel just spins endlessly.
Thanks for the help! | 2026-01-28T03:55:07 | https://www.reddit.com/gallery/1qp0o0e | HatBoxUnworn | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qp0o0e | false | null | t3_1qp0o0e | /r/LocalLLaMA/comments/1qp0o0e/i_am_experimenting_with_alpaca_on_fedora_why_does/ | false | false | default | 2 | null |
For those using hosted inference providers (Together, Fireworks, Baseten, RunPod, Modal) - what do you love and hate? | 1 | Curious to hear from folks actually using these hosted inference platforms in production.
Companies like Together.ai, Fireworks.ai, Baseten, Modal and RunPod are raising hundreds of millions at $3-5B+ valuations. But I'm wondering - what's the actual user experience like and why they are able to thrive in presence of cloud providers which themselves offer GPUs (eg AWS Sagemake and like) ?
**If you're using any of these (or similar providers), would love to know:**
**What works well:**
* What made you choose them over self-hosting?
* What specific features/capabilities do you rely on?
* Price/performance compared to alternatives?
**What's frustrating:**
* Any pain points with pricing, reliability, or features?
* Things you wish they did differently?
* Dealbreakers that made you switch providers or consider alternatives?
**Context:** I'm exploring this space and trying to understand what actually matters to teams running inference (or fine tuning) at scale vs. what the marketing says.
Not affiliated with any provider - just doing research. Appreciate any real-world experiences! | 2026-01-28T03:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qozrne/for_those_using_hosted_inference_providers/ | Dramatic_Strain7370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qozrne | false | null | t3_1qozrne | /r/LocalLLaMA/comments/1qozrne/for_those_using_hosted_inference_providers/ | false | false | self | 1 | null |
Anyone got Macmini 4 to work with Ollama model? | 0 | I tried but the tool kept on looking for Anthropic keys and models. | 2026-01-28T02:57:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qozdqx/anyone_got_macmini_4_to_work_with_ollama_model/ | ManufacturerNo8056 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qozdqx | false | null | t3_1qozdqx | /r/LocalLLaMA/comments/1qozdqx/anyone_got_macmini_4_to_work_with_ollama_model/ | false | false | self | 0 | null |
We're building a code intelligence platform that actually understands multi-repo enterprise codebases. Roast our approach. | 3 | I'm building a code intelligence platform that answers questions like *"who owns this service?"* and *"what breaks if I change this event format?"* across 30+ repos.
Our approach: Parse code with tree-sitter AST → Extract nodes and relationships → Populate Neo4j knowledge graph → Query with natural language.
How It Works:
Code File
│
├── tree-sitter AST parse
│
├── Extractors (per file type):
│ ├── CodeNodeExtractor → File, Class, Function nodes
│ ├── CommitNodeExtractor → Commit, Person nodes + TOUCHED relationships
│ ├── DiExtractor → Spring → INJECTS relationships
│ ├── MessageBrokerExtractor→ Kafka listeners → CONSUMES_FROM relationships
│ ├── HttpClientExtractor → RestTemplate calls → CALLS_SERVICE
│ └── ... 15+ more extractors
│
├── Enrichers (add context):
│ ├── JavaSemanticEnricher → Classify: Service? Controller? Repository?
│ └── ConfigPropertyEnricher→ Link ("${prop}") to config files
│
└── Neo4j batch write (MERGE nodes + relationships)
**The graph we build:**
(:Person)-[:TOUCHED]->(:Commit)-[:TOUCHED]->(:File)
(:File)-[:CONTAINS_CLASS]->(:Class)-[:HAS_METHOD]->(:Function)
(:Class)-[:INJECTS]->(:Class)
(:Class)-[:PUBLISHES_TO]->(:EventChannel)
(:Class)-[:CONSUMES_FROM]->(:EventChannel)
(:ConfigFile)-[:DEFINES_PROPERTY]->(:ConfigProperty)
(:File)-[:USES_PROPERTY]->(:ConfigProperty)
**The problem we're hitting:**
Every new framework or pattern = new extractor.
* Customer uses Feign clients? Write FeignExtractor.
* Uses AWS SQS instead of Kafka? Write SqsExtractor.
* Uses custom DI framework? Write another extractor.
* Spring Boot 2 vs 3 annotations differ? Handle both.
We have 40+ node types and 60+ relationship types now. Each extractor is imperative pattern-matching on AST nodes. It works, but:
1. Maintenance nightmare - Every framework version bump can break extractors
2. Doesn't generalize - Works for our POC customer, but what about the next customer with different stack?
3. No semantic understanding - We can extract \`@KafkaListener\`but can't answer "what's our messaging strategy?"
**Questions:**
1. Anyone built something similar and found a better abstraction?
2. How do you handle cross-repo relationships? (Config in repo A, code in repo B, deployment values in repo C)
Happy to share more details or jump on a call. DMs open. | 2026-01-28T02:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1qoz2fv/were_building_a_code_intelligence_platform_that/ | TraditionalDegree333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoz2fv | false | null | t3_1qoz2fv | /r/LocalLLaMA/comments/1qoz2fv/were_building_a_code_intelligence_platform_that/ | false | false | self | 3 | null |
Got tired of testing models on Apple Silicon so I made a test bench. Releasing for free shortly | 0 | 2026-01-28T02:42:02 | https://devpadapp.com/anubis/index.html | peppaz | devpadapp.com | 1970-01-01T00:00:00 | 0 | {} | 1qoz0zy | false | null | t3_1qoz0zy | /r/LocalLLaMA/comments/1qoz0zy/got_tired_of_testing_models_on_apple_silicon_so_i/ | false | false | default | 0 | null | |
'the ordinary' and the brilliant | 0 | How | 2026-01-28T02:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qoyo7s/the_ordinary_and_the_brilliant/ | Sensitive_Housing_62 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoyo7s | false | null | t3_1qoyo7s | /r/LocalLLaMA/comments/1qoyo7s/the_ordinary_and_the_brilliant/ | false | false | self | 0 | null |
Issues Compiling llama.cpp for the GFX1031 Platform (For LMS Use) | 1 | I recently saw a post of someone getting ROCm working on the gfx1031 platform by compiling an llama.cpp for my platform only. Decided to check it out, but I've running into a lot of errors that I shouldn't be getting. I've talked to some people from some DC servers (LocalLLM and LMS) and even we couldn't figure it out. What could be the issues?
This was the command used for compiling:
cmake -B build -G "Ninja" -DGGML\_HIP=ON -DAMDGPU\_TARGETS=gfx1031 -DCMAKE\_C\_COMPILER="C:\\Program Files\\AMD\\ROCm\\7.1\\bin\\clang.exe" -DCMAKE\_CXX\_COMPILER="C:\\Program Files\\AMD\\ROCm\\7.1\\bin\\clang++.exe" -DCMAKE\_PREFIX\_PATH="C:\\Program Files\\AMD\\ROCm\\7.1" -DCMAKE\_BUILD\_TYPE=Release -DHIP\_PLATFORM=amd -DLLAMA\_CURL=OFF -DCMAKE\_HIP\_FLAGS="--rocm-device-lib-path=C:\\Program Files\\AMD\\ROCm\\7.1\\amdgcn\\bitcode" | 2026-01-28T02:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qoyaox/issues_compiling_llamacpp_for_the_gfx1031/ | FHRacing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoyaox | false | null | t3_1qoyaox | /r/LocalLLaMA/comments/1qoyaox/issues_compiling_llamacpp_for_the_gfx1031/ | false | false | self | 1 | null |
My build. What did I forget? | 8 | Threadripper 9975WX on WRX90 SAGE with 8x32 RDIMM ECC, 2 Pro 6000 Max Qs and 1 Pro 5000 Max Q. 8TB SSD and 4TB SSD. | 2026-01-28T01:58:40 | Ok_Letter_8704 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qoy03w | false | null | t3_1qoy03w | /r/LocalLLaMA/comments/1qoy03w/my_build_what_did_i_forget/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'jzl9vb1syzfg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/jzl9vb1syzfg1.jpeg?width=108&crop=smart&auto=webp&s=cb2c60f015d5cff351465bba498477bd04926d38', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/jzl9vb1syzfg1.jpeg?width=216&crop=smart&auto=webp&s=db4917f25d4e49aba71a254afbb3ef0f4ccf6cc8', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/jzl9vb1syzfg1.jpeg?width=320&crop=smart&auto=webp&s=a1a7b2dca79b149bd896ef6f30f396704f61dace', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/jzl9vb1syzfg1.jpeg?width=640&crop=smart&auto=webp&s=564675eb015d5f9fadec6c52428f1afe0fd8de3d', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/jzl9vb1syzfg1.jpeg?width=960&crop=smart&auto=webp&s=2aa365033cb26137b9f3e36f8ceba730008a33d7', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/jzl9vb1syzfg1.jpeg?width=1080&crop=smart&auto=webp&s=740f6d18ed5e748b8cd6129f0187a3b0b08267c0', 'width': 1080}], 'source': {'height': 3000, 'url': 'https://preview.redd.it/jzl9vb1syzfg1.jpeg?auto=webp&s=b32b49f7dbb60faaf86daecb75abb8016c39762a', 'width': 4000}, 'variants': {}}]} | |
Fileshed: Open WebUI tool — Give your LLM a persistent workspace with file storage, SQLite, archives, and collaboration. | 9 | # 🗂️🛠️ Fileshed — A persistent workspace for your LLM
**Store, organize, collaborate, and share files across conversations.**
# What is Fileshed?
Fileshed gives your LLM a persistent workspace. It provides:
* 📂 **Persistent storage** — Files survive across conversations
* 🗃️ **Structured data** — Built-in SQLite databases, surgical file edits by line or pattern
* 🔄 **Convert data** — ffmpeg for media, pandoc to create LaTeX and PDF
* 📝 **Examine and modify files** — cat, touch, mkdir, rm, cp, mv, tar, gzip, zip, xxd... Work in text and binary mode
* 🛡️ **Integrity** — Automatic Git versioning, safe editing with file locks
* 🌐 **Network I/O** (optional) — Download files and clone repositories (disabled by default, admin-controlled)
* 🧠 **Context-efficient operations** — Process files without loading them into the conversation (grep, sed, awk, curl...)
* 🔒 **Security** — Sandboxed per user, command whitelist, network disabled by default, quotas
* 👥 **Collaboration** — Team workspaces with read-only or read-write access
* 📤 **Download links** — Download your files directly with a download link
* 🔧 **100+ tools** — Text processing, archives, media, JSON, document conversion...
# Typical Use Cases
* 💾 **Remember things** — Save scripts, notes, configs for future conversations
* 📊 **Analyze data** — Query CSVs and databases without loading them into context
* 🎬 **Process media** — Convert videos, resize images, extract audio
* 📄 **Generate documents** — Create PDFs, LaTeX reports, markdown docs
* 🔧 **Build projects** — Maintain code, configs, and data across sessions
* 👥 **Collaborate** — Share files with your team in group workspaces
* 📦 **Package & deliver** — Create archives and download links for users
* 🌐 **Download large data** — Fetch files from the internet directly to disk, bypassing context limits
# How to Use
**Just talk naturally!** You don't need to know the function names — the LLM figures it out.
# Example conversations
>**You:** "Save this Python script for later, call it utils.py"
>**You:** "Download the list of countries from restcountries.com, put it in a database, and tell me the 10 largest by area"
>**You:** "Take the PDF I uploaded and convert it to Word"
>**You:** "Create a zip of all the reports and give me a download link"
>**You:** "What files do I have?"
>**You:** "Remember: my API key is xyz123"
# Advanced example (tested with a 20B model)
>**You:** "Download data about all countries (name, area, population) from restcountries.com. Convert to CSV, load into SQLite, add a density column (population/area), sort by density, export as CSV, zip it, and give me a download link."
See [screen capture](https://raw.githubusercontent.com/Fade78/Fileshed/main/assets/Fileshed_dl_to_sqlite_to_archive.png).
# How It Works
Fileshed provides four storage zones:
📥 Uploads → Files you give to the LLM (read-only for it)
📦 Storage → LLM's personal workspace (read/write)
📚 Documents → Version-controlled with Git (automatic history!)
👥 Groups → Shared team workspaces (requires group= parameter)
All operations use the `zone=` parameter to specify where to work.
# Under the Hood
*What the LLM does internally when you make requests:*
# Basic File Operations
# List files
shed_exec(zone="storage", cmd="ls", args=["-la"])
# Create a directory
shed_exec(zone="storage", cmd="mkdir", args=["-p", "projects/myapp"])
# Read a file
shed_exec(zone="storage", cmd="cat", args=["config.json"])
# Search in files
shed_exec(zone="storage", cmd="grep", args=["-r", "TODO", "."])
# Copy a file
shed_exec(zone="storage", cmd="cp", args=["draft.txt", "final.txt"])
# Redirect output to file (like shell > redirection)
shed_exec(zone="storage", cmd="jq",
args=["-r", ".[] | [.name, .value] | @csv", "data.json"],
stdout_file="output.csv")
# Create and Edit Files
# Create a new file (overwrite=True to replace entire content)
shed_patch_text(zone="storage", path="notes.txt", content="Hello world!", overwrite=True)
# Append to a file
shed_patch_text(zone="storage", path="log.txt", content="New entry\n", position="end")
# Insert before line 5 (line numbers start at 1)
shed_patch_text(zone="storage", path="file.txt", content="inserted\n", position="before", line=5)
# Replace a pattern
shed_patch_text(zone="storage", path="config.py", content="DEBUG=False",
pattern="DEBUG=True", position="replace")
# Git Operations (Documents Zone)
# View history
shed_exec(zone="documents", cmd="git", args=["log", "--oneline", "-10"])
# See changes
shed_exec(zone="documents", cmd="git", args=["diff", "HEAD~1"])
# Create a file with commit message
shed_patch_text(zone="documents", path="report.md", content="# Report\n...",
overwrite=True, message="Initial draft")
# Group Collaboration
# List your groups
shed_group_list()
# Work in a group
shed_exec(zone="group", group="team-alpha", cmd="ls", args=["-la"])
# Create a shared file
shed_patch_text(zone="group", group="team-alpha", path="shared.md",
content="# Shared Notes\n", overwrite=True, message="Init")
# Copy a file to a group
shed_copy_to_group(src_zone="storage", src_path="report.pdf",
group="team-alpha", dest_path="reports/report.pdf")
# Download Links
Download links require authentication — the user must be logged in to Open WebUI.
# Create a download link
shed_link_create(zone="storage", path="report.pdf")
# Returns: {"clickable_link": "[📥 Download report.pdf](https://...)", "download_url": "...", ...}
# List your links
shed_link_list()
# Delete a link
shed_link_delete(file_id="abc123")
>⚠️ **Note:** Links work only for authenticated users. They cannot be shared publicly.
# Download Large Files from Internet
When network is enabled (`network_mode="safe"` or `"all"`), you can download large files directly to storage without context limits:
# Download a file (goes to disk, not context!)
shed_exec(zone="storage", cmd="curl", args=["-L", "-o", "dataset.zip", "https://example.com/large-file.zip"])
# Check the downloaded file
shed_exec(zone="storage", cmd="ls", args=["-lh", "dataset.zip"])
# Extract it
shed_unzip(zone="storage", src="dataset.zip", dest="dataset/")
This bypasses context window limits — you can download gigabytes of data.
# ZIP Archives
# Create a ZIP from a folder
shed_zip(zone="storage", src="projects/myapp", dest="archives/myapp.zip")
# Include empty directories in the archive
shed_zip(zone="storage", src="projects", dest="backup.zip", include_empty_dirs=True)
# Extract a ZIP
shed_unzip(zone="storage", src="archive.zip", dest="extracted/")
# List ZIP contents without extracting
shed_zipinfo(zone="storage", path="archive.zip")
# SQLite Database
# Import a CSV into SQLite (fast, no context pollution!)
shed_sqlite(zone="storage", path="data.db", import_csv="sales.csv", table="sales")
# Query the database
shed_sqlite(zone="storage", path="data.db", query="SELECT * FROM sales LIMIT 10")
# Export to CSV
shed_sqlite(zone="storage", path="data.db", query="SELECT * FROM sales", output_csv="export.csv")
# File Upload Workflow
When a user uploads files, always follow this workflow:
# Step 1: Import the files
shed_import(import_all=True)
# Step 2: See what was imported
shed_exec(zone="uploads", cmd="ls", args=["-la"])
# Step 3: Move to permanent storage
shed_move_uploads_to_storage(src="document.pdf", dest="document.pdf")
# Reading and Writing Files
# Reading files
Use `shed_exec()` with shell commands:
shed_exec(zone="storage", cmd="cat", args=["file.txt"]) # Entire file
shed_exec(zone="storage", cmd="head", args=["-n", "20", "file.txt"]) # First 20 lines
shed_exec(zone="storage", cmd="tail", args=["-n", "50", "file.txt"]) # Last 50 lines
shed_exec(zone="storage", cmd="sed", args=["-n", "10,20p", "file.txt"]) # Lines 10-20
# Writing files
Two workflows available:
|Workflow|Function|Use when|
|:-|:-|:-|
|**Direct Write**|`shed_patch_text()`|Quick edits, no concurrency concerns|
|**Locked Edit**|`shed_lockedit_*()`|Multiple users, need rollback capability|
Most of the time, use `shed_patch_text()` — it's simpler and sufficient for typical use cases.
# Shell Commands First
Use `shed_exec()` for **all operations that shell commands can do**. Only use `shed_patch_text()` for creating or modifying file **content**.
# ✅ CORRECT - use mkdir for directories
shed_exec(zone="storage", cmd="mkdir", args=["-p", "projects/2024"])
# ❌ WRONG - don't use patch_text to create directories
shed_patch_text(zone="storage", path="projects/2024/.keep", content="")
# Function Reference
# Shell Execution (1 function)
|Function|Description|
|:-|:-|
|`shed_exec(zone, cmd, args=[], stdout_file=None, stderr_file=None, group=None)`|Execute shell commands (use cat/head/tail to READ files, stdout\_file= to redirect output)|
# File Writing (2 functions)
|Function|Description|
|:-|:-|
|`shed_patch_text(zone, path, content, ...)`|THE standard function to write/create text files|
|`shed_patch_bytes(zone, path, content, ...)`|Write binary data to files|
# File Operations (3 functions)
|Function|Description|
|:-|:-|
|`shed_delete(zone, path, group=None)`|Delete files/folders|
|`shed_rename(zone, old_path, new_path, group=None)`|Rename/move files within zone|
|`shed_tree(zone, path='.', depth=3, group=None)`|Directory tree view|
# Locked Edit Workflow (5 functions)
|Function|Description|
|:-|:-|
|`shed_lockedit_open(zone, path, group=None)`|Lock file and create working copy|
|`shed_lockedit_exec(zone, path, cmd, args=[], group=None)`|Run command on locked file|
|`shed_lockedit_overwrite(zone, path, content, append=False, group=None)`|Write to locked file|
|`shed_lockedit_save(zone, path, group=None, message=None)`|Save changes and unlock|
|`shed_lockedit_cancel(zone, path, group=None)`|Discard changes and unlock|
# Zone Bridges (5 functions)
|Function|Description|
|:-|:-|
|`shed_move_uploads_to_storage(src, dest)`|Move from Uploads to Storage|
|`shed_move_uploads_to_documents(src, dest, message=None)`|Move from Uploads to Documents|
|`shed_copy_storage_to_documents(src, dest, message=None)`|Copy from Storage to Documents|
|`shed_move_documents_to_storage(src, dest, message=None)`|Move from Documents to Storage|
|`shed_copy_to_group(src_zone, src_path, group, dest_path, message=None, mode=None)`|Copy to a group|
# Archives (3 functions)
|Function|Description|
|:-|:-|
|`shed_zip(zone, src, dest='', include_empty_dirs=False)`|Create ZIP archive|
|`shed_unzip(zone, src, dest='')`|Extract ZIP archive|
|`shed_zipinfo(zone, path)`|List ZIP contents|
# Data & Analysis (2 functions)
|Function|Description|
|:-|:-|
|`shed_sqlite(zone, path, query=None, ...)`|SQLite queries and CSV import|
|`shed_file_type(zone, path)`|Detect file MIME type|
# File Utilities (3 functions)
|Function|Description|
|:-|:-|
|`shed_convert_eol(zone, path, to='unix')`|Convert line endings (LF/CRLF)|
|`shed_hexdump(zone, path, offset=0, length=256)`|Hex dump of binary files|
|`shed_force_unlock(zone, path, group=None)`|Force unlock stuck files|
# Download Links (3 functions)
|Function|Description|
|:-|:-|
|`shed_link_create(zone, path, group=None)`|Create download link|
|`shed_link_list()`|List your download links|
|`shed_link_delete(file_id)`|Delete a download link|
# Groups (4 functions)
|Function|Description|
|:-|:-|
|`shed_group_list()`|List your groups|
|`shed_group_info(group)`|Group details and members|
|`shed_group_set_mode(group, path, mode)`|Change file permissions|
|`shed_group_chown(group, path, new_owner)`|Transfer file ownership|
# Info & Utilities (6 functions)
|Function|Description|
|:-|:-|
|`shed_import(filename=None, import_all=False)`|Import uploaded files|
|`shed_help(howto=None)`|Documentation and guides|
|`shed_stats()`|Storage usage statistics|
|`shed_parameters()`|Configuration info|
|`shed_allowed_commands()`|List allowed shell commands|
|`shed_maintenance()`|Cleanup expired locks|
**Total: 37 functions**
# Installation
1. Copy `Fileshed.py` to your Open WebUI tools directory
2. Enable the tool in Admin Panel → Tools
3. **Important:** Enable Native Function Calling:
* Admin Panel → Settings → Models → \[Select Model\] → Advanced Parameters → Function Calling → "Native"
# Configuration (Valves)
|Setting|Default|Description|
|:-|:-|:-|
|`storage_base_path`|`/app/backend/data/user_files`|Root storage path|
|`quota_per_user_mb`|1000|User quota in MB|
|`quota_per_group_mb`|2000|Group quota in MB|
|`max_file_size_mb`|300|Max file size|
|`lock_max_age_hours`|24|Max lock duration before expiration|
|`exec_timeout_default`|30|Default command timeout (seconds)|
|`exec_timeout_max`|300|Maximum allowed timeout (seconds)|
|`group_default_mode`|`group`|Default write mode: `owner`, `group`, `owner_ro`|
|`network_mode`|`disabled`|`disabled`, `safe`, or `all`|
|`openwebui_api_url`|`http://localhost:8080`|Base URL for download links|
|`max_output_default`|50000|Default output truncation (\~50KB)|
|`max_output_absolute`|5000000|Absolute max output (\~5MB)|
# Security
* **Sandboxed**: Each user has isolated storage
* **Chroot protection**: No path traversal attacks
* **Command whitelist**: Only approved commands allowed
* **Network disabled by default**: Admin must enable
* **Quotas**: Storage limits per user and group
# License
MIT License — See LICENSE file for details.
# Authors
* **Fade78** — Original author
* **Claude Opus 4.5** — Co-developer | 2026-01-28T01:36:14 | https://github.com/Fade78/Fileshed | Fade78 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qoxhmi | false | null | t3_1qoxhmi | /r/LocalLLaMA/comments/1qoxhmi/fileshed_open_webui_tool_give_your_llm_a/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'kWQvh-cwo46CScjLIKFPJrgPFl-xPMESDSpVRJZEUq8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kWQvh-cwo46CScjLIKFPJrgPFl-xPMESDSpVRJZEUq8.png?width=108&crop=smart&auto=webp&s=b3e331c29cf395efacad565b067d0a12dcf7a1df', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kWQvh-cwo46CScjLIKFPJrgPFl-xPMESDSpVRJZEUq8.png?width=216&crop=smart&auto=webp&s=80f892a02256d66f35ddc17f45b2cfce8460adb6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kWQvh-cwo46CScjLIKFPJrgPFl-xPMESDSpVRJZEUq8.png?width=320&crop=smart&auto=webp&s=edbd7dcf338f2432b0fe4ed8da605954a1c21bf8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kWQvh-cwo46CScjLIKFPJrgPFl-xPMESDSpVRJZEUq8.png?width=640&crop=smart&auto=webp&s=a447ff41651ea5501ea65ca03b5e592b6287a870', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kWQvh-cwo46CScjLIKFPJrgPFl-xPMESDSpVRJZEUq8.png?width=960&crop=smart&auto=webp&s=4e56ced743ffa5e11a4be6e683657dc9b9deaec7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kWQvh-cwo46CScjLIKFPJrgPFl-xPMESDSpVRJZEUq8.png?width=1080&crop=smart&auto=webp&s=c4ee54f419c6e207d7a23c5496f912065c7db728', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kWQvh-cwo46CScjLIKFPJrgPFl-xPMESDSpVRJZEUq8.png?auto=webp&s=00ec904edb379e8532fb0ed1939fb61ad2c5980f', 'width': 1200}, 'variants': {}}]} | |
local-vision-bridge: OpenWebUI Function to intercept images, send them to a vision capable model, and forward description of images to text only model | 2 | Perhaps only useful for my specific setup, but just in case here it is. I have a 3090 and a 3060, and I run larger moe models on system ram and the 3090. I would like it if they could get images. This Function intercepts images and sends them to an openai compat endpoint with instructions to create a detailed description, then inserts that text into the prompt instead of the image.
It handles multiturn conversations by caching the results of the first tool call, because it doesnt remove the image from the conversation itself (or it would look poor visually)
Hopefully it helps someone | 2026-01-28T01:32:53 | https://github.com/feliscat/local-vision-bridge | Spectrum1523 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qoxeve | false | null | t3_1qoxeve | /r/LocalLLaMA/comments/1qoxeve/localvisionbridge_openwebui_function_to_intercept/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'WuEPAPhSO0uGH7VhzGmDQgyE_JtN3BcPQb1Hd6pEWbQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WuEPAPhSO0uGH7VhzGmDQgyE_JtN3BcPQb1Hd6pEWbQ.png?width=108&crop=smart&auto=webp&s=4bd3a91eac4c6a17fa39ca188a6f439f8815d6c3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WuEPAPhSO0uGH7VhzGmDQgyE_JtN3BcPQb1Hd6pEWbQ.png?width=216&crop=smart&auto=webp&s=3f8cf3ae69992ed639c7c9697e9c86ee0e2160f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WuEPAPhSO0uGH7VhzGmDQgyE_JtN3BcPQb1Hd6pEWbQ.png?width=320&crop=smart&auto=webp&s=9da36ab7168ee2b35e1ad92b3ac79e005106f40f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WuEPAPhSO0uGH7VhzGmDQgyE_JtN3BcPQb1Hd6pEWbQ.png?width=640&crop=smart&auto=webp&s=304a9709b97cea9b54329993bcb222a1c9e9e3d1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WuEPAPhSO0uGH7VhzGmDQgyE_JtN3BcPQb1Hd6pEWbQ.png?width=960&crop=smart&auto=webp&s=757a90b068c585104600d679f17c1bef0a1b914e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WuEPAPhSO0uGH7VhzGmDQgyE_JtN3BcPQb1Hd6pEWbQ.png?width=1080&crop=smart&auto=webp&s=09edf16080ce2c491f2cd5498846a8662e3b28b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WuEPAPhSO0uGH7VhzGmDQgyE_JtN3BcPQb1Hd6pEWbQ.png?auto=webp&s=e4bb2b206cfa727eb4a072a42fb1c358d69a21e3', 'width': 1200}, 'variants': {}}]} | |
What's the image generation, video generation, and voice generation equivalents of vLLM + VS Codium + Kilo Code? | 1 | They're all open source, self-hostable, and no telemetry solutions. Are there equivalent ways to generate media? | 2026-01-28T01:15:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qowzxu/whats_the_image_generation_video_generation_and/ | jinnyjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowzxu | false | null | t3_1qowzxu | /r/LocalLLaMA/comments/1qowzxu/whats_the_image_generation_video_generation_and/ | false | false | self | 1 | null |
Recent influx of machine generated opinions here | 1 | [removed] | 2026-01-28T01:14:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qowzqg/recent_influx_of_machine_generated_opinions_here/ | synth_mania | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowzqg | false | null | t3_1qowzqg | /r/LocalLLaMA/comments/1qowzqg/recent_influx_of_machine_generated_opinions_here/ | false | false | self | 1 | null |
Mechanical engineer, no CS background, 2 years building an AI memory system. Need brutal feedback. | 0 | ​
I'm a mechanical engineer. No CS degree. I work in oil & gas.
Two years ago, ChatGPT's memory pissed me off. It would confidently tell me wrong things—things I had corrected before. So I started building.
Two years because I'm doing this around a full-time job, family, kids—not two years of heads-down coding.
\*\*The problem I'm solving:\*\*
RAG systems have a "confident lies" problem. You correct something, but the old info doesn't decay—it just gets buried. Next retrieval, the wrong answer resurfaces. In enterprise settings (healthcare, legal, finance), this is a compliance nightmare.
\*\*What I built:\*\*
SVTD (Surgical Vector Trust Decay). When a correction happens, the old memory's trust weight decays. It doesn't get deleted—it enters a "ghost state" where it's suppressed but still auditable. New info starts at trust = 1.0. High trust wins at retrieval.
Simple idea. Took a long time to get right.
\*\*Where I'm at:\*\*
\- Demo works
\- One AI safety researcher validated it and said it has real value
\- Zero customers
\- Building at night after the kids are asleep
I'm at the point where I need to figure out: is this something worth continuing, or should I move on?
I've been posting on LinkedIn and X. Mostly silence or people who want to "connect" but never follow up.
Someone told me Reddit is where the real builders are. The ones who'll either tell me this is shit or tell me it has potential.
\*\*What I'm looking for:\*\*
Beta testers. People who work with RAG systems and deal with memory/correction issues. I want to see how this survives the real world.
If you think this is stupid, tell me why. If you think it's interesting, I'd love to show you the demo.
\*\*Site:\*\* MemoryGate.io
Happy to answer any technical questions in the comments. | 2026-01-28T01:13:42 | https://www.reddit.com/r/LocalLLaMA/comments/1qowyrd/mechanical_engineer_no_cs_background_2_years/ | memorygate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowyrd | false | null | t3_1qowyrd | /r/LocalLLaMA/comments/1qowyrd/mechanical_engineer_no_cs_background_2_years/ | false | false | self | 0 | null |
can we talk about the bot comments? | 1 | [removed] | 2026-01-28T01:09:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qowv7r/can_we_talk_about_the_bot_comments/ | reincarnated_hate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowv7r | false | null | t3_1qowv7r | /r/LocalLLaMA/comments/1qowv7r/can_we_talk_about_the_bot_comments/ | false | false | self | 1 | null |
Mods: there are too many bot comments here | 1 | [removed] | 2026-01-28T01:05:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qowrsv/mods_there_are_too_many_bot_comments_here/ | reincarnated_hate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowrsv | false | null | t3_1qowrsv | /r/LocalLLaMA/comments/1qowrsv/mods_there_are_too_many_bot_comments_here/ | false | false | self | 1 | null |
Deepseek OCR 2 | 14 | Deepseek released a new OCR model: deepseek-ai/DeepSeek-OCR-2
https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
Markdown extraction seems to be very strong.
It seems tho it's been only trained on Eng/Chinese data. | 2026-01-28T01:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qowq8r/deepseek_ocr_2/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowq8r | false | null | t3_1qowq8r | /r/LocalLLaMA/comments/1qowq8r/deepseek_ocr_2/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=108&crop=smart&auto=webp&s=7073858f311594ae3eee3bd9a32b0ca841578132', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=216&crop=smart&auto=webp&s=bade97dc53ced95c49485861c334f8eea3f48bcd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=320&crop=smart&auto=webp&s=21a8efa9941c0e487565da845f6a9e75bf8fe03e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=640&crop=smart&auto=webp&s=9a35ff741fcd21d9b3346fefa618503befa19d18', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=960&crop=smart&auto=webp&s=5fabc0bf83f5c1cb62cf6f3c483559d74ad8c3a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=1080&crop=smart&auto=webp&s=e6521d4d05f07da5c45626b3740e8e4dbad72861', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?auto=webp&s=1f9751835df25e95ef9e0c6e53d44a54a27b2db9', 'width': 1200}, 'variants': {}}]} |
I'm tired of the bot comments | 1 | [removed] | 2026-01-28T01:01:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qowogx/im_tired_of_the_bot_comments/ | synth_mania | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowogx | false | null | t3_1qowogx | /r/LocalLLaMA/comments/1qowogx/im_tired_of_the_bot_comments/ | false | false | self | 1 | null |
I'm sick of the bot comments | 1 | [removed] | 2026-01-28T00:59:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qowmvy/im_sick_of_the_bot_comments/ | synth_mania | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowmvy | false | null | t3_1qowmvy | /r/LocalLLaMA/comments/1qowmvy/im_sick_of_the_bot_comments/ | false | false | self | 1 | null |
I'm sick of the bots! | 1 | [removed] | 2026-01-28T00:56:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qowjvy/im_sick_of_the_bots/ | synth_mania | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowjvy | false | null | t3_1qowjvy | /r/LocalLLaMA/comments/1qowjvy/im_sick_of_the_bots/ | false | false | self | 1 | null |
I'm sick of the bot comments | 1 | [removed] | 2026-01-28T00:53:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qowhxu/im_sick_of_the_bot_comments/ | synth_mania | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowhxu | false | null | t3_1qowhxu | /r/LocalLLaMA/comments/1qowhxu/im_sick_of_the_bot_comments/ | false | false | self | 1 | null |
Arcee AI goes all-in on open models -- Interconnects interview | 27 | Arcee-AI has released their 400B-A13B model, as posted [elsewhere on LL](https://old.reddit.com/r/LocalLLaMA/comments/1qouf0x/arcee_ai_releases_trinity_large_openweight/).
This is an interview of the CEO, CTO and training lead of Arcee-AI, by Nathan Lambert of Allen Institute for AI (Ai2):
"[Arcee AI goes all-in on open models built in the U.S.](https://www.interconnects.ai/p/arcee-ai-goes-all-in-on-open-models)," Interconnects
Arcee-AI and Ai2 are two of the organizations that appear genuinely dedicated to developing LLMs in the open, releasing weights (and many checkpoints along the training arc; see both the Omlo 3 and Trinity collections), extensive reports on how they built models, and maintaining tools for open development of models.
Arcee-AI, for example, maintains [mergekit](https://github.com/arcee-ai/mergekit), which, among other things, allows one to build "clown-car MoEs" (though my impression is that the dense merge is used most often).
Hopefully will be able to try our their 400B-A13B preview model soon. | 2026-01-28T00:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/1qowfhi/arcee_ai_goes_allin_on_open_models_interconnects/ | RobotRobotWhatDoUSee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowfhi | false | null | t3_1qowfhi | /r/LocalLLaMA/comments/1qowfhi/arcee_ai_goes_allin_on_open_models_interconnects/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'M9iBguH9RHDBDk55BrkjSEVOz9y52Rbf25IFptVc3Vc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M9iBguH9RHDBDk55BrkjSEVOz9y52Rbf25IFptVc3Vc.jpeg?width=108&crop=smart&auto=webp&s=b7af8873dec52546f377634803a926d835bd4db8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M9iBguH9RHDBDk55BrkjSEVOz9y52Rbf25IFptVc3Vc.jpeg?width=216&crop=smart&auto=webp&s=6c78065726fe09f6a6e6e6af0ff1183fd8a8c5d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M9iBguH9RHDBDk55BrkjSEVOz9y52Rbf25IFptVc3Vc.jpeg?width=320&crop=smart&auto=webp&s=a708457a1f2343a2e190c4a6882fb0ec7339a5d2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M9iBguH9RHDBDk55BrkjSEVOz9y52Rbf25IFptVc3Vc.jpeg?width=640&crop=smart&auto=webp&s=c816896f66cc025d00ac256d440a80c08ffb9296', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M9iBguH9RHDBDk55BrkjSEVOz9y52Rbf25IFptVc3Vc.jpeg?width=960&crop=smart&auto=webp&s=c3907a8d5d1e4418849543def28b766634418215', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M9iBguH9RHDBDk55BrkjSEVOz9y52Rbf25IFptVc3Vc.jpeg?width=1080&crop=smart&auto=webp&s=ee8741ccf4dd3c082680e044c99cceba1be40852', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M9iBguH9RHDBDk55BrkjSEVOz9y52Rbf25IFptVc3Vc.jpeg?auto=webp&s=33603194f0bd7d79257e79f19a54fd17b70503d3', 'width': 1200}, 'variants': {}}]} |
I'm sick of the bot comments | 1 | [removed] | 2026-01-28T00:48:15 | https://www.reddit.com/r/LocalLLaMA/comments/1qowd1c/im_sick_of_the_bot_comments/ | synth_mania | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qowd1c | false | null | t3_1qowd1c | /r/LocalLLaMA/comments/1qowd1c/im_sick_of_the_bot_comments/ | false | false | self | 1 | null |
I turned a frozen tiny LLM into a numerical evolutionary microcosm | 0 | Hey everyone,
As we all heard Le Cunn's ranting about LLMs and glazing over WLMs (World Language Models) as a superior alternative, i went over the weekend trying to make something achievable with consumer hardware. I built a system called **ZODIAC**. Instead of fine-tuning weights or using standard Chain-of-Thought prompting, I treat the LLM's latent space as a dynamic environment inhabited by Granular Agents. These agents don't just generate text; they reproduce, evolve, and exert gravitational influence on each other inside the frozen backbone of a model (I'm using Llama-3.2-1B for testing).
Here is the breakdown of the architecture.
**1. The Core Idea: Math > Words**
We usually steer models with system prompts like "You are a helpful assistant." This is imprecise.
In Zodiac, I replaced text prompts with **12 Differentiable Geometric Anchors** (the "Zodiac Signs").
Instead of asking the model to "be creative," I apply a vector loss function called `DIVERSITY_NOVELTY` or `ENTROPIC_DIFFUSION` that forces the agent's latent state ($z\_t$) to mathematically maximize its distance from the batch centroid or maximize entropy.
Instead of asking it to "stay on topic," I apply `CENTROID_STABILITY` or `CYCLIC_RECURRENCE`, which penalizes the vector drift from the origin state using Cosine Similarity and L2 norm constraints.
**2. The "Microcosm": Evolution inside the GPU**
This is where it gets weird. The agents aren't static.
* **Mating & Crossover:** Agents that score high rewards on the current geometric objective (e.g., High Velocity) actually "mate." We take their state vectors, average them, inject noise (mutation), and spawn a child agent ($G\_{next}$).
* **Recursive Backprop:** I implemented a "yield" system. Child agents explore the latent space, and if they find high-reward trajectories, that "intelligence" is yielded back up the lineage tree to update the parent’s memory state. It’s a form of hierarchical reinforcement learning occurring strictly during inference.
**3. Numerical "Gravity"**
Because I'm running multiple agents in parallel (batch size $K=4$ usually), they exert influence on each other. I implemented objectives like `REPRESENTATIVE_CENTRALITY` where an agent is rewarded for aligning its vector with the principal component of the entire batch. It creates a "gravity well" where weaker thoughts collapse into the dominant reasoning path.
Am using a simplified version of **GRPO (Group Relative Policy Optimization)** to update the agents' internal adapter layers ("Verbs") on the fly, effectively training the model *while* it generates.
I’ve open-sourced the initial research preview. It’s written in PyTorch and uses HuggingFace Transformers.
I’m looking for feedback on the objective functions. Right now I’m rotating them cyclically (Kinetic -> Stability -> Duality...), but I’m wondering if a dynamic scheduler based on perplexity would be better.
Im trying to kickstart a lab with the few resources i have (a laptop), so if you think this is interesting, showing support to the repo would be really cool.
[https://github.com/iblameandrew/ZODIAC](https://github.com/iblameandrew/ZODIAC)
Thanks. | 2026-01-28T00:42:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qow7yf/i_turned_a_frozen_tiny_llm_into_a_numerical/ | causality-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qow7yf | false | null | t3_1qow7yf | /r/LocalLLaMA/comments/1qow7yf/i_turned_a_frozen_tiny_llm_into_a_numerical/ | false | false | self | 0 | null |
I'm sick of the bot comments | 1 | [removed] | 2026-01-28T00:35:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qow1zj | false | null | t3_1qow1zj | /r/LocalLLaMA/comments/1qow1zj/im_sick_of_the_bot_comments/ | false | false | default | 1 | null | ||
I'm sick of the bot comments | 1 | [removed] | 2026-01-28T00:34:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qow0zq/im_sick_of_the_bot_comments/ | synth_mania | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qow0zq | false | null | t3_1qow0zq | /r/LocalLLaMA/comments/1qow0zq/im_sick_of_the_bot_comments/ | false | false | self | 1 | null |
Mods of r/localllama, please use BotBouncer. I'm sick of them. | 1 | [removed] | 2026-01-28T00:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qovxap/mods_of_rlocalllama_please_use_botbouncer_im_sick/ | synth_mania | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qovxap | false | null | t3_1qovxap | /r/LocalLLaMA/comments/1qovxap/mods_of_rlocalllama_please_use_botbouncer_im_sick/ | false | false | self | 1 | null |
[Seeking Feedback] Fine-tuning Qwen-32B on AoPS-Instruct (670k samples) - Does this loss curve look healthy? | 3 | **Context:**
* **Model:** Qwen-32B (QwQ-based)
* **Method:** QLoRA (4-bit quantization, $r=16, \\alpha=32$)
* **Dataset:** DeepStudentLlama/AoPS-Instruct (674,225 math problems)
* **Sequence Length:** 64k (using gradient checkpointing)
* **Optimizer:** PagedAdamW8bit with Cosine Scheduler ($LR=1e-5$)
* **Current Progress:** \~1,500 steps (approx. 0.32 Epoch)
**The Plot:**
https://preview.redd.it/pbkmrfkmhzfg1.png?width=1400&format=png&auto=webp&s=5bb1dea0d25733e41c0e72ac53fc9bf748058bd4
**Observations & Questions:**
1. **Convergence:** The Validation Loss dropped sharply from 0.77 to 0.34 in the first 200 steps and is now steadily declining at \~0.32.
2. **Stability:** The Training Loss is fluctuating between 0.21 and 0.34, but the Validation curve is extremely smooth with no signs of overfitting yet.
3. **The "Plateau":** After step 1,000, the slope has significantly flattened. Is this a typical "reasoning bottleneck" for math-heavy datasets, or should I consider adjusting the learning rate?
4. **Speed:** I noticed a slowdown in `it/s` over time, likely due to memory fragmentation or long-context samples.
**Does this look like a healthy convergence for a 32B model on competitive math data? Any advice on whether to push for the full 1 Epoch or switch to CoT distillation earlier?** | 2026-01-28T00:23:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qovrp3/seeking_feedback_finetuning_qwen32b_on/ | Royal_Jicama_7368 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qovrp3 | false | null | t3_1qovrp3 | /r/LocalLLaMA/comments/1qovrp3/seeking_feedback_finetuning_qwen32b_on/ | false | false | 3 | null | |
Do we have any Chinese medical llms? (Similar to medgemma) | 1 | thx! | 2026-01-28T00:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qovqa2/do_we_have_any_chinese_medical_llms_similar_to/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qovqa2 | false | null | t3_1qovqa2 | /r/LocalLLaMA/comments/1qovqa2/do_we_have_any_chinese_medical_llms_similar_to/ | false | false | self | 1 | null |
Does anyone have Chatterbox-TTS working with 5070 Ti? | 0 | I apologize for asking such a basic question, but after trying 6-7 different repositories to install Chatterbox on Windows 11 with a 5070 Ti, all of them failed due to requirement versions or simply couldn’t detect CUDA and defaulted to CPU. Also, the dependency issues with Blackwell architecture are significant because it’s too new to support older PyTorch versions, but, Chatterbox itself won’t work with anything newer than a certain version, for example.
If you’ve managed to successfully install Chatterbox, please let me know. I much prefer a Windows native installation via UV or Pip, as Docker tends to consume a lot more disk space and resources in my experience with TTS engines. | 2026-01-28T00:20:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qovp1h/does_anyone_have_chatterboxtts_working_with_5070/ | simracerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qovp1h | false | null | t3_1qovp1h | /r/LocalLLaMA/comments/1qovp1h/does_anyone_have_chatterboxtts_working_with_5070/ | false | false | self | 0 | null |
EPYC, 1152GB RAM, RTX 6000, 5090, 2000 | 29 | 2026-01-27T23:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qov2lu/epyc_1152gb_ram_rtx_6000_5090_2000/ | Fit-Statistician8636 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qov2lu | false | null | t3_1qov2lu | /r/LocalLLaMA/comments/1qov2lu/epyc_1152gb_ram_rtx_6000_5090_2000/ | false | false | 29 | null | ||
🚀 Build your own personal AI assistant using Clawdbot + Ollama and your own models | 0 | Clawdbot is a local-first personal AI assistant that runs directly on your devices. It connects popular messaging platforms — WhatsApp, Telegram, Slack, Discord, iMessage, and more — to AI coding agents through a centralized gateway, giving you full control over your AI workflows.
🔧 Installation
Install Clawdbot:
Copier le code
Bash
npm install -g clawdbot@latest
Run the onboarding setup:
Copier le code
Bash
clawdbot onboard --install-daemon
⚠️ Clawdbot requires a large context window. A minimum of 64k tokens is recommended for optimal performance.
🤖 Using Clawdbot with Ollama
Quick setup:
Copier le code
Bash
ollama launch clawdbot
This automatically configures Clawdbot to use Ollama and starts the gateway.
If the gateway is already running, it will auto-reload the configuration.
Config only (without launching):
Copier le code
Bash
ollama launch clawdbot --config
🧠 Recommended Models
qwen3-coder
glm-4.7
gpt-oss:20b
gpt-oss:120b
Cloud models are also available via:
👉 https://ollama.com/search?c=cloud
This is a powerful setup for anyone interested in local AI, AI agents, and self-hosted assistants | 2026-01-27T23:46:49 | Fun-Necessary1572 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qouuva | false | null | t3_1qouuva | /r/LocalLLaMA/comments/1qouuva/build_your_own_personal_ai_assistant_using/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'xewJgm9HDJNTis0hCKu1_VYvYBdtQI5Nif5SpNaxfnM', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/7ml4lvbabzfg1.jpeg?width=108&crop=smart&auto=webp&s=77010cc809620925ef1ec8a71e18291e08bd1395', 'width': 108}, {'height': 222, 'url': 'https://preview.redd.it/7ml4lvbabzfg1.jpeg?width=216&crop=smart&auto=webp&s=687490f92d8dec78fb58518918781764fc6d7fdf', 'width': 216}, {'height': 329, 'url': 'https://preview.redd.it/7ml4lvbabzfg1.jpeg?width=320&crop=smart&auto=webp&s=6443773a57b9dedaf21324bc6b5a4bd5dce86440', 'width': 320}], 'source': {'height': 539, 'url': 'https://preview.redd.it/7ml4lvbabzfg1.jpeg?auto=webp&s=251a7384105cd9c317f4ee37e73c53ee56389c98', 'width': 524}, 'variants': {}}]} | ||
M2 Ultra 60 core 128gb ram vs m3 ultra 60 core 256gb. Should I upgrade? | 1 | Should I upgrade to a m3 ultra 60 core 256gb ram?
Currently have a M2 Ultra core 128gb ram. Running gpt oss 120b.
Worth it? Found a good price for a used m3 ultra 60 core 256gb.
If not worth it, I’m just gonna keep my M2 Ultra | 2026-01-27T23:33:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qouj94/m2_ultra_60_core_128gb_ram_vs_m3_ultra_60_core/ | solo_entrepreneur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qouj94 | false | null | t3_1qouj94 | /r/LocalLLaMA/comments/1qouj94/m2_ultra_60_core_128gb_ram_vs_m3_ultra_60_core/ | false | false | self | 1 | null |
One-shot Zelda Game Competition | 19 | I am kicking off a competition - I'd like to see who can make the best one-shot HTML Zelda game with a local model
Rules:
- You shall enter one prompt
- The model must be an open-weights model
- You may not use an agent, the model must output the entire HTML game in one shot, from one prompt.
- If the game fails to run, you may copy the error message from the HTML console and give it to the model, once, in a follow up chat message, with a simple message: 'fix this', to allow it a chance to fix any minor bug it has, with no further instructions
- You may not edit the code yourself or give the model any instructions on how to repair the game.
- When posting your result, indicate the model, quant, prompt, and system prompt you used, along with whether the model was given a chance to fix the broken output. Us the format below.
That is all, let the competition begin!
Model: GLM 4.7 Flash @ FP16
Prompt:
```
Create a full featured beautiful 3d Zelda game in html that feels and plays beautifully, focusing on having a non-blocky visual asthetic of the characters. The map should be procedurally generated. it should have all the normal elements of a zelda game.
Use three.js r134 for the rendering
```
Result:
https://gleaming-clip-dgzp.pagedrop.io | 2026-01-27T23:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qouiy8/oneshot_zelda_game_competition/ | TokenRingAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qouiy8 | false | null | t3_1qouiy8 | /r/LocalLLaMA/comments/1qouiy8/oneshot_zelda_game_competition/ | false | false | self | 19 | null |
Arcee AI releases Trinity Large : OpenWeight 400B-A13B | 144 | 2026-01-27T23:28:47 | https://www.arcee.ai/blog/trinity-large | abkibaarnsit | arcee.ai | 1970-01-01T00:00:00 | 0 | {} | 1qouf0x | false | null | t3_1qouf0x | /r/LocalLLaMA/comments/1qouf0x/arcee_ai_releases_trinity_large_openweight/ | false | false | default | 144 | {'enabled': False, 'images': [{'id': 'kpKiKke1xSzMMszPwBcvRHFEWu1KcRJDoXwrfinT_zM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/kpKiKke1xSzMMszPwBcvRHFEWu1KcRJDoXwrfinT_zM.png?width=108&crop=smart&auto=webp&s=1432de17ac18930426b2cade0588e7e88afdbde5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/kpKiKke1xSzMMszPwBcvRHFEWu1KcRJDoXwrfinT_zM.png?width=216&crop=smart&auto=webp&s=12f3140c0ce26a2f7b3e2b16a2adca3e694dd4f8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/kpKiKke1xSzMMszPwBcvRHFEWu1KcRJDoXwrfinT_zM.png?width=320&crop=smart&auto=webp&s=d7d228a06aa52e5f475393c9f6237f10a4fced10', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/kpKiKke1xSzMMszPwBcvRHFEWu1KcRJDoXwrfinT_zM.png?width=640&crop=smart&auto=webp&s=409d9b6883693b08a64db17e15a4f65b6ef32a87', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/kpKiKke1xSzMMszPwBcvRHFEWu1KcRJDoXwrfinT_zM.png?width=960&crop=smart&auto=webp&s=339ea2f021c36851a6efee36c9a681c8f6eb5d0e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/kpKiKke1xSzMMszPwBcvRHFEWu1KcRJDoXwrfinT_zM.png?width=1080&crop=smart&auto=webp&s=9523a452ebdbe1a1a666d0bde6807236bd59e0a8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/kpKiKke1xSzMMszPwBcvRHFEWu1KcRJDoXwrfinT_zM.png?auto=webp&s=3739c647f7840c9569e8f97e25787fe675ac70f3', 'width': 1920}, 'variants': {}}]} | |
GPU advice for entry level AI | 2 | My current desktop pc: h77ds3h mobo pcie gen 3, xeon e3 1275v2 4c/8t ivy bridge, 24gb ddr3 1600mhz bundled in old atx case with side vents at bottom and only 1 fan (80mm rear fan)
Purpose: learning, experimenting with entry-level AI, 1–3B or 7b (if possible) coding LLMs + LoRA inference. I only work with Python for data analysis, libraries like pandas, short scripts mainly. Hopefully upgrade entire system + new architecture GPU in 2028
Because of budget constrains and local availability where i'm currently stationed, i have very few contenders (listed as new): rtx 3050 8gb asus tuf (250$), rtx 5060 8gb msi ventus (320$), rtx 3060 12gb asus dual geforce v2 OC (320$)
What/how would you recommend to start with? | 2026-01-27T23:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qoucgz/gpu_advice_for_entry_level_ai/ | fulefesi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoucgz | false | null | t3_1qoucgz | /r/LocalLLaMA/comments/1qoucgz/gpu_advice_for_entry_level_ai/ | false | false | self | 2 | null |
Stanford Proves Parallel Coding Agents are a Scam | 197 | ERROR: type should be string, got "https://preview.redd.it/coxs8w3z3zfg1.png?width=1200&format=png&auto=webp&s=a0875df6bf260ca3af0f9fe7eef7bbd3697a0c73\n\nHey everyone,\n\n\n\nA fascinating new [preprint](https://cooperbench.com/static/pdfs/main.pdf) from Stanford and SAP drops a truth bomb that completely upends the \"parallel coordinated coding\" \"productivity boost\" assumption for AI coding agents.\n\n\n\nTheir \"CooperBench\" reveals what they call the \"curse of coordination.\" When you add a second coding agent, performance doesn't just fail to improve - it plummets. On average, two agents working together have a 30% lower success rate. For top models like GPT-5 and Claude 4.5 Sonnet, the success rate is a staggering 50% lower than just using one agent to do the whole job.\n\n\n\nWhy? The agents are terrible teammates. They fail to model what their partner is doing (42% of failures), don't follow through on commitments (32%), and have communication breakdowns (26%). They hallucinate shared states and silently overwrite each other's work.\n\n\n\nThis brings me to the elephant in the room. Platforms like Cursor, Antigravity, and others are increasingly marketing \"parallel agent\" features as a productivity revolution. But if foundational research shows this approach is fundamentally broken and makes you less productive, what are they actually selling? It feels like they're monetizing a feature they might know is a scam, \"persuading\" users into thinking they're getting a 10x team when they're really getting a mess of conflicting code.\n\n\n\nAs the Stanford authors put it, it's \"hard to imagine how an agent incapable of coordination would contribute to such a future however strong the individual capabilities.\" Food for thought next time you see a \"parallel-agent\" feature advertised." | 2026-01-27T23:20:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qou799/stanford_proves_parallel_coding_agents_are_a_scam/ | madSaiyanUltra_9789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qou799 | false | null | t3_1qou799 | /r/LocalLLaMA/comments/1qou799/stanford_proves_parallel_coding_agents_are_a_scam/ | false | false | 197 | null | |
Turning ADHD into a superpower - built an entire AI ecosystem while working full-time as a dump truck dispatcher | 0 | Title: Turning ADHD into a superpower - built an entire AI ecosystem while working full-time as a dump truck dispatcher
https://preview.redd.it/fdprd2ev5zfg1.png?width=2816&format=png&auto=webp&s=bae82578d15047fd80932f4e4062fc1ec6c4275f
I'm 2+ years sober, father of 5, and I have ADHD.
For years I thought ADHD was holding me back. Turns out it's rapid parallel processing - I just needed the right system.
\*\*What helped me ship:\*\*
\- One step at a time (never multiple)
\- Write everything down (if it's not written, it doesn't exist)
\- File structure first (organize before you build)
\- Small wins build momentum
\- AI assistants to keep me on track
I built a two-computer AI cluster, multiple Discord bots, and an entire ecosystem called Angel Cloud - all while dispatching 18 trucks daily.
\*\*The mission:\*\* Build AI that helps families, including tools for people with ADHD.
If you're stuck in ADHD paralysis, just do ONE thing. Then the next. That's it.
Landing page if curious: [https://shanebrain.carrd.co](https://shanebrain.carrd.co) | 2026-01-27T23:16:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qou3tn/turning_adhd_into_a_superpower_built_an_entire_ai/ | Square-Practice2296 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qou3tn | false | null | t3_1qou3tn | /r/LocalLLaMA/comments/1qou3tn/turning_adhd_into_a_superpower_built_an_entire_ai/ | false | false | 0 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.