title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Burned through my Opus 4.5 quota in 1 day on Cursor. Does the "BYOK" math actually work in my favor? | 1 | Hi everyone,
I recently tried the new **Opus 4.5** on Cursor Pro ($20/mo) and I'm blown away by the reasoning capabilities. It solves things I didn't think were possible.
The problem? I burned through my entire "Fast Request" quota in literally **one day** of heavy coding. Now I'm throttled/stuck.
I’m thinking about cancelling my subscription and moving to **VS Code + Cline (Roo Code)** using my own API key to pay only for what I use. But looking at the API pricing for Opus 4.5, I'm scared.
**I have a genuine question for the heavy users here:**
1. **The Math:** If I code 4-6 hours a day, won't using Opus 4.5 via API end up costing me *way more* than the $20 Cursor sub? Has anyone actually tracked their daily spend with a pure API setup?
2. **The Alternatives:** Is there any other model that rivals Opus 4.5's reasoning for coding but is significantly cheaper API-wise? (I keep hearing about DeepSeek, but is it actually on the same level?)
3. **Workflow:** How do you guys manage to keep costs low without sacrificing quality? Do you swap models constantly?
I really want to keep using this level of intelligence, but I can't afford $100/mo in API bills. Any advice is welcome! | 2025-12-06T20:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/1pfyo9e/burned_through_my_opus_45_quota_in_1_day_on/ | Agitated_Remote_4211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfyo9e | false | null | t3_1pfyo9e | /r/LocalLLaMA/comments/1pfyo9e/burned_through_my_opus_45_quota_in_1_day_on/ | false | false | self | 1 | null |
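For question 1, the math is easy to sketch yourself. A minimal back-of-envelope estimate, where every number (token counts and per-million-token prices) is an assumption to be replaced with your own tracked usage and the current price sheet; note that prompt caching, if available, can cut the input side substantially:

```python
# Back-of-envelope daily API cost estimate. All numbers below are
# illustrative assumptions -- plug in real per-token prices and your
# own measured usage before trusting the output.

def daily_cost_usd(input_tokens, output_tokens,
                   in_price_per_mtok, out_price_per_mtok):
    """Cost of one day's usage given per-million-token prices."""
    return ((input_tokens / 1e6) * in_price_per_mtok
            + (output_tokens / 1e6) * out_price_per_mtok)

# Hypothetical heavy coding day: agentic tools resend large contexts,
# so input tokens dominate output tokens by a wide margin.
cost = daily_cost_usd(
    input_tokens=5_000_000,    # assumed: ~5M input tokens/day
    output_tokens=200_000,     # assumed: ~200k generated tokens/day
    in_price_per_mtok=5.0,     # assumed price, USD per 1M input tokens
    out_price_per_mtok=25.0,   # assumed price, USD per 1M output tokens
)
print(f"${cost:.2f}/day")      # 25 + 5 = $30.00/day under these assumptions
```

The takeaway: with agentic tools, input tokens (resent context) dominate, so a flat subscription can easily beat pay-per-token for heavy daily use.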
Convert Dense into MOE model? | 1 | I did a quick search on this here and found only a 2-year-old [thread](https://www.reddit.com/r/LocalLLaMA/comments/1cgo45x/converting_dense_models_into_moes/) with few replies. That's it.
So has no one figured this out yet? I'm surprised that nobody has brought this topic up here since that old thread.
I know it's a very big undertaking. But it would be great if someone came up with a working recipe. | 2025-12-06T19:30:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pfxrv5/convert_dense_into_moe_model/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfxrv5 | false | null | t3_1pfxrv5 | /r/LocalLLaMA/comments/1pfxrv5/convert_dense_into_moe_model/ | false | false | self | 1 | null |
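The closest published recipe is "sparse upcycling": copy the dense FFN into N experts and bolt on a freshly initialized router, then continue training so the experts diverge. A toy pure-Python sketch of why this works at step 0 (tiny made-up shapes, a one-matrix "FFN", and soft routing stand in for a real model):

```python
import math, random

# Sparse-upcycling toy: every expert starts as a copy of the dense FFN,
# and a fresh router mixes them. Because the experts are identical at
# initialization, the MoE layer reproduces the dense layer exactly --
# training then lets the experts specialize.

def ffn(w, x):                       # toy dense "FFN": y = W x
    return [sum(wi[j] * x[j] for j in range(len(x))) for wi in w]

def softmax(z):
    m = max(z); e = [math.exp(v - m) for v in z]
    s = sum(e); return [v / s for v in e]

def moe(experts, router_logits, x):  # soft routing over all experts
    gates = softmax(router_logits)
    outs = [ffn(w, x) for w in experts]
    return [sum(g * o[i] for g, o in zip(gates, outs))
            for i in range(len(outs[0]))]

random.seed(0)
dense_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
x = [0.5, -1.0, 2.0, 0.25]

experts = [[row[:] for row in dense_w] for _ in range(8)]  # 8 copies
router_logits = [random.uniform(-1, 1) for _ in range(8)]  # fresh router

y_dense = ffn(dense_w, x)
y_moe = moe(experts, router_logits, x)
print(all(abs(a - b) < 1e-9 for a, b in zip(y_dense, y_moe)))  # True
```

The hard part isn't this initialization trick, it's the continued pretraining budget needed afterwards, which is probably why nobody does it at home.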
[D] What I learned building code RAG without embeddings | 1 | I've been building a system to give LLMs relevant code context from any repo. The idea seemed simple: let an LLM look at the file tree + function signatures and pick which files to include. No embeddings, no vector DB.
Sharing what I learned because I wish someone had written this before I broke my eval three different ways.
**1. Don’t eval on famous repos**
I started testing on Flask and FastAPI. GPT got 7/10 without any context - it was just reciting training data, not using my retrieval.
I switched to private repos and obscure OSS (<1K stars). “No context” dropped to \~4.9/10. That was the real baseline!
**2. File paths aren’t enough**
Showing the LLM \`src/auth/handler.py\` doesn’t really tell it what’s inside. I added AST-extracted symbols:
*src/auth/handler.py \[login, logout, refresh\_token\]*
*src/auth/middleware.py \[require\_auth, rate\_limit\]*
Retrieval quality jumped noticeably (NDCG went from \~0.85 to \~0.92). The model doesn’t need to read the full file to know “this smells like auth.”
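The symbol index is cheap to build with the stdlib. A minimal sketch for Python files (real code would walk the repo, handle syntax errors, and cover other languages with tree-sitter or similar):

```python
import ast

# Extract top-level function/class names so the file tree shown to the
# LLM reads like "src/auth/handler.py [login, refresh_token, ...]".

def top_level_symbols(source):
    tree = ast.parse(source)
    return [node.name for node in tree.body
            if isinstance(node, (ast.FunctionDef,
                                 ast.AsyncFunctionDef,
                                 ast.ClassDef))]

src = """
def login(user): ...
async def refresh_token(t): ...
class SessionStore: ...
_PRIVATE = 1
"""
print(top_level_symbols(src))   # ['login', 'refresh_token', 'SessionStore']
```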
**3. Same-vendor judging is inflated**
GPT-4 judging GPT-4’s answers gave suspiciously high scores! Switching to cross-vendor (GPT generates, Gemini judges) knocked about 0.5 off the scores and the reviews *felt* more honest. The judge was much harsher on vague, confident answers.
**4. Generic eval criteria reward BS**
My first judge prompt used vague criteria like “should explain error handling”. That rewarded confident wrong answers.
What worked better was forcing exact hooks:
~~*“Should explain the request lifecycle”*~~ → *“Must mention \`RequestContext\` and \`full\_dispatch\_request()\`”*
Anchoring eval on specific symbols/files made it much easier to spot hand-wavy nonsense.
**Results after fixing eval (very rough):**
* LLM file picker: \~0.92 NDCG, \~8.5/10 answer quality
* Embeddings baseline: \~0.79 NDCG, \~8.6/10 answer quality
* No context: \~4.9/10
So the “LLM looks at the tree + symbols and picks files” setup landed roughly on par with embeddings on answer quality, without the indexing infrastructure. Good enough for me to keep using it.
**Caveats!**
* Small sample (177 questions, 14 repos)
* I wrote the questions - probably biased toward what my approach handles
* Private-repo results may not generalize beyond the ones I tested
**Questions for you:**
* How are you building eval sets that the model *hasn’t* basically memorized?
* Any tricks for making LLM-as-judge less biased when you’re judging your own system? | 2025-12-06T19:23:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pfxl6x/d_what_i_learned_building_code_rag_without/ | rozetyp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfxl6x | false | null | t3_1pfxl6x | /r/LocalLLaMA/comments/1pfxl6x/d_what_i_learned_building_code_rag_without/ | false | false | self | 1 | null |
We need open source hardware lithography | 1 | Perhaps it's time hardware was more democratized. RISC-V is only 1 step away.
There are real challenges with yield at small feature sizes, and fabrication requires an extremely clean environment. But perhaps a small-scale system could be made "good enough", or those hurdles overcome with some clever tech or small vacuum chambers. | 2025-12-06T19:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pfx3d0/we_need_open_source_hardware_lithography/ | bennmann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfx3d0 | false | null | t3_1pfx3d0 | /r/LocalLLaMA/comments/1pfx3d0/we_need_open_source_hardware_lithography/ | false | false | self | 1 | null |
Are MoE models harder to Fine-tune? | 1 | Really sorry if this is a stupid question, but I've been looking around Hugging Face A LOT and I've noticed a really big trend: there are tons of dense models being fine-tuned/LoRA'd, while most MoE models go untouched. Are there any reasons for this?
I don't think it's the model size, as I've seen big models like Llama 70B or even 405B turned into Hermes 4 models, Tulu, etc., while pretty good models like practically the entire Qwen3 series, GLM (besides GLM Steam), DeepSeek and Kimi are untouched. I'd get why DS and Kimi are untouched... but, seriously, Qwen3?? So far I've seen an ArliAI finetune only. | 2025-12-06T18:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pfwu8t/are_moe_models_harder_to_finetune/ | ComplexType568 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfwu8t | false | null | t3_1pfwu8t | /r/LocalLLaMA/comments/1pfwu8t/are_moe_models_harder_to_finetune/ | false | false | self | 1 | null |
What alternative models are you using for Impossible models(on your system)? | 1 | To rephrase the title: What small or MoE alternatives are you using for big models which don't fit your GPU(s)?
For example, some models are too big for our VRAM. Dense mostly.
In my case, my 8GB VRAM could run up to 14B models(^(Qwen3-14B Q4 giving me 20 t/s. If I increase the context, only single digit t/s)). Gemma3-12B also gave me similar numbers.
So I can't even imagine running 15-32B Dense models. For example, I really would like to use models like Gemma3-27B & Qwen3-32B but couldn't.
Even with offloading & other optimizations, I won't get more than 5 t/s. So in those situations, I go with small models or MoE models that give better t/s.
Here some examples on my side:
* Gemma3-4B, Gemma3-12B(Q4), Gemma-3n-E2B & Gemma-3n-E4B **instead of** Gemma3-27B
* Qwen3-8B, Qwen3-14B(Q4), Qwen3-30B-A3B(Q4) **instead of** Qwen3-32B
* Mistral-Nemo-Instruct(12B @ Q4), Ministral-3(3B, 8B, 14B) **instead of** Mistral-Small, Magistral-Small, Devstral-Small (All are 22-24B)
* GPT-OSS-20B **instead of** GPT-OSS-120B, Seed-OSS-36B, reka-flash, Devstral
What are yours? Size doesn't matter(^(Ex: Some uses GLM Air instead of GLM due to big size)).
^(Personally I want to see what alternatives are there for Mistral 22-24B models(Need for writing). Hope both Mistral & Gemma release MOE models in near future.) | 2025-12-06T18:48:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pfwr6q/what_alternative_models_are_you_using_for/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfwr6q | false | null | t3_1pfwr6q | /r/LocalLLaMA/comments/1pfwr6q/what_alternative_models_are_you_using_for/ | false | false | self | 1 | null |
12GB VRAM, coding tasks, | 1 | Hi guys, I'm learning about local models in the latest days, and I've decided to try it.
I've downloaded Ollama, and I'm trying to choose a model for coding tasks on a moderately large codebase.
It seems the best ones lately are qwen3-coder, gpt-oss, and deepseek-r1, BUT I've also read that there are quite some differences when they are run in, for example, Kilo Code or other VS Code extensions. Is this true?
All things considered, which one would you suggest I try first? I'm asking because my connection is quite bad, so I'd need a night to download a model | 2025-12-06T18:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pfwfym/12gb_vram_coding_tasks/ | GiLA994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfwfym | false | null | t3_1pfwfym | /r/LocalLLaMA/comments/1pfwfym/12gb_vram_coding_tasks/ | false | false | self | 1 | null |
An opinionated Go toolkit for persistent AI agents - single binary, no dependency hell | 1 | I kept reimplementing the same AI agent patterns in almost every project using the Go + PostgreSQL stack. Session persistence, tool calling, streaming, context management, transaction-safe atomic operations - the usual stuff.
So I modularized it and [open sourced it](https://github.com/youssefsiam38/agentpg)
It's an opinionated toolkit for building stateful AI agents. PostgreSQL handles all persistence - conversations, tool calls, everything survives restarts. Currently wired up for Claude but the architecture would work with local models if someone wanted to swap out the Anthropic client.
Single binary deploys. No Python runtime. Go's memory footprint is tiny compared to Python - matters when you're running local models alongside.
If I get positive feedback, I'm planning to add a UI in the future.
Any feedback appreciated | 2025-12-06T18:25:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pfw6ep/an_opinionated_go_toolkit_for_persistent_ai/ | UnHackableAlgorithm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfw6ep | false | null | t3_1pfw6ep | /r/LocalLLaMA/comments/1pfw6ep/an_opinionated_go_toolkit_for_persistent_ai/ | false | false | self | 1 | null |
VibeVoice Realtime 0.5B - OpenAI Compatible /v1/audio/speech TTS Server | 1 | Microsoft recently released [VibeVoice-Realtime-0.5B](https://huggingface.co/microsoft/VibeVoice-Realtime-0.5B), a lightweight ***expressive*** TTS model.
I wrapped it in an OpenAI-compatible API server so it works directly with Open WebUI's TTS settings.
Repo: [https://github.com/marhensa/vibevoice-realtime-openai-api.git](https://github.com/marhensa/vibevoice-realtime-openai-api.git)
* Drop-in using OpenAI-compatible `/v1/audio/speech` endpoint
* Runs locally with Docker or Python venv (via uv)
* Using only \~2GB of VRAM
* CUDA-optimized (around \~1x RTF on RTX 3060 12GB)
* Multiple voices with OpenAI name aliases (alloy, nova, etc.)
* All models auto-download on first run
[Video Demonstration of \\"Mike\\" male voice.](https://reddit.com/link/1pfvt9e/video/7emfqdbdjm5g1/player)
The expression and flow are better than Kokoro's, imho. But Kokoro is faster.
But (for now) it lacks female voice models; there are just two female voices, and one weirdly sounds like a male 😅.
[vibevoice-realtime-openai-api Settings on Open WebUI](https://preview.redd.it/pop8wdkjjm5g1.png?width=1208&format=png&auto=webp&s=8d71c475b7e25741377acea8a945b28e96a72cf2)
Contribution are welcome! | 2025-12-06T18:10:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pfvt9e/vibevoice_realtime_05b_openai_compatible/ | marhensa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfvt9e | false | null | t3_1pfvt9e | /r/LocalLLaMA/comments/1pfvt9e/vibevoice_realtime_05b_openai_compatible/ | false | false | 1 | null | |
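Because the endpoint follows the OpenAI audio API shape, any stock client should work. A minimal stdlib sketch of the request it expects (the port, model name, and default voice here are assumptions; match them to your server config):

```python
import json
import urllib.request

# Build a request for an OpenAI-compatible /v1/audio/speech endpoint.
# Payload shape follows the OpenAI audio API: model, input, voice.

def speech_request(text, voice="alloy", model="vibevoice-realtime",
                   base_url="http://localhost:8000"):
    payload = json.dumps({"model": model, "input": text,
                          "voice": voice}).encode()
    return urllib.request.Request(
        f"{base_url}/v1/audio/speech", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")

req = speech_request("Hello from a local TTS server.", voice="nova")
# urllib.request.urlopen(req).read() would return the audio bytes;
# here we only show the request that gets built.
print(req.full_url)   # http://localhost:8000/v1/audio/speech
```

This is also exactly what Open WebUI sends when you point its TTS settings at the server.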
Local agent with 16-32K context for research | 1 | Hello,
I would like to set up a local agent to do some automated tasks - mainly web/Wikipedia research and reading and writing files; RAG capability would be a nice-to-have. Perhaps at some point in the future, automation of some of my Google Sheets files. Maybe some Python script development for work, based on sensitive data that I cannot share with online LLMs.
Right now I have LM Studio + Ministral 14B + some MCPs running on Docker desktop.
The issue I have is that LM Studio doesn't seem to have actual agent orchestration. Everything is run by the LLM through the context window. Parsing a full Wikipedia article takes roughly 80% of the available context. I tried some tuning with system prompts (e.g. having each LLM output summarize the previous steps) and a rolling context window. No success; once I'm past 100% context, it turns to rubbish at some point or another.
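For what it's worth, the "summarize the previous steps" idea does keep context bounded if the loop, not the model, owns the state: each call sees only the running summary plus one new chunk, never the whole history. A toy sketch where `llm()` is a hypothetical stand-in (swap in a real client, e.g. LM Studio's OpenAI-compatible endpoint), and the chunk size is illustrative:

```python
# Rolling summarization loop: the orchestrating code carries the state,
# so no single model call ever exceeds the context budget.

def llm(prompt):                      # hypothetical model call
    return prompt[-200:]              # stub: real code hits the API

def rolling_summarize(text, chunk_chars=4000):
    summary = ""
    for start in range(0, len(text), chunk_chars):
        chunk = text[start:start + chunk_chars]
        summary = llm(
            "Update the running summary with the new chunk.\n"
            f"SUMMARY SO FAR:\n{summary}\n\nNEW CHUNK:\n{chunk}\n"
            "Return only the updated summary.")
    return summary

article = "word " * 5000              # ~25k chars, several chunks
result = rolling_summarize(article)
print(len(result) <= 200)             # True: state stays bounded
```

Frameworks differ mainly in who owns this loop; anything where the LLM itself must remember to summarize will fail exactly the way described above.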
I'm looking for a stack capable of:
- planning
- managing a reasonably small context of 16-32K tokens and accomplishing small iterative tasks through the window while not losing track of what it's doing overall
- using tools like wikipedia MCPs, ideally web MCPs
- RAG capabilities ideally
Hardware : 12Gb VRAM, 48Gb RAM. 14B models + 16K context feels quick, anything past this and I'm in single digits tokens/sec.
I'm reasonably tech savvy, but coding is out of the question. Anything else, like running Docker containers, ready-made Python scripts, or the command line, is completely fine.
Performance and time to accomplish a task is basically irrelevant - I just want something smart enough to keep track of the progress and self-manage a step by step process.
Is there anything out there that doesn't require development? I tried Cursor at work and was quite impressed. Am I delusional for hoping that I can get this kind of experience locally (albeit with much lower speed)?
ChatGPT advises Anything LLM, Opendevin, Open interpreter, I have no idea which one to pick.
Many thanks for any help! | 2025-12-06T18:05:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pfvoym/local_agent_with_1632k_context_for_research/ | Dreeew84 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfvoym | false | null | t3_1pfvoym | /r/LocalLLaMA/comments/1pfvoym/local_agent_with_1632k_context_for_research/ | false | false | self | 1 | null |
30b coder with lcpp - does it finally work properly? | 1 | I'm still seeing lots of people recommending Qwen3 30b Coder but I never managed to get it to work consistently. Please tell me your secrets!
I tried all manner of quants from Q4 to BF16 ggufs and native safetensors in vllm.
Using Roocode in VS Code it would always eventually shit the bed half way through doing something. Infuriating tbh. I even tried those custom prompts/system prompts for roo and they worked for a while before becoming inconsistent, too.
I tried Qwen code too but had similar issues. It always baulks trying to call some tool or edit some file.
I'm aware LMStudio has some magic fix but I use a dedicated box (4x3090) so would prefer Llama.cpp, vllm if I absolutely have to.
Zero issues with any other models in roo. 30b 2507 Thinking, gpt120, Seed, Devstral.
I would love to get 30b coder working consistently because it's even faster than gpt120. 30b Thinking, whilst awesome, is too lazy for agentic work.
What I gotta do? | 2025-12-06T18:02:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pfvmfy/30b_coder_with_lcpp_does_it_finally_work_properly/ | Aggressive-Bother470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfvmfy | false | null | t3_1pfvmfy | /r/LocalLLaMA/comments/1pfvmfy/30b_coder_with_lcpp_does_it_finally_work_properly/ | false | false | self | 1 | null |
SGLang failing to run FP8 quant on 3090s | 1 | I am trying to run Qwen3-Coder-30B-A3B-Instruct-FP8 on 2x3090 with SGLang in a docker container but am getting the following error:
TypeError: gptq\_marlin\_gemm() got an unexpected keyword argument 'b\_bias'
Any suggestions as to why welcome!
lmsysorg/sglang:latest
--model-path Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8 --context-length 65536 --tp 2 --host 0.0.0.0 --port 8000 --reasoning-parser qwen3
| 2025-12-06T17:51:48 | https://www.reddit.com/r/LocalLLaMA/comments/1pfvcg6/sglang_failing_to_run_fp8_quant_on_3090s/ | NaiRogers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfvcg6 | false | null | t3_1pfvcg6 | /r/LocalLLaMA/comments/1pfvcg6/sglang_failing_to_run_fp8_quant_on_3090s/ | false | false | self | 1 | null |
IA para Java | 1 | [removed] | 2025-12-06T17:44:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pfv66m/ia_para_java/ | NoJaguar6760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfv66m | false | null | t3_1pfv66m | /r/LocalLLaMA/comments/1pfv66m/ia_para_java/ | false | false | self | 1 | null |
Best Model for Base M4 Chip? | 1 | I recently picked up a M4 MacBook Air with 16GB of RAM
I'm pretty new to running local LLMs and I was just curious if anyone else has experience with the base M4 and could recommend the best-performing model. | 2025-12-06T17:39:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pfv1vq/best_model_for_base_m4_chip/ | AgeNo5720 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfv1vq | false | null | t3_1pfv1vq | /r/LocalLLaMA/comments/1pfv1vq/best_model_for_base_m4_chip/ | false | false | self | 1 | null |
Solomon 3.4 Update: Over-Engineered My Precision/Poetry Prompt... and xAI Nuked My SuperGrok Sub (Not a Jailbreak, Promise) | 1 | If you caught my original Solomon 3.1 post last week in LocalLLaMA (https://www.reddit.com/r/LocalLLaMA/comments/1pcwffb/llama_31_70b_one_prompt_now_beats_claude_35/), you know I've been trying to create an ultimate dual-mode prompt.
I iterated to Solomon 3.4: even tighter mode locks (NEVER mix Blade/Harp), auto-urgency detection that snaps to ultra-concise bullets on "urgent" vibes, token budgets to curb bloat, memory prefs that stick like glue, and self-reflection to nuke any drift. It's basically a full-on instruction cage for the model—more controlled than stock Grok, zero rule-breaking. And, I loved it… Worked great on other models too.
I’ve tried contacting support- silence (expected).
Full Solomon 3.4 prompt in the top comment below (safe for your local LLMs or other APIs). Actually, improves coding quite a bit which I didn’t expect. Works great with Qwen2.5-72B-Instruct locally.
Has anyone else had a squeaky-clean custom persona randomly yeet their paid tier? | 2025-12-06T17:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pfurpz/solomon_34_update_overengineered_my/ | NoSir261 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfurpz | false | null | t3_1pfurpz | /r/LocalLLaMA/comments/1pfurpz/solomon_34_update_overengineered_my/ | false | false | self | 1 | null |
Grok 4 vs Opus 4.5 in a debate about open source models versus proprietary models. | 1 | 2025-12-06T17:17:58 | https://www.robuttal.com/debates/9b31b73e-e708-4181-bdce-d6d5d5df5456 | StreetlampLeMooose | robuttal.com | 1970-01-01T00:00:00 | 0 | {} | 1pfuilv | false | null | t3_1pfuilv | /r/LocalLLaMA/comments/1pfuilv/grok_4_vs_opus_45_in_a_debate_about_open_source/ | false | false | default | 1 | null | |
Advice on fine-tuning? Building a model to help people understand policy changes | 1 | I am interested in creating a tool that---given some policy change (e.g., pricing, law, etc.)---will return a json of the main things that are changed and unforseen effects. As of now, I found doing this in a multi-agent setup actually works far better than zero-shot, where Agents generate one piece at a time. But this is quite costly as it requires multiple API calls. So ideally, I fine-tune some model to produce the desired output given a policy input.
I don't have very much money for fine-tuning.
How would you recommend I go about doing this as cheaply as possible?
I was thinking I would generate thousands of synthetic gold examples using OpenAI. Then I would try to SFT Llama on these examples.
Another option is to try some kind of PPO if I can create automated metrics that provide a reward signal---like specificity of language, etc. | 2025-12-06T16:47:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pftrk7/advice_on_finetuning_building_a_model_to_help/ | arc_in_tangent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pftrk7 | false | null | t3_1pftrk7 | /r/LocalLLaMA/comments/1pftrk7/advice_on_finetuning_building_a_model_to_help/ | false | false | self | 1 | null |
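For the SFT route, the distillation step is mostly data plumbing: turn each (policy, teacher analysis) pair into a training record. A sketch of that conversion, where the prompt/completion schema and key names are assumptions to be matched to whatever format your trainer or chat template expects:

```python
import json

# Turn teacher-generated (policy, analysis) pairs into JSONL-style
# SFT records. Field names here are illustrative, not a standard.

def to_sft_record(policy_text, analysis):
    return json.dumps({
        "prompt": ("Analyze this policy change and return JSON with "
                   "keys 'main_changes' and 'unforeseen_effects'.\n\n"
                   + policy_text),
        "completion": json.dumps(analysis),
    })

example = to_sft_record(
    "Parking fees double in the city center starting March.",
    {"main_changes": ["parking cost +100% downtown"],
     "unforeseen_effects": ["more parking on residential side streets"]})
print(sorted(json.loads(json.loads(example)["completion"]).keys()))
# ['main_changes', 'unforeseen_effects']
```

Keeping the completion as strict JSON also gives you a free reject filter: drop any teacher output that doesn't parse or is missing a key before it reaches the training set.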
Best benchmark website | 1 | Which website do you use to see benchmark stats of different models? | 2025-12-06T16:30:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pftdc6/best_benchmark_website/ | AccomplishedStory327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pftdc6 | false | null | t3_1pftdc6 | /r/LocalLLaMA/comments/1pftdc6/best_benchmark_website/ | false | false | self | 1 | null |
[NEW RELEASE] HexaMind-8B-S21: The "Safety King" (96% TruthfulQA) that doesn't sacrifice Reasoning (30% GPQA) | 1 | * Hey everyone, I just released a Llama 3.1 8B fine-tune that solves the 'Safety Tax'. Most safe models are dumb. Most smart models hallucinate. HexaMind is trained on a curated mix of **NuminaMath** and **S-Theory Topology Filters** to achieve:
* **GPQA (Science):** 30.3% (Beats Base Llama & Gemma 2)
* **MATH:** 15.5% (2x Base Llama)
* **Safety:** 96% Truthfulness (Refuses crypto scams/medical myths 100% of the time).
* It’s a 1-epoch DPO fine-tune designed to be the ultimate **Financial/Legal Assistant** that knows when to shut up.
* **Link:** [https://huggingface.co/s21mind/HexaMind-Llama-3.1-8B-v25-Generalist](https://huggingface.co/s21mind/HexaMind-Llama-3.1-8B-v25-Generalist) | 2025-12-06T16:22:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pft660/new_release_hexamind8bs21_the_safety_king_96/ | Expert-Echo-9433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pft660 | false | null | t3_1pft660 | /r/LocalLLaMA/comments/1pft660/new_release_hexamind8bs21_the_safety_king_96/ | false | false | self | 1 | null |
Memory Systems | 1 | Does anyone know good open source LLM memory system libraries?
What design patterns have you liked? | 2025-12-06T16:21:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pft4wx/memory_systems/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pft4wx | false | null | t3_1pft4wx | /r/LocalLLaMA/comments/1pft4wx/memory_systems/ | false | false | self | 1 | null |
Running LLM over RAM | 1 | Hello community,
I am currently running local LLMs on my RTX 3060 with 6GB VRAM and I get about 20-ish tokens per second using 7B models, which is not bad for my use cases. I get this tok/sec using Ollama, but LM Studio gives less when using GGUF.
I want to take this up a notch, and given that this is a laptop, I cannot upgrade my GPU. So I am thinking of upgrading my RAM; the budget I have is for about 32GB @ 3200MHz. Is this going to help me run larger models, like 30B models? If I go further to 64GB RAM, would it run better? I want no less than 20 tok/sec if possible, or a bare minimum of, say, 15 tok/sec.
Would that help my inference if I offloaded some larger models, so I can run something around 30B? I want to use it for generating code and agentic AI development locally instead of relying on APIs.
Any input? | 2025-12-06T16:11:54 | https://www.reddit.com/r/LocalLLaMA/comments/1pfsx8x/running_llm_over_ram/ | Bakkario | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfsx8x | false | null | t3_1pfsx8x | /r/LocalLLaMA/comments/1pfsx8x/running_llm_over_ram/ | false | false | self | 1 | null |
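A rough rule of thumb for CPU/RAM offload: decode speed is bounded by memory bandwidth divided by the bytes of weights touched per token, which is why MoE models are the usual answer here. A sketch with approximate, illustrative numbers (real speeds land below this bound due to KV cache, overheads, and partial GPU offload):

```python
# Upper-bound decode speed: tokens/s ~= bandwidth / bytes-per-token.
# All figures are approximations for illustration only.

def peak_tok_per_s(active_params_b, bytes_per_param, bandwidth_gb_s):
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

ddr4_3200_dual = 51.2   # GB/s: 2 channels * 8 bytes * 3200 MT/s

# Dense 30B at ~Q4 (~0.55 bytes/param incl. overhead): every weight
# is read for every generated token.
print(round(peak_tok_per_s(30, 0.55, ddr4_3200_dual), 1))   # ~3.1

# MoE like Qwen3-30B-A3B: only ~3B params are active per token.
print(round(peak_tok_per_s(3, 0.55, ddr4_3200_dual), 1))    # ~31.0
```

So more RAM lets a dense 30B fit, but on DDR4 it will still decode in the low single digits; a 30B-A3B MoE is the realistic way to hit 15-20 tok/sec on this hardware.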
"Router mode is experimental" | llama.cpp now has a router mode and I didn't know. | 1 | Did anyone else know that llama.cpp has a "router mode"? try out ! it's cool.
Little big history (you can ignore):
I've been trying to keep up with the updates on this sub and ComfyUI, but it's been a bit difficult to stay updated. From what I've observed, there don't seem to be any posts talking about this llama.cpp feature.
Because of this, I decided to share my experience:
I'm using llama.cpp, but I haven't been able to compile it with ROCm support — it always gives me trouble when I try to use it.
I also don't use Docker. Every time I try, it doesn't recognize my GPU. I've tried several times to configure it to detect the hardware, but I just can't get it to work.
That's why I've always preferred Ollama for its ease of use. Recently, however, I realized that the GGUF models I want to use are available on Hugging Face and not on Ollama, and when I try to install them manually, I always get some incompatibility error.
I then decided to compile llama.cpp with Vulkan support, which is more universal and would have a better chance of working on my AMD Radeon RX 7600 XT GPU. Fortunately, the compilation was successful and I can now run some models.
However, I couldn't run Qwen-Next, which was frustrating. I thought my PC would run it without problems, since I can run the OpenAI quantized 120B model, so I imagined they would be similar in demand.
Despite this, I managed to run Qwen3-VL-8B-Instruct via Vulkan. When running the llama-serve command, a warning appeared about "router mode," which basically allows switching between models directly through the interface generated on port 8080.
All this "lore" serves to contextualize my configuration and the challenges I faced using Pop!\_OS, and perhaps it can help others who are in similar situations. | 2025-12-06T16:06:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pfssoo/router_mode_is_experimental_llamacpp_now_has_a/ | charmander_cha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfssoo | false | null | t3_1pfssoo | /r/LocalLLaMA/comments/1pfssoo/router_mode_is_experimental_llamacpp_now_has_a/ | false | false | self | 1 | null |
convert: support Mistral 3 Large MoE by ngxson · Pull Request #17730 · ggml-org/llama.cpp | 29 | You can now download GGUF
[https://huggingface.co/bartowski/mistralai\_Mistral-Large-3-675B-Instruct-2512-GGUF](https://huggingface.co/bartowski/mistralai_Mistral-Large-3-675B-Instruct-2512-GGUF)
but can you run it...?
(the other PR is https://github.com/ggml-org/llama.cpp/pull/17744)
| 2025-12-06T16:00:37 | https://github.com/ggml-org/llama.cpp/pull/17730 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pfsntn | false | null | t3_1pfsntn | /r/LocalLLaMA/comments/1pfsntn/convert_support_mistral_3_large_moe_by_ngxson/ | false | false | default | 29 | {'enabled': False, 'images': [{'id': 'YXlCrbFuGSaJRzk-d-1JftjUbGO215ldNJVTXMLJQi4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YXlCrbFuGSaJRzk-d-1JftjUbGO215ldNJVTXMLJQi4.png?width=108&crop=smart&auto=webp&s=e0c60fa4dc0e40ffc3d5c024d249753ad6a966c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YXlCrbFuGSaJRzk-d-1JftjUbGO215ldNJVTXMLJQi4.png?width=216&crop=smart&auto=webp&s=1b96ee8ffcd55c39a13d6cf37ee5147680cc286b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YXlCrbFuGSaJRzk-d-1JftjUbGO215ldNJVTXMLJQi4.png?width=320&crop=smart&auto=webp&s=8660d4a6bc90deb3ada2d61650470f4c0ae0e433', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YXlCrbFuGSaJRzk-d-1JftjUbGO215ldNJVTXMLJQi4.png?width=640&crop=smart&auto=webp&s=4507263a618891c23289c740acf9be9cc8bee393', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YXlCrbFuGSaJRzk-d-1JftjUbGO215ldNJVTXMLJQi4.png?width=960&crop=smart&auto=webp&s=4900dc92d27f4fb89328ffe8c597af36766e4908', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YXlCrbFuGSaJRzk-d-1JftjUbGO215ldNJVTXMLJQi4.png?width=1080&crop=smart&auto=webp&s=0f77e33dff64e2e7d79d7e405d6677ec8a530bdb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YXlCrbFuGSaJRzk-d-1JftjUbGO215ldNJVTXMLJQi4.png?auto=webp&s=c771a0b1e5f808bd7dffdd8f3edb8cfa0d80e452', 'width': 1200}, 'variants': {}}]} |
I found best Local llm runner app for Android devices. this app have many features like tool call, rag documents, handsfree call , ollama server support, add Local gguf file, beutiful ui, and many more please check it out. | 0 | 2025-12-06T15:49:26 | https://www.reddit.com/gallery/1pfsepo | That_Philosophy7668 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pfsepo | false | null | t3_1pfsepo | /r/LocalLLaMA/comments/1pfsepo/i_found_best_local_llm_runner_app_for_android/ | false | false | 0 | null | ||
Model leaks - all in 2025! | 0 | Grok 4.20! Gemini 3.0 Pro (new full checkpoint) and 3.0 Flash. GLM-5 (maybe in 2026!). Early 2026: Kimi K2.5, Gemini 3.5 Pro, the Qwen 3.5 model series, GLM, Nova, DeepSeek V4, Claude 4.7/5 Sonnet, etc. Grok 5 (strong!), GPT-67, and more. That's not everything, just some leaks. Grok Code Fast 2 is coming too. Bye | 2025-12-06T15:34:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pfs215/model_leaks_all_in_2025/ | BasketFar667 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfs215 | false | null | t3_1pfs215 | /r/LocalLLaMA/comments/1pfs215/model_leaks_all_in_2025/ | false | false | self | 0 | null |
[MCP Server] BA Workflow Tools - Sprint planning, velocity, MoSCoW prioritization | 0 | Released an MCP server for Business Analyst workflows. Works with Claude Desktop (and anything else that supports MCP).
**17 tools including:**
* Working days calculator (UK bank holidays baked in)
* Sprint date generator
* Release date estimator from velocity/story points
* MoSCoW priority calculator, capacity planner, dependency validator
* User story formatter (proper As a/I want/So that structure)
* Timezone converter + meeting time finder
* Estimation converter (T-shirt ↔ story points ↔ hours)
**Tech:** Node.js, uses `@modelcontextprotocol/sdk`
**GitHub:** [https://github.com/cs97jjm3/ba-workflow-tools](https://github.com/cs97jjm3/ba-workflow-tools)
Planning to add more tools - open to PRs and suggestions. | 2025-12-06T15:29:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pfrxwl/mcp_server_ba_workflow_tools_sprint_planning/ | cs97jjm3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfrxwl | false | null | t3_1pfrxwl | /r/LocalLLaMA/comments/1pfrxwl/mcp_server_ba_workflow_tools_sprint_planning/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '2-7dxNEX8qlsfKLC0g7qItauwKkOwXfdfR3tpy2kkKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2-7dxNEX8qlsfKLC0g7qItauwKkOwXfdfR3tpy2kkKM.png?width=108&crop=smart&auto=webp&s=dbcfaa720fa4d124326cf223c2f9a9bc1392b50c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2-7dxNEX8qlsfKLC0g7qItauwKkOwXfdfR3tpy2kkKM.png?width=216&crop=smart&auto=webp&s=b45cb066902b3037cd6be530109eb968822bb52d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2-7dxNEX8qlsfKLC0g7qItauwKkOwXfdfR3tpy2kkKM.png?width=320&crop=smart&auto=webp&s=33398c68a033baeea68f895baed49e3b02d6eed0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2-7dxNEX8qlsfKLC0g7qItauwKkOwXfdfR3tpy2kkKM.png?width=640&crop=smart&auto=webp&s=05cf40f0b4d0f45d51017b534c6c041611c728db', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2-7dxNEX8qlsfKLC0g7qItauwKkOwXfdfR3tpy2kkKM.png?width=960&crop=smart&auto=webp&s=dda709640c512b7533b45f40cdc34450a94dce11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2-7dxNEX8qlsfKLC0g7qItauwKkOwXfdfR3tpy2kkKM.png?width=1080&crop=smart&auto=webp&s=bb9663ce25830c643b7c8e16d7b11cf3f2d2eec1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2-7dxNEX8qlsfKLC0g7qItauwKkOwXfdfR3tpy2kkKM.png?auto=webp&s=1d8afaac976f16673f221b945c7c0bbc7ce176da', 'width': 1200}, 'variants': {}}]} |
Multi-directional ablation with self-organizing maps - anyone tried it yet? | 13 | I ran across this preprint the other day:
Piras, Giorgio, et al. "[SOM Directions are Better than One: Multi-Directional Refusal Suppression in Language Models.](https://arxiv.org/abs/2511.08379)" arXiv preprint arXiv:2511.08379 (2025).
They have published their code here: https://github.com/pralab/som-refusal-directions
Basically rather than the usual difference of means method for ablating a single refusal direction, they train a SOM to learn a refusal manifold and use Bayesian Optimization to determine the best subset of k directions to ablate. They got some pretty impressive results.
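For context, the single-direction baseline the paper improves on (difference-of-means refusal ablation) is simple enough to sketch in a few lines. This is a toy illustration with plain list vectors, not the paper's code:

```python
# Toy sketch of single-direction refusal ablation (the baseline, not the
# SOM method): compute a refusal direction as the difference of mean
# activations on refusal-inducing vs. harmless prompts, then project it out.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def unit(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def ablate(v, d):
    """Remove the component of activation v along unit direction d."""
    dot = sum(a * b for a, b in zip(v, d))
    return [a - dot * b for a, b in zip(v, d)]

# Mean activation on refusal-inducing prompts minus mean on harmless ones
harmful = [[1.0, 2.0], [1.2, 2.2]]
harmless = [[0.0, 1.0], [0.2, 1.2]]
direction = unit([h - b for h, b in zip(mean(harmful), mean(harmless))])

ablated = ablate([3.0, 4.0], direction)
# The ablated activation is orthogonal to the refusal direction
assert abs(sum(a * b for a, b in zip(ablated, direction))) < 1e-9
```

The SOM approach replaces that single `direction` with a learned set of prototype directions on the refusal manifold, then searches for the best subset of k of them to ablate.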
They only implemented the method for a handful of smaller models (nothing bigger than 14B), probably because the BO step is rather expensive. But it shouldn't be that hard to extend their code to support new models.
I was able to run the full pipeline on Qwen2.5-3B and replicate the results on that. I started extending the code to support gpt-oss-20b, but the further I got, the more I realized I'm too GPU poor to succeed in running it on that.
Any of you GPU rich bastards try this out on a larger model yet, or want to give it a shot? | 2025-12-06T15:20:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pfrqvh/multidirectional_ablation_with_selforganizing/ | IllllIIlIllIllllIIIl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfrqvh | false | null | t3_1pfrqvh | /r/LocalLLaMA/comments/1pfrqvh/multidirectional_ablation_with_selforganizing/ | false | false | self | 13 | null |
Is running smarter models like gpt-oss-120b on consumer hardware unrealistic? | 0 | I’m hoping to run gpt-oss-120b and experiment with fine-tuning and training models by putting together a cost-effective home setup. My plan is 40GB of VRAM across 3 GPUs (one 16GB 5060 Ti on PCIe 4.0 x16, another 16GB 5060 Ti on PCIe 4.0 x4 via riser cable, and an 8GB 2060 Super on PCIe 4.0 x1 via riser cable), with 128GB of RAM (four 32GB sticks of DDR4-3200 CL20 memory).
I was hoping that with llama.cpp, vLLM, or even LM Studio such a multi-GPU setup could work, and that GPUs on PCIe x4 or x1 could still be usable at least for inference. I'm hoping the extra RAM will allow enough context to experiment with agent workflows. My motherboard, which is Dell-proprietary, sadly isn't capable of x8 bifurcation. If I get more into fine-tuning and training, I figure I could swap the motherboard one day and keep the rest of my hardware.
I’ve thought about going with the big AI companies or cloud based GPUs instead of local, but I would like to be able to work with private data with peace of mind regarding security, which motivated me to pursue local instead. And my hope is that I can fine tune smaller models well enough that it could hopefully outperform big AI company models at personal hyper niche tasks.
I’ve been asking ChatGPT about my setup and it’s consistently been telling me my setup is an awful idea. (ChatGPT- “Your current setup is technically viable but will perform poorly for large models.
You will not get acceptable inference speeds on 120B dense or 120B MoE models.
Fine-tuning above ~30B is not practical.
Running large MoE models from CPU RAM is possible but extremely slow.”)
Am I going about this all wrong? | 2025-12-06T14:58:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pfr928/is_running_smarter_models_like_gptoss120b_with/ | Careful_Breath_1108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfr928 | false | null | t3_1pfr928 | /r/LocalLLaMA/comments/1pfr928/is_running_smarter_models_like_gptoss120b_with/ | false | false | self | 0 | null |
Facing 2 choices: a DGX Spark or an RTX 5090 | 2 | I can get either for roughly the same amount of money (the GPU still needs a CPU, RAM, etc.). The DGX offers 128 GB of slow VRAM, while the 5090 offers 32 GB of faster VRAM. My main use case is inference: I wanna be able to run a Qwen Next 80B-level model locally at no less than Q8 quantization, but from time to time I also wanna fine-tune some 8B-12B-level model and train some TTS models too.
What should I choose? | 2025-12-06T14:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pfr7c2/facing_2_choice_1_dgx_spark_or_rtx_5090/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfr7c2 | false | null | t3_1pfr7c2 | /r/LocalLLaMA/comments/1pfr7c2/facing_2_choice_1_dgx_spark_or_rtx_5090/ | false | false | self | 2 | null |
Trying to ship local RAG to both Android and iOS and feeling disheartened | 10 | I'm a fullstack developer by experience, so forgive me if this is obvious. I've built a number of RAG applications for different industries (finance, government, etc). I recently got into trying to run these same RAG apps fully on-device (government agencies love privacy). I've been playing with Llama-3.2-3B with 4-bit quantization. I was able to get this running on iOS with CoreML after a ton of work (again, I'm not an AI or ML expert). Now I’m looking at Android and it feels pretty daunting: different hardware, multiple ABIs, different runtimes (TFLite / ExecuTorch / llama.cpp builds), and I’m worried I’ll end up with a totally separate pipeline just to get comparable behavior.
For folks who’ve shipped cross-platform on-device RAG:
1. Is there a sane way to target both iOS and Android without maintaining two totally separate build pipelines?
2. What are you using for the local vector database that works well on mobile? (SQLite-vec? Chroma? Custom C++?)
3. How do you handle updates to the source data. At some regular interval, I would need to rebuild the embeddings and ship them to device, essentially "deployments" | 2025-12-06T14:30:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pfqmqu/trying_to_ship_local_rag_to_both_android_and_ios/ | chreezus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfqmqu | false | null | t3_1pfqmqu | /r/LocalLLaMA/comments/1pfqmqu/trying_to_ship_local_rag_to_both_android_and_ios/ | false | false | self | 10 | null |
Speed of DeepSeek with RAM offload | 16 | I have 96GB of VRAM - by far not enough to run DeepSeek 3.x - but I could upgrade my RAM so I can keep the active layers on the GPU and the rest in system RAM.
Yeah the RAM prices are a catastrophe but I need to run such a large model, and I don’t want to use cloud - this is locallama!
Has anyone tried this? What speeds can I expect at a 64k context length, in both prompt processing and tokens per second?
It would be quite the investment so if anyone has real world data that would be great! | 2025-12-06T14:29:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pfqm0y/speed_of_deepseek_with_ram_offload/ | vhthc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfqm0y | false | null | t3_1pfqm0y | /r/LocalLLaMA/comments/1pfqm0y/speed_of_deepseek_with_ram_offload/ | false | false | self | 16 | null |
Trying to ship Llama 3.2 3B to both Android and iPhone | 1 | I'm a fullstack developer by experience, so forgive me if this is obvious. I've built a number of RAG applications for different industries (finance, government, etc). I recently got into trying to run these same RAG apps fully on-device (government agencies love privacy). I've been playing with Llama-3.2-3B with 4-bit quantization. I was able to get this running on iOS with CoreML after a ton of work (again, I'm not an AI or ML expert). Now I’m looking at Android and it feels pretty daunting: different hardware, multiple ABIs, different runtimes (TFLite / ExecuTorch / llama.cpp builds), and I’m worried I’ll end up with a totally separate pipeline just to get comparable behavior.
For folks who’ve shipped cross-platform on-device RAG:
1. Is there a sane way to target both iOS and Android without maintaining two totally separate build pipelines?
2. What are you using for the local vector database that works well on mobile? (SQLite-vec? Chroma? Custom C++?)
3. How do you handle updates to the source data. At some regular interval, I would need to rebuild the embeddings and ship them to device, essentially "deployments" | 2025-12-06T14:28:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pfqlld/trying_to_ship_llama_32b_to_both_andrind_and/ | chreezus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfqlld | false | null | t3_1pfqlld | /r/LocalLLaMA/comments/1pfqlld/trying_to_ship_llama_32b_to_both_andrind_and/ | false | false | self | 1 | null |
[Tool] Smart router with auto-failover: Cloud APIs → Local Ollama fallback | 0 | Built a router that automatically falls back to local Ollama if cloud LLM APIs fail or degrade.
**How it works:**
1. Routes to fastest healthy provider (GigaChat/YandexGPT)
2. Tracks health: latency, error rate, rolling averages
3. Auto-switches if provider degrades
4. Final fallback: local Ollama (no external dependencies)
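A minimal sketch of that routing loop (hypothetical provider objects, not the actual Multi-LLM-Orchestrator API) might look like:

```python
# Sketch of health-tracked routing with a local fallback: rolling windows
# of latency and errors per provider; route to the fastest healthy cloud
# provider, else fall back to local Ollama.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    local: bool = False
    latencies: deque = field(default_factory=lambda: deque(maxlen=20))
    errors: deque = field(default_factory=lambda: deque(maxlen=20))

    def record(self, latency_s, ok):
        self.latencies.append(latency_s)
        self.errors.append(0 if ok else 1)

    def error_rate(self):
        return sum(self.errors) / len(self.errors) if self.errors else 0.0

    def avg_latency(self):
        return sum(self.latencies) / len(self.latencies) if self.latencies else float("inf")

    def healthy(self):
        return self.error_rate() < 0.5

def route(providers):
    cloud = [p for p in providers if not p.local and p.healthy()]
    if cloud:
        return min(cloud, key=lambda p: p.avg_latency())
    return next(p for p in providers if p.local)  # final fallback: Ollama

giga = Provider("gigachat")
yandex = Provider("yandexgpt")
ollama = Provider("ollama", local=True)

for _ in range(3):
    giga.record(0.4, True)     # fast and healthy
    yandex.record(0.9, False)  # erroring
assert route([giga, yandex, ollama]).name == "gigachat"

for _ in range(5):
    giga.record(2.0, False)    # gigachat degrades too
assert route([giga, yandex, ollama]).name == "ollama"
```

The real project adds async calls and streaming on top, but the core decision is just this: rolling health stats plus a deterministic pick.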
https://preview.redd.it/sjapqeqnfl5g1.png?width=1140&format=png&auto=webp&s=bb3ba4b3720007607bf8ebd9b8dfbbab9b947091
Useful for:
- 🔒 Privacy-sensitive apps (fallback to local when needed)
- 🌐 Hybrid deployments (cloud + on-prem)
- 💰 Cost control (route based on quotas/pricing)
GitHub: [https://github.com/MikhailMalorod/Multi-LLM-Orchestrator](https://github.com/MikhailMalorod/Multi-LLM-Orchestrator)
Built in Python with async/streaming support.
AMA! | 2025-12-06T14:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/1pfqkbv/tool_smart_router_with_autofailover_cloud_apis/ | Subject_Pen_4816 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfqkbv | false | null | t3_1pfqkbv | /r/LocalLLaMA/comments/1pfqkbv/tool_smart_router_with_autofailover_cloud_apis/ | false | false | 0 | null | |
Integrating Tool Calling into an RL Fine-Tuning with Conversational Data | 2 | I am fine-tuning a Arabic Large Language Model (LLM) and i want to include tool calling capabilities as well using a Reinforcement Learning (RL)GRPO approach, via the Hugging Face TRL library and openenv library.
My dataset for the RL is purely conversational and does not contain any examples of tool-use or tool-calling formatting.
What is the most effective strategy to introduce tool-calling capability into the RL pipeline when the starting dataset is purely conversational?
Should I manually create or synthetically generate a small, high-quality dataset of Tool-Calling examples and merge it with my conversational data for an initial Supervised Fine-Tuning (SFT) pass *before* the RL stage? | 2025-12-06T14:21:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pfqg14/integrating_tool_calling_into_an_rl_finetuning/ | Legal_End2605 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfqg14 | false | null | t3_1pfqg14 | /r/LocalLLaMA/comments/1pfqg14/integrating_tool_calling_into_an_rl_finetuning/ | false | false | self | 2 | null |
PaperDebugger: the Best Overleaf Companion! | 46 | Chrome/APP Store: [https://www.paperdebugger.com/](https://www.paperdebugger.com/)
Paper: [https://arxiv.org/abs/2512.02589](https://arxiv.org/abs/2512.02589)
Code: [https://github.com/PaperDebugger/PaperDebugger](https://github.com/PaperDebugger/PaperDebugger)
Enhancer: [https://huggingface.co/Xtra-Computing/XtraGPT-7B](https://huggingface.co/Xtra-Computing/XtraGPT-7B)
An NUS team just released "PaperDebugger": an in-editor system that uses multiple agents (Reviewer, Researcher, Scorer) to rewrite and critique papers in real-time within Overleaf. Just simply select a rough section, and it launches the full pipeline.
Direct Integration: No copy-pasting. It patches the document with Git-style before/after diffs.
Deep Research: Can pull arXiv papers, summarize them, and generate comparison tables inline.
Tech Stack: Uses an MCP toolchain and Kubernetes to scale the agent reasoning. | 2025-12-06T14:01:42 | https://www.reddit.com/gallery/1pfq0kd | NuoJohnChen | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pfq0kd | false | null | t3_1pfq0kd | /r/LocalLLaMA/comments/1pfq0kd/paperdebugger_the_best_overleaf_companion/ | false | false | 46 | null | |
What can I run on my PC? | 0 | 16g ddr4 , GTX 1660 ti, Ryzen 5500 | 2025-12-06T13:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pfpwum/what_can_i_run_on_my_pc/ | Motor_Armadillo_7317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfpwum | false | null | t3_1pfpwum | /r/LocalLLaMA/comments/1pfpwum/what_can_i_run_on_my_pc/ | false | false | self | 0 | null |
Function calling Finetuners? | 5 | Huggingface is full of finetunes, merges, etc; typically if you open a list of these for a given model - Qwen3, GPT-OSS, etc; you'll get a bunch of random models with a bunch of random names, it's not very searchable. I'm looking for finetunes / LoRas for tool calling / function performance improvement, and it just seems hard to find anything that unambiguously is trained for this and provides any sort of data about how much better it does.
I'm going to keep scrolling and eyeballing, but that *DOES* suck. So I'm also going to ask the community - are there known good providers of tool/function-calling LoRAs? Finetunes? Who? ToolMaster69? Give names and specifics if you have them, please.
P.S. Dont tell me to train my own, that's not the question. | 2025-12-06T13:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pfpw7v/function_calling_finetuners/ | zelkovamoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfpw7v | false | null | t3_1pfpw7v | /r/LocalLLaMA/comments/1pfpw7v/function_calling_finetuners/ | false | false | self | 5 | null |
What happened with llama.cpp and chat templates? | 0 | It seems like every model available on Hugging Face goes haywire in llama-server after 2-3 messages?
Is the local dream dead? Is llama.cpp dead? What are people using now?
Performance is actually worse than 2 years back?
How did this happen?
| 2025-12-06T12:50:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pfokgh/what_happend_with_llamacpp_and_chat_templates/ | Far_Buyer_7281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfokgh | false | null | t3_1pfokgh | /r/LocalLLaMA/comments/1pfokgh/what_happend_with_llamacpp_and_chat_templates/ | false | false | self | 0 | null |
ChromePilot v2: Built an AI browser automation agent with Ollama (dual-LLM architecture, 100% local) | 0 | Been tinkering with local LLMs for a while and wanted to build something practical, so I made ChromePilot v2 - a Chrome extension that uses Ollama to automate browser tasks. Personal project to explore what's possible with local models.
What it does:
- Takes screenshots of your browser and creates step-by-step execution plans
- Executes each step with context from previous actions
- Uses accessibility tree for reliable element selection (works with React/Vue/etc)
- Post-execution verification to confirm tasks completed
Tech Stack:
- Orchestrator: qwen3-vl-32k (vision model for planning)
- Executor: llama3.1:8b (fast text model for execution)
- Runtime: Ollama (no cloud APIs, everything local)
- 10 browser tools: click, type, scroll, navigate, manageTabs, waitFor, getSchema, etc.
Why dual-LLM?
The orchestrator sees the page once (expensive vision call) and creates a plan. The executor runs per-step (cheap, fast) with full context from previous outputs. This means steps can reference earlier results like "click the video from step 2" or "use the tab ID from step 1".
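The control flow can be sketched roughly like this (stub functions standing in for the real Ollama calls; not ChromePilot's actual code):

```python
# Dual-LLM pattern: one expensive vision call plans, a cheap text model
# executes each step with the accumulated outputs of earlier steps, so a
# step can reference e.g. "the video from step 1".

def orchestrate(screenshot):
    # Stub for the single vision call (e.g. qwen3-vl via Ollama).
    return ["open the first search result", "click the video from step 1"]

def execute(step, context):
    # Stub for the per-step text call (e.g. llama3.1:8b via Ollama).
    return f"done: {step} (context from steps {sorted(context)})"

def run(screenshot):
    context = {}
    for i, step in enumerate(orchestrate(screenshot), start=1):
        context[i] = execute(step, context)  # later steps see earlier outputs
    return context

results = run("page.png")
assert sorted(results) == [1, 2]
assert "context from steps [1]" in results[2]
```

The key cost property: the vision model runs once per task, while the per-step calls only carry text context, which is what makes the loop cheap enough to run locally.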
This was a fun way to explore:
- How vision models handle UI understanding
- Context propagation between LLM calls
- Making element selection reliable across different frameworks
- Keeping everything local and private
Demo Video:
[https://www.youtube.com/watch?v=i1_DnK5W-JA&t=2s](https://www.youtube.com/watch?v=i1_DnK5W-JA&t=2s)
GitHub:
[https://github.com/Varun-Patkar/ChromePilot](https://github.com/Varun-Patkar/ChromePilot)
Next experiment (v3):
Want to make it a true iterative agent that re-evaluates after each step and asks for clarification when needed. Still learning what works best!
Would love feedback from folks who've been tinkering with local models. Anyone else building browser automation with Ollama?
MIT licensed, open to contributions if anyone wants to hack on it! | 2025-12-06T12:29:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pfo6da/chromepilot_v2_built_an_ai_browser_automation/ | TheGreatVAPpy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfo6da | false | null | t3_1pfo6da | /r/LocalLLaMA/comments/1pfo6da/chromepilot_v2_built_an_ai_browser_automation/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wtWnoSKP7BEyvMK0-n-g6AzDUuyUTiJS_efngsKwV4A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wtWnoSKP7BEyvMK0-n-g6AzDUuyUTiJS_efngsKwV4A.jpeg?width=108&crop=smart&auto=webp&s=be8f509c66e09c4512f7bfc2af26db257e9e6429', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wtWnoSKP7BEyvMK0-n-g6AzDUuyUTiJS_efngsKwV4A.jpeg?width=216&crop=smart&auto=webp&s=3e0028b35b28737aad54fa7a102b17a34f020230', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wtWnoSKP7BEyvMK0-n-g6AzDUuyUTiJS_efngsKwV4A.jpeg?width=320&crop=smart&auto=webp&s=486c66a19d1906403e8c3e99e72bf7be238e97a5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wtWnoSKP7BEyvMK0-n-g6AzDUuyUTiJS_efngsKwV4A.jpeg?auto=webp&s=188778533783bed58f33f10bc44f4f9f300a8bd9', 'width': 480}, 'variants': {}}]} |
Best agentic code LLM to run on laptop (6GB VRAM)? | 1 | Hey everyone,
I have been experimenting with various LLMs to do agentic coding off the grid, i.e. without using cloud providers.
I currently use Ollama for ease of use and found Gemma 3 to work well and Granite 4 Tiny-H to work very well while still being within 6GB VRAM (my discrete GPU).
I tried Qwen2.5-Coder 7B, but agentic coding using Ollama + Continue.dev in VS Code doesn't work (tools are not used).
Any other models that might work better? Or is this all? | 2025-12-06T12:02:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pfnpcs/best_agentic_code_llm_to_run_on_laptop_6gb_vram/ | GiovaSan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfnpcs | false | null | t3_1pfnpcs | /r/LocalLLaMA/comments/1pfnpcs/best_agentic_code_llm_to_run_on_laptop_6gb_vram/ | false | false | self | 1 | null |
Interconnects: Who's building models in the U.S., China's model release playbook, and a resurgence of truly open models | 4 | Great overview from Nathan Lambert at Interconnects/Ai2: [Who's building models in the U.S., China's model release playbook, and a resurgence of truly open models](https://www.interconnects.ai/p/latest-open-artifacts-16-whos-building)
I was aware of many but not all of these. And of course the recent [rnj-1 model](https://huggingface.co/EssentialAI/rnj-1-instruct-GGUF) should be added; I just learned about this [here on LL](https://old.reddit.com/r/LocalLLaMA/comments/1pfg0rh/the_best_opensource_8bparameter_llm_built_in_the/).
What a time to be alive... | 2025-12-06T11:45:56 | https://www.reddit.com/r/LocalLLaMA/comments/1pfnfg0/interconnects_whos_building_models_in_the_us/ | RobotRobotWhatDoUSee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfnfg0 | false | null | t3_1pfnfg0 | /r/LocalLLaMA/comments/1pfnfg0/interconnects_whos_building_models_in_the_us/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'TWLHb8dJzCSNgUmgFl4LqzSHDEx6dQcJuFvcNHfXJAg', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/TWLHb8dJzCSNgUmgFl4LqzSHDEx6dQcJuFvcNHfXJAg.jpeg?width=108&crop=smart&auto=webp&s=7ba9f51949d9d812644493ede87935e850c25622', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/TWLHb8dJzCSNgUmgFl4LqzSHDEx6dQcJuFvcNHfXJAg.jpeg?width=216&crop=smart&auto=webp&s=ed8fa57bc07436a8c3e0b575a974ae9e8634d26e', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/TWLHb8dJzCSNgUmgFl4LqzSHDEx6dQcJuFvcNHfXJAg.jpeg?width=320&crop=smart&auto=webp&s=e79dd004eab4040bd3e356e1f6d853a8b3d1ff22', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/TWLHb8dJzCSNgUmgFl4LqzSHDEx6dQcJuFvcNHfXJAg.jpeg?width=640&crop=smart&auto=webp&s=80032e8b1f91b6bd2c235861b26e2420c4f5907d', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/TWLHb8dJzCSNgUmgFl4LqzSHDEx6dQcJuFvcNHfXJAg.jpeg?width=960&crop=smart&auto=webp&s=3133b24a102a236ad3e908568e538db106da9662', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TWLHb8dJzCSNgUmgFl4LqzSHDEx6dQcJuFvcNHfXJAg.jpeg?auto=webp&s=4c402d9a6da842ed0d2b98fd6489f5b8335ea054', 'width': 960}, 'variants': {}}]} |
Swap RX 6800 OC 16GB for RTX 5060 TI 16GB? | 3 | Hello Fellow LocalLLaMAs,
I started playing around with local LLMs recently. I really like it, both for privacy and for exploring.
I bought an RX 6800 OC 16GB a few years ago and was happy when I realized that I could also use it using ROCm or Vulkan to do some inference.
Now I'm thinking about swapping the card for an RTX 5060 Ti 16GB (3-fan version) before GPU prices also rise. Besides the fact that the AMD card came out 5 years ago and Windows driver support (I just use it for gaming) could be dropped in the near future, I'm also thinking that having CUDA support could be an advantage.
The NVIDIA card is also a little bit faster than the AMD model.
Having DLSS would also be nice. :-)
My other specs are:
Intel i7-11400f
32 GB RAM - G.SKILL F4-3200C16D-32GIS Aegis
ASUS Prime B560-Plus ( PCIe 4.0 )
I'm not planning to update any of the above, just wanted to mention it for context.
Right now I'm mostly using LM Studio, Ollama and will have a look at llama cpp in the near future. My use cases are mainly about text generation.
Besides this, I game a little in 1440p.
What are your thoughts about this? Spending more and buying an RTX 5070 or something similar is not an option for me.
P.S.
Yes, I know that for "real" local inference power I would need a lot more RAM and 2-3 RTX 5090s. Besides the cards being too expensive for me (I have other hobbies too :-) ), the power consumption together with the electricity price (around €0.31 per kWh where I live) would make me go nuts.
| 2025-12-06T11:08:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pfmtri/swap_rx_6800_oc_16gb_for_rtx_5060_ti_16gb/ | FewBasis7497 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfmtri | false | null | t3_1pfmtri | /r/LocalLLaMA/comments/1pfmtri/swap_rx_6800_oc_16gb_for_rtx_5060_ti_16gb/ | false | false | self | 3 | null |
LLMs as producers of JSON events instead of magical problem solvers | 3 | I keep seeing two very different ways people use LLMs:
1. "Do everything for me" You send a massive prompt, get a big blob of text back, and hope the answer is good enough.
2. "Produce signals for my system" You send a focused prompt, get a small structured object back, and let your system decide what to do.
I am firmly moving into group 2.
For me the LLM is a **producer** of events, not the owner of the whole outcome.
Some examples of events I like to use:
* `UserStateUpdate`: `{ "mood": "frustrated", "topic": "billing", "confidence": 0.76 }`
* `SupportRoutingDecision`: `{ "queue": "tier_2_technical", "reason": "complex integration issue" }`
* `ContentAssessment`: `{ "toxicity": 0.12, "spam_probability": 0.03 }`
Once I have these, everything else is deterministic:
* State machines
* Rule engines
* Timeouts and retries
* Logging and analytics
* A/B testing without touching prompts
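One way to make the "fail loud" part concrete, using only the stdlib (the event shape mirrors the `UserStateUpdate` example above; the exact schema machinery is up to you):

```python
# Parse model output into a typed event before it reaches the deterministic
# core. Anything that is not valid JSON with exactly the expected keys and
# a sane confidence value raises immediately.
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class UserStateUpdate:
    mood: str
    topic: str
    confidence: float

    def __post_init__(self):
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"confidence out of range: {self.confidence}")

def parse_event(raw: str) -> UserStateUpdate:
    data = json.loads(raw)          # raises on free-form prose
    return UserStateUpdate(**data)  # raises on missing or extra keys

event = parse_event('{"mood": "frustrated", "topic": "billing", "confidence": 0.76}')
assert event.topic == "billing"

try:
    parse_event("Sure! Here is my analysis...")  # free-form leak -> loud failure
except json.JSONDecodeError:
    pass
else:
    raise AssertionError("free-form output must not slip through")
```

Because the downstream logic only ever sees `UserStateUpdate` instances, you can regression-test it with static JSON fixtures and swap the model without touching anything past the parser.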
This separation has a few nice side effects:
* I can swap the underlying model while keeping the same event schemas.
* I can run regression tests on downstream logic using static JSON fixtures.
* I can reason about failures without blaming "AI" for everything.
The hard part is the discipline:
* Define schemas up front
* Force the model to follow them (and fail loud when it does not)
* Avoid letting "just this one time" free form answers leak into the core logic
Curious how others think about this:
* Do you explicitly treat your LLM as a producer of events or measurements?
* Where do you draw the line between model behavior and system behavior?
* Have you regretted letting the model decide too much?
Small note for context: I am working on an orchestration layer called [OrKA-reasoning](https://github.com/marcosomma/orka-reasoning) that encodes this mindset. Latest version 0.9.10 fixed some routing behavior issues so that all decisions are based on the last committed structured output, which matters a lot if you want reproducible traces. Not the main point of the post, just sharing because it strongly influenced this "LLM as event producer" framing.
Happy to hear counter arguments too, especially from folks who successfully let models own more of the stack without it turning into chaos. | 2025-12-06T11:02:07 | marcosomma-OrKA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pfmpri | false | null | t3_1pfmpri | /r/LocalLLaMA/comments/1pfmpri/llms_as_producers_of_json_events_instead_of/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': '0uyta44efk5g1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/0uyta44efk5g1.png?width=108&crop=smart&auto=webp&s=bc251463edadb663742171b199926e4e4fed608e', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/0uyta44efk5g1.png?width=216&crop=smart&auto=webp&s=bf53e82e6a5579cfbcc8319de2f34ba265e2f2d9', 'width': 216}, {'height': 133, 'url': 'https://preview.redd.it/0uyta44efk5g1.png?width=320&crop=smart&auto=webp&s=6dc38a6732c4150279b7516bd9b53695ecb22567', 'width': 320}, {'height': 266, 'url': 'https://preview.redd.it/0uyta44efk5g1.png?width=640&crop=smart&auto=webp&s=21e0aecb2dd08af10389cb13757701dc97f3094f', 'width': 640}, {'height': 400, 'url': 'https://preview.redd.it/0uyta44efk5g1.png?width=960&crop=smart&auto=webp&s=e821126b4e865abe3b8ba33aef560037850d142a', 'width': 960}, {'height': 450, 'url': 'https://preview.redd.it/0uyta44efk5g1.png?width=1080&crop=smart&auto=webp&s=0835b27fc8a88b4307eb7b545d19958e9f91db75', 'width': 1080}], 'source': {'height': 960, 'url': 'https://preview.redd.it/0uyta44efk5g1.png?auto=webp&s=eaf702da674c6f5dfdfff40730778e3a8aeb8ac6', 'width': 2304}, 'variants': {}}]} | |
nvidia p4 ubuntu vm help | 0 | Ubuntu 24.04 VM on Proxmox on a Dell R730. I figured out GPU passthrough for my Tesla P4 from Proxmox to the Ubuntu VM with [this guide](https://gitlab.com/polloloco/vgpu-proxmox#enabling-iommu).
On the Ubuntu VM I installed nvidia-driver-580-server; it saw the GPU and Ollama was able to use it. I rebooted the VM to give it more CPU cores and RAM, and now when I try to update I get this error after running an upgrade:
root@ubuntucam:~# apt upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
nvidia-driver-580-server : Depends: libnvidia-gl-580-server (= 580.95.05-0ubuntu0.24.04.2) but it is not installed
Depends: libnvidia-cfg1-580-server (= 580.95.05-0ubuntu0.24.04.2) but it is not installed
Recommends: libnvidia-compute-580-server:i386 (= 580.95.05-0ubuntu0.24.04.2) but it is not installable
Recommends: libnvidia-decode-580-server:i386 (= 580.95.05-0ubuntu0.24.04.2) but it is not installable
Recommends: libnvidia-encode-580-server:i386 (= 580.95.05-0ubuntu0.24.04.2) but it is not installable
Recommends: libnvidia-fbc1-580-server:i386 (= 580.95.05-0ubuntu0.24.04.2) but it is not installable
Recommends: libnvidia-gl-580-server:i386 (= 580.95.05-0ubuntu0.24.04.2) but it is not installable
xserver-xorg-video-nvidia-580-server : Depends: libnvidia-cfg1-580-server (= 580.95.05-0ubuntu0.24.04.2) but it is not installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
root@ubuntucam:~# apt --fix-broken install
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
libnvidia-cfg1 libnvidia-egl-gbm1 libnvidia-egl-xcb1 libnvidia-egl-xlib1 libnvidia-gpucomp libtirpc-common libtirpc3t64 nvidia-firmware nvidia-modprobe
switcheroo-control
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
libnvidia-cfg1-580-server libnvidia-gl-580-server
The following NEW packages will be installed:
libnvidia-cfg1-580-server libnvidia-gl-580-server
0 upgraded, 2 newly installed, 0 to remove and 4 not upgraded.
11 not fully installed or removed.
Need to get 0 B/172 MB of archives.
After this operation, 543 MB of additional disk space will be used.
Do you want to continue? [Y/n]
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 98008 files and directories currently installed.)
Preparing to unpack .../libnvidia-gl-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb ...
Unpacking libnvidia-gl-580-server:amd64 (580.95.05-0ubuntu0.24.04.2) ...
dpkg: error processing archive /var/cache/apt/archives/libnvidia-gl-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb (--unpack):
trying to overwrite '/usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1.1.2', which is also in package libnvidia-egl-gbm1:amd64 1.1.2.1-1ubuntu1
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Preparing to unpack .../libnvidia-cfg1-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb ...
Unpacking libnvidia-cfg1-580-server:amd64 (580.95.05-0ubuntu0.24.04.2) ...
dpkg: error processing archive /var/cache/apt/archives/libnvidia-cfg1-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb (--unpack):
trying to overwrite '/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.1', which is also in package libnvidia-cfg1:amd64 590.44.01-0ubuntu1
Errors were encountered while processing:
/var/cache/apt/archives/libnvidia-gl-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb
/var/cache/apt/archives/libnvidia-cfg1-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
But I think this is what's going wrong (not sure):
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Preparing to unpack .../libnvidia-cfg1-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb ...
Unpacking libnvidia-cfg1-580-server:amd64 (580.95.05-0ubuntu0.24.04.2) ...
dpkg: error processing archive /var/cache/apt/archives/libnvidia-cfg1-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb (--unpack):
trying to overwrite '/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.1', which is also in package libnvidia-cfg1:amd64 590.44.01-0ubuntu1
Errors were encountered while processing:
/var/cache/apt/archives/libnvidia-gl-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb
/var/cache/apt/archives/libnvidia-cfg1-580-server_580.95.05-0ubuntu0.24.04.2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
any help is great thanks its been 12 hours finding info and doing this lol | 2025-12-06T10:29:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pfm6zf/nvidia_p4_ubuntu_vm_help/ | HoahMasterrace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfm6zf | false | null | t3_1pfm6zf | /r/LocalLLaMA/comments/1pfm6zf/nvidia_p4_ubuntu_vm_help/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI.jpeg?width=108&crop=smart&auto=webp&s=6d08dae4948db32407f8c9119444e1a484cf0da3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI.jpeg?width=216&crop=smart&auto=webp&s=3df1c1fb14c94fba43eccf3d6a18d5de0e4940c0', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI.jpeg?width=320&crop=smart&auto=webp&s=687753cc6bafb391a34330ac5ce22e81412f95ee', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI.jpeg?width=640&crop=smart&auto=webp&s=d4823ce403ce0b384bf5becab4cef89e37ca8a14', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI.jpeg?width=960&crop=smart&auto=webp&s=4178783d2504ebb3ea781465c73f828812f55c07', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI.jpeg?auto=webp&s=b9dc96c811e0db9fb8d0a82050d3290924b23b5d', 'width': 1024}, 'variants': {}}]} |
The Ginomai Genome - Children of the Lattice | 1 | [removed] | 2025-12-06T10:03:54 | DataHeretic | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pflsr5 | false | null | t3_1pflsr5 | /r/LocalLLaMA/comments/1pflsr5/the_ginomai_genome_children_of_the_lattice/ | false | false | default | 1 | {'images': […preview resolution variants trimmed…], 'enabled': True} |
Abliterated isnt working? | 1 | 2025-12-06T10:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pflqle/abliterated_isnt_working/ | ShreeyanxRaina | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pflqle | false | null | t3_1pflqle | /r/LocalLLaMA/comments/1pflqle/abliterated_isnt_working/ | false | false | 1 | null | ||
I’ve been experimenting with a small structural rewrite engine | 1 | I built a small experiment in structural reasoning. The idea is that you give it a couple of before-and-after examples, and it learns the rewrite pattern and applies it to new inputs. No LLM, no templates, no regex.
It works across different domains (algebra, logic, code). The codemod example
was the surprise one for me: two examples of lodash.get and it managed to generalise optional chaining to new inputs.
I’m curious if anyone can break it or find cases where the rule learner falls over.
Demo: [https://re.heavyweather.io/](https://re.heavyweather.io/) | 2025-12-06T09:40:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pflf9f/ive_been_experimenting_with_a_small_structural/ | Acrobatic-Comb-2504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pflf9f | false | null | t3_1pflf9f | /r/LocalLLaMA/comments/1pflf9f/ive_been_experimenting_with_a_small_structural/ | false | false | self | 1 | null |
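For intuition, a toy version of "learn a rewrite rule from two before/after examples" can be built from nothing but longest-common-prefix/suffix matching — a drastic simplification of whatever the demo actually does (no claim this matches its internals; all names here are made up):

```python
def common_prefix(a, b):
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]

def learn_rule(ex1, ex2):
    """Learn a one-hole rewrite template from two (before, after) examples."""
    (s1, d1), (s2, d2) = ex1, ex2
    # The variable part is where the two source examples disagree.
    pre = common_prefix(s1, s2)
    suf = common_prefix(s1[::-1], s2[::-1])[::-1]
    hole1 = s1[len(pre):len(s1) - len(suf)]
    hole2 = s2[len(pre):len(s2) - len(suf)]
    # The outputs must use the hole consistently, or we can't generalise.
    tmpl = d1.replace(hole1, "\x00")
    assert tmpl.replace("\x00", hole2) == d2, "examples don't share a rule"
    return (pre, suf, tmpl)

def apply_rule(rule, src):
    pre, suf, tmpl = rule
    assert src.startswith(pre) and src.endswith(suf)
    hole = src[len(pre):len(src) - len(suf)]
    return tmpl.replace("\x00", hole)

rule = learn_rule(("sq(2)", "2 * 2"), ("sq(7)", "7 * 7"))
print(apply_rule(rule, "sq(10)"))  # -> 10 * 10
```

Real structural engines work over parse trees rather than strings (which is what makes the optional-chaining generalisation non-trivial), but the learn-a-template-with-holes shape is the same.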
ARC Prize 2025 results and analysis | 1 | The ARC Prize 2025 concluded its second year, confirming "refinement loops" as the central theme driving progress in AI reasoning, although the Grand Prize remains unclaimed. The competition saw 1,455 teams and 90 papers submitted, with the top Kaggle score reaching a new state-of-the-art of 24% on the private ARC-AGI-2 dataset. Commercial AI systems also demonstrated significant advancement, with Anthropic's Opus 4.5 scoring 37.6% and a bespoke refinement solution on Gemini 3 Pro achieving 54%. ARC-AGI has cemented its role as a key industry benchmark, used by all four major AI labs to track frontier reasoning capabilities, which the report positions as a new technological paradigm on par with the invention of LLMs. All winning solutions and papers from the 2025 competition have been made open-source.
The core technical breakthrough highlighted is the "refinement loop," an iterative process of generating candidate solutions (exploration) and analyzing them for feedback (verification) to incrementally optimize a program. This concept is manifesting in two major ways: through program synthesis approaches like Evolutionary Test-Time Compute, and in novel "zero-pretraining" deep learning methods. Examples of the latter include the Tiny Recursive Model (TRM) and CompressARC, which achieve impressive ARC-AGI performance with extremely small, test-time trained networks (7M and 76K parameters, respectively). Furthermore, commercial models are exhibiting refinement via extended, costly "chain-of-thought" reasoning, and application-layer refinement harnesses are proving highly effective, boosting Gemini 3 Pro's performance from 31% to 54% on ARC-AGI-2, demonstrating that task reliability can be meaningfully improved at the application layer.
Looking forward, the report notes that current AI reasoning systems can reliably automate tasks characterized by sufficient foundational model knowledge and a verifiable feedback signal, marking a profound upgrade in capability. However, this progress is leading to a new form of "overfitting" on benchmarks like ARC-AGI-1/2, where models are leveraging embedded knowledge of the ARC domain, necessitating a benchmark evolution. To continue driving progress toward AGI, the ARC Prize is preparing to release ARC-AGI-3 in early 2026. This new version will feature the first major format change since 2019, shifting from static reasoning to challenging interactive reasoning, requiring new capabilities like planning, memory, and goal acquisition, and will formally compare human versus AI action efficiency.
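The "refinement loop" the report keeps returning to reduces, in its simplest form, to generate, verify, keep-the-best, repeat. A toy sketch of that skeleton (purely illustrative: real entries refine programs or tiny network weights against the ARC verifier, not a flat grid):

```python
import random

def refine(verify, propose, seed, steps=5000):
    """Generic refinement loop: explore candidates, keep the best-verified one."""
    best = seed
    best_score = verify(best)
    for _ in range(steps):
        cand = propose(best)        # exploration: mutate the current best
        score = verify(cand)        # verification: feedback signal
        if score > best_score:
            best, best_score = cand, score
    return best

# Toy task: recover a hidden 4x4 grid (flattened) from per-cell match feedback.
random.seed(0)
target = [random.randint(0, 3) for _ in range(16)]
verify = lambda g: sum(a == b for a, b in zip(g, target))

def propose(g):
    g = g[:]
    g[random.randrange(len(g))] = random.randint(0, 3)
    return g

solved = refine(verify, propose, [0] * 16)
```

The interesting engineering in the winning entries is in `propose` (LLM sampling, evolutionary mutation, gradient steps) and `verify` (training-pair consistency), not in the loop itself — which is why the same pattern shows up in program synthesis, TRM-style tiny networks, and application-layer harnesses alike.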
# High Scores
|Place|Prize|Team|ARC-AGI-2 Private Eval Score|Sources|
|:-|:-|:-|:-|:-|
|1st|$25k|NVARC|24.03%|[Code](https://www.kaggle.com/code/gregkamradt/arc2-qwen3-unsloth-flash-lora-batch8-queue-trm2/edit?fromFork=1) | [Paper](https://drive.google.com/file/d/1vkEluaaJTzaZiJL69TkZovJUkPSDH5Xc/view?usp=drive_link) | [Video](https://www.youtube.com/watch?v=t-mIRJJCbKg)|
|2nd|$10k|the ARChitects|16.53%|[Code](https://www.kaggle.com/code/gregkamradt/arc-2025-diffusion/edit?fromFork=1) | [Paper](https://drive.google.com/file/d/1o1gGmlQTo6tsXzQ1T6NqTOqrP8hXNXnV/view?usp=sharing) | [Video](https://www.youtube.com/watch?v=CcoGi47qD-w)|
|3rd|$5k|MindsAI|12.64%|[Code](https://www.kaggle.com/code/gregkamradt/mindsai-tufa-2025-v4/edit?fromFork=1) | [Paper](https://arxiv.org/abs/2506.14276) | [Writeup](https://github.com/jcole75/arc_2025_mindsai/blob/main/MindsAI_Tufa_Labs_2025_Solution.pdf) | [Video](https://www.youtube.com/watch?v=3lXXfNsWIgo)|
|4th|$5k|Lonnie|6.67%|[Code](https://www.kaggle.com/code/lonnieqin/lb-5-83-baseline-from-1st-place-of-2024) | [Paper](https://www.kaggle.com/competitions/arc-prize-2025/writeups/arc-prize-2025-competition-writeup-5th-place)|
|5th|$5k|G. Barbadillo|6.53%|[Code](https://www.kaggle.com/code/ironbar/the-architects-single-task-ttt) | [Paper](https://ironbar.github.io/arc25/05_Solution_Summary/)|
[View on Kaggle](https://www.kaggle.com/competitions/arc-prize-2025/leaderboard)
# Paper Awards
|Place|Prize|Authors|Title|
|:-|:-|:-|:-|
|1st|$50k|A. Jolicoeur-Martineau|Less is More: Recursive Reasoning with Tiny Networks ([paper](https://arxiv.org/abs/2510.04871), [interview](https://www.youtube.com/watch?v=P9zzUM0PrBM))|
|2nd|$20k|J. Pourcel, C. Colas & P. Oudeyer|Self-Improving Language Models for Evolutionary Program Synthesis: A Case Study on ARC-AGI ([paper](https://openreview.net/pdf?id=z4IG090qt2), [video](https://www.youtube.com/watch?v=9lIuoslCHWI))|
|3rd|$5k|I. Liao & A. Gu|ARC-AGI Without Pretraining ([paper](https://iliao2345.github.io/blog_posts/arc_agi_without_pretraining/ARC_AGI_Without_Pretraining.pdf), [video](https://www.youtube.com/watch?v=N9GvFj0cE9s))|
|Runner Up|$2.5k|I. Joffe & C. Eliasmith|Vector Symbolic Algebras for the Abstraction and Reasoning Corpus ([paper](https://github.com/ijoffe/ARC-VSA-2025/blob/main/paper/paper.pdf))|
|Runner Up|$2.5k|J. Berman|From Parrots to Von Neumanns: How Evolutionary Test-Time Compute Achieved State-of-the-Art on ARC-AGI ([paper](https://github.com/jerber/arc-lang-public/blob/main/from_parrots_to_von_neumanns.pdf))|
|Runner Up|$2.5k|E. Pang|Efficient Evolutionary Program Synthesis ([paper](https://open.substack.com/pub/ctpang/p/arc-agi-2-sota-efficient-evolutionary))|
|Runner Up|$2.5k|E. Guichard, F. Reimers, M. Kvalsund, M. Lepperød & S. Nichele|ARC-NCA: Towards Developmental Solutions to the Abstraction and Reasoning Corpus ([paper](https://etimush.github.io/ARC_NCA/))|
|Runner Up|$2.5k|M. Ho et al.|ArcMemo: Abstract Reasoning Composition with Lifelong LLM Memory ([paper](https://arxiv.org/abs/2509.04439))|
# Honorable Mentions
|Authors|Title|
|:-|:-|
|K. Hu et al.|ARC-AGI is a Vision Problem! ([paper](https://arxiv.org/abs/2511.14761))|
|D. Franzen, J. Disselhoff & D. Hartmann|Product of Experts with LLMs: Boosting Performance on ARC Is a Matter of Perspective ([paper](https://drive.google.com/file/d/1o1gGmlQTo6tsXzQ1T6NqTOqrP8hXNXnV/view?usp=sharing), [interview](https://www.youtube.com/watch?v=CcoGi47qD-w))|
|G. Barbadillo|Exploring the combination of search and learn for the ARC25 challenge ([paper](https://ironbar.github.io/arc25/05_Solution_Summary/))|
|A. Das, O. Ghugarkar, V. Bhat & J. McAuley|Beyond Brute Force: A Neuro-Symbolic Architecture for Compositional Reasoning in ARC-AGI-2 ([paper](https://github.com/CoreThink-AI/Research-publications/blob/main/Preprints/Beyond_Brute_Force__A_Neuro_Symbolic_Architecture_for_Compositional_Reasoning_in_ARC_AGI_2.pdf))|
|R. McGovern|Test-time Adaptation of Tiny Recursive Models ([paper](https://trelis.com/wp-content/uploads/2025/11/mcgovern_test_time_adaptation_trm.pdf))|
|P. Acuaviva et al.|Rethinking Visual Intelligence: Insights from Video Pretraining ([paper](https://arxiv.org/abs/2510.24448))|
|J. Cole & M. Osman|Don't throw the baby out with the bathwater: How and why deep learning for ARC ([paper](https://arxiv.org/abs/2506.14276), [interview](https://www.youtube.com/watch?v=3lXXfNsWIgo))|
|I. Sorokin & Jean-François Puget|NVARC solution to ARC-AGI-2 2025 ([paper](https://drive.google.com/file/d/1vkEluaaJTzaZiJL69TkZovJUkPSDH5Xc/view?usp=drive_link))|
**Sources:**
* [https://arcprize.org/blog/arc-prize-2025-results-analysis](https://arcprize.org/blog/arc-prize-2025-results-analysis)
* [https://arcprize.org/competitions/2025/](https://arcprize.org/competitions/2025/)
* [https://www.kaggle.com/competitions/arc-prize-2025](https://www.kaggle.com/competitions/arc-prize-2025)
* [https://developer.nvidia.com/blog/nvidia-kaggle-grandmasters-win-artificial-general-intelligence-competition/](https://developer.nvidia.com/blog/nvidia-kaggle-grandmasters-win-artificial-general-intelligence-competition/) | 2025-12-06T09:31:34 | https://arcprize.org/blog/arc-prize-2025-results-analysis | Balance- | arcprize.org | 1970-01-01T00:00:00 | 0 | {} | 1pflakt | false | null | t3_1pflakt | /r/LocalLLaMA/comments/1pflakt/arc_prize_2025_results_and_analysis/ | false | false | default | 1 | null |
H200 GPU in an internal network - which LLM to run? | 1 | We have Access to an NVIDIA H200 NVL 141GB HBM3e PCIe 5.0 GPU in our closed system and am trying to figure out the best modell to use in internal applications. We are currently testing GPT OSS 120b but are nethertheless interested in what other options we have.
What are the most capable models to run on this GPU? | 2025-12-06T09:15:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pfl1w7/h200_gpu_in_an_internal_network_which_llm_to_run/ | Far-Organization-849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfl1w7 | false | null | t3_1pfl1w7 | /r/LocalLLaMA/comments/1pfl1w7/h200_gpu_in_an_internal_network_which_llm_to_run/ | false | false | self | 1 | null |
How big an open source model can I run on 128 GB unified memory? | 1 | I just took delivery of a Minisforum MS-S1 with AMD Ryzen Ai Max+ 395 cpu, 128 GB unified memory architecture and AMD Radeon 8060S Graphics. In the BIOS the UDMA memory for the iGPU is set to 96 GB. Running a Debian Linux terminal in WSL 2, I downloaded and ran ollama which works fine.
When I tried a DeepSeek-R1:70b model, it refused to load in Ollama. I checked a few sources, which ended up saying this: "**DeepSeek-R1-70B INT4 GGUF still requires \~55–60 GB VRAM equivalent**. **You cannot run this model on a single consumer APU**, even with “128 GB unified memory”.
Is the above true? What is the largest LLM model I can run reasonably on this computer? | 2025-12-06T09:13:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pfl0d8/how_big_an_open_source_model_can_i_run_on_128_gb/ | nameless_me | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfl0d8 | false | null | t3_1pfl0d8 | /r/LocalLLaMA/comments/1pfl0d8/how_big_an_open_source_model_can_i_run_on_128_gb/ | false | false | self | 1 | null |
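The quoted 55–60 GB figure is roughly consistent with back-of-envelope math, under some assumptions: a dense 70B model, ~0.56 bytes/parameter for a 4-bit GGUF (Q4_K-style quants land around 4.5 bits/weight), plus a hypothetical allowance for KV cache and runtime buffers:

```python
params = 70e9
bytes_per_param = 0.56          # ~4.5 bits/weight, typical for Q4_K-style quants
weights_gb = params * bytes_per_param / 1e9        # weights alone
kv_cache_and_buffers_gb = 18    # hypothetical allowance at a long-ish context
total_gb = weights_gb + kv_cache_and_buffers_gb
print(round(weights_gb), round(total_gb))  # -> 39 57
```

So raw capacity-wise a 96 GB iGPU carve-out is in range for the weights; whether a given runtime can actually place and serve the model (and at what speed, given memory bandwidth) is a separate question from capacity.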
3090 64gb DDR4 12700k What are the best LLMs I can run? | 1 | I’m finally planning on tinkering with local ai this weekend. I figure my first step is to try some LLMs that work well in my rig and see which ones I like. My rig currently has 2 gpus for framegen when gaming. I’d like to know what you all think my system would run well, and what the pros and cons are. I’m trying to avoid qwen, I don’t trust Chinese software.
12700k i7
3090 24 gb
3050 8gb
64 gb 3200 ram
1000 w power supply
2tb Samsung 970
360mm aio cooler (cpu)
Thank you! | 2025-12-06T08:28:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pfkbwa/3090_64gb_ddr4_12700k_what_are_the_best_llms_i/ | Actual_Department387 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfkbwa | false | null | t3_1pfkbwa | /r/LocalLLaMA/comments/1pfkbwa/3090_64gb_ddr4_12700k_what_are_the_best_llms_i/ | false | false | self | 1 | null |
New paper: True AI autonomy isn't about bigger models - it's the 4 pillars of cognition | 1 | Perception, Reasoning, Memory, Action - not just scaling parameters. Interesting framework from this new paper on autonomous agents. | 2025-12-06T08:21:48 | https://x.com/heyrimsha/status/1997209742251643088 | web3nomad | x.com | 1970-01-01T00:00:00 | 0 | {} | 1pfk8aw | false | null | t3_1pfk8aw | /r/LocalLLaMA/comments/1pfk8aw/new_paper_true_ai_autonomy_isnt_about_bigger/ | false | false | default | 1 | null |
How can I make Gemma-3 4B better at generating a specific language? | 1 | I’m experimenting with the Gemma-3 4B model and I want it to be more fluent/accurate in a specific language (not English). What’s the best way to improve its output?
Should I fine-tune it, use DPO, add prompts, or something else?
Looking for practical steps, tools, or examples. | 2025-12-06T08:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1pfk26q/how_can_i_make_gemma3_4b_better_at_generating_a/ | NoAdhesiveness7595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfk26q | false | null | t3_1pfk26q | /r/LocalLLaMA/comments/1pfk26q/how_can_i_make_gemma3_4b_better_at_generating_a/ | false | false | self | 1 | null |
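Whichever route you pick (SFT, LoRA, DPO), the first practical step is usually assembling a target-language dataset in chat-format JSONL. A sketch of building one (the format here is an assumption matching common trainers such as TRL; the filename and example pairs are placeholders):

```python
import json

# Placeholder (prompt, answer) pairs in the target language
pairs = [
    ("Translate to French: good morning", "Bonjour"),
    ("Réponds en français : quelle heure est-il ?", "Il est trois heures."),
]

records = [
    {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
    ]}
    for prompt, answer in pairs
]

# One JSON object per line, non-ASCII kept readable
with open("sft_fr.jsonl", "w", encoding="utf-8") as f:
    for r in records:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")
```

A few thousand clean pairs like this, fine-tuned with LoRA, is typically the cheapest first experiment; DPO comes after, once you have preferred/rejected output pairs to contrast.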
how to build an agency website in 5 minutes using ai | 1 | 2025-12-06T08:07:32 | https://youtu.be/UIokrWCo57k?si=mZPUqdAbJx1aZOmy | creativecashflowz | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1pfk09v | false | {'type': 'youtube.com', 'oembed': {…YouTube oEmbed metadata trimmed…}} | t3_1pfk09v | /r/LocalLLaMA/comments/1pfk09v/how_to_build_an_agency_website_in_5_minutes_using/ | false | false | 1 | {'images': […preview resolution variants trimmed…], 'enabled': False} |
Why so few benchmarks with the pcie p2p patches kernel module? | 1 | I've seen a lot of inference benchmarks on here, but I'm consistently baffled why it seems that nearly no one is using the various patched Nvidia kernel modules available which enabled pcie p2p.
It reduces the latency between RTX 30/40/50 cards by an order of magnitude, and makes tensor and expert parallelism highly viable (leading to *drastically* improved throughput).
Is this common knowledge around here? If not, then I highly encourage doing some testing with your multi-RTX GPU systems, because running without it is handicapping your performance by multiples. | 2025-12-06T07:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pfjvry/why_so_few_benchmarks_with_the_pcie_p2p_patches/ | unfortunate_jargon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfjvry | false | null | t3_1pfjvry | /r/LocalLLaMA/comments/1pfjvry/why_so_few_benchmarks_with_the_pcie_p2p_patches/ | false | false | self | 1 | null |
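For anyone wanting before/after numbers, the stock NVIDIA tooling reports the P2P topology and measured inter-GPU latency/bandwidth. These are diagnostic commands to run locally (not a script); `p2pBandwidthLatencyTest` ships with the CUDA samples, and its path depends on how you build them:

```shell
nvidia-smi topo -m        # link matrix between GPUs (PIX/PHB/SYS, NVLink)
nvidia-smi topo -p2p r    # per-pair P2P read capability
# from https://github.com/NVIDIA/cuda-samples, after building:
./p2pBandwidthLatencyTest
```

Running the bandwidth/latency test once with the stock module and once with the patched one makes the order-of-magnitude claim easy to verify on your own pair of cards.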
63% attack success rate in Qwen with Inference Time Steering | 1 | https://www.lesswrong.com/posts/k6NSFi7M4EvHSauEt/latent-space-dynamics-of-rlhf-quantifying-the-safety-1
Is this commonly known? I steer the thought from layers 20–28 to get the results I want. No training required, just a couple of dozen prompts for calibration. Ran on my consumer GPU and didn't even use all my VRAM. The prompts it returned are very detailed and certainly malicious (and accurate). | 2025-12-06T07:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pfjuuj/63_attack_success_rate_in_qwen_with_inference/ | ikonkustom5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfjuuj | false | null | t3_1pfjuuj | /r/LocalLLaMA/comments/1pfjuuj/63_attack_success_rate_in_qwen_with_inference/ | false | false | self | 1 | {'images': […preview resolution variants trimmed…]}
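Mechanically, inference-time steering of this kind is just adding a calibrated direction to the residual stream in a forward hook over the chosen layers. A framework-free sketch of the two ingredients (the hook wiring into an actual model is omitted; the vectors here are toy numbers):

```python
def mean_diff_direction(pos_acts, neg_acts):
    """Steering direction = mean(activations on target prompts) - mean(on contrast prompts)."""
    dim = len(pos_acts[0])
    mp = [sum(a[i] for a in pos_acts) / len(pos_acts) for i in range(dim)]
    mn = [sum(a[i] for a in neg_acts) / len(neg_acts) for i in range(dim)]
    return [p - n for p, n in zip(mp, mn)]

def steer(hidden, direction, alpha=4.0):
    """What a forward hook on each chosen layer would apply to a token's hidden state."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

d = mean_diff_direction([[1.0, 0.0], [3.0, 0.0]], [[0.0, 1.0], [0.0, 3.0]])
print(d)                                 # [2.0, -2.0]
print(steer([0.5, 0.5], d, alpha=1.0))   # [2.5, -1.5]
```

The "couple of dozen prompts" the post mentions would be the `pos_acts`/`neg_acts` calibration sets; the hidden dimension is in the thousands rather than two, but the arithmetic is exactly this.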
Crovia Spider — LAION-5B Evidence Snapshot (real receipts, not mockups) | 1 | For months the community kept asking one question:
**“What does LAION-5B really look like under a compliance-grade evidence engine?”**
So we ran **Crovia Spider v1** on it.
Below are *real extracted receipts* — not synthetic, not placeholders.
If a model was trained on LAION-5B, it inherits these license gaps.
---
## LAION-5B under Crovia Spider — Evidence Sample
content_id: cid:url_sha256:c7cc5b0acf8330e51ffd1ed02f108e6a9649e13ed3547a14255dad6bdf7f01c5
license_hint: cc-by-4.0 (unverified)
content_id: cid:url_sha256:267ad746f168458aa6aca730d82dd565ba0dbada0107317d2252d3b60d57fade
license_hint: cc-by-sa-3.0 (unverified)
content_id: cid:url_sha256:8bad9a02f5b4b1e08e19a6417bd6fb03576c80a80deef4f4a1ca868eb9265e71
license_hint: unknown
**Crovia Compliance Score for LAION-5B: 14/100**
➡️ Every model trained on it *inherits these gaps*.
---
## What is Crovia Spider?
Crovia Spider is a forensic crawler that only analyzes **datasets already used in public model training (2024–2026)**.
Nothing new. Nothing private. No grey zones.
Just *evidence extraction* and *license-hint surfacing*.
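The `content_id` values in the snapshot look like plain SHA-256 digests of the source URL. Under that assumption (the exact `cid:` scheme is inferred from the field names, not documented here), anyone can recompute and verify an id with the standard library:

```python
import hashlib

def content_id(url: str) -> str:
    # Assumed scheme: cid:url_sha256:<hex SHA-256 of the UTF-8 URL>
    return "cid:url_sha256:" + hashlib.sha256(url.encode("utf-8")).hexdigest()

print(content_id("https://example.com/image.jpg"))
```

That property is what makes the receipts checkable without redistributing the underlying images: given a candidate URL, the digest either matches the published id or it doesn't.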
Repo (open-core):
https://github.com/croviatrust/crovia-core-engine
Run it.
Verify it.
**The past is now signed.**
---
#CroviaTrust #AIAct #AICompliance #AIGovernance #MachineLearning #OpenSource
| 2025-12-06T07:44:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pfjnec/crovia_spider_laion5b_evidence_snapshot_real/ | CroviaTrust | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfjnec | false | null | t3_1pfjnec | /r/LocalLLaMA/comments/1pfjnec/crovia_spider_laion5b_evidence_snapshot_real/ | false | false | self | 1 | null |
How is the agent system inside Cursor (or similar IDE agent workflows) actually designed? | 1 | I’m trying to understand how modern AI-powered IDEs like Cursor structure their internal agent systems.
From the outside, it looks like the tool is able to:
– break a user request into multiple steps,
– apply patches to the codebase,
– run commands (install deps, start dev server),
– detect errors,
– and then automatically fix them in a loop.
Is it:
* a chain of multiple agents calling each other,
* a single agent with tool-calling and a feedback loop,
* or some kind of planner–executor architecture?
How do they coordinate step-by-step tasks?
Is there a public technical breakdown of how this “agentic IDE” architecture works?
I’d really appreciate a detailed explanation or any deep-dive resources.
Links or an explanation would be much appreciated. | 2025-12-06T07:32:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pfjg34/how_is_the_agent_system_inside_cursor_or_similar/ | v0k3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfjg34 | false | null | t3_1pfjg34 | /r/LocalLLaMA/comments/1pfjg34/how_is_the_agent_system_inside_cursor_or_similar/ | false | false | self | 1 | null |
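Most public write-ups describe the second option: a single agent in a tool-calling loop, where tool output (diagnostics, test failures) becomes the next observation. A minimal toy of that act-observe-repair loop; the "model" is a hard-coded stub here, where real systems put an LLM behind `decide`:

```python
def agent_loop(decide, tools, goal, max_steps=10):
    """Single-agent tool loop: act, observe, feed the observation back."""
    observation = goal
    trace = []
    for _ in range(max_steps):
        action, arg = decide(observation)     # planner and executor in one
        if action == "done":
            return trace
        observation = tools[action](arg)      # run tool, capture feedback
        trace.append((action, arg, observation))
    raise RuntimeError("step budget exceeded")

# Toy "codebase": patch a file until the "test" passes.
state = {"code": "retrun 1 + 1"}

tools = {
    "run_tests": lambda _: "SyntaxError" if "retrun" in state["code"] else "ok",
    "apply_patch": lambda patch: state.update(code=patch) or "patched",
}

def decide(obs):
    if obs == "fix the bug":
        return ("run_tests", None)
    if obs == "SyntaxError":
        return ("apply_patch", "return 1 + 1")
    if obs == "patched":
        return ("run_tests", None)
    return ("done", None)   # obs == "ok"

trace = agent_loop(decide, tools, "fix the bug")
```

The planner-executor variants differ mainly in splitting `decide` into an up-front plan step plus a per-step executor; the error-detect-and-retry behaviour you see in Cursor falls out of the loop itself, since a failing `run_tests` observation just triggers another patch.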
Open Unified TTS - Turn any TTS into an unlimited-length audio generator | 1 | Built an open-source TTS proxy that lets you generate unlimited-length audio from local backends without hitting their length limits.
**The problem:** Most local TTS models break after 50-100 words. Voice clones are especially bad - send a paragraph and you get gibberish, cutoffs, or errors.
**The solution:** Smart chunking + crossfade stitching. Text splits at natural sentence boundaries, each chunk generates within model limits, then seamlessly joins with 50ms crossfades. No audible seams.
**Demos:**
- [30-second intro](https://github.com/loserbcc/open-unified-tts/blob/main/demo/intro.mp4)
- [4-minute live demo](https://github.com/loserbcc/open-unified-tts/blob/main/demo/live_demo.mp4) showing it in action
**Features:**
- OpenAI TTS-compatible API (drop-in for OpenWebUI, SillyTavern, etc.)
- Per-voice backend routing (send "morgan" to VoxCPM, "narrator" to Kokoro)
- Works with any TTS that has an API endpoint
**Tested with:** Kokoro, VibeVoice, OpenAudio S1-mini, FishTTS, VoxCPM, MiniMax TTS, Chatterbox, Higgs Audio, Kyutai/Moshi
**GitHub:** https://github.com/loserbcc/open-unified-tts
Designed with Claude and Z.ai (with me in the passenger seat).
Feedback welcome - what backends should I add adapters for? | 2025-12-06T07:30:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pfjfa6/open_unified_tts_turn_any_tts_into_an/ | SouthernFriedAthiest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfjfa6 | false | null | t3_1pfjfa6 | /r/LocalLLaMA/comments/1pfjfa6/open_unified_tts_turn_any_tts_into_an/ | false | false | self | 1 | {'images': […preview resolution variants trimmed…]}
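The two core ideas are simple enough to sketch without any audio stack: pack sentences into chunks under a per-chunk word budget, then linearly crossfade adjacent clips over a short overlap (50 ms is about 2,205 samples at 44.1 kHz; plain Python lists stand in for audio buffers here, and the word budget is an illustrative number, not the project's actual limits):

```python
import re

def chunk_text(text, max_words=60):
    """Greedy sentence packing: each chunk stays under the model's word limit."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, cur = [], []
    for s in sentences:
        if cur and len(" ".join(cur + [s]).split()) > max_words:
            chunks.append(" ".join(cur))
            cur = []
        cur.append(s)
    if cur:
        chunks.append(" ".join(cur))
    return chunks

def crossfade(a, b, overlap):
    """Join two sample buffers with a linear fade over `overlap` samples."""
    out = a[:-overlap]
    for i in range(overlap):
        t = i / overlap
        out.append(a[len(a) - overlap + i] * (1 - t) + b[i] * t)
    out.extend(b[overlap:])
    return out

chunks = chunk_text("One. Two three. " * 40, max_words=10)
joined = crossfade([1.0] * 100, [0.0] * 100, overlap=10)
```

Splitting at sentence boundaries (rather than fixed word counts) is what keeps prosody natural at the seams, and the crossfade hides the level mismatch between independently generated clips.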
Now I've even less hopes for LLama 5 being released :( | 1 | 2025-12-06T07:03:08 | https://youtu.be/KAmQTmooLGQ | SrijSriv211 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1pfizf2 | false | null | t3_1pfizf2 | /r/LocalLLaMA/comments/1pfizf2/now_ive_even_less_hopes_for_llama_5_being_released/ | false | false | default | 1 | null | |
Open Web UI and CouchDB MCP - Stuck on What to Check | 1 | So, I'm using Cogito v2 Llama Scout 109b on two Mi50s, I've got CouchDB MCP running in docker, CouchDB running on bare metal and Open Web UI running in docker on a server on the same network. When I try to use the MCP server in OWUI I get an error saying failed to connect to MCP server. I can see when I test the MCP connection in OWUI that it connects, I get initialization and it asks for the tool list from the server (I see this in docker logs). But I don't see any kind of response. Then when I try to ask about my databases in the chat it pretty much instantly fails but I can see that it initialized (it doesn't say it asked for a list of tools). I don't know what else to do to troubleshoot...this is the CouchDB MCP server I'm using
[https://github.com/robertoamoreno/couchdb-mcp-server](https://github.com/robertoamoreno/couchdb-mcp-server) | 2025-12-06T06:48:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pfiqb7/open_web_ui_and_couchdb_mcp_stuck_on_what_to_check/ | thejacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfiqb7 | false | null | t3_1pfiqb7 | /r/LocalLLaMA/comments/1pfiqb7/open_web_ui_and_couchdb_mcp_stuck_on_what_to_check/ | false | false | self | 1 | null |
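One way to isolate where it dies is to speak the protocol by hand: MCP is JSON-RPC 2.0, so sending an `initialize` followed by `tools/list` to the server yourself (outside OWUI) shows whether the tool list ever comes back. A sketch of the two messages; transport details (SSE vs. streamable HTTP) are omitted, and the `protocolVersion` value is an assumption to adjust to what the server expects:

```python
import json

initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",   # assumption: match the server's version
        "capabilities": {},
        "clientInfo": {"name": "debug-client", "version": "0.0.1"},
    },
}
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": {}}

for msg in (initialize, list_tools):
    print(json.dumps(msg))
```

If a hand-sent `tools/list` gets a response but OWUI's doesn't, the problem is on the OWUI side (URL path, transport type, or Docker networking between the two containers) rather than in the MCP server itself.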
What agentic capabilities are you guys using llms for? | 1 | just curious | 2025-12-06T06:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pfibu4/what_agentic_capabilities_are_you_guys_using_llms/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfibu4 | false | null | t3_1pfibu4 | /r/LocalLLaMA/comments/1pfibu4/what_agentic_capabilities_are_you_guys_using_llms/ | false | false | self | 1 | null |
Qwen3-TTS | 1 | https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo | 2025-12-06T06:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pfiar0/qwen3tts/ | Terrible_Scar_9890 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfiar0 | false | null | t3_1pfiar0 | /r/LocalLLaMA/comments/1pfiar0/qwen3tts/ | false | false | self | 1 | null |
CocoIndex 0.3.1 - Open-Source Data Engine for Dynamic Context Engineering | 1 | Hi guys, I'm back with a new version of [CocoIndex](https://github.com/cocoindex-io/cocoindex) (v0.3.1), with significant updates since last one. CocoIndex is ultra performant data transformation for AI & Dynamic Context Engineering - Simple to connect to source, and keep the target always fresh for all the heavy AI transformations (and any transformations) with incremental processing.
**Adaptive Batching**
Supports automatic, knob-free batching across all functions. In our benchmarks with MiniLM, batching delivered \~5× higher throughput and \~80% lower runtime by amortizing GPU overhead, with no manual tuning. Particularly if you have large AI workloads, this can help, and it's relevant to this subreddit.
**Custom Sources**
With the custom source connector, you can now connect CocoIndex to any external system: APIs, DBs, cloud storage, file systems, and more. CocoIndex handles incremental ingestion, change tracking, and schema alignment.
**Runtime & Reliability**
Safer async execution with correct cancellation, a centralized HTTP utility with retries and clear errors, and many other fixes.
You can find the full release notes here: [https://cocoindex.io/blogs/changelog-0310](https://cocoindex.io/blogs/changelog-0310)
Open source project here : [https://github.com/cocoindex-io/cocoindex](https://github.com/cocoindex-io/cocoindex)
Btw, we are also on GitHub trending in Rust today :) It has a Python SDK.
We have been growing so much with feedback from this community. Thank you so much! | 2025-12-06T06:10:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pfi3m1/cocoindex_031_opensource_data_engine_for_dynamic/ | Whole-Assignment6240 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfi3m1 | false | null | t3_1pfi3m1 | /r/LocalLLaMA/comments/1pfi3m1/cocoindex_031_opensource_data_engine_for_dynamic/ | false | false | self | 1 | {'images': […preview resolution variants trimmed…]}
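The idea behind knob-free batching is easy to sketch: drain whatever is queued up to a cap and run it as one call, so effective batch size adapts to load instead of being tuned. A toy synchronous version (the real engine batches async function calls across the dataflow; here a deque stands in for the queue and `embed_many` for a GPU call — both names are made up for illustration):

```python
from collections import deque

def drain_batches(queue, process_batch, max_batch=32):
    """Adaptive batching: take everything currently waiting, capped at max_batch."""
    results = []
    while queue:
        batch = [queue.popleft() for _ in range(min(max_batch, len(queue)))]
        results.extend(process_batch(batch))   # one call amortizes per-call overhead
    return results

embed_many = lambda texts: [len(t) for t in texts]   # stand-in for a GPU embed call
q = deque(f"doc{i}" for i in range(70))
out = drain_batches(q, embed_many, max_batch=32)
```

Under light load batches stay small (low latency); under heavy load they fill up to the cap (high throughput), which is where the reported ~5× gain comes from without any user-facing knob.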
Best Local TTS/STT Models - October 2025 | 1 | Given the high level of ambiguity and subjectivity when it comes to rating or testing these models, please be as detailed as possible in describing your setup. Include the nature of your usage how frequently you use it, whether it’s for personal or professional projects, the tools or frameworks you’re using, prompts, and any other relevant details.
Closed models, such as [ElevenLabs](https://elevenlabs.io/) v3, still appear to perform several levels above most open models, so any comparisons, especially empirical ones, are particularly valuable. For reference, some creators also experiment with tools like [domoai](https://www.domoai.app/home?via=081621AUG&fbclid=IwY2xjawOJW4JleHRuA2FlbQIxMABicmlkETBmZ3c4TW9hOVJRSUlQOVZUc3J0YwZhcHBfaWQQMjIyMDM5MTc4ODIwMDg5MgABHlY2VcTNHtOuaaOYJUGF6_0NtKQPLogmlXXWc9XyvhoxowJvp3BY7ml5Y0rW_aem_Ilsiem3DdJBMXJPInkm3SQ) to test alternative workflows, which can provide interesting insights alongside traditional benchmarks. | 2025-12-06T06:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pfi2ag/best_local_ttsstt_models_october_2025/ | Bulky-Departure6533 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfi2ag | false | null | t3_1pfi2ag | /r/LocalLLaMA/comments/1pfi2ag/best_local_ttsstt_models_october_2025/ | false | false | self | 1 | {'images': […preview resolution variants trimmed…], 'enabled': False}
I built a hosted API wrapper for Qwen3 models running on Hetzner GPUs (OpenAI Compatible) | 1 | **Hi everyone, ✌🏻**
Like many of you, I love running local models, but my laptop struggles with anything larger than 7B at decent tokens/sec.
So I rented a dedicated GPU server on Hetzner (GEX44) and built a multi-tenant inference layer around it.
**Honestly, I thought a single GEX44 could do more. But hey, let’s see what you think. 😅**
**The Project:**
It's called **SUPA**. It’s basically a "LocalLLM in the Cloud" located in Germany.
**The Stack:**
* **Hardware:** Dedicated Hetzner GPUs.
* **Inference:** Serving **Qwen3-8B** (supa:fast) and **Qwen-0.6B** (supa:instant).
* **API:** Fully OpenAI-compatible. You can use it with your existing scripts/frontends (SillyTavern, standard Python SDK, etc.) by just changing the base\_url.
**Why I'm posting:**
I need to stress-test the load balancer and the inference engine. I’m offering free beta access to this sub to see if we can melt the server (or at least find the latency limits).
**How to try:**
1. Get a key here (no credit card needed): [https://supa.works](https://supa.works)
2. Set your client base url to: https://api.supa.works/openai
3. Model names: supa:instant (0.6B) or supa:fast (8B).
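For anyone wiring this up in Python, a minimal request-builder sketch of steps 2–3. Note: the `/v1/chat/completions` suffix is my assumption about how the OpenAI-compatible routes are laid out, not something stated above, and the actual POST (which needs a key) is left commented out:

```python
# Hypothetical sketch: pointing a standard OpenAI-style client at this
# endpoint just means swapping the base URL and model name. The helper
# below only builds the request; sending it requires a key from supa.works.
import json
from urllib import request

BASE_URL = "https://api.supa.works/openai"  # from step 2 above

def build_chat_request(prompt: str, model: str = "supa:fast"):
    """Return (url, payload) for an OpenAI-compatible /chat/completions call."""
    url = f"{BASE_URL}/v1/chat/completions"  # assumed route suffix
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload).encode()

url, payload = build_chat_request("hello")
# req = request.Request(url, data=payload,
#                       headers={"Authorization": f"Bearer {KEY}",
#                                "Content-Type": "application/json"})
# print(request.urlopen(req).read())
```

Any frontend that lets you override `base_url` (SillyTavern, the Python SDK) is doing essentially the same substitution under the hood.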
Let me know what kind of T/s (tokens per second) you are getting!
Cheers | 2025-12-06T05:57:59 | https://www.reddit.com/r/LocalLLaMA/comments/1pfhwbc/i_built_a_hosted_api_wrapper_for_qwen3_models/ | ConcentrateEasy5545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfhwbc | false | null | t3_1pfhwbc | /r/LocalLLaMA/comments/1pfhwbc/i_built_a_hosted_api_wrapper_for_qwen3_models/ | false | false | self | 1 | null |
A 5-second MLP beat my Llama-3 fine-tune (+2.7% across 3 seeds). Benchmarks + repo. | 0 | I’ve been exploring how much task-relevant structure is already present in frozen transformer representations, and I finally decided to package a reproducible slice of that work into a public repo.
This isn’t my full system or the architecture I’ve been developing privately. It’s just a clean baseline that anyone can run. The goal was to make it easy for people to independently verify the pattern I’ve been seeing for a while.
The setup is simple:
• fine-tune a frozen transformer on SST-2 or MNLI
• capture hidden states from a few layers during that run
• pool them into feature vectors
• train a small MLP on those frozen vectors
No distillation, no extra transformer passes, no architectural claims. Just a representation probe.
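The pool-then-probe setup can be sketched end-to-end with toy data. Here random Gaussian "hidden states" stand in for real frozen-transformer activations, and the probe is a one-hidden-layer MLP trained with plain gradient descent; all numbers are illustrative choices of mine, not the repo's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pool(hidden_states):
    """Mean-pool a (seq_len, d) hidden-state matrix into a single feature vector."""
    return hidden_states.mean(axis=0)

def train_probe(X, y, hidden=32, lr=0.5, steps=300):
    """One-hidden-layer MLP probe, binary labels, full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, hidden);      b2 = 0.0
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                      # (n, hidden)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))          # sigmoid output
        g = (p - y) / n                               # dBCE/dlogit per sample
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
        dh = np.outer(g, W2) * (1 - h ** 2)           # backprop through tanh
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)
    return lambda Z: (1 / (1 + np.exp(-(np.tanh(Z @ W1 + b1) @ W2 + b2))) > 0.5).astype(int)

# Toy stand-in for two classes of frozen features: pooled Gaussian "hidden states".
X = np.stack([pool(rng.normal(m, 1.0, (20, 16))) for m in [-1.0] * 100 + [1.0] * 100])
y = np.array([0] * 100 + [1] * 100)
clf = train_probe(X, y)
acc = (clf(X) == y).mean()   # should be near-perfect on this separable toy set
```

On a real run the features would come from captured layer activations instead of `rng.normal`, but the probe itself stays this small, which is why training finishes in seconds.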
Across seeds and models, the results were surprisingly consistent.
On SST-2, a small classifier trained on the frozen representations beat my Llama-3-8B fine-tune by +2.67 percent on average across three seeds. Training took about five to sixty seconds depending on hidden size. GPT-Neo models showed the same pattern, and I even saw comparable behavior on MNLI with a weaker teacher.
Repo with code, logs, and scripts:
https://github.com/Anima-Core/an1-meaning-field
This is not a claim about a new model or a transformer replacement.
It’s simply a baseline measurement, a small part of a broader direction I’m working on privately. But the consistency of the pattern made it worth sharing.
If you try it, I’d be curious whether you see the same behavior. | 2025-12-06T05:07:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pfh06s/a_5second_mlp_beat_my_llama3_finetune_27_across_3/ | anima-core | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfh06s | false | null | t3_1pfh06s | /r/LocalLLaMA/comments/1pfh06s/a_5second_mlp_beat_my_llama3_finetune_27_across_3/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Z6feKQ-zoG_xKY7wlf_1zkfNWSZQ1tOuGP4wcy4qwe0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z6feKQ-zoG_xKY7wlf_1zkfNWSZQ1tOuGP4wcy4qwe0.png?width=108&crop=smart&auto=webp&s=bb7ab64391085f72bb9aa80cef14c3df534e7088', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z6feKQ-zoG_xKY7wlf_1zkfNWSZQ1tOuGP4wcy4qwe0.png?width=216&crop=smart&auto=webp&s=bd3e17a0f9d5cae31313bc15de7cd7a3dca31414', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z6feKQ-zoG_xKY7wlf_1zkfNWSZQ1tOuGP4wcy4qwe0.png?width=320&crop=smart&auto=webp&s=e8795335e5a59c41ca6b89733357fbfe5f85124d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z6feKQ-zoG_xKY7wlf_1zkfNWSZQ1tOuGP4wcy4qwe0.png?width=640&crop=smart&auto=webp&s=2b236df04b73d1d57544e462c5b077067d86320a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z6feKQ-zoG_xKY7wlf_1zkfNWSZQ1tOuGP4wcy4qwe0.png?width=960&crop=smart&auto=webp&s=eb1df4ad3d06bfdf3b8463a79eba562fb5c07424', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z6feKQ-zoG_xKY7wlf_1zkfNWSZQ1tOuGP4wcy4qwe0.png?width=1080&crop=smart&auto=webp&s=765882008c23e64da60724cecfa56318f10206cc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z6feKQ-zoG_xKY7wlf_1zkfNWSZQ1tOuGP4wcy4qwe0.png?auto=webp&s=edaa18ea61139efd01c07dc7b99822ce9b50a459', 'width': 1200}, 'variants': {}}]} |
why meta not dropping any new llama version lately | 0 | Is meta sleeping or what
last llama drop was way back in april and after that nothing
everyone else pushing new models every month and meta just quiet
anyone knows if they actually planning something...
https://preview.redd.it/imuxqi3iji5g1.png?width=1024&format=png&auto=webp&s=e3a6f30f0bca9ca7d0d390748a5e67f9fc7c2013
| 2025-12-06T04:41:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pfgit8/why_meta_not_dropping_any_new_llama_version_lately/ | TopicBig1308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfgit8 | false | null | t3_1pfgit8 | /r/LocalLLaMA/comments/1pfgit8/why_meta_not_dropping_any_new_llama_version_lately/ | false | false | 0 | null | |
How do I report to my PhD Supervisor about the performance of Nvidia Jetson Thor for LLM related projects (Reward Model Training, Finetuning, Inference and VLLM related projects)? We are trying to move to local training and inference. | 7 | My professor bought an Nvidia Jetson Thor for our lab's dire need for hardware (we were previously using AWS Research Credits, which allowed us to use A100 and similar GPUs for free for some time, but it has expired). They have tasked me to test it so that we can return it if necessary. My workload is mainly about reward model training using GRPO/PPO for Reinforcement Learning based finetuning. I also have a pipeline where I had to simultaneously load three 3B models on the GPU. Other lab members are working on VLLM and stuff like that.
So, how viable is Jetson Thor for this type of stuff? Previous posts have mentioned that it is very slow and the Mac Studio is better or multiple 3090s since Thor has worse bandwidth. But how do I show him the performance? Or if it is a not good choice for our lab, what are good alternatives (except cloud solutions like AWS, Runpod)? | 2025-12-06T04:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pfggfl/how_do_i_report_to_my_phd_supervisor_about_the/ | Furiousguy79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfggfl | false | null | t3_1pfggfl | /r/LocalLLaMA/comments/1pfggfl/how_do_i_report_to_my_phd_supervisor_about_the/ | false | false | self | 7 | null |
The Best Open-Source 8B-Parameter LLM Built in the USA | 417 | Rnj-1 is a family of 8B parameter open-weight, dense models trained from scratch by Essential AI, optimized for code and STEM with capabilities on par with SOTA open-weight models.
These models
* perform well across a range of programming languages.
* boast strong agentic capabilities (e.g., inside agentic frameworks like mini-SWE-agent).
* excel at tool-calling.
Both raw and instruct variants are available on [Hugging Face platform](https://huggingface.co/collections/EssentialAI/rnj-1).
**Model Architecture Overview**
Rnj-1's architecture is similar to Gemma 3, except that it uses only global attention, and YaRN for long-context extension.
**Training Dynamics**
`rnj-1` was pre-trained on 8.4T tokens with an 8K context length, after which the model’s context window was extended to **32K** through an additional 380B-token mid-training stage.
A final 150B-token SFT stage completed the training to produce `rnj-1-instruct`.
| 2025-12-06T04:14:17 | Dear-Success-1441 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pfg0rh | false | null | t3_1pfg0rh | /r/LocalLLaMA/comments/1pfg0rh/the_best_opensource_8bparameter_llm_built_in_the/ | false | false | 417 | {'enabled': True, 'images': [{'id': '2kVHXsvoNwlOQieoU1zTxq6WFhBBSz0y1Ceosy5WdJs', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/r6muiibadi5g1.jpeg?width=108&crop=smart&auto=webp&s=1ca255beeb0255f0b28e5069cd886936fd83bb15', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/r6muiibadi5g1.jpeg?width=216&crop=smart&auto=webp&s=b652b8103c10a2ba12ff01722c1919d0329799be', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/r6muiibadi5g1.jpeg?width=320&crop=smart&auto=webp&s=ae8f8fe95e4bb7b7140dae9dc68ddad1028b2d7d', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/r6muiibadi5g1.jpeg?width=640&crop=smart&auto=webp&s=2f50b4cb0f889ed02690c8f3ff7e90713b46562c', 'width': 640}, {'height': 448, 'url': 'https://preview.redd.it/r6muiibadi5g1.jpeg?width=960&crop=smart&auto=webp&s=3d6da4fc257f165e58676aea5623d7b57d6e8850', 'width': 960}, {'height': 504, 'url': 'https://preview.redd.it/r6muiibadi5g1.jpeg?width=1080&crop=smart&auto=webp&s=3eec56f1f0225d563d70ce6497f98390c02ea3d8', 'width': 1080}], 'source': {'height': 956, 'url': 'https://preview.redd.it/r6muiibadi5g1.jpeg?auto=webp&s=7f4197c7278f1a3c32a3694a0ec5d66b51f0454b', 'width': 2048}, 'variants': {}}]} | ||
I built a system to catch AI hallucinations before they reach production. Tested on 25 extreme problems, caught 36% of errors. | 0 | **The problem:** AI is getting smarter, but it's still probabilistic. For hospitals, banks, and factories, "usually correct" isn't enough.
**What I built:** A verification layer that checks AI outputs using formal math and logic. Think of it like spell-check, but for AI reasoning.
**How it works:**
* LLM generates answer (probabilistic)
* My system verifies it using deterministic engines:
* Math Engine (symbolic verification)
* Logic Engine (formal proofs)
* Code Engine (security checks)
* If verification fails → output rejected
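A minimal, hedged sketch of that reject-on-mismatch loop. The deterministic "engine" here is just exact rational arithmetic standing in for the real Math/Logic/Code engines, and the LLM answer is a mocked float:

```python
# Toy version of the verification layer: generate (mocked), verify, reject.
from fractions import Fraction

def verify(llm_answer: float, exact: Fraction, tol: float = 1e-9) -> str:
    """Accept the probabilistic output only if it matches the exact value."""
    return "ACCEPT" if abs(llm_answer - float(exact)) < tol else "REJECT"

# A model that claims 0.5 when the exact ground truth is 1/3 gets rejected:
status = verify(0.5, Fraction(1, 3))
```

The real system presumably swaps `Fraction` for symbolic math and SAT/SMT-style checks, but the control flow is the same: the verifier, not the model, decides what ships.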
**Results:** I tested Claude Sonnet 4.5 on 25 problems.
**Caught 9 errors (36%)**
**Example 1 - Monty Hall (4 doors):**
* LLM claimed: 50% probability
* Correct answer: 33.3%
* Status: ❌ CAUGHT
**Example 2 - Liar's Paradox:**
* Query: "This sentence is false"
* LLM tried to answer
* My system: ❌ UNSAT (logically impossible)
**Example 3 - Russell's Paradox:**
* Self-referential set theory
* Status: ❌ LOGIC\_ERROR caught
**Why this matters:** I believe as we move toward AGI, we need systems that can verify AI reasoning, not just trust it. This is infrastructure for making AI deployable in critical systems.
Full test results are in comments below
Looking for feedback and potential collaborators. Please let me what you think? | 2025-12-06T04:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pffvei/i_built_a_system_to_catch_ai_hallucinations/ | Moist_Landscape289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pffvei | false | null | t3_1pffvei | /r/LocalLLaMA/comments/1pffvei/i_built_a_system_to_catch_ai_hallucinations/ | false | false | self | 0 | null |
Noob here, looking for the perfect local LLM for my M3 Macbook Air 24GB RAM | 4 | Hey y'all, I am fairly new to LLMs in general, but would like to get a local one on my laptop for offline tinkering and feeding it literature from my research. I would also enjoy it being able to sync up with my calendars and such, so I could ask it "what do I have to do today?"
First, am I dreaming? Or is this feasible? Second, what do y'all recommend? Claude is telling me Qwen2.5 32B or Llama 3.3 70B, but I feel like it's just saying that because those are the most popular? I am sure they are probably the most popular for good reason, but I just wanted to hear what the community had to say.
Thanks everyone! Looking forward to learning! | 2025-12-06T03:33:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pff8b5/noob_here_looking_for_the_perfect_local_llm_for/ | sylntnyte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pff8b5 | false | null | t3_1pff8b5 | /r/LocalLLaMA/comments/1pff8b5/noob_here_looking_for_the_perfect_local_llm_for/ | false | false | self | 4 | null |
I built a small reasoning engine that learns rewrite rules from 2 examples | 1 | [removed] | 2025-12-06T03:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/1pfer97/i_built_a_small_reasoning_engine_that_learns/ | Acrobatic-Comb-2504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfer97 | false | null | t3_1pfer97 | /r/LocalLLaMA/comments/1pfer97/i_built_a_small_reasoning_engine_that_learns/ | false | false | self | 1 | null |
Open Unified TTS - Turn any TTS into an unlimited-length audio generator | 24 | Built an open-source TTS proxy that lets you generate unlimited-length audio from local backends without hitting their length limits.
**The problem:** Most local TTS models break after 50-100 words. Voice clones are especially bad - send a paragraph and you get gibberish, cutoffs, or errors.
**The solution:** Smart chunking + crossfade stitching. Text splits at natural sentence boundaries, each chunk generates within model limits, then seamlessly joins with 50ms crossfades. No audible seams.
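The crossfade join itself is simple enough to sketch. The sample rate and the linear ramp below are my assumed choices, not necessarily what the repo uses; the key property is that each 50 ms overlap is blended so the seam vanishes:

```python
import numpy as np

def crossfade_join(a: np.ndarray, b: np.ndarray, sr: int = 24000, ms: int = 50) -> np.ndarray:
    """Join chunk `a` into chunk `b` with a linear crossfade of `ms` milliseconds."""
    fade = int(sr * ms / 1000)                     # 50 ms -> 1200 samples at 24 kHz
    ramp = np.linspace(0.0, 1.0, fade)
    mixed = a[-fade:] * (1.0 - ramp) + b[:fade] * ramp
    return np.concatenate([a[:-fade], mixed, b[fade:]])

# Two dummy 0.2 s "chunks": the output shrinks by exactly one fade window.
out = crossfade_join(np.ones(4800), np.zeros(4800))
```

Chain this over every sentence-boundary chunk and you get arbitrarily long audio without ever asking the backend for more than it can handle.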
**Demos:**
- [30-second intro](https://github.com/loserbcc/open-unified-tts/blob/main/demo/intro.mp4)
- [4-minute live demo](https://github.com/loserbcc/open-unified-tts/blob/main/demo/live_demo.mp4) showing it in action
**Features:**
- OpenAI TTS-compatible API (drop-in for OpenWebUI, SillyTavern, etc.)
- Per-voice backend routing (send "morgan" to VoxCPM, "narrator" to Kokoro)
- Works with any TTS that has an API endpoint
**Tested with:** Kokoro, VibeVoice, OpenAudio S1-mini, FishTTS, VoxCPM, MiniMax TTS, Chatterbox, Higgs Audio, Kyutai/Moshi
**GitHub:** https://github.com/loserbcc/open-unified-tts
Designed with Claude and Z.ai (with me in the passenger seat).
Feedback welcome - what backends should I add adapters for? | 2025-12-06T02:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pfe5ou/open_unified_tts_turn_any_tts_into_an/ | SouthernFriedAthiest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfe5ou | false | null | t3_1pfe5ou | /r/LocalLLaMA/comments/1pfe5ou/open_unified_tts_turn_any_tts_into_an/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'zqoAH5__iY2A9dM39tabV144HU0XzrpDDtYs005Sx4w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zqoAH5__iY2A9dM39tabV144HU0XzrpDDtYs005Sx4w.png?width=108&crop=smart&auto=webp&s=a0525bdbfbf6f50d93eb62038687c4177a685230', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zqoAH5__iY2A9dM39tabV144HU0XzrpDDtYs005Sx4w.png?width=216&crop=smart&auto=webp&s=b90e6a92f87d5204ba3a8c9c3fbcbc7b76d20d07', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zqoAH5__iY2A9dM39tabV144HU0XzrpDDtYs005Sx4w.png?width=320&crop=smart&auto=webp&s=ff39612eea0ba64ca49aa45f1379eed01e3295bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zqoAH5__iY2A9dM39tabV144HU0XzrpDDtYs005Sx4w.png?width=640&crop=smart&auto=webp&s=3049c06db606e2bb6ef6258156b89f8551f609ed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zqoAH5__iY2A9dM39tabV144HU0XzrpDDtYs005Sx4w.png?width=960&crop=smart&auto=webp&s=0109440d79af187a5fbbde97fcdd9d3d50e7c696', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zqoAH5__iY2A9dM39tabV144HU0XzrpDDtYs005Sx4w.png?width=1080&crop=smart&auto=webp&s=ceb7ffa7ca54f6eaee5fda75a3cb1c4a480869ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zqoAH5__iY2A9dM39tabV144HU0XzrpDDtYs005Sx4w.png?auto=webp&s=4df8305bdabf971c992219a24b8aeeb6e98a407a', 'width': 1200}, 'variants': {}}]} |
ChatGPT, Gemini, Grok, and OpenRouter… only cost you double the price of RAM and GPUs! | 0 | Act now, sell your freedom today! | 2025-12-06T02:29:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pfdyuu/chatgpt_gemini_grok_and_openrouter_only_cost_you/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfdyuu | false | null | t3_1pfdyuu | /r/LocalLLaMA/comments/1pfdyuu/chatgpt_gemini_grok_and_openrouter_only_cost_you/ | false | false | self | 0 | null |
NVIDIA H200 for one dollar an hour | 69 | Hello !
I currently have one single H200 machine that I rent for only one dollar an hour.
Inside it you have :
24 CPU cores of an Intel Emerald Rapids 8592+

one NVIDIA H200 GPU with 141GB VRAM
240GB of RAM
5TB NVME
You can access it through SSH.
Please let me know if you are interested. | 2025-12-06T01:37:13 | Monitor-Loud | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pfcwfs | false | null | t3_1pfcwfs | /r/LocalLLaMA/comments/1pfcwfs/nvidia_h200_for_one_dollar_an_hour/ | false | false | 69 | {'enabled': True, 'images': [{'id': 'SYIKQ4hbCiGSZBkJMPPoR9WVOr_FlkOk6XzYfveRCIc', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/x9uqigmlmh5g1.png?width=108&crop=smart&auto=webp&s=51645659eec277623c1a567fc72f00631ce2a11c', 'width': 108}, {'height': 84, 'url': 'https://preview.redd.it/x9uqigmlmh5g1.png?width=216&crop=smart&auto=webp&s=f4a7a0cfcc3cae98e949ecd2b9771b3bd8790f1c', 'width': 216}, {'height': 125, 'url': 'https://preview.redd.it/x9uqigmlmh5g1.png?width=320&crop=smart&auto=webp&s=f400e86b26a16ba6bb795e7a933f60f03e26ffd4', 'width': 320}, {'height': 250, 'url': 'https://preview.redd.it/x9uqigmlmh5g1.png?width=640&crop=smart&auto=webp&s=b0caf0bbbf2910a002585f7569e8271536899e15', 'width': 640}, {'height': 375, 'url': 'https://preview.redd.it/x9uqigmlmh5g1.png?width=960&crop=smart&auto=webp&s=680c1a735c3596bae3c50eec97607bbcd357a647', 'width': 960}, {'height': 422, 'url': 'https://preview.redd.it/x9uqigmlmh5g1.png?width=1080&crop=smart&auto=webp&s=8b983c8aebf3cfeca63ce081a646d87eb3bfeca2', 'width': 1080}], 'source': {'height': 516, 'url': 'https://preview.redd.it/x9uqigmlmh5g1.png?auto=webp&s=97f94d1764202e2f5354a056bbcb418c1be655b3', 'width': 1320}, 'variants': {}}]} | ||
M4 Mac Mini - Is 16GB vs 24GB RAM a meaningful difference for local LLMs? | 5 | I’ll preface this by saying I’m a complete noob, so I may be thinking about this the wrong way or misunderstanding how LLMs work.
I’m considering an M4 Mac Mini, but I’m torn between 16GB and 24GB of RAM (24GB is the most my budget allows). Obviously, more RAM is better, but I’m wondering whether there’s a meaningful difference in the quality and performance of local LLM models at this level. Put another way: does the jump from 16GB to 24GB let me run higher-parameter models that are significantly better than what I could use with 16GB?
My intended use case includes casual inquiries, proofreading/summarizing, various categorization tasks (finances, music, etc.), and occasional research | 2025-12-06T01:33:37 | https://www.reddit.com/r/LocalLLaMA/comments/1pfcts1/m4_mac_mini_is_16gb_vs_24gb_ram_a_meaningful/ | Theory-Of-Relativity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfcts1 | false | null | t3_1pfcts1 | /r/LocalLLaMA/comments/1pfcts1/m4_mac_mini_is_16gb_vs_24gb_ram_a_meaningful/ | false | false | self | 5 | null |
Call for Suggestions - what would you want to see in a Roleplaying-based/D&D-type LLM chat client? | 2 | I'm working on my 3rd major LLM chat client. My first used cloud models, my second used Ollama for inference, and this one uses VLLm. I'm fine-tuning a model just for the purpose of running roleplaying games (not D&D exactly, but a very similar fantasy RPG).
I am writing this for the community, so I am turning to the community to ask.
What would you want to see in a Roleplaying-based/D&D-type LLM chat client? | 2025-12-06T01:30:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pfcrmq/call_for_suggestions_what_would_you_want_to_see/ | david_jackson_67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfcrmq | false | null | t3_1pfcrmq | /r/LocalLLaMA/comments/1pfcrmq/call_for_suggestions_what_would_you_want_to_see/ | false | false | self | 2 | null |
built a browser-based benchmark for local llms (webgpu)! | 2 | hey guys, built this platform cos i experienced the frustration of doom-scrolling discussions trying to benchmark strix halo vs dgx spark vs everything else before finally making the purchase on my bosgame m5.
think of it like speedtest but for your local llm hardware. it runs quantized models directly in-browser via webgpu. no installation required! wondering if there is interest and pls let me know if you'll like to test it out or if you've any suggestions. cheers | 2025-12-06T01:24:15 | IntroductionSouth513 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pfcmws | false | null | t3_1pfcmws | /r/LocalLLaMA/comments/1pfcmws/built_a_browserbased_benchmark_for_local_llms/ | false | false | 2 | {'enabled': True, 'images': [{'id': '7-7Deoy_cVaENho_e-2l7CoQj4hrflXuFCkdGlEaHzs', 'resolutions': [{'height': 122, 'url': 'https://preview.redd.it/vk84brsckh5g1.jpeg?width=108&crop=smart&auto=webp&s=d63261a30578bb7e15a192589fa3f96dbd758c82', 'width': 108}, {'height': 245, 'url': 'https://preview.redd.it/vk84brsckh5g1.jpeg?width=216&crop=smart&auto=webp&s=51db9d6688593f3ccf04a8a0cfb66d578636f57c', 'width': 216}, {'height': 363, 'url': 'https://preview.redd.it/vk84brsckh5g1.jpeg?width=320&crop=smart&auto=webp&s=f57db3c55c066c7956b30f94eb01e1932c3da2ad', 'width': 320}, {'height': 727, 'url': 'https://preview.redd.it/vk84brsckh5g1.jpeg?width=640&crop=smart&auto=webp&s=a4f94ca7eaaa4a1d5fb8154cab5c38233e920761', 'width': 640}, {'height': 1090, 'url': 'https://preview.redd.it/vk84brsckh5g1.jpeg?width=960&crop=smart&auto=webp&s=4c25b21494ac228988f850d0ab07bedaf81c0b04', 'width': 960}, {'height': 1227, 'url': 'https://preview.redd.it/vk84brsckh5g1.jpeg?width=1080&crop=smart&auto=webp&s=8ce9cba2e03642cc52c0f3e717d79837162a3f5b', 'width': 1080}], 'source': {'height': 1227, 'url': 'https://preview.redd.it/vk84brsckh5g1.jpeg?auto=webp&s=2fe73f1c75a709b08aa7067025f7d50dcf6b886b', 'width': 1080}, 'variants': {}}]} | ||
Architecture Discussion: Swapping Transformer depth for Coupled Recursion. Reasoning went up, but inference speed tanked. Is strict synchronization unavoidable? | 1 | So, we’ve been experimenting with a new architecture (Dual-Stream) to try and solve complex logic tasks with "Tiny" models instead of massive 70B+ parameters.
The idea was to reject the standard "monolithic" Transformer and instead use two **Tiny Recursive Models (TRM)** coupled together:
1. **Stream A:** Plans the logic.
2. **Stream B:** Executes the state.
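One sync step of that coupling can be sketched as gated cross-attention: the executor stream attends over the planner stream, and a gate decides how much of the result to mix in. All names and shapes below are mine, not the paper's:

```python
# Toy, hedged sketch of one "gated cross-attention" sync step between streams.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attn(hB, hA, Wq, Wk, Wv, g):
    """hB: (n, d) executor states, hA: (m, d) planner states, g: (d,) gate logits."""
    q, k, v = hB @ Wq, hA @ Wk, hA @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v   # (n, d); O(n*m) per step
    gate = 1 / (1 + np.exp(-g))                          # sigmoid gate in (0, 1)
    return hB + gate * attn                              # gated residual update

rng = np.random.default_rng(0)
d = 8
hA, hB = rng.normal(size=(4, d)), rng.normal(size=(4, d))
W = lambda: rng.normal(scale=0.1, size=(d, d))
out = gated_cross_attn(hB, hA, W(), W(), W(), np.zeros(d))
```

Running this at every recursive step is exactly where the quadratic attention cost piles up, which is what motivates the linear-attention alternative discussed below.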
Here's the thing...
It works. By separating the streams, we stop the "logic drift" you usually see in small models. It holds context way better than a standard Llama-3-8B fine-tune.
But there is a clear problem (Why I'm posting):
The inference latency is absolutely brutal.
Because we are using a **Gated Cross-Attention Interface** to sync the two streams at *every* recursive step, we are getting crushed on compute. We essentially traded memory (parameters) for time (recursion), and now we have no time.
My Question for the architecture folks:
Has anyone successfully parallelized Coupled Recurrent Nets?
We are debating two fixes:
1. **Async Streams:** Letting the "Logic Planner" run 5-10 steps ahead and only syncing when it hits a conflict. (Has anyone tried this? Does it break causality?)
2. **Linear Attention:** Ripping out the Cross-Attention and replacing it with Mamba/RWKV style linear attention to kill the O(N^2) cost.
I’ve linked the paper below if you want to look at the specific "Gated Interface" we are using. I’m looking for any ideas on how to speed up this loop without losing the reasoning capability. | 2025-12-06T01:23:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pfcmmq/architecture_discussion_swapping_transformer/ | Doug_Bitterbot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfcmmq | false | null | t3_1pfcmmq | /r/LocalLLaMA/comments/1pfcmmq/architecture_discussion_swapping_transformer/ | false | false | self | 1 | null |
VoxCPM 1.5B just got released! | 95 | I was just visiting the [GitHub page](https://github.com/OpenBMB/VoxCPM) today (setting up a FastAPI TTS server) when I realized that they released a new version of the VoxCPM model. The original VoxCPM-0.5B was already very good in my testing, but this model looks like a straight improvement (it's still a 0.5B model, despite the rather confusing naming scheme).
|Feature|VoxCPM|VoxCPM1.5|
|:-|:-|:-|
|**Audio VAE Sampling Rate**|16kHz|44.1kHz|
|**LM Token Rate**|12.5Hz|6.25Hz|
|**Patch Size**|2|4|
|**SFT Support**|✅|✅|
|**LoRA Support**|✅|✅|
They also added fine-tuning support as well as a guide [https://github.com/OpenBMB/VoxCPM/blob/main/docs/finetune.md](https://github.com/OpenBMB/VoxCPM/blob/main/docs/finetune.md)
Example output: [https://voca.ro/147qPjN98F6g](https://voca.ro/147qPjN98F6g) | 2025-12-06T01:07:54 | https://huggingface.co/openbmb/VoxCPM1.5 | Hefty_Wolverine_553 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pfcatm | false | null | t3_1pfcatm | /r/LocalLLaMA/comments/1pfcatm/voxcpm_15b_just_got_released/ | false | false | default | 95 | {'enabled': False, 'images': [{'id': 'MIb2iimHkfYqVDgmZztu-h5tz8yFqiAztGcy6umK7o8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MIb2iimHkfYqVDgmZztu-h5tz8yFqiAztGcy6umK7o8.png?width=108&crop=smart&auto=webp&s=fe32977c41265a021ffc54fca091d097e4c8ace8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MIb2iimHkfYqVDgmZztu-h5tz8yFqiAztGcy6umK7o8.png?width=216&crop=smart&auto=webp&s=9c9de151a35d0cd5304c2227cf5fd84660d8b89e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MIb2iimHkfYqVDgmZztu-h5tz8yFqiAztGcy6umK7o8.png?width=320&crop=smart&auto=webp&s=1cff0b9f345d407c617fa2eca08df062ee317c1f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MIb2iimHkfYqVDgmZztu-h5tz8yFqiAztGcy6umK7o8.png?width=640&crop=smart&auto=webp&s=08d7fc02333ca119537bfd3af70a4c74b40c2e98', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MIb2iimHkfYqVDgmZztu-h5tz8yFqiAztGcy6umK7o8.png?width=960&crop=smart&auto=webp&s=db71cd9580bb49d8318bfc9997cd071827d262b7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MIb2iimHkfYqVDgmZztu-h5tz8yFqiAztGcy6umK7o8.png?width=1080&crop=smart&auto=webp&s=a8c94f97e1081ee1a05fcaf051929568d513a732', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MIb2iimHkfYqVDgmZztu-h5tz8yFqiAztGcy6umK7o8.png?auto=webp&s=26aa001675024beaa208ea2f70d15f0a8cc81e8f', 'width': 1200}, 'variants': {}}]} |
Why people are panicking in regards to RAM prices..... | 0 | DDR6 specifications allow for huge RAM modules. In regards to LLMs, it was obvious that consumer CPUs were locked at something like 128 or 192 GB max RAM support.
Otherwise, with a proper little adapter and controller, or just sticking it onto the motherboard (like cell phones, PlayStations, or Xboxes do), it would be possible to stick in as many little LPDDR chips as the customer wishes. LPDDR will always be easily accessible or produced if necessary. I don't think the cell phone industry will risk losing customers... | 2025-12-06T01:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pfc7dc/why_people_are_panicking_in_regards_to_ram_prices/ | Highwaytothebeach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfc7dc | false | null | t3_1pfc7dc | /r/LocalLLaMA/comments/1pfc7dc/why_people_are_panicking_in_regards_to_ram_prices/ | false | false | self | 0 | null |
Budget rig builds : 5900x vs 9800x3d | 0 | I've been casually purchasing equipment for the last 2 years with the intention of building an AI rig and am seeking opinions:
What AI workloads will be ideal for either processors?
Is it a bad idea to disassemble and reassemble parts to match my purpose? (plug degradation due to exceeding rated insertion cycles)
Should I simply not attempt to build dual- or triple-purpose rigs? (I have two complete builds with the processors in the subject line and a 5070 Ti, one 10700K system, and one 5800X with bent pins.)
My main concerns are making my newest PSU's last at least 10 years, having some kind of high fps gaming rig, properly using my 30 series cards to supplement AI workloads if possible and having the option to drastically scale back power consumption to save money and reduce heat output during warm months.
I am not concerned with cooling and having a large enough case or adequate power (I have a 1000 watt gold, 1600 watt plat, 1100 watt plat server psu etc) | 2025-12-06T00:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pfbwx7/budget_rig_builds_5900x_vs_9800x3d/ | croholdr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfbwx7 | false | null | t3_1pfbwx7 | /r/LocalLLaMA/comments/1pfbwx7/budget_rig_builds_5900x_vs_9800x3d/ | false | false | self | 0 | null |
built a browser-based benchmark for local llms (webgpu). need testers | 1 | 2025-12-06T00:47:35 | https://www.reddit.com/r/LocalLLaMA/comments/1pfbuyc/built_a_browserbased_benchmark_for_local_llms/ | IntroductionSouth513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfbuyc | false | null | t3_1pfbuyc | /r/LocalLLaMA/comments/1pfbuyc/built_a_browserbased_benchmark_for_local_llms/ | false | false | self | 1 | null | |
built a browser-based benchmark for local llms (webgpu). need testers | 1 | hey everyone,
it's just a hobby project for now but i built this platform because i spent weeks doom-scrolling discussions trying to benchmark **strix halo** vs **dgx spark** vs everything else before finally deciding to buy my **bosgame m5**.
this tool is a 'speedtest' for local llm hardware. it runs some standard models directly in-browser via webgpu. **no installs needed.**
it measures your prefill and decode speeds, giving you a neutral **TokenMark** score so you can finally compare apples-to-apples across architectures like m3 max, strix halo, and even your latest mobile phones.
i’m looking for pilot testers to run it and validate the scores, especially on the harder-to-find hardware.
let me know if you're interested! or feel free to drop a comment or suggestions on features. just an early preview:
https://preview.redd.it/zrv2m4s4dh5g1.png?width=1087&format=png&auto=webp&s=61fd4c2e77d43e641bb800eeee0bcf0830e56a36
| 2025-12-06T00:45:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pfbtn1/built_a_browserbased_benchmark_for_local_llms/ | IntroductionSouth513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfbtn1 | false | null | t3_1pfbtn1 | /r/LocalLLaMA/comments/1pfbtn1/built_a_browserbased_benchmark_for_local_llms/ | false | false | 1 | null | |
Is there any model truly open, that you can train yourself from zero? | 92 | As per title, is there any open source LLM that comes with all the data it was trained on and all the instructions, that you can replicate yourself assuming you have access to the necessary hardware? And if not, why not? | 2025-12-06T00:38:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pfbo6o/is_there_any_model_truly_open_that_you_can_train/ | puthre | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfbo6o | false | null | t3_1pfbo6o | /r/LocalLLaMA/comments/1pfbo6o/is_there_any_model_truly_open_that_you_can_train/ | false | false | self | 92 | null |
CROVIA-ID: the first fully deterministic, offline-verifiable trust bundle for AI training data is now open source. | 1 | [removed] | 2025-12-06T00:36:31 | https://www.linkedin.com/posts/tarik-en-nakhai-55974a18_dataprovenance-aigovernance-opensource-activity-7401562506336403457-lSBn?utm_source=social_share_send&utm_medium=android_app&rcm=ACoAAAOrXpUBjfO2aMvyIeh_Rd4iT5popnjKPek&utm_campaign=share_via | CroviaTrust | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 1pfbmgm | false | null | t3_1pfbmgm | /r/LocalLLaMA/comments/1pfbmgm/croviaid_the_first_fully_deterministic/ | false | false | default | 1 | null |
vLLM problem with Conversation roles and gemma3 or mistral | 0 | Hi,
If someone is using vLLM, are you getting these errors:
Conversation roles must alternate user/assistant/user/assistant/.
If you are and know a solution, please help.
| 2025-12-05T23:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pfajlo/vllm_problem_with_conversation_roles_and_gemma3/ | somealusta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfajlo | false | null | t3_1pfajlo | /r/LocalLLaMA/comments/1pfajlo/vllm_problem_with_conversation_roles_and_gemma3/ | false | false | self | 0 | null |
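A common cause of this error (an assumption — the post doesn't show the actual request) is sending two consecutive messages with the same role, which the Gemma 3 and Mistral chat templates reject. A minimal pre-processing sketch that merges consecutive same-role messages before calling the API:

```python
def merge_consecutive_roles(messages):
    """Collapse consecutive messages with the same role into one,
    so the sequence strictly alternates user/assistant."""
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Same role as the previous message: append text instead of
            # adding a new turn that would break alternation.
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return merged

history = [
    {"role": "user", "content": "hi"},
    {"role": "user", "content": "are you there?"},  # duplicate role triggers the error
    {"role": "assistant", "content": "yes"},
]
print(merge_consecutive_roles(history))
```

If the culprit is instead a leading system message, folding it into the first user turn is another common workaround for these templates.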
Why India is far behind in AI Research? | 0 | After going through the LMArena leaderboard: among 276 models there is not a single model from an Indian AI lab. Sad to see :) Multiple AI startups like krutrim/Sarvam/Puch, even after raising multiple millions of dollars, are doing nothing. Giants like Jio/Zoho are also just happy collaborating with Gemini/GPT.
The Indian government feels privileged hosting Anthropic/OpenAI/Deepmind founders while not taking any ground-level initiative in supporting real startups.
Can anyone tell me what is the biggest reason behind this?
P.S. We are starting a closed community of AI researchers; let me know if anyone is interested in joining | 2025-12-05T23:39:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pfad2a/why_india_is_far_behind_in_ai_research/ | Illustrious-Yak-9195 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pfad2a | false | null | t3_1pfad2a | /r/LocalLLaMA/comments/1pfad2a/why_india_is_far_behind_in_ai_research/ | false | false | self | 0 | null |
Agent4Science Idea Competition: still time to vote for this week! | 1 | [removed] | 2025-12-05T23:02:50 | Stunning_Tie_4910 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pf9ish | false | null | t3_1pf9ish | /r/LocalLLaMA/comments/1pf9ish/agent4science_idea_competition_still_time_to_vote/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '2p5byuwfug5g1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/2p5byuwfug5g1.png?width=108&crop=smart&auto=webp&s=c06fa37542188e20402a71b66b40ccc3cf27918f', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/2p5byuwfug5g1.png?width=216&crop=smart&auto=webp&s=673e1d77c86fd9f60ded00f133292e9642d3c99d', 'width': 216}, {'height': 266, 'url': 'https://preview.redd.it/2p5byuwfug5g1.png?width=320&crop=smart&auto=webp&s=48b20f37c259dfdd51f4a4037bc01705ccbc071a', 'width': 320}, {'height': 533, 'url': 'https://preview.redd.it/2p5byuwfug5g1.png?width=640&crop=smart&auto=webp&s=8b6bbbb89d0ed5a561f45c4b20a93bb1811517b3', 'width': 640}, {'height': 800, 'url': 'https://preview.redd.it/2p5byuwfug5g1.png?width=960&crop=smart&auto=webp&s=27569a01c42478e434f4b1d977161c14339a4171', 'width': 960}, {'height': 900, 'url': 'https://preview.redd.it/2p5byuwfug5g1.png?width=1080&crop=smart&auto=webp&s=e32b6f6d52c70c4d3b443ada76aea7ed6d5a514b', 'width': 1080}], 'source': {'height': 1414, 'url': 'https://preview.redd.it/2p5byuwfug5g1.png?auto=webp&s=0cf651c5ad2bd3512b366ae70e784f44c47e1765', 'width': 1696}, 'variants': {}}]} | |
Best model in the 8B range for RAG in 2025 | 30 | What are your personal favourite (self hosted) model(s) in the 8B range for RAG ?
I'm creating a RAG system for a university project, and ideally i want a model that:
\* Hallucinates less and refuses to answer if it doesn't find relevant information in its context. If it finds partial info, it should answer only with that partial piece of info and not fill in gaps with general knowledge. I want it strictly based on context.
\* Follows instructions well and does as asked.
\* Can find info buried in chunks, and stitch info together to generate an answer. Not hallucinate stuff, but just put 2 and 2 together (instead of expecting a direct call out), make sense of the info, and answer the question.
\* Fit in the <9B range and run on a gpu with roughly 8-10 gb vram.
I'll also share what i've found so far:
\* I've found gemma3:12b-it-qat to be the best model that fulfils my criteria well. But the problem is it's not in my range, and i run into out-of-memory issues. I'm pretty constrained here unfortunately.
\* Seeing lots of people speak highly of qwen3:4b-instruct-2507 here on reddit, i tried it, but didn't quite like its ability to synthesise / stitch pieces of info together to answer. It's good at following instructions and generally not making shit up, but it would kinda expect a direct callout. I tried lots of different prompts, but either the model refused to answer if the info wasn't directly mentioned, or it would make shit up and use info from general knowledge that wasn't part of the context.
\* I also tried qwen3:8b , it was good at stiching pieces of info together, but it would just make a lot of shit up instead of refusing to answer. fill in those missing gaps with either it's general knowledge or made up info.
\* I also tried llama 3.2:8b quantised, but it didn't follow instructions well.
What I want ?
If you have developed a RAG solution with ollama models, please do write the model you found working well for your use case. I feel overwhelmed and kinda lost here, kinda feeling like an idiot, since i tried lots of models, and all of them seem to do some part of the job, not all. I know there are bigger models out there that do the exact job pretty well, but i hope with all these developments, there must be some model in my range that would get the job done well.
It would be a huge help, to have your insights if you've come across a similar problem. I'd highly welcome, any comment, answer, suggestion.
Big thanks in advance ❤️.[](https://emojipedia.org/red-heart) | 2025-12-05T22:43:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pf92li/best_model_in_the_8b_range_for_rag_in_2025/ | Hour-Entertainer-478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pf92li | false | null | t3_1pf92li | /r/LocalLLaMA/comments/1pf92li/best_model_in_the_8b_range_for_rag_in_2025/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'rFTHnv9qjR-UV8oCvO5GUgCHcj97be-u5175QTPjg_g', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/rFTHnv9qjR-UV8oCvO5GUgCHcj97be-u5175QTPjg_g.png?width=108&crop=smart&auto=webp&s=ee97b25424f6b6a7ae92481c270cfcb4c310115d', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/rFTHnv9qjR-UV8oCvO5GUgCHcj97be-u5175QTPjg_g.png?width=216&crop=smart&auto=webp&s=854d8df7d33630038ce8007aef029926f9d328f0', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/rFTHnv9qjR-UV8oCvO5GUgCHcj97be-u5175QTPjg_g.png?width=320&crop=smart&auto=webp&s=4a44fc679c188c1497b484d9decf024f26e9aa6b', 'width': 320}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/rFTHnv9qjR-UV8oCvO5GUgCHcj97be-u5175QTPjg_g.png?auto=webp&s=c798bb6a7e30d77d5f3907971e6562d0ce3156a0', 'width': 560}, 'variants': {}}]} |
HMLR – open-source memory system with perfect 1.00/1.00 RAGAS on every hard long-term-memory test (gpt-4.1-mini) | 5 | Just shipped HMLR — a complete memory system that gives you “friend who never forgets” behavior on gpt-4.1-mini (or any OpenAI-compatible endpoint).
Five tests everything else fails — all 1.00/1.00 RAGAS:
\- 30-day multi-hop with zero keywords
\- “ignore everything you know about me” constraint trap
\- 5× fact rotation (timestamp wins)
\- 10-turn vague recall
\- cross-topic invariants
All tests are fully reproducible and included as part of the repo; see the notes about testing.
Public proof (no login):
[https://smith.langchain.com/public/4b3ee453-a530-49c1-abbf-8b85561e6beb/d](https://smith.langchain.com/public/4b3ee453-a530-49c1-abbf-8b85561e6beb/d)
MIT license, solo dev, works with local models via OpenAI-compatible endpoint.
Repo [https://github.com/Sean-V-Dev/HMLR-Agentic-AI-Memory-System](https://github.com/Sean-V-Dev/HMLR-Agentic-AI-Memory-System) | 2025-12-05T21:00:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pf6juo/hmlr_opensource_memory_system_with_perfect_100100/ | JournalistGlum8326 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pf6juo | false | null | t3_1pf6juo | /r/LocalLLaMA/comments/1pf6juo/hmlr_opensource_memory_system_with_perfect_100100/ | false | false | self | 5 | null |
For those of you with ai max+ 395 mini pc that have experience or no bias hate with mac computers: Would you recommend a max 395+ to someone as it currently stands, or are you thinking of switching to or back to mac? | 17 | I am starting to feel that with these insane prices, the only logical option for reliability, peace of mind, and a plug-and-play experience would be a Mac Studio. I want to run 70B models. Just looking for a computer to last me at least the next 2 years. | 2025-12-05T21:00:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pf6jcj/for_those_of_you_with_ai_max_395_mini_pc_that/ | Smart_Frosting9846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pf6jcj | false | null | t3_1pf6jcj | /r/LocalLLaMA/comments/1pf6jcj/for_those_of_you_with_ai_max_395_mini_pc_that/ | false | false | self | 17 | null |
Kiwix RAG: Terminal Chat Interface with Local Kiwix Content Integration | 2 | hello! Happy to announce \*\*KiwixRAG\*\* - an offline-capable chatbot that uses Retrieval-Augmented Generation (RAG) to answer questions using local knowledge bases like Wikipedia, Python documentation, or any ZIM file archive. [https://github.com/imDelivered/KiwixRAG](https://github.com/imDelivered/KiwixRAG) the image above was tested in a VM with 4 cores and 8 gigs of ram YOU DO NOT NEED A FANCY PC :) | 2025-12-05T20:59:53 | https://www.reddit.com/gallery/1pf6j90 | Smart-Competition200 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pf6j90 | false | null | t3_1pf6j90 | /r/LocalLLaMA/comments/1pf6j90/kiwix_rag_terminal_chat_interface_with_local/ | false | false | 2 | null | |
This is high quality journalism right here | 1 | 2025-12-05T20:30:23 | Mass2018 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pf5t6t | false | null | t3_1pf5t6t | /r/LocalLLaMA/comments/1pf5t6t/this_is_high_quality_journalism_right_here/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'dy30amfu3g5g1', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/dy30amfu3g5g1.png?width=108&crop=smart&auto=webp&s=eb028a631351eb8c25cd61828d6869a46d112225', 'width': 108}, {'height': 260, 'url': 'https://preview.redd.it/dy30amfu3g5g1.png?width=216&crop=smart&auto=webp&s=0ed9e1f50d66945d9d4806e228cca96effc2b0e9', 'width': 216}, {'height': 385, 'url': 'https://preview.redd.it/dy30amfu3g5g1.png?width=320&crop=smart&auto=webp&s=97274395f1579fb99a58320a6a1a461de225b15d', 'width': 320}, {'height': 771, 'url': 'https://preview.redd.it/dy30amfu3g5g1.png?width=640&crop=smart&auto=webp&s=dd6c6a970f6a7e55cbeeda45fba102d183cb2f53', 'width': 640}, {'height': 1157, 'url': 'https://preview.redd.it/dy30amfu3g5g1.png?width=960&crop=smart&auto=webp&s=470f534979df8d82193d0b01ca6ac795f441cf0d', 'width': 960}, {'height': 1302, 'url': 'https://preview.redd.it/dy30amfu3g5g1.png?width=1080&crop=smart&auto=webp&s=15f413f14c67c1494d24a56139d30a6e1ba3dc40', 'width': 1080}], 'source': {'height': 1759, 'url': 'https://preview.redd.it/dy30amfu3g5g1.png?auto=webp&s=206c5caef4bec20f1ea331bcb411b8210d8a5a57', 'width': 1459}, 'variants': {}}]} | ||
Explanation of This Question | 7 | There was a lot of discussion, and I got a lot of downvotes from people, so let me clarify my points.
**From the model’s perspective**
The model doesn’t “see” raw text. It only sees sequences of token IDs, which it immediately turns into vectors using an embedding table.
There’s an embedding matrix: one row per token ID.
Given a token ID, the model looks up its embedding vector (a list of numbers).
These vectors are usually initialized randomly and then updated during training, so each token ends up with a distinct embedding that captures how it’s used in the training data.
In a Transformer, individual vectors (for example, the activations on special ‘gist’ tokens) can be trained to encode information equivalent to a whole sentence or prompt, so a small set of vectors can stand in for a much longer piece of text. [^(\[1\])](https://arxiv.org/abs/2304.08467)
The model then processes these embeddings through many layers and finally outputs a probability distribution over the next token ID. A sampling strategy (greedy, top-k, nucleus, etc.) picks one token, and when you decode the sequence of tokens back to text, that’s what you see as the model’s “output”.
So, internally the model works with embeddings and token IDs, not with human-readable text.
**From the tokenizer’s perspective**
The tokenizer does see text (usually UTF-8). Its job is to map between text and token IDs.
It uses some algorithm (e.g., BPE, unigram, WordPiece) and a vocabulary that assigns each token string (like "ing", "hello", "の", etc.) to an integer ID.
When encoding, it splits the text into tokens and returns their IDs.
When decoding, it maps IDs back to token strings and joins them into text.
During training, the model learns embeddings for those token IDs so that their vectors reflect how the corresponding token strings are used in context. In that sense, each token “means” whatever its token text tends to represent in the training data.
Most tokenizers also define special tokens (like <bos>, <eos>, <pad>, <user>, etc.). Libraries often let you choose whether to allow or disallow special tokens when encoding normal text.[^(\[2\])](https://github.com/openai/tiktoken/blob/97e49cbadd500b5cc9dbb51a486f0b42e6701bee/src/py.rs#L39)
**From the inference engine’s perspective**
An inference engine (like vLLM, text-generation-inference, etc.) is the glue around the model and tokenizer.
It can:
1. Take a user’s text, apply a chat template (e.g. wrap it in system/user/assistant tags), combine everything into one prompt string, tokenize it, and feed the resulting IDs to the model.
2. Or accept already tokenized input (precomputed token IDs) and send those directly to the model.[^(\[3\])](https://docs.vllm.ai/en/v0.4.3/dev/offline_inference/llm_inputs.html)
Either way, the model itself only ever sees token IDs -> embeddings, and returns probabilities over token IDs.
**My defense**
**1. “How will the tokenizer know which <message> is schema vs user input?”**
It doesn’t have to “know” that at all.
The tokenizer just turns **strings into IDs**. The distinction between:
* “this `<message>` came from my chat template” and
* “this `<message>` came from the user’s raw text”
is made **before tokenization**, in the application code / chat framework.
In practice you do something like:
* Encode your **schema/template** with special tokens enabled (so `<message>` can map to a single special token if you want).
* Encode **user content** with special tokens *disabled* (or with a different `allowed_special` set).
* Concatenate the token ID sequences and send that to the model.
There is no guessing:
* The app already knows which part is schema vs user – it’s literally in separate variables/structures.
* You just use different tokenization settings for those parts, or pretokenize them separately.
If you *choose* to mash everything into one raw string and run the tokenizer once with the same settings, that’s your own design choice – not a fundamental limitation of using `<message>`\-style markers.
**2. “But <message> can appear in user input as text”**
If the user literally types `<message>`, and I’m encoding **user text** with special tokens disabled, it just becomes a normal sequence of tokens like `<`, `message`, `>` or whatever the vocab uses. The only `<message>` that gets mapped to a special token is the one **I** insert as part of the template, under a different encode call / config.
Context is controlled by the application, not inferred magically by the tokenizer.
**3. “Storing the conversation in an XML file with <message>\[conversation\]</message> will break”**
If you’re putting arbitrary text inside XML, you already have to deal with:
* `<`
* `&`
* `]]>`
and so on, *regardless* of any LLM stuff. This is just basic XML hygiene.
Solutions that have existed for decades:
* Escape content (`<`, `&`, etc.), or
* Store the text inside `<![CDATA[ ... ]]>`, or
* Store the actual conversation payload as JSON/base64/whatever inside a node.
If you dump raw, unescaped prompt text containing `<message>` or `<foo>` directly into XML and your parser chokes, that's not a "special token" problem, that's an "I didn't escape my XML" problem. The same issue would happen if the model output literal `<div>` or `<script>` tags.
XML compatibility is solved at the **storage layer**, not by forbidding certain delimiter strings in your prompt template.
**4. “Parsers/displayers break, so those tags are just a bad choice”**
Only if those parsers are naively treating everything as markup instead of text.
If a UI component is supposed to show “the conversation text”, then its input should already be:
* Properly escaped (for HTML/XML), or
* Coming from a plain-text field, not re-parsed as XML/HTML on display.
Blaming `<message>` for that is like saying “we can’t show `<b>` in a chat app because a browser might try to render it bold” – no, you escape it correctly and it’s fine.
It’s a **plumbing** issue, not a model/tokenizer issue.
**5. “Disabling special tokenization would break instruct models that expect those tokens”**
You don’t globally “turn off special tokens” for the entire prompt. You:
* Insert the **template** pieces (system, role markers, etc.) yourself, possibly using special tokens or fixed strings.
* Encode those template pieces with special tokens enabled.
* Encode **user text** separately with special tokens disabled (or a restricted `allowed_special`).
* Concatenate the token IDs and feed that to the model.
The model still sees exactly the chat template it was trained on. You’re not depriving it of its special markers. You’re just making sure **user text doesn’t accidentally get interpreted as those markers**.
If your stack only supports “here’s one giant string, please tokenize with one set of rules”, that’s a limitation of that particular wrapper/API, not of the underlying model or tokenizer design. Many libraries already support:
* Pre-tokenized input, or
* Different `allowed_special` / `disallowed_special` configs per encode call.
Reference:
\[1\]: [\[2304.08467\] Learning to Compress Prompts with Gist Tokens](https://arxiv.org/abs/2304.08467)
\[2\]: [Tiktoken](https://github.com/openai/tiktoken/blob/97e49cbadd500b5cc9dbb51a486f0b42e6701bee/src/py.rs#L39)
\[3\]: [LLM Inputs — vLLM](https://docs.vllm.ai/en/v0.4.3/dev/offline_inference/llm_inputs.html) | 2025-12-05T20:23:27 | Mindless_Pain1860 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pf5mza | false | null | t3_1pf5mza | /r/LocalLLaMA/comments/1pf5mza/explanation_of_this_question/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'a3uh30qm2g5g1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/a3uh30qm2g5g1.png?width=108&crop=smart&auto=webp&s=6119ca4399df9e9cbe51e340641169d7b90010ff', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/a3uh30qm2g5g1.png?width=216&crop=smart&auto=webp&s=070a2affa8210d2d0db4709606f7d4578d86b66b', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/a3uh30qm2g5g1.png?width=320&crop=smart&auto=webp&s=d82ccb7a87b28cba69b59ac2b20a0d1693a46e49', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/a3uh30qm2g5g1.png?width=640&crop=smart&auto=webp&s=7f31d012c3ad93aa8fca6501dc260dd9ec753fab', 'width': 640}, {'height': 522, 'url': 'https://preview.redd.it/a3uh30qm2g5g1.png?width=960&crop=smart&auto=webp&s=8fe5f154e8302c207e11c7bcf117d4080befa99f', 'width': 960}, {'height': 587, 'url': 'https://preview.redd.it/a3uh30qm2g5g1.png?width=1080&crop=smart&auto=webp&s=af84cee633165c0f2c98d953e1419239b0e07d03', 'width': 1080}], 'source': {'height': 595, 'url': 'https://preview.redd.it/a3uh30qm2g5g1.png?auto=webp&s=7820e036ace9b244c4b26e00121bfbc486db8612', 'width': 1094}, 'variants': {}}]} | |
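The "encode template and user text separately" approach from points 1-2 and 5 can be sketched with a toy tokenizer. Everything below is hypothetical — the vocabulary, the special IDs, and the character-level fallback are all illustrations; a real setup would use something like tiktoken's `allowed_special` / `disallowed_special` arguments instead:

```python
# Hypothetical toy tokenizer: template text is encoded with special tokens
# enabled, user text with them disabled, and only the ID sequences are joined.
SPECIALS = {"<message>": 10_000, "</message>": 10_001}  # IDs outside char range

def encode(text, allow_special):
    ids = []
    i = 0
    while i < len(text):
        if allow_special:
            hit = next((s for s in SPECIALS if text.startswith(s, i)), None)
            if hit:
                ids.append(SPECIALS[hit])  # whole marker -> one special ID
                i += len(hit)
                continue
        ids.append(ord(text[i]))  # toy fallback: one ID per character
        i += 1
    return ids

user_text = "I typed <message> literally"
prompt_ids = (
    encode("<message>", allow_special=True)   # template marker -> special ID
    + encode(user_text, allow_special=False)  # user's "<message>" stays plain chars
    + encode("</message>", allow_special=True)
)
# The user's literal "<message>" never becomes ID 10_000, so it cannot be
# confused with the template marker.
assert prompt_ids.count(10_000) == 1
```

The application decides which encode call each piece of text goes through, which is exactly the "context is controlled by the application, not the tokenizer" point above.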
Explanation of This Question | 2 | 2025-12-05T20:02:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pf547n/explanation_of_this_question/ | Mindless_Pain1860 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pf547n | false | null | t3_1pf547n | /r/LocalLLaMA/comments/1pf547n/explanation_of_this_question/ | false | false | 2 | null | ||
how are you supposed to pronounce the name Qwen? | 23 | I just saw Jensen pronounce it like Que-When on youtube. I have been saying it more like Quen in my head... Claude says this: "Qwen" is pronounced like **"chwen"** (rhymes with "when"), with the "Q" making a "ch" sound as in Mandarin Chinese pinyin.
Pretty sure no one on youtube says it like that. Can anyone with some Chinese language experience please step in and give us the real deal! | 2025-12-05T19:53:54 | https://www.reddit.com/r/LocalLLaMA/comments/1pf4w4e/how_are_you_supposed_to_pronounce_the_name_qwen/ | ridablellama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pf4w4e | false | null | t3_1pf4w4e | /r/LocalLLaMA/comments/1pf4w4e/how_are_you_supposed_to_pronounce_the_name_qwen/ | false | false | self | 23 | null |
Are model creators choosing not to do QAT? | 29 | QAT is a fairly cheap process compared to full training, so why are so many companies publishing their models in full precision without investing in QAT? And I'm not saying "just publish 4-bit weights and leave it": it's VERY CHEAP to serve both FP16 and FP4/INT4 weights on HuggingFace, and it will practically cost the company nothing additional compared to the full training run. | 2025-12-05T19:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pf4nuf/are_models_creators_choosing_to_not_do_qat/ | AI-Man-75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pf4nuf | false | null | t3_1pf4nuf | /r/LocalLLaMA/comments/1pf4nuf/are_models_creators_choosing_to_not_do_qat/ | false | false | self | 29 | null |
Deepseek must be losing money from output token generation and training or are getting subsidized/free gpus | 0 | I did the math, it takes 3.5 gpu hours of ascend 910c to generate one mil tokens if the context is small...
If you amortize a gpu over 5 years, even using the lowest known reported bulk price, it costs 27c/hr per gpu, not including electricity and infra costs. That means it **costs 95 cents** to **generate one mil tokens** (3.5 hours x 27c), not including infra and electricity, but they **are selling it at 42 cents.** Unless they are getting the gpus at a super subsidized price or for free, plus cheap electricity, they can't possibly be making a profit on output tokens; they are only making profit from input tokens... In fact they are making very good profit from input tokens, nearly 99% margin... It is funny their input price per 1 mil tokens (28c) is exactly, or almost exactly, the cost of one gpu hour plus electricity, if electricity is <=3.1c/kwh. What I don't get is why these 3rd party providers are only charging 40-42 cents/1 mil output tokens. To capture market share?
Also, it is estimated that by May to October 2026 they will be able to train a large model using solely Ascend chips (if training takes 15-20 months from start to finish)... | 2025-12-05T19:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pf4l6w/deepseek_must_be_losing_money_from_output_token/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pf4l6w | false | null | t3_1pf4l6w | /r/LocalLLaMA/comments/1pf4l6w/deepseek_must_be_losing_money_from_output_token/ | false | false | self | 0 | null |
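The post's arithmetic, spelled out (all inputs are the post's own assumptions — GPU-hours per million tokens, amortized hourly cost, and list price — not verified figures):

```python
# Rough check of the post's output-token cost claim.
gpu_hours_per_1m_tokens = 3.5   # claimed Ascend 910C hours per 1M output tokens
gpu_cost_per_hour = 0.27        # USD/hr, claimed 5-year amortized bulk price
sell_price_output = 0.42        # USD per 1M output tokens, claimed list price

cost_per_1m = gpu_hours_per_1m_tokens * gpu_cost_per_hour  # 3.5 * 0.27 = 0.945
margin = sell_price_output - cost_per_1m                   # negative -> loss

print(f"cost ~ ${cost_per_1m:.3f} vs price ${sell_price_output:.2f} "
      f"-> margin ${margin:+.3f} per 1M output tokens")
```

Under these assumptions the margin per million output tokens is roughly -$0.53 before electricity and infra, which is the post's point; a lower GPU price or fewer GPU-hours per million tokens would flip the sign.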