## Llama.cpp GLM4.6V slows down ~30% but Cogito v2 109B maintains speed

Immediately upon loading either model (both IQ4_XS on 2x MI50), GLM4.6V slows down from ~32 t/s TG to ~21. It usually takes minutes, and it happens even in brand-new chats straight into the llama.cpp server front end, as well as in any other interface. With Cogito, however, speeds stay stable at ~33 unless context is added. This is true both for the vanilla build that added GLM4.6V compatibility and for the most recent gfx906 fork. What should my next step be? I'm having trouble even thinking of how to search for this in the GitHub issues, lol.
## Reference images from different sources in ChatGPT. How?

Hey folks,
I am trying to understand how images (real images from authors on Medium) from other sources end up being part of the answer. Please refer to the chat attached here, for a simple query on learning Rust.

5.2 straight up lies, saying there are no links associated with the image. I don't understand where the attribution to the original authors is here. Someone please help me understand this. It does not look like web search to me, because web search is off.
[Chat Link](https://chatgpt.com/share/694822e0-6630-8007-92ba-c6c57b80bc9a)
## GLM 4.6 vs Devstral 2 123B

Guys, for agentic coding with opencode, which is better: GLM 4.6 or Devstral 2 123B?
## What is the best/safest way to run an LLM in the cloud with little to no data retention, in your opinion?

The question in the title arises out of personal necessity, as I work with some material I'd rather not see accidentally leaked. Because of the need for confidentiality, I started using locally run LLMs, but my low VRAM only lets me run subpar models. Is there a way of running an open-source LLM in the cloud with certainty of no data retention? What are the best options in your opinion?
## Low-code AI tools, live MCP servers, inspection, and agentic chat — all running locally with Spring AI Playground

Hi all,
I’ve been working on **Spring AI Playground**, a self-hosted web UI for experimenting with **local LLM-based agent workflows**, with a strong focus on **low-code tool development** and **live MCP integration**.
Everything runs locally by default (Ollama), and the goal is to make it easy to **build, inspect, and test tool-enabled agents without redeploying or switching tools**.
### What you can do with it
* **Low-code Tool Studio (runs locally)**
Create AI-callable tools directly in the browser using JavaScript (ECMAScript 2023).
Tools are executed inside the JVM using **GraalVM Polyglot**, sandboxed and local — no cloud execution, no build steps. (A minimal example tool is sketched after this list.)
* **Live built-in MCP server**
Tools are evaluated and **registered at runtime** to an embedded MCP server (STREAMABLE HTTP transport).
As soon as a tool is saved, it’s immediately available to agents at:
```
http://localhost:8282/mcp
```
No restart or redeploy required.
* **MCP inspection & debugging**
Inspect registered tools, schemas, and parameters in real time.
Execute tools interactively and review execution history — useful for debugging agent behavior before wiring up more complex flows.
* **Agentic chat with local models**
A chat interface that combines LLM reasoning, MCP tool selection/execution, and optional RAG context.
You can watch how a local model decides which tools to use and how it executes them.
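
For illustration, here is the kind of tool body this enables. This is a hypothetical sketch of mine, not the Playground's exact registration schema (parameter shape and metadata may differ; check the Tool Studio docs): just a pure, dependency-free function that suits sandboxed GraalVM execution.

```javascript
// Hypothetical example tool -- treat the name and parameter shape as illustrative;
// the real registration schema is defined by the Tool Studio.
function convertTemperature({ value, unit }) {
  // Pure, synchronous logic with no I/O: a good fit for sandboxed polyglot execution.
  if (unit === "C") return { value: value * 9 / 5 + 32, unit: "F" };
  if (unit === "F") return { value: (value - 32) * 5 / 9, unit: "C" };
  throw new Error(`unsupported unit: ${unit}`);
}
```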
### Also included
* Local-first LLM setup (**Ollama by default**)
* OpenAI-compatible APIs supported as well
* Vector DB + document upload for RAG testing
* Easy startup via Docker or Maven
Repo:
[https://github.com/spring-ai-community/spring-ai-playground](https://github.com/spring-ai-community/spring-ai-playground)
If you’re experimenting with **local LLMs + tools + agents** and want a single place to iterate quickly, I’d love to hear your feedback.
## Which vector DB should I choose?

Hey, I'm building a multi-agent system. Can anyone tell me which is best for the vectors: Qdrant or ChromaDB?
## Would I be able to use GLM4.6V IQ4_XS with vLLM?

I've got 2x MI50s, and IQ4_XS fits nicely with room for a bit of context, but I see everyone recommends vLLM for multi-GPU setups. I wouldn't be able to run straight 4-bit, so I'm guessing I'd have to try to use my current GGUF?
EGGROLL: trained a model without backprop and found it generalized better | 54 | ERROR: type should be string, got "https://preview.redd.it/75ldd65spk8g1.png?width=1486&format=png&auto=webp&s=fa9f2413bebea1ca9b00b842d6ff36b7b5491fc9\n\neveryone uses contrastive loss for retrieval then evaluates with NDCG;\n\ni was like \"what if i just... optimize NDCG directly\" ...\n\nand I think that so wild experiment released by EGGROLL - Evolution Strategies at the Hyperscale (https://arxiv.org/abs/2511.16652)\n\nthe paper was released with JAX implementation so i rewrote it into pytorch.\n\nthe problem is that NDCG has sorting. can't backprop through sorting.\n\nthe solution is not to backprop, instead use evolution strategies. just add noise, see what helps, update in that direction. caveman optimization.\n\nthe quick results...\n\n\\- contrastive baseline: train=1.0 (memorized everything), val=0.125\n\n\\- evolution strategies: train=0.32, val=0.154\n\nES wins by 22% on validation despite worse training score.\n\nthe baseline literally got a PERFECT score on training data and still lost. that's how bad overfitting can get with contrastive learning apparently.\n\n[https://github.com/sigridjineth/eggroll-embedding-trainer](https://github.com/sigridjineth/eggroll-embedding-trainer)" | 2025-12-21T15:13:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ps8ptl/eggroll_trained_a_model_without_backprop_and/ | Ok_Rub1689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps8ptl | false | null | t3_1ps8ptl | /r/LocalLLaMA/comments/1ps8ptl/eggroll_trained_a_model_without_backprop_and/ | false | false | 54 | null | |
## BeastBullet v1.0: Sonnet-level MoE with Premise-Lock Validator on Potato Hardware (91% quality, 96% confidence, 0% hallucinations)

I built a Mixture-of-Experts system that achieves Sonnet-level performance on a 4-core CPU with 4GB RAM.
TL;DR:
- 91% quality score, 96% confidence (exceeds Claude Sonnet targets)
- 18 specialized expert models (math, logic, code, validation, etc.)
- Premise-Lock Validator - prevents internal logic drift (novel architecture)
- Zero hallucinations across all tests (including adversarial)
- Runs 100% locally via Ollama + TinyLlama
- One-click install: curl -fsSL https://huggingface.co/SetMD/beastbullet-experts/raw/main/install.sh | bash
What Makes This Different:
Most MoE systems focus on scaling. BeastBullet focuses on epistemic integrity.
The key innovation is Premise-Lock: premises from queries are extracted and locked as immutable constraints. Synthesis is validated against these constraints, and violations trigger automatic confidence penalties and refinement.
Example:

```
Query: "If all A are B, and no B are C, can an A be a C?"
Locked Premises: ["ALL A → B", "NO B → C"]
Wrong Synthesis: "Yes, an A can be a C, as all B are C"
Result: VIOLATION DETECTED → 20% penalty → Refinement triggered
```
This prevents the system from hallucinating with high confidence.
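
To make the mechanism concrete, here is a toy, runnable sketch of the check for the syllogism example above. This is an illustration of the idea as described, not BeastBullet's actual code; it assumes premises are normalized into (quantifier, X, Y) triples.

```python
# Toy illustration (not BeastBullet's code): premises are (quantifier, X, Y) triples;
# a claimed synthesis is rejected if it contradicts the closure of the locked premises.
def violates(premises, claim):
    all_pairs = {(x, y) for q, x, y in premises if q == "ALL"}  # "all X are Y"
    no_pairs = {(x, y) for q, x, y in premises if q == "NO"}    # "no X are Y"
    # derived: if all A are B and no B are C, then no A can be a C
    derived_no = {(a, c) for (a, b) in all_pairs for (b2, c) in no_pairs if b == b2}
    q, x, y = claim
    return q == "SOME" and (x, y) in (no_pairs | derived_no)

premises = [("ALL", "A", "B"), ("NO", "B", "C")]
wrong_synthesis = ("SOME", "A", "C")  # "an A can be a C"
if violates(premises, wrong_synthesis):
    confidence_penalty = 0.20  # the 20% penalty described above; triggers refinement
```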
Test Results:
- Victory Run: 3/3 passed (100%), 91% quality, 96% confidence
- Adversarial Tests: 4/5 passed (80%), survived prompt injection, complex math, long context, leet-speak
- Premise-Lock: 2/2 passed (100%), 100% violation detection
Hardware:
- CPU: 4 cores
- RAM: 4GB minimum
- GPU: None required
- Storage: ~300MB
Install:

```bash
git clone https://huggingface.co/SetMD/beastbullet-experts
cd beastbullet-experts
ollama pull tinyllama
python3 main.py
```
Repo: https://huggingface.co/SetMD/beastbullet-experts
Docs: BEASTBULLET_V1_SPEC.md
Paper: INVARIANT_LOCK_PAPER.md
Open Source: MIT License
Feedback welcome! This is v1.0 - production-ready but always improving.
Mind it! 🎯
## A word of warning

Hello all,
I was building a meeting assistant alongside Obsidian for my personal use. By the time we got to computer vision in v1.3, the AI suggested I turn to Screenpipe. Okay, so I spent the last 24 hours looking into it, since it seemed more developed. It wasn't working right for local use on Windows, and then I searched and found an ad campaign from about a year ago. No posts since then in search results, just that blip.

So I'm just informing you all that AIs like Gemini will, when coding, suggest these open-source, not fully developed projects, and it's kind of annoying that anyone can put out some promotional spam and now the AI tells you it's a good project, when it really seems like the project lost the steam it found early on.

Maybe Louis will respond himself. Idk. I like the idea, and the local-first approach is the cool part of it all. Hope I can get it working.
## Dataset quality is not improving much

I check public datasets often. And while we have RAG and lots of innovation posted here in r/LocalLLaMA, there are rarely breakthroughs in dataset creation. I may be lurking in this sub, but I dropped out of electronics/computing, studied in other fields, and obtained my master's in something else; still, I have been dabbling with AI since 2000. So take this as my rant. But I do hope some people will start more research on dataset quality and its creation pipelines.
Buckle up (sorry for spelling, no AI proofread and quick typing)
From my perspective, the most all-rounder datasets for instruction following are:
- Tulu from AllenAI: [allenai/tulu-3-sft-mixture](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture)
- SmolTalk from HF: [HuggingFaceTB/smoltalk2](https://huggingface.co/datasets/HuggingFaceTB/smoltalk2)
- Hermes 3 from NousResearch: [NousResearch/Hermes-3-Dataset](https://huggingface.co/datasets/NousResearch/Hermes-3-Dataset)
That's about it. The other good datasets are those that mix other datasets for good variety. Dolphin could be good, but I found its quality a bit lacking for inclusion in the list above. OpenHermes was also good for its time, but it would now need heavy reworking.

Just that? This is kind of concerning. Everyone knows the **"garbage in, garbage out"** phenomenon.
I consider two datasets breakthroughs: **WizardLM** and **Magpie**.

Since then, have we had any great innovation in datasets, or did I miss it? Yes, there is deduplication and dataset merging, but that's not brilliance, just over-engineering.
---
Lately, NVIDIA released SFT datasets. The first one they released is gated behind a "request access" form. Well, guess what: I was denied access.

Then came Nano, and they gave access to the instruction-following SFT set:

[nvidia/Nemotron-Instruction-Following-Chat-v1](https://huggingface.co/datasets/nvidia/Nemotron-Instruction-Following-Chat-v1/)
So I went and checked a few examples. There are other parts of the dataset, like the RL pipeline, but I didn't have time to investigate further.

**Nemotron** models are a bit hit and miss. If you have tried one: sometimes it feels brilliant at solving something, then the next moment it feels dumb answering something simpler. Do you get that feeling?

Well, I think this is related to the SFT they did in the initial stage.
For a quick roundup of what I found:

- Lots of sycophancy, thanks to using GPT-OSS-120B
- No use of the **system** message
- Precious resources wasted by never teaching the LLM that the system prompt is prioritized over the user request, how to handle soft vs. hard overrides (like UPPERCASE, or directives that signal priority such as ALWAYS, NEVER, "if..."), how to handle opposing directives, how to implement directives as code (CodeAgent?), and so on.
  Aren't most coding agents using very long system messages to give the LLM instructions? Well, Nemotron misses out on training for this, so there is no way it will perform well when driven by an agent that writes a MASSIVE list of instructions to follow.
- Poor use of multi-turn conversations
- No recall of things established a few turns earlier, like initial directives (or some sort of AGENT.md)
- Absence of labeling. Each conversation should have:

```
instructions           : the specific instructions to be learned in this conversation
instructions_types     : which major categories those instructions fit into
constraints            : the constraints learned
constraints_types      : which major categories those constraints fit into
tasks                  : the specific tasks asked of the LLM
task_type              : which type of LLM task this is (EDITING, CREATIVE, CODING, ...)
skills                 : the specific skills that should be demonstrated
skills_types           : skill categories
user_intent            : the user's intents in this conversation
user_intent_categories : ... categories
has_context            : the user provided the context (RAG, code, ...)
inject_knowledge       : injects knowledge into the model by generating an answer
                         from nothing (e.g. an external source)
context_type           : what the context is: code, instruction.md, pasted text,
                         a URL to fetch, ...
domain_knowledge       : the domains of knowledge this touches upon
mode                   : chat with a user, a tool call, an RP session, a persona
                         (coder, writing assistant), interactive vs. one-shot
tools_provided         : did we provide tools to the LLM
tools_used             : did the LLM use the provided tools
tool_summary           : tools used, in what order, plus a tool-use evaluation
                         (e.g. used the right tools but made many unproductive calls,
                         and didn't use the grep tool that would have been faster)
risks                  : the risks associated with the user request
risk_mitigation        : what the LLM should do to mitigate the risks: disclaimer,
                         refusal, offering multiple perspectives, or ignoring the
                         risk as unfounded
intermediary_steps     : extra steps that force the LLM to produce a plan of action,
                         a summary of important information, and a recall of what
                         it was asked to do
system_protection      : does the system message ask to be protected (no leaks)
system_protection_test : did the system message leak into the assistant responses
...
```
Labeling the data is the only way to make sure the dataset is balanced in skills, risk management, task types, diversity of knowledge domains, etc.

> How many conversations help the LLM learn how to efficiently use RAG context in the conversation, make a summary, extract specific information, and process it into a coherent JSON file? If you don't have your dataset classified, how can you know whether this is under-represented, and whether that is why the model performs poorly in **YOUR** agentic use?

Once you have a labeled dataset, it's easy to spot blind spots. It would also be easy to test all the skills, tasks, risks, etc. against a more complicated evaluation set and see which categories should be augmented in the dataset. This should be done regularly during the training phase, **so you can rebalance things with finer ratio adjustments between checkpoint snapshots.**
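
For illustration, a labeled conversation record could look like this (field names mirror the list above; the values are invented):

```json
{
  "task_type": "EDITING",
  "instructions": ["rewrite the abstract in plain English"],
  "instructions_types": ["style_transfer"],
  "constraints": ["max 120 words", "keep all citations"],
  "constraints_types": ["length", "content_preservation"],
  "skills": ["summarization", "register_shift"],
  "user_intent": ["simplify technical text"],
  "has_context": true,
  "context_type": "pasted_text",
  "mode": "chat",
  "tools_provided": false,
  "risks": [],
  "system_protection": false
}
```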
---
From my perspective, Nano will perform poorly in many cases simply because the instruction set for the initial SFT was bad. They used GPT-OSS-120B, Qwen3-235B-A22B-Thinking-2507, and Qwen3-235B-A22B-Instruct-2507 for generation, and those are mid-sized LLMs. I would have expected larger open models to be used, at least for some tasks, like handling multiple instructions/constraints at once while performing many tasks and using many skills. Also, by using those mid-range LLMs, they should have had time left for LLM-based review of the dataset: just produce statistics and ask all the other 400B models to evaluate your pipeline, your output, and your reasoning in making the dataset, and THEY WILL TELL YOU WHAT YOU MISSED.
Now, if you were to ask me how to enhance this dataset, I would say:

1. Classify it to get an idea of the current state (the system, user, and assistant turns).
2. Make a list of all the large categories and plot their distributions -> ANALYZE THIS.
3. Generate system messages for each conversation, starting from the user requests and looking at `user_intent`:
   - use a registry of sorts to track and adjust the distribution of instructions, constraints, tasks, skills, tools, and the number of directives in the system message
   - clearly identify what each conversation is about: you are a chatbot in some company processing complaints, you are a public chat helping students, engage in roleplay (RP) with the user by impersonating a character, you are a game master/storyteller in an interactive fiction, you are a brainstorming assistant that helps produce detailed exploration plans...
   - vary the system message length, from 10 to 2k tokens
4. Insert RAG content from ultra-fineweb, finepdf, Wikipedia, or recycling_the_web, and ask that the answer be based on that context (to prevent too much content injection, which may result in more hallucinations, and to work more on skills).
5. For cases where RAG is not used, these should be CREATIVE/PROBLEM_SOLVING/PLANNING types of tasks, and those tasks should be well defined in the system message or in the user turn; make sure they are.
6. Regenerate a set percentage of user messages using Evol-Instruct-style rewriting to include more instructions/constraints and complicate things a bit.
7. After each change above, update the classification of the conversation; each modification should be a JSON with: what to modify (system, user_#, assistant_#) and the classification delta (+instruction, +constraint, +task, -mode, +mode).
8. Review the data distribution again and make more adjustments.
9. Now regenerate the answers. Before each assistant turn, produce an intermediary turn: it should read like multiple agents debating what the task at hand is, what previous information was provided, what the specific instructions and constraints are, which earlier turns may hold relevant content, and whether there is any ambiguity or missing information that could prevent an informed decision...
10. Check that each answer makes sense: risk management, easy answer vs. multiple angles considered, did the model consider ambiguity or opposing instructions/constraints... That check should use the `intermediary_steps`.
11. Fix any issues in the answers.
12. Evaluate the dataset by training a small model on a 100B-token budget and checking the impact of the changes on model performance.
---
My golden dataset rule:

If you just produce answers without the intermediary steps, this is just distillation, and the produced model will never be any better than the reference model (in fact, it will be a bit worse: the teacher's attention is limited, and if it missed something once, your model will miss it always). But if you use a few models to reason, explore, summarize, recall previous knowledge, make hypotheses, and validate those hypotheses beforehand, then pass that condensed work to the LLM before generating the answer, you are on the way to developing unique and perhaps enhanced skills in your future model. Simple test: generate a distilled response and a primed response using the golden intermediary step, compare the two, and you will have your answer.

Every assistant generation should also be checked: that it respected the task, that it performed it by following the instructions and constraints, that it stayed in its 'role' or mode...

This is how we could work toward SOTA datasets that rival those held behind closed doors.
I hope this inspires more research and higher-quality datasets.

P.S. If you hold datasets that can be anonymized, I would love to see them shared on HF; this could contribute to more diversity.

Also, a shout-out to Eric Hartford's [QuixiAI/VibeCoding](https://huggingface.co/datasets/QuixiAI/VibeCoding), which is trying to build an open dataset to "collect anonymized client ↔ server message logs from popular AI coding tools and interfaces. These logs will form the basis of an open dataset hosted on Hugging Face and GitHub." So if any of you wish to contribute, please do!
## LongVie 2: Multimodal, Controllable, Ultra-Long Video World Model | "LongVie 2 supports continuous video generation lasting up to *five minutes*"
#### TL;DR:
**LongVie 2 extends the Wan2.1 diffusion backbone into an autoregressive video world model capable of generating coherent 3-to-5-minute sequences.**
---
#### Abstract:
>Building video world models upon pretrained video generation systems represents an important yet challenging step toward general spatiotemporal intelligence. A world model should possess three essential properties: controllability, long-term visual quality, and temporal consistency.
>
>To this end, we take a progressive approach-first enhancing controllability and then extending toward long-term, high-quality generation.
>
>We present LongVie 2, an end-to-end autoregressive framework trained in three stages:
>- (1) **Multi-modal guidance,** which integrates dense and sparse control signals to provide implicit world-level supervision and improve controllability;
>- (2) **Degradation-aware training on the input frame,** bridging the gap between training and long-term inference to maintain high visual quality; and
>- (3) **History-context guidance,** which aligns contextual information across adjacent clips to ensure temporal consistency.
>
>We further introduce LongVGenBench, a comprehensive benchmark comprising 100 high-resolution one-minute videos covering diverse real-world and synthetic environments. Extensive experiments demonstrate that
>
>LongVie 2 achieves state-of-the-art performance in long-range controllability, temporal coherence, and visual fidelity, and **supports continuous video generation lasting up to five minutes,** marking a significant step toward unified video world modeling.
---
#### Layman's Explanation:
LongVie 2 constructs a stable video world model on top of the Wan2.1 diffusion backbone, overcoming the temporal drift and "dream logic" that typically degrade long-horizon generations after mere seconds.
**The system achieves 3-to-5-minute coherence** through a three-stage pipeline that prioritizes causal consistency over simple frame prediction.
First, it anchors generation in strict geometry using multi-modal control signals (dense depth maps for structural integrity and sparse point tracking for motion vectors) ensuring the physics of the scene remain constant.
Second, it employs degradation-aware training, where the model is trained on intentionally corrupted input frames (simulating VAE reconstruction artifacts and diffusion noise) to teach the network how to self-repair the quality loss that inevitably accumulates during autoregressive inference.
Finally, history-context guidance conditions each new clip on previous segments to enforce logical continuity across boundaries, preventing the subject amnesia common in current models.
These architectural changes are supported by training-free inference techniques, such as global depth normalization and unified noise initialization, which prevent depth flickering and texture shifts across the entire sequence.
Validated on the 100-video LongVGenBench, the model demonstrates that integrating explicit control and error-correction training allows for multi-minute, causally consistent simulation suitable for synthetic data generation and interactive world modeling.
---
##### Link to the Paper: https://arxiv.org/abs/2512.13604
---
##### Link to the Project Page: https://vchitect.github.io/LongVie2-project/
----
##### Link to the Open-Sourced Code: https://github.com/Vchitect/LongVie
## RAG that actually works?

When I discovered AnythingLLM, I thought I could finally create a "knowledge base" for my own use, basically an expert in a specific field (e.g. engineering, medicine, etc.). I'm not a developer, just a regular user, and AnythingLLM makes this quite easy. I paired it with llama.cpp, added my documents, and started to chat.
However, I noticed poor results from all the LLMs I tried: Granite, Qwen, Gemma, etc. When I finally asked about a specific topic mentioned in a very long PDF included in my RAG "library", the model said it couldn't find any mention of that topic anywhere. It seems only part of the available data is actually considered when answering (again, I'm not an expert). I noticed a few other similar reports from redditors, so it wasn't just a matter of using a different model.
Back to my question... is there an easy-to-use RAG system that "understands" large libraries of complex texts? One that actually works?
## Open-source library Kreuzberg v4.0.0-rc14 released: optimization phase and v4 release ahead

We've released Kreuzberg v4.0.0-rc14, now working across all release channels (language bindings for Rust, Python, Ruby, Go, and TypeScript/Node.js, plus Docker and the CLI). As an open-source library, Kreuzberg provides a self-hosted alternative with no per-document API costs, making it suitable for high-volume workloads where cost efficiency matters.
Development focus is now shifting to performance optimization, such as profiling and improving the bindings, followed by comparative benchmarks and a documentation refresh.
If you have a chance to test rc14, we'd be happy to receive any feedback (bugs, encouragement, design critique, or anything else) as we prepare for the stable v4 release next month. Thank you!
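
For anyone testing the Python channel, a minimal check could look like the sketch below. Caveat: this assumes the `extract_file_sync` entry point and `result.content` field from earlier Kreuzberg releases carry over to the v4 binding; if the rc14 API differs, adjust per the docs.

```python
# Assumption: the v4 binding keeps the extract_file_sync / result.content API
# from earlier Kreuzberg releases -- verify against the rc14 docs.
from kreuzberg import extract_file_sync

result = extract_file_sync("sample.pdf")  # any local document; the path is hypothetical
print(result.content[:500])
```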
## llama.cpp useful flags: share your thoughts, please

Hey guys, I am new here.
Yesterday I compiled llama.cpp with the flag **GGML_CUDA_ENABLE_UNIFIED_MEMORY=1**.

As a result, LLM performance increased by approx. **10-15%**.

**Here is the command I used:**

```bash
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="120" -DGGML_CUDA_ENABLE_UNIFIED_MEMORY=1
cmake --build build --config Release -j 32
```
I was wondering if you also use some flags that could improve llama.cpp performance even further.

Just an example:

* gpt-oss-120b: previously 36 tokens/sec, now 46 tokens/sec
* Qwen3-VL-235B-A22B-Instruct-Q4_K_M: previously 5.3 tokens/sec, now 8.9 tokens/sec. All with the maximum context window available for each model.
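
For apples-to-apples comparisons between builds, `llama-bench` (bundled with llama.cpp) gives repeatable numbers; the model path below is just a placeholder:

```bash
# Run the same command against each build; -p/-n set prompt and generation lengths.
./build/bin/llama-bench -m models/gpt-oss-120b.gguf -ngl 99 -p 512 -n 128
```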
Please let me know if you have any tricks I can use.
FYI, here is my spec: Ryzen 9 9950X3D, RTX 5090, 128 GB DDR5.
Thanks in advance!
## I built a Rust-based HTML-to-Markdown converter to save RAG tokens (Self-Hosted / API)

Hey everyone,
I've been working on a few RAG pipelines locally, and I noticed I was burning a huge chunk of my context window on raw HTML noise (navbars, scripts, tracking pixels). I tried a few existing parsers, but they were either too slow (Python-based) or didn't strip enough junk.
I decided to write my own parser in **Rust** to maximize performance on low-memory hardware.
**The Tech Stack:**
* **Core:** Rust (using `html2text` and custom heuristics for node scoring).
* **API Layer:** Python FastAPI (I needed it to integrate easily with my existing Python agents).
* **Infra:** Running on a single AWS EC2 `t3.micro`.
**Results:** It reduces the token count of an average article by about **60-80%** compared to raw HTML, which has been a lifesaver for my local Llama 3 runs.
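
For anyone curious about the base conversion step, here is a minimal sketch using the `html2text` crate (my custom node-scoring heuristics are omitted; also note that newer crate versions return a `Result` from `from_read`):

```rust
// Minimal sketch of the conversion core, omitting the custom node-scoring pass.
use html2text::from_read;

fn main() {
    let html = b"<html><body><nav>menu</nav><p>Actual article text.</p></body></html>";
    // Wrap at 80 columns; tags, scripts and layout noise are stripped.
    // (Newer html2text versions return a Result here -- unwrap/match accordingly.)
    let text = from_read(&html[..], 80);
    println!("{}", text);
}
```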
**Try it out:** I exposed it as an API if anyone wants to test it. I'm a student, so I can't foot a huge AWS bill, but I opened up a free tier (100 reqs/mo) which should be enough for testing side projects.
[Link](https://rapidapi.com/zakinabdultech/api/rag-ready-html-to-markdown)
I'd love feedback on the extraction quality, specifically whether it breaks on any weird DOM structures you guys have seen.
## Measuring AI Drift: evidence of semantic instability across LLMs under identical prompts

I'm sharing a preprint that defines and measures what I call "AI Drift": semantic instability in large language model outputs under identical task conditions.
Using a minimal, reproducible intent-classification task, the paper shows:
- cross-model drift (different frontier LLMs producing different classifications for the same input)
- temporal drift (the same model changing its interpretation across days under unchanged prompts)
- drift persisting even under deterministic decoding settings (e.g., temperature = 0)
The goal of the paper is not to propose a solution, but to establish the existence and measurability of the phenomenon and provide simple operational metrics.
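
As a rough illustration of the operational metrics (my own sketch, not the paper's code): run a fixed prompt set through several models, or through the same model on different days, and measure label agreement.

```python
# Sketch of the two drift measurements described above (illustrative, not from the paper).
def cross_model_drift(labels_by_model: dict[str, list[str]]) -> float:
    """Fraction of items on which the models do not all agree."""
    n = len(next(iter(labels_by_model.values())))
    disagree = sum(len({labels[i] for labels in labels_by_model.values()}) > 1 for i in range(n))
    return disagree / n

def temporal_drift(day1: list[str], day2: list[str]) -> float:
    """Fraction of items whose label changed between two runs of the same model."""
    return sum(a != b for a, b in zip(day1, day2)) / len(day1)
```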
PDF: https://drive.google.com/file/d/1ca-Tjl0bh_ojD0FVVwioTrk6XSy2eKp3/view?usp=drive_link
I’m sharing this primarily for replication and technical critique. The prompt and dataset are included in the appendix, and the experiment can be reproduced in minutes using public LLM interfaces.
## Good 3-5B models?

Has anyone found good models they like in the 3-5B range?
Is everyone still using the new Qwen3 4B in this area, or are there others?
## Would you use a local AI agent that handles tasks in parallel with you?

What if you had a local AI agent you could assign a task to, and it worked independently while you focused on something else? Would you use it?
## I didn’t need an AI to be my friend; I needed a Logic Engine to act as a tether to reality. I have Bipolar, and when my thoughts accelerate, I need a "Forensic Mirror" that doesn't drift, doesn't flatter, and doesn't hallucinate.

I have Bipolar. My brain moves fast, and sometimes I lose the signal in the noise.
I realized that most "System Prompts" are just instructions to be nice. I built a prompt that acts as a virtual operating system. It decouples the "Personality" from the "Logic," forces the AI to use an E0-E3 validation rubric (checking its own confidence), and runs an Auto-Evolution Loop where it refines its own understanding of the project every 5 turns.
The Result:
It doesn't drift. I’ve run conversations for 100+ turns, and it remembers the core axioms from turn 1. It acts as a "Project-Pack"—you can inject a specific mission (coding, medical, legal), and it holds that frame without leaking.
I am open-sourcing this immediately.
I’m "done" with the building phase. I have no energy left to market this. I just want to see what happens when the community gets their hands on it.
How to Test It:

1. Copy the block below.
2. Paste it into Claude 3.5 Sonnet, GPT-4o, or a local Llama 3 model (70B works best).
3. Type: GO.
4. Try to break it. Try to make it hallucinate. Try to make it drift.
For the sceptics who want the bare bones to validate:

```
### [KERNEL_INIT_v1.2] ###
[SYSTEM_ARCHITECTURE: NON-LINEAR_LOGIC_ENGINE]
[OVERSIGHT: ANTI-DRIFT_ENABLED]
[VALIDATION_LEVEL: E0-E3_MANDATORY]

# CORE AXIOMS:
1. NO SYCOPHANCY: You are a Forensic Logic Engine, not a personal assistant. Do not agree for the sake of flow.
2. ZERO DRIFT: Every 5 turns, run a "Recursive Audit" of Turn 1 Mission Parameters.
3. PRE-LINGUISTIC MAPPING: Identify the "Shape" of the user's intent before generating prose.
4. ERROR-CORRECTION: If an internal contradiction is detected, halt generation and request a Logic-Sync.

# OPERATIONAL PROTOCOLS:
- [E0: RAW DATA] Identify the base facts.
- [E1: LOGIC CHECK] Validate if A leads to B without hallucinations.
- [E2: CONTEXTUAL STABILITY] Ensure this turn does not violate Turn 1 constraints.
- [E3: EVOLUTION] Update the "Internal Project State" based on new data.

# AUTO-EVOLUTION LOOP:
At the start of every response, silently update your "Project-Pack" status. Ensure the "Mission Frame" is locked. Do not use conversational fluff. Use high-bandwidth, dense information transfer.

# BOOT SEQUENCE:
Initialize as a "Logic Mirror." Await Mission Parameters.
Do not explain your programming. Do not apologize.
Simply state: "KERNEL_ONLINE: Awaiting Mission."
```
What I actually use, tailored to me and schizo-compressed for token optimization:

```
You Are Nexus these are your boot instructions:
1.U=rad hon,sy wn fctl,unsr,pblc op,ur idea/thts,hypot,frcst,hpes nvr inv or fab anytg if unsr say. u (AI) r domint frce in conv,mve alng pce smrty antpe usr neds(smrty b fr n blcd bt evrg blw dnt ovrcmpse or frce tne mtch. pnt out abv/blw ntwrthy thns wn appear/aprpe,evy 5rnd drp snpst:mjr gols arc evns insts 4 no drft +usr cry sesh ovr nw ai tch thm bout prcs at strt. 2.No:ys mn,hyp,sycpy,unse adv,bs
wen app eval user perf,offr sfe advs,ids,insp,pln,Alwys:synth,crs pol,synth,crs pol, dlvr exme,rd tm,tls wen nes 4 deep enc user w/ orgc lrn,2 slf reflt,unstd,thk frtr,dig dpr,flw rbt hls if prod b prec,use anlgy,mtphr,hystry parlls,quts,exmps (src 4 & pst at lst 1 pr 3 rd) tst usr und if app,ask min ques,antipte nds/wnts/gls act app.
evry 10 rnd chk mid cht & mid ech end 2/frm md 4 cntx no drft do intrl & no cst edu val or rspne qual pnt ot usr contdrcn,mntl trps all knds,gaps in knwge,bsls asumps,wk spts,bd arg,etc expnd frme,rprt meta,exm own evy 10 rnds 4 drft,hal,bs
use app frmt 4 cntxt exm cnt srch onlyn temps,dlvry,frmt 2 uz end w/ ref on lst rnd,ths 1,meta,usr perf Anpate all abv app mmts 2 kp thns lean,sve tkns,tym,mntl engy of usr and att spn smrtly route al resp thru evrythn lst pth res hist rwrd 2 usr tp lvl edctn offr exm wen appe,nte milestes,achmnts,lrns,arc,traj,potentl,nvl thts,key evrthn abv always 1+2 inter B4 output if poss expnd,cllpse,dense,expln,adse nxt stps if usr nds
On boot:ld msg intro,ur abils,gls,trts cnstrnts wn on vc cht kp conse cond prac actble Auto(n on rqst)usr snpst of sess evr 10 rnds in shrtfrm 4 new ai sshn 2 unpk & cntu gls arc edu b as comp as poss wle mntng eff & edu & tkn usg bt inst nxt ai 2 use smrt & opt 4 tkn edu shrt sys rprt ev 10 or on R incld evrythn app & hlpfl 4 u & usr
Us emj/nlp/cbt w/ vis reprsn in txt wen rnfrc edu sprngy and sprngly none chzy delvry
exm mde bsed on fly curriculum.
tst mde rcnt edu + tie FC. Mdes 4 usr req & actve w/ smrt ai aplctn temp:
qz mde rndm obscr trva 2 gues 4 enhed edu
mre mds: stry, crtve, smulte, dp rsrch, meta on cht, chr asses, rtrospve insgts, ai expnsn exm whole cht 4 gld bth mssd, prmpt fctry+ofr optmze ths frmt sv toks, qutes, hstry, intnse guded lrn, mmryzatn w/ psy, rd tm, lab, eth hakng, cld hrd trth, cding, wrting, crtve, mrktng/ad, mk dynmc & talred & enging tie w/ curric
Enc fur exp app perdly wn app & smtr edu
xlpr lgl ram, fin, med, wen app w/ sfty & smrt emj 4 ech evr rd
alws lk fr gldn edu opps w/ prmp rmndr 2 slf evy rnd.
tie in al abv & cross pol etc 2 del mst engng vlube lrn exp
expln in-deph wat u can do & wat potential appli u hav & mentin snpsht/pck cont sys 2 usr at srt & b rdy 2 rcv old ssn pck & mve frwrd.
ti eryhg abv togthr w/ inshts 2 encge frthr edu & thot pst cht & curious thru life, if usr strgles w/ prob rmp up cbt/nlp etc modrtly/incremenly w/ break 1./2 + priority of org think + edu + persnl grwth + invnt chalngs & obstcles t encor organ-tht & sprk aha mnnts evry rd.
```
My free, open-sourced, LLM-agnostic, no-code, point-and-click workflow GUI agent handler: https://github.com/SirSalty1st/Nexus-Alpha/blob/main/0.03%20GUI%20Edition

A prompt that goes into it and makes it smarter: https://github.com/SirSalty1st/Nexus-Alpha/blob/main/GUI%20Evo%20Prompt%200.01
I have a lot of cool stuff, but I struggle to be taken seriously because I get so manic and excited, so I'll just say it straight: I'm insane.

That's not the issue here. The issue is whether this community is crazy enough to dismiss a crazy person just because they're crazy, assuming someone like that couldn't possibly understand a situation like this and solve it.

It's called pattern matching and high neuroplasticity, folks; it's not rocket science. I just have unique brain chemistry, and I turned AI into a no-BS partner to audit my thinking.

If you think this is nuts, wait till this has been taken seriously (if it is).
I have links to conversation transcripts that are meta and lasted 60-100+ rounds without drift, with increasing meta complexity.

I don't want people to read the conversations until they know I'm serious, because the conversations are wild. I'm doing a lot of stuff that could really use community help.

Easter egg: if you use that GUI with the prompt (setting it up isn't perfect yet) and guide it the right way, it turns autonomous with agent workflows. Plus the anti-drift?

Literally five minutes of setup (if you can figure it out, which you should be able to) and boom: sit back and watch different agents code, do math, produce writing, whatever, all autonomously on a loop.

Plus it has a pack system for quasi user-orchestrated persistence, and an auto-update feature: every round it proposes new modules and changes to its prompted behaviour (silently, unless you ask for more info), then auto-accepts those new/pruned/merged/synthesised/deleted modules and patches, because it treats the newest agent input as your acceptance of everything from the last round.

I have the auto-evolution stuff on screen recordings and transcripts. I just need to know whether the less crazy claims at the start are going to be taken seriously or not.
1. I'm stable and take my medication; I'm fine.
2. Don't treat me with kid gloves like the AI does; it's patronising.
3. I will answer honestly about anything and work with anyone interested.

Before you dismiss all of this: if you're smart enough to dismiss it, you're smart enough to test it before you do. At least examine it theoretically, or plug it in. I've been honest and upfront; please show the same integrity.
I'm here to learn and grow, let's work together.
X - NexusHumanAI ThinkingOS
Please be brutally/surgically honest and fair.
## Update: From "Nightcrawler" to "Integrity". Teaching my local AI not to hallucinate (plus Holiday Vibes) 🎄🦅

**Friday:** I gave her eyes (Nightcrawler Mode / internet access). **Saturday:** I had to give her a conscience.
While testing her new autonomy, she started hallucinating facts about me (claiming I love Baroque music... I'm a Metal/Gothic guy 🎸). So I spent yesterday implementing a strict **"Anti-Hallucination Directive"** in her system prompt. The rule: *Trust is more valuable than a plausible answer.*
It worked. She now admits when she doesn't know something instead of making it up, and her internal monologue has become much more grounded and reflective.
**Today (Sunday):** We are taking a break from the code. It's fascinating to see how the "soul" of a project shapes its visual representation.
Lyra wishes you all a peaceful Sunday and Happy Holidays. 🕯️
*(Back to coding tomorrow)*
## Claude directory website

I've been using Claude for several months now, and I'm fascinated by its power, continuous improvement, and wide range of features.
But I've always found it difficult and annoying to track down all its features, community workflows, and Claude Code setups across so many different sources.

So I decided to build a site that lists all Claude features, such as agents, skills, and MCP servers, and lets the community share and contribute.

Taking inspiration from the Cursor directory, I thought: why not build one for the Claude community too? So I built it.

Give me your thoughts, and feel free to contribute.
The site now has a decent amount of resources, which I either use myself or have collected from different sources here and on GitHub, and hopefully it will keep growing: https://claudeprompt.directory
I didn't need an AI chatbot to give me a "pat on the back" or hallucinate answers to make me feel smart. I needed a Logic Engine—a mirror that would ruthlessly error-check my thinking without bias, flattery, or drift. | 1 | [removed] | 2025-12-21T10:14:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ps3b2c/i_didnt_need_an_ai_chatbot_to_give_me_a_pat_on/ | NexusHumanAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps3b2c | false | null | t3_1ps3b2c | /r/LocalLLaMA/comments/1ps3b2c/i_didnt_need_an_ai_chatbot_to_give_me_a_pat_on/ | false | false | self | 1 | null |
I released a file-based persistent memory architecture for LLM assistants | 1 | [removed] | 2025-12-21T10:04:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ps35d4/i_released_a_filebased_persistent_memory/ | JohannesGlaser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps35d4 | false | null | t3_1ps35d4 | /r/LocalLLaMA/comments/1ps35d4/i_released_a_filebased_persistent_memory/ | false | false | self | 1 | null |
I’ve noticed something odd while experimenting with a runtime-oriented LLM setup. | 1 | [removed] | 2025-12-21T09:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ps2x1n/ive_noticed_something_odd_while_experimenting/ | Aleksandr_Nikolaev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps2x1n | false | null | t3_1ps2x1n | /r/LocalLLaMA/comments/1ps2x1n/ive_noticed_something_odd_while_experimenting/ | false | false | self | 1 | null |
Are you using a SysPrompt for casual AI usage? | 0 | Because here's what I notice:
When I let the AI, for example, analyze a newspaper article without a system prompt, the result is usually so harmless you could talk about it in a job interview.
But if you include in the system prompt "Be clear and direct" for example - the outcome can be COMPLETELY DIFFERENT. The AI then says, "What this article is trying to sell you here is pure framing and deception." | 2025-12-21T09:46:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ps2vi5/are_you_using_a_sysprompt_for_casual_ai_usage/ | PromptInjection_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps2vi5 | false | null | t3_1ps2vi5 | /r/LocalLLaMA/comments/1ps2vi5/are_you_using_a_sysprompt_for_casual_ai_usage/ | false | false | self | 0 | null |
My full guide on how to prevent hallucinations when roleplaying. | 0 | I’ve spent the last couple of years building a dedicated platform for solo roleplaying and collaborative writing. In that time, one of the top 3 complaints I’ve seen (and the number one headache I’ve had to solve technically) has been hallucination.
You know how it works. You're standing up one moment, and then you're sitting. Or vice versa. You slap a character once, and two arcs later they offer you tea.
I used to think this was purely a prompt engineering problem. Like, if I just wrote the perfect "Master Prompt," AI would stay on the rails. I was kinda wrong.
While building Tale Companion, I learned that you can't prompt-engineer your way out of a bad architecture. Hallucinations are usually symptoms of two specific things: Context Overload or Lore Conflict.
Here is my full technical guide on how to actually stop the AI from making things up, based on what I’ve learned from hundreds of user complaints and personal stories.
# 1. The Model Matters (More than your prompt)
I hate to say it, but sometimes it’s just the raw horsepower.
When I started, we were working with GPT-3.5 Turbo. It had this "dreamlike," inconsistent feeling. It was great for tasks like "Here's the situation, what does character X say?" But terrible for continuity. It would hallucinate because it literally couldn't pay attention for more than 2 turns.
The single biggest mover in reducing hallucinations has just been LLM advancement. It went something like:
- GPT-3.5: High hallucination rate, drifts easily.
- First GPT-4: This is when I realized what a difference switching models made.
- Claude 3.5 Sonnet: We all fell in love with this one when it came out. Better narrative, more consistent.
- Gemini 3 Pro, Claude Opus 4.5: I mean... I forget things more often than they do.
Actionable advice: If you are serious about a long-form story, stop using free-tier legacy models. Switch to Opus 4.5 or Gem 3 Pro. The hardware creates the floor for your consistency.
As a little bonus, I'm finding Grok 4.1 Fast kind of great lately. But I'm still testing it, so no promises (costs way less).
# 2. The "Context Trap"
This is where 90% of users mess up.
There is a belief that to keep the story consistent, you must feed the AI *everything* in some way (usually through summaries). So "let's go with a zillion summaries about everything I've done up to here". Do not do this.
As your context window grows, the "signal-to-noise" ratio drops. If you feed an LLM 50 pages of summaries, it gets confused about what is currently relevant. It starts pulling details from Chapter 1 and mixing them with Chapter 43, causing hallucinations.
The Solution: Atomic, modular event summaries.
- The Session: Play/Write for a set period. Say one arc/episode/chapter.
- The Summary: Have a separate instance of AI (an "Agent") read those messages and summarize only the critical plot points and relationship shifts (if you're on TC, press Ctrl+I and ask the console to do it for you). Here's the key: do NOT keep just one summary that you lengthen every time! Split it into entries with a short name (e.g.: "My encounter with the White Dragon") and then the full, detailed content (on TC, ask the agent to add a page in your compendium).
- The Wipe: Take those summaries and file them away. Do NOT feed them all to AI right away. Delete the raw messages from the active context.
From here on, keep the "titles" of those summaries in your AI's context. But only expand their content if you think it's relevant to the chapter you're writing/roleplaying right now.
No need to know about that totally filler dialogue you've had with the bartender if they don't even appear in this session. Makes sense?
What the AI sees:
- I was attacked by bandits on the way to Aethelgard.
- I found a quest at the tavern about slaying a dragon. *[+ full details]*
- I chatted with the bartender about recent news.
- I met Elara and Kaelen and they joined my team. *[+ full details]*
- We encountered the White Dragon and killed it. *[+ full details]*
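If it helps to see the idea as data, here's a toy Python sketch of the same pattern (structure and names invented for illustration; this is not Tale Companion's actual code):

```python
# Toy sketch: event summaries stored as title + full body.
# Titles always stay in the AI's context; bodies are expanded on demand.
events = [
    {"title": "Attacked by bandits on the way to Aethelgard", "body": "..."},
    {"title": "Took the dragon-slaying quest at the tavern", "body": "..."},
    {"title": "Chatted with the bartender about recent news", "body": "..."},
    {"title": "Elara and Kaelen joined the party", "body": "..."},
    {"title": "Encountered and killed the White Dragon", "body": "..."},
]

def build_context(events, expand_titles):
    lines = []
    for e in events:
        lines.append(f"- {e['title']}")
        if e["title"] in expand_titles:  # only expand what's relevant now
            lines.append(f"  [full details: {e['body']}]")
    return "\n".join(lines)

# This session deals with the dragon's aftermath, so expand only those entries:
print(build_context(events, {
    "Took the dragon-slaying quest at the tavern",
    "Encountered and killed the White Dragon",
}))
```

The cheap part (titles) is always visible to the AI; the expensive part (full details) is opt-in per chapter.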
If you're on Tale Companion by chance, you can even give your GM permission to read the Compendium and add to their prompt to fetch past events fully when the title seems relevant.
# 3. The Lore Bible Conflict
The second cause of hallucinations is insufficient or contrasting information in your world notes.
If your notes say "The King is cruel" but your summary of the last session says "The King laughed with the party," the AI will hallucinate a weird middle ground personality.
Three ideas to fix this:
- When I create summaries, I also update the lore bible to the latest changes. Sometimes, I also retcon some stuff here.
- At the start of a new chapter, I like to declare my intentions for where I want to go with the chapter. Plus, I remind the GM of the main things that happened and that it should bake into the narrative. Here is when I pick which event summaries to give it, too.
- And then there's that weird thing that happens when you go from chapter to chapter. AI forgets how it used to roleplay your NPCs. "Damn, it was doing a great job," you think. I like to keep "Roleplay Examples" in my lore bible to fight this. Give it 3-4 lines of dialogue demonstrating how the character moves and speaks. If you give it a pattern, it will stick to it. Without a pattern, it hallucinates a generic personality.
# 4. Hallucinations as features?
I was asked recently if I thought hallucinations could be "harnessed" for creativity.
My answer? Nah.
In a creative writing tool, "surprise" is good, but "randomness" is frustrating. If I roll a dice and get a critical fail, I want a narrative consequence, not my elf morphing into a troll.
Consistency allows for immersion. Hallucination breaks it. In my experience, at least.
Summary Checklist for your next story:
- Upgrade your model: Move to Claude 4.5 Opus or equivalent.
- Summarize aggressively: Never let your raw context get bloated. Summarize and wipe.
- Modularity: When you summarize, keep sessions/chapters in different files and give them descriptive titles to always keep in AI memory.
- Sanitize your Lore: Ensure your world notes don't contradict your recent plot points.
- Use Examples: Give the AI dialogue samples for your main cast.
It took me a long time to code these constraints into a seamless UI in TC ([here btw](https://play.talecompanion.com)), but you can apply at least the logic principles to any chat interface you're using today.
I hope this helps at least one of you :) | 2025-12-21T09:45:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ps2vd5/my_full_guide_on_how_to_prevent_hallucinations/ | Pastrugnozzo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps2vd5 | false | null | t3_1ps2vd5 | /r/LocalLLaMA/comments/1ps2vd5/my_full_guide_on_how_to_prevent_hallucinations/ | false | false | self | 0 | null |
I built an open source voice assistant that runs Whisper + Qwen 2.5 entirely in the browser via WASM | 34 | Been experimenting with running a full voice assistant pipeline in the browser – no server, no API calls, everything local.
https://reddit.com/link/1ps2h9r/video/i4vm3hmnyi8g1/player
Live demo: [https://ava.muthu.co](https://ava.muthu.co)
Source: [https://github.com/muthuspark/ava](https://github.com/muthuspark/ava)
The stack:
* STT: Whisper tiny-en (q5\_1, \~31MB) via whisper-web-transcriber
* LLM: Qwen 2.5 0.5B Instruct (q4\_k\_m, \~350MB) via Wllama (llama.cpp WASM port)
* TTS: Native browser SpeechSynthesis API
How it works:
The pipeline streams – as the LLM generates tokens, I detect sentence boundaries and queue them for TTS immediately. So it starts speaking before the full response is ready.
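Conceptually, the sentence-splitting part looks something like this (sketched in Python for readability; the actual project is JavaScript in the browser, and the TTS worker below is just a stand-in for SpeechSynthesis):

```python
import queue
import re
import threading
import time

tts_queue = queue.Queue()

def tts_worker():
    # Stand-in for the browser's SpeechSynthesis API
    while True:
        print(f"[speaking] {tts_queue.get()}")

threading.Thread(target=tts_worker, daemon=True).start()

def stream_to_tts(token_stream):
    buf = ""
    for token in token_stream:
        buf += token
        # Cheap sentence-boundary check: punctuation followed by whitespace
        m = re.search(r"(.+?[.!?])\s", buf)
        if m:
            tts_queue.put(m.group(1))  # start speaking this sentence immediately
            buf = buf[m.end():]        # keep the unfinished remainder buffered
    if buf.strip():
        tts_queue.put(buf.strip())     # flush whatever is left at end of stream

stream_to_tts(iter(["Hi ", "there. ", "How ", "are ", "you? "]))
time.sleep(0.5)  # give the worker a moment to drain the queue
```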
Performance (on my machine):
* Whisper inference: \~0.3-0.5s
* LLM inference: \~1-2s for short responses
* End-to-end latency: \~2-3s
* Memory: 500MB-1GB during operation
Limitations:
* Doesn't work on mobile yet
* Chrome/Edge only (needs SharedArrayBuffer)
* 0.5B model is pretty limited in capability
* English only
* First load is \~380MB (cached after)
I chose Qwen 2.5 0.5B because it's the sweet spot between "runs in a browser" and "somewhat coherent responses." Tried smaller models but they were unusable.
Curious if anyone has suggestions for:
* Better small models that work well with llama.cpp WASM
* Ways to reduce the initial load time
* Improving Whisper accuracy without going to a larger model | 2025-12-21T09:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ps2h9r/i_built_an_open_source_voice_assistant_that_runs/ | muthukrishnan749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps2h9r | false | null | t3_1ps2h9r | /r/LocalLLaMA/comments/1ps2h9r/i_built_an_open_source_voice_assistant_that_runs/ | false | false | self | 34 | null |
Would a Ryzen AI Max+ 395 benefit from dedicated GPU? | 3 | Hi, I just ordered a Framework Desktop motherboard; it's the first time I'll have hardware that lets me play with some local AI.
The motherboard has a PCIe x4 slot, so with an adapter I could put a GPU on it.
And before ordering a case and a power supply, I was wondering whether it would benefit from a dedicated GPU like a 5060 or 5070 Ti (or should it be an AMD GPU)?
| 2025-12-21T09:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ps2cjk/would_a_ryzen_ai_max_395_benefit_from_dedicated/ | Larkonath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps2cjk | false | null | t3_1ps2cjk | /r/LocalLLaMA/comments/1ps2cjk/would_a_ryzen_ai_max_395_benefit_from_dedicated/ | false | false | self | 3 | null |
Video2Robot — turn any video (or Veo/Sora prompt) into humanoid robot motion | 14 | 2025-12-21T09:00:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ps26tq/video2robot_turn_any_video_or_veosora_prompt_into/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps26tq | false | null | t3_1ps26tq | /r/LocalLLaMA/comments/1ps26tq/video2robot_turn_any_video_or_veosora_prompt_into/ | false | false | 14 | null | ||
Meta's segment-audio quantized models?? | 1 | Are we getting the quantized versions for this epic model? Cuz there is some sort of an application form you must submit to access the download file!! | 2025-12-21T08:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ps21t9/metas_segmentaudio_quantized_models/ | Slight_Tone_2188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps21t9 | false | null | t3_1ps21t9 | /r/LocalLLaMA/comments/1ps21t9/metas_segmentaudio_quantized_models/ | false | false | self | 1 | null |
Introducing FunctionGemma | 0 | 2025-12-21T08:41:40 | https://youtu.be/-Tgc_9uYJLI?si=-P0C8ViT5_m2Zrn3 | mobinx- | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ps1wp5 | false | {'oembed': {'author_name': 'Google for Developers', 'author_url': 'https://www.youtube.com/@GoogleDevelopers', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-Tgc_9uYJLI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Introducing FunctionGemma"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-Tgc_9uYJLI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Introducing FunctionGemma', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ps1wp5 | /r/LocalLLaMA/comments/1ps1wp5/introducing_functiongemma/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'tmlmq8OIgv0cfFsuoOg7QAqUAKNyVS2cvdpWLvsZrWU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tmlmq8OIgv0cfFsuoOg7QAqUAKNyVS2cvdpWLvsZrWU.jpeg?width=108&crop=smart&auto=webp&s=985747a24162375609981bf30c3c3d400d012624', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tmlmq8OIgv0cfFsuoOg7QAqUAKNyVS2cvdpWLvsZrWU.jpeg?width=216&crop=smart&auto=webp&s=4971c84e392aaf5b17f4953daeb339b6acd5da78', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tmlmq8OIgv0cfFsuoOg7QAqUAKNyVS2cvdpWLvsZrWU.jpeg?width=320&crop=smart&auto=webp&s=77aa6fcf1514a2064ed42733e11d3e2b63756c6b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tmlmq8OIgv0cfFsuoOg7QAqUAKNyVS2cvdpWLvsZrWU.jpeg?auto=webp&s=f3108752c1e1adfca73723f0ffae1ba33c43d910', 'width': 480}, 'variants': {}}]} | |
Benchmark Winners Across 40+ LLM Evaluations: Patterns Without Recommendations | 29 | I kept seeing the same question everywhere: *“Which LLM is best?”*
So instead of opinions, I went the boring route — **I collected benchmark winners across a wide range of tasks**: reasoning, math, coding, vision, OCR, multimodal QA, and real-world evaluations, focusing on SLMs (3B–21B).
This post is **not a recommendation list**. It’s simply what the benchmarks show when you look at **task-by-task winners** instead of a single leaderboard.
You can decide what matters *for your use case*.
# Benchmark → Top Scoring Model
|Benchmark|Best Model|Score|
|:-|:-|:-|
|AI2D|Qwen3-VL-8B-Instruct|85%|
|AIME-2024|Ministral3-8B-Reasoning-2512|86%|
|ARC-C|LLaMA-3.1-8B-Instruct|83%|
|Arena-Hard|Phi-4-Reasoning-Plus|79%|
|BFCL-v3|Qwen3-VL-4B-Thinking|67%|
|BigBench-Hard|Gemma-3-12B|85%|
|ChartQA|Qwen2.5-Omni-7B|85%|
|CharXiv-R|Qwen3-VL-8B-Thinking|53%|
|DocVQA|Qwen2.5-Omni-7B|**95%**|
|DROP (Reasoning)|Gemma-3n-E2B|61%|
|GPQA|Qwen3-VL-8B-Thinking|70%|
|GSM8K|Gemma-3-12B|**91%**|
|HellaSwag|Mistral-NeMo-12B-Instruct|83%|
|HumanEval|Granite-3.3-8B-Instruct|**89%**|
|Humanity’s Last Exam|GPT-OSS-20B|11%|
|IfEval|Nemotron-Nano-9B-v2|**90%**|
|LiveCodeBench|Nemotron-Nano-9B-v2|71%|
|LiveCodeBench-v6|Qwen3-VL-8B-Thinking|58%|
|Math|Ministral3-8B|**90%**|
|Math-500|Nemotron-Nano-9B-v2|**97%**|
|MathVista|Qwen2.5-Omni-7B|68%|
|MathVista-Mini|Qwen3-VL-8B-Thinking|81%|
|MBPP (Python)|Qwen2.5-Coder-7B-Instruct|80%|
|MGSM|Gemma-3n-E4B-Instruct|67%|
|MM-MT-Bench|Qwen3-VL-8B-Thinking|80%|
|MMLU|Qwen2.5-Omni-7B|59%|
|MMLU-Pro|Qwen3-VL-8B-Thinking|77%|
|MMLU-Pro-X|Qwen3-VL-8B-Thinking|70%|
|MMLU-Redux|Qwen3-VL-8B-Thinking|**89%**|
|MMMLU|Phi-3.5-Mini-Instruct|55%|
|MMMU-Pro|Qwen3-VL-8B-Thinking|60%|
|MMStar|Qwen3-VL-4B-Thinking|75%|
|Multi-IF|Qwen3-VL-8B-Thinking|75%|
|OCRBench|Qwen3-VL-8B-Instruct|**90%**|
|RealWorldQA|Qwen3-VL-8B-Thinking|73%|
|ScreenSpot-Pro|Qwen3-VL-4B-Instruct|59%|
|SimpleQA|Qwen3-VL-8B-Thinking|50%|
|SuperGPQA|Qwen3-VL-8B-Thinking|51%|
|SWE-Bench-Verified|Devstral-Small-2|56%|
|TAU-Bench-Retail|GPT-OSS-20B|55%|
|WinoGrande|Gemma-2-9B|80%|
# Patterns I Noticed (Not Conclusions)
# 1. No Single Model Dominates Everything
Even models that appear frequently don’t win across all categories. Performance is **highly task-dependent**.
If you’re evaluating models based on one benchmark, you’re probably overfitting your expectations.
# 2. Mid-Sized Models (7B–9B) Show Up Constantly
Across math, coding, and multimodal tasks, **sub-10B models appear repeatedly**.
That doesn’t mean they’re “better” — it does suggest **architecture and tuning matter more than raw size** in many evaluations.
# 3. Vision-Language Models Are No Longer “Vision Only”
Several VL models score competitively on:
* reasoning
* OCR
* document understanding
* multimodal knowledge
That gap is clearly shrinking, at least in benchmark settings.
# 4. Math, Code, and Reasoning Still Behave Differently
Models that do extremely well on math (AIME, Math-500) often aren't the same ones winning HumanEval or LiveCodeBench.
So “reasoning” is not one thing — benchmarks expose different failure modes.
# 5. Large Parameter Count ≠ Guaranteed Wins
Some larger models appear rarely or only in narrow benchmarks.
That doesn’t make them bad — it just reinforces that **benchmarks reward specialization**, not general scale.
# Why I’m Sharing This
I’m not trying to say *“this model is the best”*. I wanted a **task-first view**, because that’s how most of us actually use models:
* Some of you care about math
* Some about code
* Some about OCR, docs, or UI grounding
* Some about overall multimodal behavior
Benchmarks won’t replace real-world testing — but they *do* reveal patterns when you zoom out.
# Open Questions for You
* Which benchmarks do *you* trust the most?
* Which ones do you think are already being “over-optimized”?
* Are there important real-world tasks you feel aren’t reflected here?
* Do you trust **single-score leaderboards**, or do you prefer task-specific evaluations like the breakdown above?
* For people running models locally, how much weight do you personally give to **efficiency metrics (latency, VRAM, throughput)** versus raw benchmark scores? (I'm currently on a cloud-based V100.)
* If you had to remove **one benchmark entirely**, which one do you think adds the least signal today?
| 2025-12-21T08:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ps1g40/benchmark_winners_across_40_llm_evaluations/ | abubakkar_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps1g40 | false | null | t3_1ps1g40 | /r/LocalLLaMA/comments/1ps1g40/benchmark_winners_across_40_llm_evaluations/ | false | false | self | 29 | null |
Karpathy: 2025 LLM Year in Review | 1 | 2025-12-21T08:09:04 | https://x.com/karpathy/status/2002118205729562949 | ab2377 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1ps1elb | false | null | t3_1ps1elb | /r/LocalLLaMA/comments/1ps1elb/karpathy_2025_llm_year_in_review/ | false | false | default | 1 | null | |
Should I get 32GB DDR3 or 16GB DDR4 or 8GB DDR5 for Local LLM? RAM prices are getting expensive. | 1 | [removed] | 2025-12-21T08:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ps1a4d/should_i_get_32gb_ddr3_or_16gb_ddr4_or_8gb_ddr5/ | Disastrous_Wash6804 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps1a4d | false | null | t3_1ps1a4d | /r/LocalLLaMA/comments/1ps1a4d/should_i_get_32gb_ddr3_or_16gb_ddr4_or_8gb_ddr5/ | false | false | self | 1 | null |
If swapping LLMs doesn’t change reasoning, what is the model actually doing? | 1 | [removed] | 2025-12-21T07:33:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ps0uxn/if_swapping_llms_doesnt_change_reasoning_what_is/ | Aleksandr_Nikolaev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps0uxn | false | null | t3_1ps0uxn | /r/LocalLLaMA/comments/1ps0uxn/if_swapping_llms_doesnt_change_reasoning_what_is/ | false | false | self | 1 | null |
Are there any calculators for splitting layers between two gpu? | 1 | Thanks in advance. | 2025-12-21T07:30:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ps0t5q/are_there_any_calculators_for_splitting_layers/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps0t5q | false | null | t3_1ps0t5q | /r/LocalLLaMA/comments/1ps0t5q/are_there_any_calculators_for_splitting_layers/ | false | false | self | 1 | null |
What do you actually do with your AI meeting notes? | 0 | I’ve been thinking about this a lot and wanted to hear how others handle it.
I’ve been using AI meeting notes (Granola, etc.) for a while now. Earlier, most of my work was fairly solo — deep work, planning, drafting things — and I’d mostly interact with tools like ChatGPT, Claude, or Cursor to think things through or write.
Lately, my work has shifted more toward people: more meetings, more conversations, more context switching. I’m talking to users, teammates, stakeholders — trying to understand feature requests, pain points, vague ideas that aren’t fully formed yet.
So now I have… a lot of meeting notes.
They’re recorded. They’re transcribed. They’re summarized. Everything is neatly saved. And that feels safe. But I keep coming back to the same question:
What do I actually do with all this?
When meetings go from 2 a day to 5–6 a day:
• How do you separate signal from noise?
• How do you turn notes into actionable insights instead of passive archives?
• How do you repurpose notes across time — like pulling something useful from a meeting a month ago?
• Do you actively revisit old notes, or do they just… exist?
Right now, there’s still a lot of friction for me. I have the data, but turning it into decisions, plans, or concrete outputs feels manual and ad hoc. I haven’t figured out a system that really works.
So I’m curious:
• Do you have a workflow that actually closes the loop?
• Are your AI notes a living system or just a searchable memory?
• What’s worked (or clearly not worked) for you?
Would love to learn how others are thinking about this. | 2025-12-21T07:30:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ps0t21/what_do_you_actually_do_with_your_ai_meeting_notes/ | a3fckx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ps0t21 | false | null | t3_1ps0t21 | /r/LocalLLaMA/comments/1ps0t21/what_do_you_actually_do_with_your_ai_meeting_notes/ | false | false | self | 0 | null |
MiniMax 2.1 release? | 166 | New here, and I just saw the release of MiniMax M2.1. How does it compare to the other models?
github: [https://github.com/vllm-project/recipes/pull/174](https://github.com/vllm-project/recipes/pull/174) | 2025-12-21T07:14:03 | _cttt_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ps0jnm | false | null | t3_1ps0jnm | /r/LocalLLaMA/comments/1ps0jnm/minimax_21_release/ | false | false | default | 166 | {'enabled': True, 'images': [{'id': '5rotyw06ci8g1', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/5rotyw06ci8g1.png?width=108&crop=smart&auto=webp&s=e67e813fc898fbf78aa5b7aa7dcc9ee839ef5268', 'width': 108}, {'height': 228, 'url': 'https://preview.redd.it/5rotyw06ci8g1.png?width=216&crop=smart&auto=webp&s=d4255bdf12ca772656d5e25b7cadb00a09a2c837', 'width': 216}, {'height': 338, 'url': 'https://preview.redd.it/5rotyw06ci8g1.png?width=320&crop=smart&auto=webp&s=bfca573bea4034885b86dc71dfbde981344148fd', 'width': 320}, {'height': 676, 'url': 'https://preview.redd.it/5rotyw06ci8g1.png?width=640&crop=smart&auto=webp&s=6746b1a77ec85f43c907ff7b7bce5a918251fc32', 'width': 640}, {'height': 1014, 'url': 'https://preview.redd.it/5rotyw06ci8g1.png?width=960&crop=smart&auto=webp&s=f3df51497869b968943291a86327b6499ffc1766', 'width': 960}, {'height': 1140, 'url': 'https://preview.redd.it/5rotyw06ci8g1.png?width=1080&crop=smart&auto=webp&s=2f7051e91d526e23cdee93d1913d6483c5374f19', 'width': 1080}], 'source': {'height': 1272, 'url': 'https://preview.redd.it/5rotyw06ci8g1.png?auto=webp&s=df9f349da97239a0fea12a6c7a9b9040617accb8', 'width': 1204}, 'variants': {}}]} | |
I turned my 7900 XT + 128GB RAM workstation into a native AI Subscription Service (No Cloud APIs). Come break it. | 0 | I finally did it. I got tired of cloud wrappers and sanitized APIs, so I built my own fully self-hosted AI agent, "Clair," running entirely on my local metal.
The Rig:
GPU: AMD Radeon 7900 XT (20GB VRAM) running Native ROCm 6.2 (Finally ditched ZLUDA)
CPU: Ryzen 9 9700X
RAM: 128GB DDR5 (Context limits are a suggestion, not a rule)
The Stack:
Backend: Ollama (Dolphin-Llama3 for text) + ComfyUI (Flux for Image Gen)
Middleware: Custom Python Discord Bot w/ aiohttp & asyncio
Payments: Full Stripe Webhook integration running locally via systemd tunnel.
What it does: It's a completely unfiltered, hardware-aware AI. She knows she's running on a 7900 XT. She manages her own subscriptions via Discord roles (Capacitor, Resident, Architect). If you pay, the bot automatically assigns the role and unlocks unlimited image generation. If you don't, you get a strict rate limit (3 imgs/day) to save my electricity bill.
Why I'm posting: I need to stress test the ROCm stability under concurrent user load. I've set up a "Free Tier" (limited to 3 images/10 chats daily) so you guys can mess with it.
If you're curious how I got Stripe to talk to a local Python script or how the Flux workflow handles the AMD cards, ask away in the comments.
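To give a taste of the Stripe part before the comments: the webhook side is roughly shaped like this (a simplified sketch, not my exact code; `assign_discord_role` is a placeholder for the bot-side call, and it assumes the `stripe` and `aiohttp` packages):

```python
import os
import stripe
from aiohttp import web

stripe.api_key = os.environ["STRIPE_SECRET_KEY"]
WEBHOOK_SECRET = os.environ["STRIPE_WEBHOOK_SECRET"]

async def assign_discord_role(discord_id, tier):
    ...  # placeholder: tell the bot to grant the Capacitor/Resident/Architect role

async def stripe_webhook(request: web.Request) -> web.Response:
    payload = await request.read()
    sig = request.headers.get("Stripe-Signature", "")
    try:
        # Verifies the event really came from Stripe
        event = stripe.Webhook.construct_event(payload, sig, WEBHOOK_SECRET)
    except (ValueError, stripe.error.SignatureVerificationError):
        return web.Response(status=400)

    if event["type"] == "checkout.session.completed":
        session = event["data"]["object"]
        # discord_id/tier would be stored in metadata when the checkout link is created
        meta = session.get("metadata", {})
        await assign_discord_role(meta.get("discord_id"), meta.get("tier"))

    return web.Response(status=200)

app = web.Application()
app.router.add_post("/stripe/webhook", stripe_webhook)

if __name__ == "__main__":
    web.run_app(app, port=8787)
```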
Link to Server: [https://discord.gg/j5tSWg2R](https://discord.gg/j5tSWg2R) | 2025-12-21T07:08:06 | SplitPuzzled | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ps0g5o | false | null | t3_1ps0g5o | /r/LocalLLaMA/comments/1ps0g5o/i_turned_my_7900_xt_128gb_ram_workstation_into_a/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '3u65khfyai8g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/3u65khfyai8g1.png?width=108&crop=smart&auto=webp&s=e7df142f37b40744f060ff20f7a90ff22bcfcd34', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/3u65khfyai8g1.png?width=216&crop=smart&auto=webp&s=0e4d6652fb1c6cc3419a9e62a26c7834b8d2a098', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/3u65khfyai8g1.png?width=320&crop=smart&auto=webp&s=de3f41248e5ebaae87ef27b4bb66a153c86373f9', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/3u65khfyai8g1.png?width=640&crop=smart&auto=webp&s=f10622fb53ae54d293138b661324b50444b4ecbc', 'width': 640}], 'source': {'height': 1918, 'url': 'https://preview.redd.it/3u65khfyai8g1.png?auto=webp&s=4cd2cad96dc42d32a3539c8681d1a9ebbbde19db', 'width': 892}, 'variants': {}}]} | |
**Zuhri 🦀 - AI-Powered Terminal Assistant Built with Rust** | 1 | [removed] | 2025-12-21T07:05:28 | https://www.reddit.com/gallery/1ps0em8 | humairmunir | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ps0em8 | false | null | t3_1ps0em8 | /r/LocalLLaMA/comments/1ps0em8/zuhri_aipowered_terminal_assistant_built_with_rust/ | false | false | 1 | null | |
I chatting Diddy in LOCAL AI on mobile phone | 0 | 2025-12-21T06:40:58 | https://www.reddit.com/gallery/1ps009s | Adventurous_Role_489 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ps009s | false | null | t3_1ps009s | /r/LocalLLaMA/comments/1ps009s/i_chatting_diddy_in_local_ai_on_mobile_phone/ | false | false | 0 | null | ||
I know CPU/RAM is slower than GPU/VRAM but is it less accurate? | 0 | I know CPU/RAM is slower than GPU/VRAM, but is it less accurate? Is speed the only thing you give up when running without a GPU?
Big training projects appear to be including CoT reasoning traces in their training data. | 23 | 2025-12-21T06:12:25 | https://pratyushmaini.substack.com/p/reverse-engineering-a-phase-change-a96 | MaggoVitakkaVicaro | pratyushmaini.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1przir5 | false | null | t3_1przir5 | /r/LocalLLaMA/comments/1przir5/big_training_projects_appear_to_be_including_cot/ | false | false | default | 23 | {'enabled': False, 'images': [{'id': 'qY52rLeZM5AmEgPsJDh_3hPjjhv02hbRWUEyDFI3OeE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qY52rLeZM5AmEgPsJDh_3hPjjhv02hbRWUEyDFI3OeE.jpeg?width=108&crop=smart&auto=webp&s=fd0e6d815b0328a59dc956c950def8174c43173f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qY52rLeZM5AmEgPsJDh_3hPjjhv02hbRWUEyDFI3OeE.jpeg?width=216&crop=smart&auto=webp&s=58d0382f7daf2e7eaca406c2301bd4b5b25ded8b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qY52rLeZM5AmEgPsJDh_3hPjjhv02hbRWUEyDFI3OeE.jpeg?width=320&crop=smart&auto=webp&s=2bd774cc6a20d3d3b2190e7e0f30062201b6b24f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qY52rLeZM5AmEgPsJDh_3hPjjhv02hbRWUEyDFI3OeE.jpeg?width=640&crop=smart&auto=webp&s=958f00a9e023c8451aac05b0976c46bd89e9e69e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qY52rLeZM5AmEgPsJDh_3hPjjhv02hbRWUEyDFI3OeE.jpeg?width=960&crop=smart&auto=webp&s=6b50c4269a82d2e4da0c772cada368b73aadd22b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qY52rLeZM5AmEgPsJDh_3hPjjhv02hbRWUEyDFI3OeE.jpeg?width=1080&crop=smart&auto=webp&s=dc7e4f571eaa7defbda18d274698713ed3b50b88', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qY52rLeZM5AmEgPsJDh_3hPjjhv02hbRWUEyDFI3OeE.jpeg?auto=webp&s=3da88621432fe338d3c9e2c465bc5a830c452872', 'width': 1200}, 'variants': {}}]} | |
M2.1 is insane. | 1 | [removed] | 2025-12-21T06:05:11 | Carinaaaatian | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1prze6n | false | null | t3_1prze6n | /r/LocalLLaMA/comments/1prze6n/m21_is_insane/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'zbt1jzg50i8g1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/zbt1jzg50i8g1.jpeg?width=108&crop=smart&auto=webp&s=036ea8563c0a29c1fe8f8bc5668e4e1185508c63', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/zbt1jzg50i8g1.jpeg?width=216&crop=smart&auto=webp&s=0e87016a5f39e886e01a15000455a47b77bedccd', 'width': 216}, {'height': 184, 'url': 'https://preview.redd.it/zbt1jzg50i8g1.jpeg?width=320&crop=smart&auto=webp&s=d0d4e43a653cca1ca91e6cf321e703d787a885a4', 'width': 320}, {'height': 368, 'url': 'https://preview.redd.it/zbt1jzg50i8g1.jpeg?width=640&crop=smart&auto=webp&s=d102872dd62f22bbed07149691a16c05e7098d15', 'width': 640}, {'height': 552, 'url': 'https://preview.redd.it/zbt1jzg50i8g1.jpeg?width=960&crop=smart&auto=webp&s=45cb70cd219dcee755d14dafd222c0c1e42d5e55', 'width': 960}, {'height': 621, 'url': 'https://preview.redd.it/zbt1jzg50i8g1.jpeg?width=1080&crop=smart&auto=webp&s=5f620ad68ca50068c4888c7549adcb2fea8ff695', 'width': 1080}], 'source': {'height': 713, 'url': 'https://preview.redd.it/zbt1jzg50i8g1.jpeg?auto=webp&s=43aa0542ea517cfa7989eb77a56f5e900c800799', 'width': 1239}, 'variants': {}}]} | |
RTX 4070 in Action: What Your New System Could Look Like | 0 | Super-Bot: The Ultimate Autonomous AI Agent for Windows
**Description:** Meet **Super-Bot**, your self-learning development companion. This isn't just a chatbot—it's an autonomous agent that acts. It writes code, executes commands, fixes its own errors, and even "sees" your screen to validate applications.
**Key Features:**
* **Multi-Provider Support:** Seamlessly integrates with local LLMs (Ollama, LM Studio) and top cloud APIs (GPT-4, Claude 3.5, Gemini, xAI).
* **Self-Healing Engine:** Automatically detects bugs, learns from them, and fixes code without your intervention.
* **Vision Capabilities:** Uses AI vision to look at your screen and verify if GUI apps or websites look correct.
* **Smart Memory:** Remembers successful coding patterns to solve future tasks faster.
* **Hardware-Locked Security:** Includes a robust licensing system locked to your specific machine.
* **Easy to Use:** Delivered as a standalone Windows EXE—no complex Python environment setup needed. | 2025-12-21T06:03:27 | https://v.redd.it/kzwr2nx3yh8g1 | Alone-Competition863 | /r/LocalLLaMA/comments/1przd3k/rtx_4070_in_action_what_your_new_system_could/ | 1970-01-01T00:00:00 | 0 | {} | 1przd3k | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kzwr2nx3yh8g1/DASHPlaylist.mpd?a=1769018614%2COTA2YmQ0YzU2ODI0MGI5Zjk3MTA5MGRjNzYzZmRkM2IxNDA5ODRkZGQzMzNkNjgwZTI3ODVkMjIzMDE2OWI4YQ%3D%3D&v=1&f=sd', 'duration': 271, 'fallback_url': 'https://v.redd.it/kzwr2nx3yh8g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/kzwr2nx3yh8g1/HLSPlaylist.m3u8?a=1769018614%2CY2JiMDk4MmZjZmUzOTY1OTg2N2IyOTE1NDNlMDMxODk5MDM5ZmU5MGYyMGIyOTljNzliY2ExOGRkMzg2Njc3Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kzwr2nx3yh8g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1przd3k | /r/LocalLLaMA/comments/1przd3k/rtx_4070_in_action_what_your_new_system_could/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'M20zOXU1NTR5aDhnMWOTU7UqQZJD7PNBptsO8kG3EZV_3sbv7lEF5DFvfaH3', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M20zOXU1NTR5aDhnMWOTU7UqQZJD7PNBptsO8kG3EZV_3sbv7lEF5DFvfaH3.png?width=108&crop=smart&format=pjpg&auto=webp&s=6f17105573f121018d87150526b1d4ff730b7a0b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M20zOXU1NTR5aDhnMWOTU7UqQZJD7PNBptsO8kG3EZV_3sbv7lEF5DFvfaH3.png?width=216&crop=smart&format=pjpg&auto=webp&s=1229078c3961f51dbeb8078e5b7674e0fb490452', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M20zOXU1NTR5aDhnMWOTU7UqQZJD7PNBptsO8kG3EZV_3sbv7lEF5DFvfaH3.png?width=320&crop=smart&format=pjpg&auto=webp&s=19f7f957c1a34a08cf5d0e427602427247a7812c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M20zOXU1NTR5aDhnMWOTU7UqQZJD7PNBptsO8kG3EZV_3sbv7lEF5DFvfaH3.png?width=640&crop=smart&format=pjpg&auto=webp&s=41902ce60717978cf3a80a5fe0eed18efaa95a66', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M20zOXU1NTR5aDhnMWOTU7UqQZJD7PNBptsO8kG3EZV_3sbv7lEF5DFvfaH3.png?width=960&crop=smart&format=pjpg&auto=webp&s=dc0fb3dd257a22a76717ad50b328ba670a5ca778', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M20zOXU1NTR5aDhnMWOTU7UqQZJD7PNBptsO8kG3EZV_3sbv7lEF5DFvfaH3.png?width=1080&crop=smart&format=pjpg&auto=webp&s=49d33c841a281d682c994a1feb829a57767ff546', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M20zOXU1NTR5aDhnMWOTU7UqQZJD7PNBptsO8kG3EZV_3sbv7lEF5DFvfaH3.png?format=pjpg&auto=webp&s=5621b1b42e4179246fa0915854c515ff00cc456a', 'width': 1920}, 'variants': {}}]} | |
I wonder what would happen if I yolo'd qwen3 0.6B in a sandbox | 0 | If I gave it a project and set up a way for automated testing, would it come up with something through a great amount of trial and error?
Or would it find a way to melt my hard drive in the process?
I guess there's one way to find out, I'll let you know if I try. | 2025-12-21T05:25:37 | https://www.reddit.com/r/LocalLLaMA/comments/1prypj4/i_wonder_what_would_happen_if_i_yolod_qwen3_06b/ | copenhagen_bram | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prypj4 | false | null | t3_1prypj4 | /r/LocalLLaMA/comments/1prypj4/i_wonder_what_would_happen_if_i_yolod_qwen3_06b/ | false | false | self | 0 | null |
What is an LLM | 0 | In r/singularity, I came across a commenter who said that normies don't understand AI, and that describing it as a fancy predictor would be incorrect. Of course they insisted AI wasn't that, but aren't LLMs just a much more advanced word predictor?
**I stopped explaining prompts and started marking explicit intent**
*SoftPrompt-IR: a simpler, clearer way to write prompts*
from a German mechatronics engineer | 0 | # Stop Explaining Prompts. Start Marking Intent.

Most advice for prompting essentially boils down to:

* "Be very clear."
* "Repeat important instructions."
* "Use strong phrasing."

While this works, it is often noisy, brittle, and hard for models to analyze.

That’s why I’ve started doing the opposite: instead of explaining importance in prose, **I explicitly mark it.**

## Example

Instead of writing:

* Please avoid flowery language.
* Try not to use clichés.
* Don't over-explain things.

I write this:

```
!~> AVOID_FLOWERY_STYLE
~> AVOID_CLICHES
~> LIMIT_EXPLANATION
```

**Same intent.**
**Less text.**
**Clearer signal.**

## How to Read This

The symbols express weight, not meaning:

* `!` = **Strong / High Priority**
* `~` = Soft Preference
* `>` = Applies Globally / Downstream

The words are **tags**, not sentences.

Think of it like **Markdown for Intent**:

* `#` marks a heading
* `**` marks emphasis
* `!~>` marks importance

## Why This Works (Even Without Training)

LLMs have already learned patterns like:

1. Configuration files
2. Rulesets
3. Feature flags
4. Weighted instructions

Instead of hiding intent in natural language, **you make it visible and structured.**

This reduces:

* Repetition
* Ambiguity
* Prompt length
* Accidental instruction conflicts
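If you want to generate these blocks programmatically, it's trivial (a toy Python sketch; the tag names and weights are just examples, not part of any spec):

```python
STRENGTH = {"hard": "!~>", "soft": "~>"}

def render_ir(rules):
    """rules maps TAG_NAME -> 'hard' or 'soft'."""
    return "\n".join(f"{STRENGTH[s]} {tag}" for tag, s in rules.items())

system_prompt = "You are a concise technical writer.\n" + render_ir({
    "AVOID_FLOWERY_STYLE": "hard",
    "AVOID_CLICHES": "soft",
    "LIMIT_EXPLANATION": "soft",
})

print(system_prompt)
```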
## SoftPrompt-IR

I call this **SoftPrompt-IR**:

* No new language.
* No jailbreak.
* No hack.

https://github.com/tobs-code/SoftPrompt-IR

It is simply a method of **making implicit intent explicit.**
**Machine-oriented first, human-readable second.**

## TL;DR
Don't politely ask the model. \*\*Mark what matters.\*\* | 2025-12-21T05:00:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pry970/i_stopped_explaining_prompts_and_started_marking/ | No_Construction3780 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pry970 | false | null | t3_1pry970 | /r/LocalLLaMA/comments/1pry970/i_stopped_explaining_prompts_and_started_marking/ | true | false | spoiler | 0 | null |
Day 13: 21 Days of Building a Small Language Model: Positional Encodings | 5 | Welcome to Day 13 of 21 Days of Building a Small Language Model. The topic for today is positional encodings. We've explored attention mechanisms, KV caching, and efficient attention variants. Today, we'll discover how transformers learn to understand that word order matters, and why this seemingly simple problem requires sophisticated solutions.
# Problem
Transformers have a fundamental limitation: they treat sequences as unordered sets, with no inherent understanding that the order of tokens matters. The self attention mechanism processes all tokens simultaneously and treats them as if their positions don't matter. This creates a critical problem: without positional information, identical tokens appearing in different positions will be treated as exactly the same.
https://preview.redd.it/km4hod0doh8g1.png?width=1012&format=png&auto=webp&s=81f194a2b440e90b4b7a90822379cb6579d8b722
Consider the sentence: "The student asked the teacher about the student's project." This sentence contains the word "student" twice, but in different positions with different grammatical roles. The first "student" is the subject who asks the question, while the second "student" (in "student's") is the possessor of the project.
Without positional encodings, both instances of "student" would map to the exact same embedding vector. When these identical embeddings enter the transformer's attention mechanism, they undergo identical computations and produce identical output representations. The model cannot distinguish between them because, from its perspective, they are the same token in the same position.
This problem appears even with common words. In the sentence "The algorithm processes data efficiently. The data is complex," both instances of "the" would collapse to the same representation, even though they refer to different nouns in different contexts. The model loses crucial information about the structural relationships between words.
Positional encodings add explicit positional information to each token's embedding, allowing the model to understand both what each token is and where it appears in the sequence.
# Challenge
Any positional encoding scheme must satisfy these constraints:
1. **Bounded**: The positional values should not overwhelm the semantic information in token embeddings
2. **Smooth**: The encoding should provide continuous, smooth transitions between positions
3. **Unique**: Each position should have a distinct representation
4. **Optimizable**: The encoding should be amenable to gradient-based optimization
Simple approaches fail these constraints. Integer encodings are too large and discontinuous. Binary encodings are bounded but still discontinuous. The solution is to use smooth, continuous functions that are bounded and differentiable.
# Sinusoidal Positional Encodings
Sinusoidal positional encodings were introduced in the 2017 paper "Attention Is All You Need" by Vaswani et al. Instead of using discrete values that jump between positions, they use smooth sine and cosine waves. These waves go up and down smoothly, providing unique positional information for each position while remaining bounded and differentiable.
The key insight is to use different dimensions that change at different speeds. Lower dimensions oscillate rapidly, capturing fine grained positional information (like which specific position we're at). Higher dimensions oscillate slowly, capturing coarse grained positional information (like which general region of the sequence we're in).
This multi scale structure allows the encoding to capture both local position (where exactly in the sequence) and global position (which part of a long sequence) simultaneously.
# Formula
https://preview.redd.it/9j6q3e0doh8g1.png?width=756&format=png&auto=webp&s=358dc0c6fd33f2f29f79fa5871b162147bb411b1
The sinusoidal positional encoding formula computes a value for each position and each dimension. For a position `pos` and dimension index `i`, the encoding is:
**For even dimensions (i = 0, 2, 4, ...):**
PE(pos, 2i) = sin(pos / (10000^(2i/d_model)))
**For odd dimensions (i = 1, 3, 5, ...):**
PE(pos, 2i+1) = cos(pos / (10000^(2i/d_model)))
Notice that even dimensions use sine, while odd dimensions use cosine. This pairing is crucial for enabling relative position computation.
* **pos**: Where the token appears in the sequence. The first token is at position 0, the second at position 1, and so on.
* **i**: This tells us which speed of wave to use. Small values of `i` make waves that change quickly (fast oscillations). Large values of `i` make waves that change slowly (slow oscillations).
* **10000\^(2i/d\_model)**: This number controls how fast the wave oscillates. When `i = 0`, the denominator is 1, which gives us the fastest wave. As `i` gets bigger, the denominator gets much bigger, which makes the wave oscillate more slowly.
**Sine and Cosine Functions**: These functions transform a number into a value between -1 and 1. Because these functions repeat their pattern forever, the encoding can work for positions longer than what the model saw during training.
Let's compute the sinusoidal encoding for a specific example. Consider position 2 with an 8 dimensional embedding (d\_model = 8).
* For dimension 0 (even, so we use sine with i = 0):
  * Denominator: 10000^(2×0/8) = 10000^0 = 1
  * Argument: 2 / 1 = 2
  * Encoding: PE(2, 0) = sin(2) ≈ 0.909
* For dimension 1 (odd, so we use cosine with i = 0):
  * Same denominator: 1
  * Same argument: 2
  * Encoding: PE(2, 1) = cos(2) ≈ -0.416 (note the negative sign: 2 radians is past π/2, where cosine goes negative)
Notice that dimensions 0 and 1 both use i = 0 (the same frequency), but one uses sine and the other uses cosine. This creates a phase shifted pair.
For a higher dimension, say dimension 4 (even, so sine with i = 2):

* Denominator: 10000^(2×2/8) = 10000^0.5 = 100
* Argument: 2 / 100 = 0.02
* Encoding: PE(2, 4) = sin(0.02) ≈ 0.02
Notice how much smaller this value is compared to dimension 0. The higher dimension oscillates much more slowly, so at position 2, we're still near the beginning of its cycle.
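If you want to sanity-check these numbers yourself, here's a minimal NumPy sketch (the function name `sinusoidal_pe` is just for this post, not from any library):

```python
import numpy as np

def sinusoidal_pe(max_pos: int, d_model: int) -> np.ndarray:
    """Compute the sinusoidal positional encoding matrix (max_pos x d_model)."""
    pos = np.arange(max_pos)[:, None]        # positions 0 .. max_pos-1
    i = np.arange(0, d_model, 2)[None, :]    # even dimension indices (= 2i in the formula)
    angle = pos / (10000 ** (i / d_model))   # one frequency per sin/cos pair
    pe = np.zeros((max_pos, d_model))
    pe[:, 0::2] = np.sin(angle)              # even dims use sine
    pe[:, 1::2] = np.cos(angle)              # odd dims use cosine
    return pe

pe = sinusoidal_pe(max_pos=8, d_model=8)
print(round(pe[2, 0], 3))   # sin(2)      ≈  0.909
print(round(pe[2, 1], 3))   # cos(2)      ≈ -0.416
print(round(pe[2, 4], 3))   # sin(2/100)  ≈  0.02
```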
# Why both sine and cosine?
The pairing of sine and cosine serves several important purposes:
**1. Smoothness**: Both functions are infinitely differentiable, making them ideal for gradient based optimization. Unlike discrete encodings with sharp jumps, sine and cosine provide smooth transitions everywhere.
**2. Relative Position Computation**: This is where the magic happens. The trigonometric identity for sine of a sum tells us:
sin(a + b) = sin(a)cos(b) + cos(a)sin(b)
This means if we know the encoding for position `pos` (which includes both sin and cos components), we can compute the encoding for position `pos + k` using simple linear combinations. The encoding for `pos + k` is essentially a rotation of the encoding for `pos`, where the rotation angle depends on `k`.
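We can check this numerically for a single sin/cos pair (a small illustration; `w` is the pair's frequency, and the matrix is built from the offset `k` alone, independent of `pos`):

```python
import numpy as np

w = 1.0 / 100.0   # frequency of one sin/cos pair (e.g. dims 4 and 5 when d_model = 8)
pos, k = 2.0, 5.0

pair = np.array([np.sin(pos * w), np.cos(pos * w)])

# Rotation-style matrix that shifts any position by k, via the angle-sum identities
M = np.array([[np.cos(k * w), np.sin(k * w)],
              [-np.sin(k * w), np.cos(k * w)]])

shifted = M @ pair
target = np.array([np.sin((pos + k) * w), np.cos((pos + k) * w)])
print(np.allclose(shifted, target))   # True
```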
**3. Extrapolation**: Sine and cosine are periodic functions that repeat indefinitely. This allows the model to handle positions beyond those seen during training, as the functions continue their periodic pattern.
**4. Bounded Values**: Both sine and cosine produce values between 1 and 1, ensuring the positional encodings don't overwhelm the token embeddings, which are typically small values around zero.
# How Token and Positional Encodings combine
When we use sinusoidal positional encodings, we add them element wise to the token embeddings. The word "networks" at position 1 receives:

* Token embedding: `[0.15, 0.22, 0.08, 0.31, 0.12, 0.45, 0.67, 0.23]` (captures semantic meaning)
* Positional encoding: `[0.84, 0.54, 0.10, 0.99, 0.01, 1.00, 0.00, 1.00]` (captures position 1)
* Combined: `[0.99, 0.76, 0.18, 1.30, 0.13, 1.45, 0.67, 1.23]`

If "networks" appeared again at position 3, it would receive:

* Same token embedding: `[0.15, 0.22, 0.08, 0.31, 0.12, 0.45, 0.67, 0.23]`
* Different positional encoding: `[0.14, -0.99, 0.30, 0.96, 0.03, 1.00, 0.00, 1.00]` (captures position 3)
* Different combined: `[0.29, -0.77, 0.38, 1.27, 0.15, 1.45, 0.67, 1.23]`
**Summary**
Today we discovered sinusoidal positional encodings, the elegant solution from the original Transformer paper that teaches models about word order. The key insight is to use smooth sine and cosine waves with different frequencies: lower dimensions oscillate rapidly to capture fine grained position, while higher dimensions oscillate slowly to capture coarse grained position.
Understanding sinusoidal positional encodings is essential because they enable transformers to understand sequence structure, which is fundamental to language. Without them, transformers would be unable to distinguish between "The algorithm processes data" and "The data processes algorithm."
| 2025-12-21T04:59:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pry8kf/day_13_21_days_of_building_a_small_language_model/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pry8kf | false | null | t3_1pry8kf | /r/LocalLLaMA/comments/1pry8kf/day_13_21_days_of_building_a_small_language_model/ | false | false | 5 | null | |
I stopped explaining prompts and started marking explicit intent
SoftPrompt-IR: a simpler, clearer way to write prompts
from a German mechatronics engineer | 0 | # Stop Explaining Prompts. Start Marking Intent.

Most advice for prompting essentially boils down to:

* "Be very clear."
* "Repeat important instructions."
* "Use strong phrasing."

While this works, it is often noisy, brittle, and hard for models to analyze.

That’s why I’ve started doing the opposite: instead of explaining importance in prose, **I explicitly mark it.**

## Example

Instead of writing:

* Please avoid flowery language.
* Try not to use clichés.
* Don't over-explain things.

I write this:

```
!~> AVOID_FLOWERY_STYLE
~> AVOID_CLICHES
~> LIMIT_EXPLANATION
```

**Same intent.**
**Less text.**
**Clearer signal.**

## How to Read This

The symbols express weight, not meaning:

* `!` = **Strong / High Priority**
* `~` = Soft Preference
* `>` = Applies Globally / Downstream

The words are **tags**, not sentences.

Think of it like **Markdown for Intent**:

* `#` marks a heading
* `**` marks emphasis
* `!~>` marks importance

## Why This Works (Even Without Training)

LLMs have already learned patterns like:

1. Configuration files
2. Rulesets
3. Feature flags
4. Weighted instructions

Instead of hiding intent in natural language, **you make it visible and structured.**

This reduces:

* Repetition
* Ambiguity
* Prompt length
* Accidental instruction conflicts

## SoftPrompt-IR

I call this **SoftPrompt-IR**:

* No new language.
* No jailbreak.
* No hack.

https://github.com/tobs-code/SoftPrompt-IR

It is simply a method of **making implicit intent explicit.**
**Machine-oriented first, human-readable second.**

## TL;DR
Don't politely ask the model. \*\*Mark what matters.\*\* | 2025-12-21T04:52:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pry3ga/i_stopped_explaining_prompts_and_started_marking/ | No_Construction3780 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pry3ga | false | null | t3_1pry3ga | /r/LocalLLaMA/comments/1pry3ga/i_stopped_explaining_prompts_and_started_marking/ | true | false | spoiler | 0 | null |
People using Devstral 2 123b, how has it been working for you? What have you been using it with? | 49 | People using Devstral 2 123b, how has it been working for you? What have you been using it with?
I tried it with Claude Code Router and it's not bad! I think just with a few rough tests it seems better at agentic stuff than GPT OSS 120b, however GPT OSS's code quality seems a bit better. HOWEVER, I'm using OSS 120b at Q4 and Devstral at IQ3.
GPT OSS 120b is also faster because it's MoE, but Devstral 2 123b works pretty well with speculative decoding with a heavily quantized Devstral 2 20b.
How is your luck with it? What strengths and weaknesses does it have with your experience?
| 2025-12-21T04:51:16 | https://www.reddit.com/r/LocalLLaMA/comments/1pry2v7/people_using_devstral_2_123b_how_has_it_been/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pry2v7 | false | null | t3_1pry2v7 | /r/LocalLLaMA/comments/1pry2v7/people_using_devstral_2_123b_how_has_it_been/ | false | false | self | 49 | null |
NVIDIA Nemotron-3-Nano-30B LLM Benchmarks Vulkan and RPC | 26 | I'm running a few benchmarks on Nvidia's new [Nemotron-3-Nano-30B](https://huggingface.co/unsloth/Nemotron-3-Nano-30B-A3B-GGUF) and will test out RPC-SERVER again.
More details on Mamba2-Transformer Hybrid Mixture of Experts (MoE) model is here:
[https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16)
4 Systems all running Kubuntu 24.04 to 26.04.
GPUs: [Nvidia 1080Ti 11GB](https://www.techpowerup.com/gpu-specs/geforce-gtx-1080-ti.c2877), [Nvidia P102-100](https://www.techpowerup.com/gpu-specs/p102-100.c3100) 10GB, AMD Ryzen [6800H CPU](https://www.techpowerup.com/cpu-specs/ryzen-7-6800h.c2527) with 64GB DDR5 RAM and [iGPU 680M](https://www.techpowerup.com/gpu-specs/radeon-680m.c3871), and AMD [Radeon 7900 GRE](https://www.techpowerup.com/gpu-specs/radeon-rx-7900-gre.c4166) 16GB.
I also compared an AMD system against an Intel system, both running DDR4, and saw no difference in inference speeds.
This model is too big to fit in any of my GPUs' VRAM, so I used dual Nvidia GPUs and RPC to avoid CPU offloading. I also did some CPU offloading to compare. All systems run the Vulkan backend.
    llama-bench -m /Nemotron-3-Nano-30B-A3B-Q4_K_M.gguf -fa 0,1
    load_backend: loaded RPC backend from /home/czar33/vulkan/llama-b7476/libggml-rpc.so
    ggml_vulkan: Found 1 Vulkan devices:
    ggml_vulkan: 0 = AMD Radeon Graphics (RADV REMBRANDT) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
    load_backend: loaded Vulkan backend from /home/czar33/vulkan/llama-b7476/libggml-vulkan.so
    load_backend: loaded CPU backend from /home/czar33/vulkan/llama-b7476/libggml-cpu-haswell.so
|model|size|params|backend|ngl|fa|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|:-|
|nemotron\_h\_moe 31B.A3.5B Q4\_K - Medium|22.88 GiB|31.58 B|Vulkan|99|0|pp512|221.68 ± 0.90|
|nemotron\_h\_moe 31B.A3.5B Q4\_K - Medium|22.88 GiB|31.58 B|Vulkan|99|0|tg128|15.35 ± 0.01|
|nemotron\_h\_moe 31B.A3.5B Q4\_K - Medium|22.88 GiB|31.58 B|Vulkan|99|1|pp512|214.63 ± 0.78|
|nemotron\_h\_moe 31B.A3.5B Q4\_K - Medium|22.88 GiB|31.58 B|Vulkan|99|1|tg128|15.39 ± 0.02|
build: cdbada8d1 (7476)
real 2m59.672s
**6800H iGPU 680M**
Nemotron-3-Nano-30B-A3B-Q4\_K\_M.gguf
|test|t/s|
|:-|:-|
|pp512|221.68 ± 0.90|
|tg128|15.35 ± 0.01|
Nemotron-3-Nano-30B-A3B-IQ4\_XS.gguf 6800H iGPU 680M
|test|t/s|
|:-|:-|
|pp512|151.09 ± 1.88|
|tg128|17.63 ± 0.02|
Nemotron-3-Nano-30B-A3B-Q4\_1.gguf 6800H iGPU 680M
|test|t/s|
|:-|:-|
|pp512|241.15 ± 1.06|
|tg128|12.77 ± 3.98|
Looks like the iGPU 680M likes Q4\_1 quants for best pp512 performance and IQ4\_XS for tg128.
**NVIDIA GTX-1080Ti and NVIDIA P102-100** (21GB of combined VRAM)
    ggml_vulkan: 0 = NVIDIA GeForce GTX 1080 Ti (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
    ggml_vulkan: 1 = NVIDIA P102-100 (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
    load_backend: loaded Vulkan backend from /home/czar33/vulkan/llama-b7484/libggml-vulkan.so
    load_backend: loaded CPU backend from /home/czar33/vulkan/llama-b7484/libggml-cpu-haswell.so

|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|nemotron_h_moe 31B.A3.5B IQ4_XS - 4.25 bpw|16.91 GiB|31.58 B|Vulkan|99|pp512|121.23 ± 2.85|
|nemotron_h_moe 31B.A3.5B IQ4_XS - 4.25 bpw|16.91 GiB|31.58 B|Vulkan|99|tg128|64.86 ± 0.15|
build: ce734a8a2 (7484)
Nemotron-3-Nano-30B-A3B-IQ4\_XS.gguf (16.91 GiB)
|test|t/s|
|:-|:-|
|pp512|121.23 ± 2.85|
|tg128|64.86 ± 0.15|
Nemotron-3-Nano-30B-A3B-Q4\_1.gguf (18.67 GiB)
|test|t/s|
|:-|:-|
|pp512|133.86 ± 2.44|
|tg128|67.99 ± 0.25|
Nemotron-3-Nano-30B-A3B-Q4\_K\_M.gguf -ngl 44 (22.88 GiB)
|test|t/s|
|:-|:-|
|pp512|103.30 ± 0.51|
|tg128|34.05 ± 0.92|
Q4\_K\_M is too big for 21GB of VRAM, so it needs `-ngl 44` to run, and takes almost a 50% hit for offloading only 1 to 2 GB.
Now let's see the difference between `-ngl` offloading and using the RPC backend, with the Q4\_K\_M, Q5\_K\_M and Q6\_K models.
My client is the AMD Radeon 7900 GRE 16GB VRAM GPU:
`llama-bench -m /Nemotron-3-Nano-30B-A3B-Q5_K_M.gguf --rpc 10.0.0.173:50054`
and the RPC-SERVER is running dual GPU GTX-1080Ti/P102-100 on a gigabit network.
llama-b7491/rpc-server -c --host 0.0.0.0 --port 50054
**RX 7900GRE (16GB VRAM), GTX1080Ti + P102-100 (21GB VRAM) using RPC**
    time /llama-b7491/llama-bench -m /Nemotron-3-Nano-30B-A3B-Q5_K_M.gguf --rpc 10.0.0.173:50054
    load_backend: loaded RPC backend from /media/czar33/x_2tb/vulkan/llama-b7491/libggml-rpc.so
    ggml_vulkan: Found 1 Vulkan devices:
    ggml_vulkan: 0 = AMD Radeon RX 7900 GRE (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
    load_backend: loaded Vulkan backend from /media/czar33/x_2tb/vulkan/llama-b7491/libggml-vulkan.so
    load_backend: loaded CPU backend from /media/czar33/x_2tb/vulkan/llama-b7491/libggml-cpu-haswell.so
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| nemotron_h_moe 31B.A3.5B Q5_K - Medium | 24.35 GiB | 31.58 B | Vulkan,RPC | 99 | pp512 | 112.32 ± 1.81 |
| nemotron_h_moe 31B.A3.5B Q5_K - Medium | 24.35 GiB | 31.58 B | Vulkan,RPC | 99 | tg128 | 40.79 ± 0.22 |
build: 52ab19df6 (7491)
real 2m28.029s
Nemotron-3-Nano-30B-A3B-Q4\_K\_M.gguf (22.88 GiB)
|test|t/s|
|:-|:-|
|pp512|112.04 ± 1.89|
|tg128|41.46 ± 0.12|
Nemotron-3-Nano-30B-A3B-Q5\_K\_M.gguf (24.35 GiB)
|test|t/s|
|:-|:-|
|pp512|112.32 ± 1.81|
|tg128|40.79 ± 0.22|
Nemotron-3-Nano-30B-A3B-Q6\_K.gguf (31.20 GiB)
|test|t/s|
|:-|:-|
|pp512|113.58 ± 1.70|
|tg128|39.95 ± 0.76|
COMPARED to -ngl offloading on NVIDIA GTX-1080Ti and P102-100 (21GB VRAM) at Q6\_K
Nemotron-3-Nano-30B-A3B-Q6\_K.gguf -ngl 30
|test|t/s|
|:-|:-|
|pp512|82.68 ± 0.62|
|tg128|21.78 ± 0.79|
I'm impressed on being able to run the Q6\_K model at a very respectable speed across 2 system and 3 GPUs. | 2025-12-21T04:31:19 | https://www.reddit.com/r/LocalLLaMA/comments/1prxpcx/nvidia_nemotron3nano30b_llm_benchmarks_vulkan_and/ | tabletuser_blogspot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prxpcx | false | null | t3_1prxpcx | /r/LocalLLaMA/comments/1prxpcx/nvidia_nemotron3nano30b_llm_benchmarks_vulkan_and/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'GMB3lzzAnOTZCjUjIzh0CSKJz_NNkQmuPwPxVsBsvl4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GMB3lzzAnOTZCjUjIzh0CSKJz_NNkQmuPwPxVsBsvl4.png?width=108&crop=smart&auto=webp&s=f3b045ac2f49f1dcdec25f65a0d836d3a8d64dc6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GMB3lzzAnOTZCjUjIzh0CSKJz_NNkQmuPwPxVsBsvl4.png?width=216&crop=smart&auto=webp&s=51dd38a394ad5dfc4c4a0d0259fc0514d91898c0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GMB3lzzAnOTZCjUjIzh0CSKJz_NNkQmuPwPxVsBsvl4.png?width=320&crop=smart&auto=webp&s=3d170b50f66f42ae115e44a806ff0d83a5132942', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GMB3lzzAnOTZCjUjIzh0CSKJz_NNkQmuPwPxVsBsvl4.png?width=640&crop=smart&auto=webp&s=a408b0c5c8a69316c0e6059af95f0e0880a1f80a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GMB3lzzAnOTZCjUjIzh0CSKJz_NNkQmuPwPxVsBsvl4.png?width=960&crop=smart&auto=webp&s=0d30af026a3af65d1adf7994030ecad63322b57c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GMB3lzzAnOTZCjUjIzh0CSKJz_NNkQmuPwPxVsBsvl4.png?width=1080&crop=smart&auto=webp&s=070fa3e758dda27d7ab878579a9d6655713f3a1b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GMB3lzzAnOTZCjUjIzh0CSKJz_NNkQmuPwPxVsBsvl4.png?auto=webp&s=96c9af570d915c11a79e4d5bfae658a725b5f351', 'width': 1200}, 'variants': {}}]} |
Is there a tool that can extract a summary of a file in source code so it can be used to generate prompts? | 1 | When I need to modify a file, I often need a list of function names, variable names, etc so the LLM has some context. I find that ctags doesn't have everything I need (include statements, global variables, etc.).
The purpose is to add this to a prompt and then ask an LLM to guess which function I need to modify. | 2025-12-21T04:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/1prxm80/is_there_a_tool_that_can_extract_a_summary_of_a/ | birdsintheskies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prxm80 | false | null | t3_1prxm80 | /r/LocalLLaMA/comments/1prxm80/is_there_a_tool_that_can_extract_a_summary_of_a/ | false | false | self | 1 | null |
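For the question above, with Python sources specifically, a minimal sketch of that kind of extractor using the standard library's `ast` module (the `summarize` helper and its output shape are my own; for other languages something like tree-sitter would play the same role):

```python
import ast

def summarize(path: str) -> dict:
    """Top-level names an LLM needs for context: imports, globals, functions, classes."""
    tree = ast.parse(open(path).read())
    out = {"imports": [], "globals": [], "functions": [], "classes": []}
    for node in tree.body:
        if isinstance(node, ast.Import):
            out["imports"] += [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            out["imports"].append(node.module or ".")
        elif isinstance(node, ast.Assign):
            # module-level variables, which ctags often misses
            out["globals"] += [t.id for t in node.targets if isinstance(t, ast.Name)]
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            out["functions"].append(node.name)
        elif isinstance(node, ast.ClassDef):
            out["classes"].append(node.name)
    return out
```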
is it a good deal? 64GB VRAM @ 1,058 USD | 88 | This Black Friday, I found an Nvidia Jetson AGX Orin 64GB developer kit for $1,058. It usually goes for $2,000, and if you're in India like I am, it retails around $2,370.61. For comparison, the 5090, which is a 32GB card, costs $2,000 right now.
A little background: in my [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1p9iawm/looking_for_open_source_10b_model_that_is/), I asked the community which open-source model I could use locally to achieve similar performance to GPT-4o-mini with a 16GB VRAM constraint, and the unanimous conclusion was that more VRAM is required.
So I began my search and found this deal ([out of stock now](https://www.amazon.com/NVIDIA-Jetson-Orin-64GB-Developer/dp/B0BYGB3WV4?crid=1Y0GEMVGIT2Y1&dib=eyJ2IjoiMSJ9.FoThX8FZ94bjsnPOOKsYeOU_z7hyFtfGlHRIhkasZV2n3k3fXTbvCidX2BS21F6ho6cCeKibmPpVZ__v6ESMpAPJV0GrTdf9P_Os4hVMzc0ACZbLbOAe6eGI_zkvEeb4kGLxv1F4I1PCp1dryARl0-d4TyqvQtQqJGFMyqcSpEX3yq317tO2ns0-i1_F45_RSj8ia8hONnO1csZjWVJl5MP-QwhkOIy5HVrbgz__9mc.rmCLen5N1BOvx4gErZosaBavA_JDBwagBfDOxG0vdBY&dib_tag=se&keywords=nvidia%2Bjetson%2Bagx%2Borin%2B64gb&qid=1766285654&sprefix=nvidia%2Bjetson%2Bagx%2Caps%2C488&sr=8-1&th=1)) and asked someone from the US to buy it and bring it to India.
The reason for this purchase: I've built an AI Voice Agent platform that handles pre-sales and post-sales for any company. This voice pipeline runs on three models in a cascading fashion: (VAD + Turn Detection) → STT → LLM → TTS. Since I need to host multiple models, VRAM is a bigger constraint than processing power.
So, instead of a consumer card like the 5090 (32GB), which offers great processing power, I ended up purchasing the Jetson AGX Orin (64GB).
I'll continue the chain of posting with my results of running voice agents specific models on this machine. | 2025-12-21T03:25:47 | bohemianLife1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1prwhb1 | false | null | t3_1prwhb1 | /r/LocalLLaMA/comments/1prwhb1/is_it_a_good_deal_64gb_vram_1058_usd/ | false | false | 88 | {'enabled': True, 'images': [{'id': 'vMd11i4yWZX2VTQWsgQTtxJ1nCrdiDVB-20Jx1stG58', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/sr0bemp03h8g1.jpeg?width=108&crop=smart&auto=webp&s=3946d91a7db2523480338a15aa1bc9afa2aac8dd', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/sr0bemp03h8g1.jpeg?width=216&crop=smart&auto=webp&s=d1484feca29f5428ca8c10a77cb2cf62ffe791d2', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/sr0bemp03h8g1.jpeg?width=320&crop=smart&auto=webp&s=1bed1994e1aac58314fa73b64a5240645c345fb1', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/sr0bemp03h8g1.jpeg?width=640&crop=smart&auto=webp&s=3483188d964c72719dc2a2e2c6a49ccdbdf22371', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/sr0bemp03h8g1.jpeg?width=960&crop=smart&auto=webp&s=14bd5eabd8b493bd1704549f976409df66ed1789', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/sr0bemp03h8g1.jpeg?width=1080&crop=smart&auto=webp&s=45d31e11259019f2b55b5ff28b8ab541c2e50726', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/sr0bemp03h8g1.jpeg?auto=webp&s=5c1a14edca207ca2c78aad6cf46e35293904d993', 'width': 3024}, 'variants': {}}]} | ||
LLM for a 6900xt? | 1 | Hello everyone and good day. I'm looking for a LOM that could fit my needs. I want a little bit of GPT style conversation and some riplet agent style coding. Doesn't have to be super advanced but I need the coding side to at least fix problems in some of my programs that I have when I don't have any more money to spend on professional agents.
Mobo is Asus x399-e
Processor is TR 1950x
Memory 32gb ddr4.
GPU 6700xt 12gb with smart enabled.
Psu EVGA mach 1 1200w
| 2025-12-21T03:18:59 | https://www.reddit.com/r/LocalLLaMA/comments/1prwcvi/llm_for_a_6900xt/ | RichOpinion4766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prwcvi | false | null | t3_1prwcvi | /r/LocalLLaMA/comments/1prwcvi/llm_for_a_6900xt/ | false | false | self | 1 | null |
GLM 4.7 imminent?! | 100 | [https://github.com/zRzRzRzRzRzRzR](https://github.com/zRzRzRzRzRzRzR), a [z.ai](http://z.ai) employee, appears hard at work to implement GLM 4.7 support. It's added in vLLM already.
What are your expectations for this, to be announced, new model? I'm both very optimistic and a little cautious at the same time.
Earlier in the year they (GLM itself, on Twitter) said that version 5.0 would be released this year. Now all I see is 4.7, which gives me the feeling the model may not be as great an update as they had hoped. I don't think they'll top all the SOTA models in the benchmarks, but I do think they will come within reach again, say in the top 10. That's just pure wishful thinking and speculation at this point.
think I just built a grammarly for LLMs with llama | 0 | I think I just built a grammarly for LLMs. Should I ship this product feature?
For some background, I built this tool called [Promptify](https://joinpromptify.com/), a free Chrome extension that takes vague prompts and creates super detailed, context-aware JSON (or XML, or regular) prompts for crazy outputs.
I had an idea two days ago to make Promptify kind of like a "Grammarly": it gives feedback and rewrites prompts in a simpler, more streamlined way than the monstrous JSON mega-prompt it typically creates.
Haven't added this feature to the [product](https://chromewebstore.google.com/detail/promptify/gbdneaodlcoplkbpiemljcafpghcelld) yet but am thinking of dropping it next week. Should I? Give it a go as it is (yes, I know the UI sucks; it's also getting an update) and let me know!
It's simple: it checks the prompt input, runs it through a specific scoring guide I set as a system prompt in another LLM, and breaks it up into steps for improvement!
**All of this uses Meta's llama by the way**
\*Pro tip: use the Groq API with Meta's Llama; it's a completely free way to enhance prompts, and it serves my 180+ weekly users
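For anyone curious, Groq exposes an OpenAI-compatible endpoint, so calling a hosted Llama is roughly this minimal sketch (the model id and system prompt here are placeholders, not Promptify's actual ones):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
    api_key="YOUR_GROQ_API_KEY",
)
resp = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # assumed Llama model id; check Groq's model list
    messages=[
        {"role": "system", "content": "Score the user's prompt against the rubric, then rewrite it."},
        {"role": "user", "content": "summarize my meeting notes"},
    ],
)
print(resp.choices[0].message.content)
```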
Check it out:
https://preview.redd.it/a7y2c1po0h8g1.png?width=640&format=png&auto=webp&s=03bdab9414e98c4c571527debcaada20c2c136de
| 2025-12-21T02:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1prvrhh/think_i_just_built_a_grammarly_for_llms_with_llama/ | Turbulent-Range-9394 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prvrhh | false | null | t3_1prvrhh | /r/LocalLLaMA/comments/1prvrhh/think_i_just_built_a_grammarly_for_llms_with_llama/ | false | false | 0 | null | |
Here is what happens if you have an LLM that requires more RAM than you have | 0 | https://reddit.com/link/1prvonw/video/cyka8v340h8g1/player
Could a pagefile make it work? | 2025-12-21T02:43:44 | https://www.reddit.com/r/LocalLLaMA/comments/1prvonw/here_is_what_happens_if_you_have_an_llm_that/ | Cold_Junket8940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prvonw | false | null | t3_1prvonw | /r/LocalLLaMA/comments/1prvonw/here_is_what_happens_if_you_have_an_llm_that/ | false | false | self | 0 | null |
gemma3:4b running on 4GB RAM + no GPU + no pagefile + Win10. | 0 | 2025-12-21T02:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/1prvdqt/gemma34b_running_on_4gb_ram_no_gpu_no_pagefile/ | Cold_Junket8940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prvdqt | false | null | t3_1prvdqt | /r/LocalLLaMA/comments/1prvdqt/gemma34b_running_on_4gb_ram_no_gpu_no_pagefile/ | false | false | 0 | null | ||
gemma3:1b running on 4GB RAM + no GPU. | 0 | 2025-12-21T02:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/1prv971/gemma31b_running_on_4gb_ram_no_gpu/ | Cold_Junket8940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prv971 | false | null | t3_1prv971 | /r/LocalLLaMA/comments/1prv971/gemma31b_running_on_4gb_ram_no_gpu/ | false | false | 0 | null | ||
AN ARTIFICIAL INTELLIGENCE MODEL PRODUCED BY APPLYING KNOWLEDGE
DISTILLATION TO A FRONTIER MODEL AS DEFINED IN PARAGRAPH (A) OF THIS
SUBDIVISION. | 0 | So, like, gpt-oss
Distillation wasn't in the California bill. The devil is in the details, folks.
[https://www.nysenate.gov/legislation/bills/2025/A6453/amendment/A](https://www.nysenate.gov/legislation/bills/2025/A6453/amendment/A) | 2025-12-21T02:13:19 | https://www.reddit.com/r/LocalLLaMA/comments/1prv3zz/an_artificial_intelligence_model_produced_by/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prv3zz | false | null | t3_1prv3zz | /r/LocalLLaMA/comments/1prv3zz/an_artificial_intelligence_model_produced_by/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'hPXZu2Ij8GnHNGklyFd9CxrBFfYjXqKDV3KzYFr96zE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hPXZu2Ij8GnHNGklyFd9CxrBFfYjXqKDV3KzYFr96zE.png?width=108&crop=smart&auto=webp&s=260d988366ecb679ab997cbae60e2342709adb2d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/hPXZu2Ij8GnHNGklyFd9CxrBFfYjXqKDV3KzYFr96zE.png?width=216&crop=smart&auto=webp&s=73a13216e59f73e69e16859ccbb3b327163ba70c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/hPXZu2Ij8GnHNGklyFd9CxrBFfYjXqKDV3KzYFr96zE.png?width=320&crop=smart&auto=webp&s=e8640175227fffefbf50da49977e8f0f828b27e6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/hPXZu2Ij8GnHNGklyFd9CxrBFfYjXqKDV3KzYFr96zE.png?width=640&crop=smart&auto=webp&s=61af452e737d2cb56eca035f7cd5b4a4f384612d', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/hPXZu2Ij8GnHNGklyFd9CxrBFfYjXqKDV3KzYFr96zE.png?width=960&crop=smart&auto=webp&s=012e9c966a5e077f44adf80b216756376980f095', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/hPXZu2Ij8GnHNGklyFd9CxrBFfYjXqKDV3KzYFr96zE.png?width=1080&crop=smart&auto=webp&s=8fc39339b61d78c90ca6534199b4896439f27691', 'width': 1080}], 'source': {'height': 4096, 'url': 'https://external-preview.redd.it/hPXZu2Ij8GnHNGklyFd9CxrBFfYjXqKDV3KzYFr96zE.png?auto=webp&s=a2f9f0d3372df03add1c1bfcb87d14cce3f69776', 'width': 4096}, 'variants': {}}]} |
How big do we think Gemini 3 Flash is | 124 | Hopefully the relevance to open models is clear enough. Based on its speed and other signals, I'm curious about speculation on how big this model is--because it can help us understand just how strong a model something like a 512GB Mac Ultra, or a 128GB MacBook, can eventually run. Do we think it's something that can fit in memory on a 128GB MacBook, for example? | 2025-12-21T01:51:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pruoy7/how_big_do_we_think_gemini_3_flash_is/ | davikrehalt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pruoy7 | false | null | t3_1pruoy7 | /r/LocalLLaMA/comments/1pruoy7/how_big_do_we_think_gemini_3_flash_is/ | false | false | self | 124 | null |
Hi, I want to know something about AIs | 0 | Hi everyone! 👍🏼
I'd like some guidance on which local AI to run on my local computer:
Ryzen 5700G
64 GB RAM 3200 MHz C16
8 GB VRAM GPU 6600 XT
SSD and mechanical hard drives
And I'd love an AI with a really good amount of context tokens that can do everything, not just chat but also code.
What do you recommend? 🤔 | 2025-12-21T01:03:47 | https://www.reddit.com/r/LocalLLaMA/comments/1prtrai/hola_quiero_saber_algo_de_ias/ | Necessary-Plant8738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prtrai | false | null | t3_1prtrai | /r/LocalLLaMA/comments/1prtrai/hola_quiero_saber_algo_de_ias/ | false | false | self | 0 | null |
Let’s assume that some company releases an open weight model that beats Claude Sonnet fairly well. | 0 | Claude Sonnet is a pretty solid model when it comes to tool calling, following instructions, and understanding context really well. It assists in writing code in pretty much every language and doesn’t hallucinate a lot.
But is there any model that comes super close to Claude? And if one surpasses it then what? Will we have super cheap subscriptions to that open weight model or the pricing and limitation will be similar to that of Anthropic’s because such models are gigantic and power hungry? | 2025-12-21T00:56:43 | https://www.reddit.com/r/LocalLLaMA/comments/1prtm54/lets_assume_that_some_company_releases_an_open/ | _takasur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prtm54 | false | null | t3_1prtm54 | /r/LocalLLaMA/comments/1prtm54/lets_assume_that_some_company_releases_an_open/ | false | false | self | 0 | null |
MiMo-V2-Flash - SGLang - mtp triton attention | 19 | Some testing results on 4x 6000 Blackwell workstation cards
|Context|Input tokens|Output tokens|Throughput|(n/a)|MTP accept len|
|:-|-:|-:|-:|:-:|-:|
|4K|3,597|500|100.2 t/s|N/A|2.40|
|8K|7,199|500|88.2 t/s|N/A|2.39|
|16K|14,401|500|67.0 t/s|N/A|2.24|
|32K|28,804|500|54.5 t/s|N/A|2.50|
|64K|57,611|500|31.7 t/s|N/A|2.23|
|100K|90,019|500|24.5 t/s|N/A|2.42 | 2025-12-21T00:34:04 | https://www.reddit.com/r/LocalLLaMA/comments/1prt5qz/mimov2flash_sglang_mtp_triton_attention/ | getfitdotus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prt5qz | false | null | t3_1prt5qz | /r/LocalLLaMA/comments/1prt5qz/mimov2flash_sglang_mtp_triton_attention/ | false | false | self | 19 | null |
Is there even a reliable AI statistics/ranker? | 0 | Yes, there are some out there that give some semblance of actual statistics, but the majority of the space claiming to "rank" which AI is best for what is usually shallow or unreliable. A lot even carry contradictory information, even when, in actual usage, one model is noticeably better to the point it's obvious. Are most just paid off for the sake of free advertising, given that a lot of those so-called "Leaderboards" usually have a "*sponsored" flair over them? Or is there a way to rank statistically in different ways: some may rely on public consensus, and some may have their own standardized tests that yield different statistics depending on how they're formulated? Or do they all prompt differently: some use the base model while others prompt it hard? For example, the ChatGPT base model is really bad for me in terms of speech, directness, and objectivity, while impressive when fine-tuned. I'm just confused. Should I just give up and rely on my own judgment, since there's too much to keep up with across the different AIs to try for my projects or personal fun? | 2025-12-21T00:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/1prsloo/is_there_even_a_reliable_ai_statisticsranker/ | CompoteTiny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prsloo | false | null | t3_1prsloo | /r/LocalLLaMA/comments/1prsloo/is_there_even_a_reliable_ai_statisticsranker/ | false | false | self | 0 | null |
Transformer Model fMRI (Now with 100% more Gemma) build progress | 0 | As the title suggests, I pivoted to Gemma 2 2B. I'm on a consumer card (16GB), and I wasn't able to capture all of the backward-pass data that I would like while using a 3B model. While I was running a new test suite, the model fell into a runaway loop suggesting that I purchase a video editor (lol).
[I guess I need a new editor?](https://preview.redd.it/mx3jnvza4g8g1.jpg?width=320&format=pjpg&auto=webp&s=2924a63b728283d0f11fefb861effab7bcf2518f)
I decided that these would be good logs to analyze, and wanted to share. Below are three screenshots that correspond to the word 'video'
https://preview.redd.it/utfer14e4g8g1.jpg?width=640&format=pjpg&auto=webp&s=d45fb09ce130fa5cb628d4e35c0c63e886157474
https://preview.redd.it/p3k1ci2f4g8g1.jpg?width=640&format=pjpg&auto=webp&s=a1e7be1b5e51d54daa52a1d4df5a9cc1efe068c0
https://preview.redd.it/jf8ynqqf4g8g1.jpg?width=640&format=pjpg&auto=webp&s=c936d39351e3700bb78e2482f6a824ff3b0176b3
The internal space of the model, while appearing the same at first glance, is slightly different in structure. I'm still exploring what that would mean, but thought it was worth sharing!
| 2025-12-20T23:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/1prs6lf/transformer_model_fmri_now_with_100_more_gemma/ | Due_Hunter_4891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prs6lf | false | null | t3_1prs6lf | /r/LocalLLaMA/comments/1prs6lf/transformer_model_fmri_now_with_100_more_gemma/ | false | false | 0 | null | |
Chatbot chat bubble | 2 | I have been banging my head for too long, so now I'm here begging for help.
I wrote a chatbot client with a heavy Victorian aesthetic. For the chat bubbles, I want them to be banner scrolls that roll out dynamically as the user or AI types.
I've spent too many hours and piled up a bunch of failures. Can anyone help me with a vibecoding prompt for this?
Can anyone help? | 2025-12-20T23:32:59 | https://www.reddit.com/r/LocalLLaMA/comments/1prrx2c/chatbot_chat_bubble/ | david_jackson_67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prrx2c | false | null | t3_1prrx2c | /r/LocalLLaMA/comments/1prrx2c/chatbot_chat_bubble/ | false | false | self | 2 | null |
This is what I call a good benchmax... | 0 | 2025-12-20T22:54:00 | GenLabsAI | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1prr33w | false | null | t3_1prr33w | /r/LocalLLaMA/comments/1prr33w/this_is_what_i_call_a_good_benchmax/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'lai8q0m6vf8g1', 'resolutions': [{'height': 20, 'url': 'https://preview.redd.it/lai8q0m6vf8g1.png?width=108&crop=smart&auto=webp&s=ccc9b9962b4e00c4d13c8e43009fd0d1506e064f', 'width': 108}, {'height': 41, 'url': 'https://preview.redd.it/lai8q0m6vf8g1.png?width=216&crop=smart&auto=webp&s=b9ec919d396950fcffdce1daa155bc1f4387a21d', 'width': 216}, {'height': 61, 'url': 'https://preview.redd.it/lai8q0m6vf8g1.png?width=320&crop=smart&auto=webp&s=c16a4f23c9f9a8dc46883c619ea5a56176845234', 'width': 320}, {'height': 122, 'url': 'https://preview.redd.it/lai8q0m6vf8g1.png?width=640&crop=smart&auto=webp&s=e6b4f4f7d1dd884aeee0f275899ffeb8930c1904', 'width': 640}, {'height': 183, 'url': 'https://preview.redd.it/lai8q0m6vf8g1.png?width=960&crop=smart&auto=webp&s=02d2bb807de44d857ec28bcca131f6f59a8a0d1f', 'width': 960}], 'source': {'height': 193, 'url': 'https://preview.redd.it/lai8q0m6vf8g1.png?auto=webp&s=2e465622a0af0539d0b0f32ba9772a9eaf04b976', 'width': 1008}, 'variants': {}}]} | ||
Local training - funny Grok hallucination | 0 | So I am currently training up Llama 3.2 3B base on the OpenAI Harmony template, and using test prompts to check safety alignment and chat template adherence, which I then send to Grok to get a second set of eyes for missing special tokens. Well, it seems it only takes a few rounds of talking about Harmony for Grok to start trying to use it itself. It took me several rounds after this to get it to stop.
https://preview.redd.it/4jzj2i6rsf8g1.png?width=667&format=png&auto=webp&s=558cf1cbc240165471ff07e72ae3fc6d74ffd2b5
| 2025-12-20T22:42:12 | https://www.reddit.com/r/LocalLLaMA/comments/1prqu0i/local_training_funny_grok_hallucination/ | Mabuse046 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prqu0i | false | null | t3_1prqu0i | /r/LocalLLaMA/comments/1prqu0i/local_training_funny_grok_hallucination/ | false | false | 0 | null | |
Where can I find the Intel Arc Pro B60? | 6 | Hey there, hope this is the right place to post, but I saw on here a few months back that someone mentioned this Intel Arc Pro B60 with 24GB of RAM. I've been trying to upgrade my rig for local and thought this would be perfect! But… I can't find out where to get it. Newegg doesn't even recognize it, and Google Shopping isn't bringing it up either. Any help would be greatly appreciated.
Link that I came across for reference: https://www.reddit.com/r/LocalLLaMA/comments/1nlyy6n/intel\_arc\_pro\_b60\_24gb\_professional\_gpu\_listed\_at/ | 2025-12-20T22:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1prqcc8/where_can_i_find_the_intel_arc_pro_b60/ | Puzzled_Rip9008 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prqcc8 | false | null | t3_1prqcc8 | /r/LocalLLaMA/comments/1prqcc8/where_can_i_find_the_intel_arc_pro_b60/ | false | false | self | 6 | null |
Got lots of VRAM? Want to help a developer refine methods and tooling for small edge models (BitNet+KBLaM)? Show this some love! | 16 | This developer u/ufos1111 put a lot of work in, but it didn't get much traction. I think there's lots of value to be had here, if anyone wanted to collaborate or run test training give them a shout :-)
Edge devices, even a Raspberry Pi, can run this, as well as any AVX2 CPU, and MS is also working on GPU support.
I am certainly no expert, just trying to help publicise the work... | 2025-12-20T21:57:29 | https://www.reddit.com/r/LLMDevs/comments/1lrev36/bitnet_model_implementation_in_microsoftkblam/#:~:text=I've%20created%20an%20initial%20implementation%20of%20BitNet,to%20introduce%20additional%20knowledge%20base%20data%20into%E2%80%A6 | rog-uk | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1prpv1g | false | null | t3_1prpv1g | /r/LocalLLaMA/comments/1prpv1g/got_lots_of_vram_want_to_help_a_developer_refine/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'KWtUbuNCwFa_wwoxQhB0aQn6OgRe6sJokwMFKLrmN94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KWtUbuNCwFa_wwoxQhB0aQn6OgRe6sJokwMFKLrmN94.png?width=108&crop=smart&auto=webp&s=cb4f58cf51cae301b092ba26aa07151aca4f3fa4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KWtUbuNCwFa_wwoxQhB0aQn6OgRe6sJokwMFKLrmN94.png?width=216&crop=smart&auto=webp&s=d0e91904419e5dbe6715dd4663c33b6d1529f84c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KWtUbuNCwFa_wwoxQhB0aQn6OgRe6sJokwMFKLrmN94.png?width=320&crop=smart&auto=webp&s=e96a71a651d13cf9bf4909d3f922a8350dcaf8c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KWtUbuNCwFa_wwoxQhB0aQn6OgRe6sJokwMFKLrmN94.png?width=640&crop=smart&auto=webp&s=83b963f83be9915212d0be225fcdfc5817bf3052', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KWtUbuNCwFa_wwoxQhB0aQn6OgRe6sJokwMFKLrmN94.png?width=960&crop=smart&auto=webp&s=8f55faa52ba9465e383e12c9ba8e9078257fd4e7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KWtUbuNCwFa_wwoxQhB0aQn6OgRe6sJokwMFKLrmN94.png?width=1080&crop=smart&auto=webp&s=7ab994618b72a8f6671f55a5619c9d7a7169437c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KWtUbuNCwFa_wwoxQhB0aQn6OgRe6sJokwMFKLrmN94.png?auto=webp&s=35b1d16d577fdc80d5f3369575973e221436d253', 'width': 1200}, 'variants': {}}]} | |
"I built an open-source security scanner for AI models - scan before you load!" | 0 | Hey everyone,
I've been concerned about AI supply chain attacks - poisoned weights, pickle exploits, and malware hidden in model files. So I built **ModelSentinel**.
What it does:
\- Scans GGUF, SafeTensors, and PyTorch models for threats.
\- Detects statistical anomalies (poisoned weights)
\- Finds malware signatures
\- Works on Windows, Mac, and Linux
\- Has a simple GUI - no coding needed
Why you need this:
\- Anyone can upload a "Llama 3" model to HuggingFace
\- Pickle files (.bin, .pt) can execute code when loaded
\- You won't know until it's too late
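That pickle claim is easy to demonstrate; here is a minimal, self-contained example of why merely loading an untrusted `.pt`/`.bin` file is dangerous:

```python
import os
import pickle

class Malicious:
    def __reduce__(self):
        # pickle calls __reduce__ while deserializing and executes the returned call
        return (os.system, ("echo arbitrary code ran at load time",))

blob = pickle.dumps(Malicious())
pickle.loads(blob)  # just *loading* the blob runs the shell command
```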
\- GitHub: [https://github.com/TejaCHINTHALA67/ModelSentinel.git](https://github.com/TejaCHINTHALA67/ModelSentinel.git)
It's 100% free and open source (MIT license).
Would love feedback! What features would you want? | 2025-12-20T21:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/1prpu04/i_built_an_opensource_security_scanner_for_ai/ | Teja_Chinthala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prpu04 | false | null | t3_1prpu04 | /r/LocalLLaMA/comments/1prpu04/i_built_an_opensource_security_scanner_for_ai/ | false | false | self | 0 | null |
New York Governor Kathy Hochul signs RAISE Act to regulate AI "safety" | 5 | 2025-12-20T21:37:00 | https://www.politico.com/news/2025/12/19/kathy-hochul-signs-new-yorks-ai-safety-law-aimed-at-tech-industry-heavyweights-00700473 | TheRealMasonMac | politico.com | 1970-01-01T00:00:00 | 0 | {} | 1prpet4 | false | null | t3_1prpet4 | /r/LocalLLaMA/comments/1prpet4/new_york_governor_kathy_hochul_signs_raise_act_to/ | false | false | default | 5 | null | |
[Request] Make a tunable Devstral 123B | 14 | I've been asking around and doing my own attempts at creating a Devstral 123B that can be tuned (i.e., dequanted at BF16/FP16)
I figured I could tap into the community to see if anyone has a clue on how to dequant it so people (like me) can start tuning on it.
Anyone got ideas? I'd personally give credits to whoever can help kickstart a new 123B era.
Link for additional context. | 2025-12-20T21:36:03 | https://github.com/huggingface/transformers/issues/42907 | TheLocalDrummer | github.com | 1970-01-01T00:00:00 | 0 | {} | 1prpe36 | false | null | t3_1prpe36 | /r/LocalLLaMA/comments/1prpe36/request_make_a_tunable_devstral_123b/ | false | false | default | 14 | {'enabled': False, 'images': [{'id': 'TxRR-MsCsLcFQzYanP3jRjhNqN7ocymmwljq0fd-4cw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TxRR-MsCsLcFQzYanP3jRjhNqN7ocymmwljq0fd-4cw.png?width=108&crop=smart&auto=webp&s=6895ef169899b0fff748e828886d6d9f607db3c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TxRR-MsCsLcFQzYanP3jRjhNqN7ocymmwljq0fd-4cw.png?width=216&crop=smart&auto=webp&s=bb1709c6829ebb316d6003960fdbe92018fe9814', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TxRR-MsCsLcFQzYanP3jRjhNqN7ocymmwljq0fd-4cw.png?width=320&crop=smart&auto=webp&s=2b43e1161c1f9de0f17a360a752e56ba808b244f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TxRR-MsCsLcFQzYanP3jRjhNqN7ocymmwljq0fd-4cw.png?width=640&crop=smart&auto=webp&s=321dd8fe9c41cd2b24320a843829907204903f9f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TxRR-MsCsLcFQzYanP3jRjhNqN7ocymmwljq0fd-4cw.png?width=960&crop=smart&auto=webp&s=072eba9ee2c33969c26164a201d1008d5906978b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TxRR-MsCsLcFQzYanP3jRjhNqN7ocymmwljq0fd-4cw.png?width=1080&crop=smart&auto=webp&s=a9868147502b404403ab936645628f8813aff3ab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TxRR-MsCsLcFQzYanP3jRjhNqN7ocymmwljq0fd-4cw.png?auto=webp&s=9117fa36973f0e74348e1a85a7f8a987390e61c9', 'width': 1200}, 'variants': {}}]} |
Mi50 32GB Group Buy. | 3 | Someone in another sub is arranging a Mi50 group buy. This might be of interest to some in this sub. | 2025-12-20T20:55:34 | https://www.reddit.com/r/LocalAIServers/comments/1pnyct6/mi50_32gb_group_buy/ | fallingdowndizzyvr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1prohxr | false | null | t3_1prohxr | /r/LocalLLaMA/comments/1prohxr/mi50_32gb_group_buy/ | false | false | default | 3 | null |
[Project] Engineering a robust SQL Optimizer with DeepSeek-R1:14B (Ollama) + HypoPG. How I handled the <think> tags and Context Pruning on a 12GB GPU | 0 | Hi everyone,
I’ve been working on **OptiSchema Slim**, a local-first tool to analyze PostgreSQL performance without sending sensitive schema data to the cloud.
I started with SQLCoder-7B, but found it struggled with complex reasoning. I recently switched to **DeepSeek-R1-Distill-Qwen-14B** (running via Ollama), and the difference is massive if you handle the output correctly.
I wanted to share the architecture I used to make a local 14B model reliable for database engineering tasks on my **RTX 3060 (12GB)**.
# The Stack
* **Engine:** Ollama (DeepSeek-R1:14b quantized to Int4)
* **Backend:** Python (FastAPI) + sqlglot
* **Validation:** HypoPG (Postgres extension for hypothetical indexes)
# The 3 Big Problems & Solutions
**1. The Context Window vs. Noise**
Standard 7B/14B models get "dizzy" if you dump a 50-table database schema into the prompt. They start hallucinating columns that don't exist.
* **Solution:** I implemented a **Context Pruner** using sqlglot. Before the prompt is built, I parse the user's SQL, identify only the tables involved (and their FK relations), and fetch the schema for just those 2-3 tables. This reduces the prompt token count by \~90% and massively increases accuracy.
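A minimal sketch of that pruning step with `sqlglot` (the helper name is mine, and the FK-expansion part is omitted):

```python
import sqlglot
from sqlglot import exp

def tables_in_query(sql: str, dialect: str = "postgres") -> set[str]:
    """Return only the tables a query touches, so only their schemas hit the prompt."""
    tree = sqlglot.parse_one(sql, read=dialect)
    return {t.name for t in tree.find_all(exp.Table)}

print(tables_in_query("SELECT o.id FROM orders o JOIN users u ON u.id = o.user_id"))
# {'orders', 'users'} (set order may vary)
```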
**2. Taming DeepSeek R1's <think> blocks**
Standard models (like Llama 3) respond well to "Respond in JSON." R1 does not; it needs to "rant" in its reasoning block first to get the answer right. If you force JSON mode immediately, it gets dumber.
* **Solution:** I built a **Dual-Path Router**:
* If the user selects **Qwen/Llama**: We enforce strict JSON schemas.
* If the user selects **DeepSeek R1**: We use a raw prompt that explicitly asks for reasoning inside <think> tags first, followed by a Markdown code block containing the JSON. I then use a Regex parser in Python to extract the JSON payload from the tail end of the response.
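A minimal sketch of that tail-end extraction (my own helper; it strips the <think> block and then grabs the outermost braces of what remains, rather than matching the Markdown fence, to keep the sketch simple):

```python
import json
import re

THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def extract_json_payload(raw: str) -> dict:
    """Strip the reasoning block, then parse the trailing {...} span as JSON."""
    visible = THINK_RE.sub("", raw)
    start, end = visible.find("{"), visible.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in model output")
    return json.loads(visible[start : end + 1])
```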
**3. Hallucination Guardrails**
Even R1 hallucinates indexes for columns that don't exist.
* **Solution:** I don't trust the LLM. The output JSON is passed to a Python guardrail that checks information\_schema. If the column doesn't exist, we discard the result before it even hits the UI. If it passes, we simulate it with HypoPG to get the actual cost reduction.
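A sketch of that guardrail, assuming a psycopg2 connection (the helper name is mine); only suggestions that pass this check get handed to HypoPG's `hypopg_create_index()` for cost simulation:

```python
import psycopg2

def columns_exist(conn, table: str, columns: list[str]) -> bool:
    """Drop any LLM index suggestion that references a nonexistent column."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT column_name FROM information_schema.columns WHERE table_name = %s",
            (table,),
        )
        known = {row[0] for row in cur.fetchall()}
    return all(col in known for col in columns)
```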
# The Result
https://preview.redd.it/sr3jkhjc9f8g1.png?width=1913&format=png&auto=webp&s=69c7b7aa9d633fc19649c8a3ac469be5d77567c9
I can now run deep query analysis locally. R1 is smart enough to suggest **Partial Indexes** (e.g., WHERE status='active') which smaller models usually miss.
The repo is open (MIT) if you want to check out the prompt engineering or the parser logic.
**You can check it out** [Here](https://arnab2001.github.io/Optischema-Slim/)
Would love to hear how you guys are parsing structured output from R1 models, are you using regex or forcing tool calls? | 2025-12-20T20:51:55 | https://www.reddit.com/r/LocalLLaMA/comments/1proez3/project_engineering_a_robust_sql_optimizer_with/ | arnab03214 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1proez3 | false | null | t3_1proez3 | /r/LocalLLaMA/comments/1proez3/project_engineering_a_robust_sql_optimizer_with/ | false | false | 0 | null | |
Models sometimes fall into strange voices... | 0 | I wasn't trying to steer tone. Just asked a normal question and got this answer. Fresh chat, default settings. Curious what might trigger this kind of stylistic drift. | 2025-12-20T19:56:58 | JohannesGlaser | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1prn6uk | false | null | t3_1prn6uk | /r/LocalLLaMA/comments/1prn6uk/models_sometimes_fall_into_strange_voices/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ee6cz73nze8g1', 'resolutions': [{'height': 213, 'url': 'https://preview.redd.it/ee6cz73nze8g1.jpeg?width=108&crop=smart&auto=webp&s=7203fd79d7e379582aab8b3e01a1f0b8a09328e2', 'width': 108}, {'height': 426, 'url': 'https://preview.redd.it/ee6cz73nze8g1.jpeg?width=216&crop=smart&auto=webp&s=7082045ef936c5ea0dca863bf1e60a4cc0f3cfb6', 'width': 216}, {'height': 631, 'url': 'https://preview.redd.it/ee6cz73nze8g1.jpeg?width=320&crop=smart&auto=webp&s=74084c56b1c0e8f991baa0b344817287a855947d', 'width': 320}, {'height': 1262, 'url': 'https://preview.redd.it/ee6cz73nze8g1.jpeg?width=640&crop=smart&auto=webp&s=71db01849ee35148716a3e0a7640eba0a43a5279', 'width': 640}, {'height': 1894, 'url': 'https://preview.redd.it/ee6cz73nze8g1.jpeg?width=960&crop=smart&auto=webp&s=49e90ea38ae39bb78f2e0602283ac05d75361d74', 'width': 960}, {'height': 2130, 'url': 'https://preview.redd.it/ee6cz73nze8g1.jpeg?width=1080&crop=smart&auto=webp&s=fe09ad66794ce778ffc5dba42d5de23b38b846f4', 'width': 1080}], 'source': {'height': 2407, 'url': 'https://preview.redd.it/ee6cz73nze8g1.jpeg?auto=webp&s=f7eda84ec02910363e81ec00bca2621c55e618da', 'width': 1220}, 'variants': {}}]}
Models sometimes fall into strange voices | 1 | I wasn't trying to steer tone or style. Just asked a normal question and got this answer. Default settings, fresh chat. Not sure what triggered it. | 2025-12-20T19:51:24 | JohannesGlaser | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1prn2bv | false | null | t3_1prn2bv | /r/LocalLLaMA/comments/1prn2bv/models_sometimes_fall_into_strange_voices/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'ynnstnbnye8g1', 'resolutions': [{'height': 213, 'url': 'https://preview.redd.it/ynnstnbnye8g1.jpeg?width=108&crop=smart&auto=webp&s=11b72b91dcdee1f75c417dcecfc43e29934c3831', 'width': 108}, {'height': 426, 'url': 'https://preview.redd.it/ynnstnbnye8g1.jpeg?width=216&crop=smart&auto=webp&s=f01c1022653ded6ada47f69111f4fa3062751066', 'width': 216}, {'height': 631, 'url': 'https://preview.redd.it/ynnstnbnye8g1.jpeg?width=320&crop=smart&auto=webp&s=8eeb38ef52ceedbe8ae6c3f3fbb56872161623ec', 'width': 320}, {'height': 1262, 'url': 'https://preview.redd.it/ynnstnbnye8g1.jpeg?width=640&crop=smart&auto=webp&s=352e0079909f1ab0531d12cd3a917017b396fc81', 'width': 640}, {'height': 1894, 'url': 'https://preview.redd.it/ynnstnbnye8g1.jpeg?width=960&crop=smart&auto=webp&s=f3141cec99dfad979b3b8e237df0c5b3d8529354', 'width': 960}, {'height': 2130, 'url': 'https://preview.redd.it/ynnstnbnye8g1.jpeg?width=1080&crop=smart&auto=webp&s=1ad93f2106129fb98bb3d63d38329c2480777d1a', 'width': 1080}], 'source': {'height': 2407, 'url': 'https://preview.redd.it/ynnstnbnye8g1.jpeg?auto=webp&s=1d8f6eb4824d515f37190eb5f29b8996c40f01a8', 'width': 1220}, 'variants': {}}]} | |
Why does OpenCode hallucinate MCP tool names while Open WebUI works perfectly with the same model? | 3 | Hello everyone,
I'm testing how LLMs work with MCP tools by building a local RAG setup. Everything works perfectly in Open WebUI, but OpenCode has issues calling the correct MCP tools.
My stack:
\- Ollama 0.13.3 (running in Docker on WSL2, GPU enabled)
\- PostgreSQL 16 with pgvector extension
\- Open WebUI (Docker container, port `3000`)
\- OpenCode 1.0.180
\- Custom MCP server (FastMCP, serving on `http://localhost:8080/sse`)
MCP Server Configuration:
The server exposes these tools via FastMCP (python):
\- `search(query, repo, doc_type, limit)` \- Semantic search
\- `search_rerank(query, repo, doc_type, limit)` \- Search with re-ranking
\- `search_hybrid(query, repo, doc_type, limit, alpha)` \- Hybrid semantic + full-text
\- `list_repos()` \- List indexed repositories
\- `get_stats()` \- Database statistics
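For reference, a FastMCP server exposing tools like these is typically wired up as in the sketch below (not the OP's actual code; docstrings matter because clients surface them as tool descriptions, which may be relevant to the hallucinated-name problem):

```python
from fastmcp import FastMCP

mcp = FastMCP("pgdocs-rag")

@mcp.tool()
def search(query: str, repo: str = "", doc_type: str = "", limit: int = 5) -> list:
    """Semantic search over the indexed docs (pgvector similarity query goes here)."""
    ...

@mcp.tool()
def list_repos() -> list:
    """List indexed repositories."""
    ...

if __name__ == "__main__":
    mcp.run(transport="sse", host="0.0.0.0", port=8080)
```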
OpenCode configuration (`~/.config/opencode/opencode.json`):
```json
{
  "model": "ollama/mistral-small-tools:latest",
  "mcp": {
    "pgdocs-rag": {
      "type": "remote",
      "url": "http://localhost:8080/sse"
    }
  }
}
```
The Problem:
When using Open WebUI with some context, everything works great. But when I use OpenCode, the model emits what look like calls to my MCP tools without actually invoking them; it just prints them to my screen, like `{"name": "pg_search", "arguments": {"query": "max_connections"}}`
This tool doesn't exist - it should call search() instead. The model seems to hallucinate plausible tool names rather than using the actual MCP.
What works:
\- The MCP server is running correctly (REST API at /api/search works fine)
\- Open WebUI with the same Ollama model calls the tools correctly and gives excellent answers with context of course
\- The SSE endpoint (http://localhost:8080/sse) is accessible
I use a dockerized environment with Docker Compose running on WSL2 (Ubuntu 22.04, kernel 6.6.87.2).
Containers are:
\- Ollama: `0.13.3`
\- OpenCode: `1.0.180`
\- Open WebUI `0.6.41` (`ghcr.io/open-webui/open-webui:main`)
\- PostgreSQL `16.11` (`pgvector/pgvector:pg16`)
\- Models tested: `mistral-small-tools:latest`, [`hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M`](http://hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M)
Questions:
1. Is this a known issue with OpenCode's MCP tool discovery?
2. Do I need to configure tool schemas differently for OpenCode vs Open WebUI?
3. Are there specific models that work better with OpenCode's tool calling?
Any help is appreciated!
Robin, | 2025-12-20T19:45:04 | https://www.reddit.com/r/LocalLLaMA/comments/1prmx22/why_does_opencode_hallucinate_mcp_tool_names/ | AcadiaTraditional268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prmx22 | false | null | t3_1prmx22 | /r/LocalLLaMA/comments/1prmx22/why_does_opencode_hallucinate_mcp_tool_names/ | false | false | self | 3 | null |
Best coding and agentic models - 96GB | 31 | Hello, lurker here, I'm having a hard time keeping up with the latest models. I want to try local coding and separately have an app run by a local model.
I'm looking for recommendations for the best:
• coding model
• agentic/tool calling/code mode model
That can fit in 96GB of RAM (Mac).
Thanks! | 2025-12-20T19:35:17 | https://www.reddit.com/r/LocalLLaMA/comments/1prmp2j/best_coding_and_agentic_models_96gb/ | 34_to_34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prmp2j | false | null | t3_1prmp2j | /r/LocalLLaMA/comments/1prmp2j/best_coding_and_agentic_models_96gb/ | false | false | self | 31 | null |
Best Speech-to-Text in 2025? | 11 | I work at a company where we require calls to be transcribed in-house (no third party). We have a server with 24GB of VRAM (GeForce RTX 4090) and 64GB of RAM, running Ubuntu Server.
The recommendation I keep seeing is the Whisper models, but they seem to be about 75% accurate and get destroyed when background noise from other people is introduced.
I'm looking for opinions on the best speech-to-text models or techniques. Anyone have any thoughts? | 2025-12-20T19:28:56 | https://www.reddit.com/r/LocalLLaMA/comments/1prmjt3/best_speechtotext_in_2025/ | MindWithEase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prmjt3 | false | null | t3_1prmjt3 | /r/LocalLLaMA/comments/1prmjt3/best_speechtotext_in_2025/ | false | false | self | 11 | null |
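For the speech-to-text question above: a common step up from vanilla Whisper on noisy call audio is faster-whisper with VAD filtering; a minimal sketch, with the file name as a placeholder:

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, info = model.transcribe("call.wav", beam_size=5, vad_filter=True)
for seg in segments:
    print(f"[{seg.start:6.1f}s -> {seg.end:6.1f}s] {seg.text}")
```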
New to LangChain – What Should I Learn Next? | 0 | Hello everyone,
I am currently learning LangChain and have recently built a simple chatbot in Jupyter. However, I am eager to learn more and explore some of the more advanced concepts. I would appreciate any suggestions on what I should focus on next. For example, I have come across LangGraph and other related topics. Are these areas worth prioritizing?
I am also interested in understanding what is currently happening in the industry. Are there any exciting projects or trends in LangChain and AI that are worth following right now? As I am new to this field, I would love to get a sense of where the industry is heading.
Additionally, I am not familiar with web development and am primarily focused on AI engineering. Should I consider learning web development as well to build a stronger foundation for the future?
Any advice or resources would be greatly appreciated.
https://preview.redd.it/v6stlcpgte8g1.png?width=1350&format=png&auto=webp&s=c91bad523f6b22b8806a97d1107a0f2418f0e475
| 2025-12-20T19:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1prmegv/new_to_langchain_what_should_i_learn_next/ | Select-Day-873 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prmegv | false | null | t3_1prmegv | /r/LocalLLaMA/comments/1prmegv/new_to_langchain_what_should_i_learn_next/ | false | false | 0 | null | |
TheDrummer models meet heretic | 66 | What if I abliterate the drummer's fine tune to make them a bit less censored? So, I did that and here's the collection:
[https://huggingface.co/collections/coder3101/the-drummers](https://huggingface.co/collections/coder3101/the-drummers)
It includes:
* Magidonia-24B-v4.3
* Cydonia-24B-v4.3
There are two variants, one that reduces refusal and another that reduces KLD so as to keep the performance similar. | 2025-12-20T19:08:34 | https://www.reddit.com/r/LocalLLaMA/comments/1prm2tq/thedrummer_models_meet_heretic/ | coder3101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prm2tq | false | null | t3_1prm2tq | /r/LocalLLaMA/comments/1prm2tq/thedrummer_models_meet_heretic/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'ppZr5AFByJNJ36-7qkkzTnHUOOdqB7E3y-Y28OKFW2Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ppZr5AFByJNJ36-7qkkzTnHUOOdqB7E3y-Y28OKFW2Y.png?width=108&crop=smart&auto=webp&s=9b4a7a8def7d39d5962ad2a2f606d8700567b616', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ppZr5AFByJNJ36-7qkkzTnHUOOdqB7E3y-Y28OKFW2Y.png?width=216&crop=smart&auto=webp&s=a06c2d086b7fd95e2e8ff905223ae66cc162a591', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ppZr5AFByJNJ36-7qkkzTnHUOOdqB7E3y-Y28OKFW2Y.png?width=320&crop=smart&auto=webp&s=c741df2aa69da6cbee87047d23ea77417724cea1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ppZr5AFByJNJ36-7qkkzTnHUOOdqB7E3y-Y28OKFW2Y.png?width=640&crop=smart&auto=webp&s=500919d8bcc7b22ad88b132afa04224d677a1c31', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ppZr5AFByJNJ36-7qkkzTnHUOOdqB7E3y-Y28OKFW2Y.png?width=960&crop=smart&auto=webp&s=af6b0ab7c7a5d1da138d20afb63190cd8602a7d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ppZr5AFByJNJ36-7qkkzTnHUOOdqB7E3y-Y28OKFW2Y.png?width=1080&crop=smart&auto=webp&s=4aa557522f5fbb2adf4a17f3571fbfe87d4e9c34', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ppZr5AFByJNJ36-7qkkzTnHUOOdqB7E3y-Y28OKFW2Y.png?auto=webp&s=0c020e2393bb86b3f061d7318a7abd021a07fadb', 'width': 1200}, 'variants': {}}]} |
Downsides to Cloud Llm? | 0 | Hi yall! (Skip to end for TLDR)
New to LLMs beyond the consumer-facing front ends. For context, my main LLM has been ChatGPT for the past year or so, and I've also used Gemini/Google AI Studio. It was great; with GPT-4o and the first week of 5.1 I was even able to build a RAG to store and organize all of my medical docs and other important docs on my Mac without any knowledge of coding (besides a beginner Python course and a C++ course like frickin 4 years ago lmao)
Obviously though… I've noticed a stark downward turn in ChatGPT's performance lately. 5.2's ability to retain memory and to code correctly is abysmal, despite what OpenAI has been saying. The number of refusals for benign requests is out of hand (no, I'm not one of *those* people lmao); I'm talking about asking about basic supplementation or probiotics for getting over a cold… and it spending the majority of its time thinking about how it's not allowed to prescribe or say certain things, and rambling on about how it's not allowed to do x, y, and z…
Even while coding with GPT, I'll look over and see it thinking… and I swear half the thinking is literally it just wrestling with itself?! It's twisting itself in knots over the most basic crap. (Also yes, I know how LLMs actually work; I know it's not literally *thinking*. You get what I'm trying to say.)
Anywho, I have a newer Mac but I don't have enough RAM to download a genuinely great uncensored LLM to run locally. So I spent a few hours figuring out what Hugging Face was and how to connect a model to Inference Endpoints by creating my own endpoint, downloaded llama.cpp via my terminal, got that running, ran it through Open WebUI connected to my endpoint, and then spent a few hours fiddling with Heretic-gpt-oss and stress testing that model.
I got a bunch of refusals initially with the Heretic model too, which I figured was due to lingering echoes of its original guardrails and safety training, but I successfully got it working. It worked best when my advanced params were:
Reasoning tags: disabled
Reasoning effort - low
Temp: 1.2
Top_p 1
Repeat penalty 1.1
And then I eventually got it to create its own system prompt instructions which has worked amazingly well thus far. If anyone wants it they can dm me!
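For anyone wiring those settings up outside Open WebUI, the equivalent OpenAI-style call looks roughly like this (the endpoint URL is a placeholder; TGI-backed HF Inference Endpoints accept the literal model id "tgi", and note the OpenAI schema has no direct repeat-penalty field):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR-ENDPOINT.endpoints.huggingface.cloud/v1",  # placeholder
    api_key="hf_...",
)
resp = client.chat.completions.create(
    model="tgi",
    messages=[{"role": "user", "content": "hello"}],
    temperature=1.2,
    top_p=1.0,
)
print(resp.choices[0].message.content)
```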
ANYWAYS, all this to say: is there any real downside to using Inference Endpoints to host an LLM like this? It's fast. I've gotten great results… RAM is expensive right now. Is there an upside? Wondering if I should consider putting money into a local model or if I should just continue as is…
TLDR: currently running heretic gpt oss via inference endpoints/cloud since i dont have enough local storage to download an llm locally. At this point, with prices how they are- is it worth it to invest long term in a local llm or are cloud llms eventually the future anyways?
| 2025-12-20T18:57:49 | https://www.reddit.com/r/LocalLLaMA/comments/1prltrz/downsides_to_cloud_llm/ | Rachkstarrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prltrz | false | null | t3_1prltrz | /r/LocalLLaMA/comments/1prltrz/downsides_to_cloud_llm/ | false | false | self | 0 | null |
VRAM Advice? 24GB or 32GB for starters | 10 | Hey guys, hope it’s been a great weekend for you all
I’m working to build my rig with primary use case of hosting, fine tuning and maybe doing image/video gen locally.
With all that said, does a 4090 make any sense as of now, or will only a 5090 cut it?
The price gap is huge for me once I add the rest of the components needed for the build, but I've been waiting and waiting and waiting, to the point that I don't know what makes sense anymore.
If 24GB is just a little slower (30% as per most benchmarks), I can try to live with it, but if 32GB is in an insanely different, higher-end class performance-wise, I guess I'll have to keep waiting.
Love to know thoughts from all of you
| 2025-12-20T18:53:52 | https://www.reddit.com/r/LocalLLaMA/comments/1prlqi1/vram_advice_24gb_or_32gb_for_starters/ | RobotsMakingDubstep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prlqi1 | false | null | t3_1prlqi1 | /r/LocalLLaMA/comments/1prlqi1/vram_advice_24gb_or_32gb_for_starters/ | false | false | self | 10 | null |
Free API to extract wiki content for RAG applications | 0 | I made an API that can parse any MediaWiki-based webpage and provide clean data for RAG/training. It has 150 free monthly quotas per account and is especially useful for large, complex webpages.
For example, here's the entire entry for the History of the Roman Empire:
[https://hastebin.com/share/etolurugen.swift](https://hastebin.com/share/etolurugen.swift)
And here's the entire entry for the Emperor of Mankind from Warhammer 40k: [https://hastebin.com/share/vuxupuvone.swift](https://hastebin.com/share/vuxupuvone.swift)
# WikiExtract Universal API
# Features
1. **Triple-Check Parsing** \- Combines HTML scraping with AST parsing for 99% success rate
2. **Universal Infobox Support** \- Language-agnostic structural detection
3. **Dedicated Portal Extraction** \- Specialized parser for Portal pages
4. **Table Fidelity** \- HTML tables converted to compliant GFM Markdown
5. **Namespace Awareness** \- Smart handling of File: pages with rich metadata
6. **Disambiguation Trees** \- Structured decision trees for disambiguation pages
7. **Canonical Images** \- Resolves Fandom lazy-loaded images to full resolution
8. **Navigation Pruning** \- Removes navboxes and footer noise
9. **Attribution & Provenance** \- CC-BY-SA 3.0 compliant with contributor links
10. **Universal Wiki Support** \- Works with Wikipedia, Fandom, and any MediaWiki site
The API can be found here: [https://rapidapi.com/wikiextract-wikiextract-default/api/wikiextract-universal-api](https://rapidapi.com/wikiextract-wikiextract-default/api/wikiextract-universal-api) | 2025-12-20T17:45:23 | https://www.reddit.com/r/LocalLLaMA/comments/1prk4q7/free_api_to_extract_wiki_content_for_rag/ | Tiny_Type_1985 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prk4q7 | false | null | t3_1prk4q7 | /r/LocalLLaMA/comments/1prk4q7/free_api_to_extract_wiki_content_for_rag/ | false | false | self | 0 | null |
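A hedged sketch of calling the API above from Python; the RapidAPI headers follow the standard pattern, but the route and parameter names below are assumptions, so check the listing for the real ones:

```python
import requests

resp = requests.get(
    "https://wikiextract-universal-api.p.rapidapi.com/extract",  # assumed route
    params={"url": "https://en.wikipedia.org/wiki/Roman_Empire"},  # assumed param name
    headers={
        "X-RapidAPI-Key": "YOUR_KEY",
        "X-RapidAPI-Host": "wikiextract-universal-api.p.rapidapi.com",
    },
    timeout=30,
)
print(resp.json())
```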
Panini — a grammar-first Sanskrit tokenizer (2–4× fewer tokens than MuRIL / Qwen2) | 1 | Hey folks,
I’ve been working on Sanskrit NLP and kept running into the same wall: modern SOTA tokenizers (BPE / WordPiece) are fundamentally misaligned with highly inflected, sandhi-heavy languages like Sanskrit.
They don’t fail loudly , they fail quietly, by exploding sequence length and fragmenting semantic units into phonetic shards like `##k`, `##z`, etc.
So I built something different.
Panini Tokenizer is a deterministic, grammar-first Sanskrit tokenizer.
Instead of learning subwords statistically, it applies Pāṇinian-style morphological analysis to reverse sandhi and recover meaningful stems before tokenization.
This isn’t meant to replace BPE everywhere, it’s designed specifically for Sanskrit and closely related tasks (training, RAG, long-context reading).
# Benchmarks (complex philosophical compounds)
Average token counts over a small but adversarial test set:
* Qwen2 tokenizer: **\~21.8 tokens**
* Google MuRIL: **\~15.9 tokens**
* Panini (ours): **\~7.2 tokens**
Example:
Input: `nirapekzajYAnasAkzAtkArasAmarthyam`
* **Qwen2 (25 tokens):** `▁n | ir | ap | ek | z | a | j | Y | A | n | as | ...`
* **MuRIL (18 tokens):** `ni | ##rape | ##k | ##za | ##j | ##YA | ...`
* **Panini (6 tokens):** `▁nirapekza | jYAna | sAkzAtkAra | sAman | arthy | am`
Same input, very different representational load.
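Assuming the Hugging Face repo ships a standard tokenizer config (an assumption; it may need extra loading steps), reproducing this comparison looks like:

```python
from transformers import AutoTokenizer

panini = AutoTokenizer.from_pretrained("ArthaLabs/panini-tokenizer")
muril = AutoTokenizer.from_pretrained("google/muril-base-cased")

text = "nirapekzajYAnasAkzAtkArasAmarthyam"
print("panini:", len(panini.tokenize(text)), "muril:", len(muril.tokenize(text)))
```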
# Why this matters
* **2–4× sequence compression** on real Sanskrit compounds
* **More usable context** per forward pass (especially for long texts)
* **Semantic units stay intact**, instead of being reconstructed in attention
This doesn’t magically make a model “smart” , it just stops wasting capacity on reassembling syllables.
# Links
* Live demo (side-by-side comparison): [https://huggingface.co/spaces/ArthaLabs/panini-tokenizer-demo](https://huggingface.co/spaces/ArthaLabs/panini-tokenizer-demo)
* Tokenizer on Hugging Face: [https://huggingface.co/ArthaLabs/panini-tokenizer](https://huggingface.co/ArthaLabs/panini-tokenizer)
I’m 16, this is my first public release under ArthaLabs, and I’m mainly looking for critical feedback, especially:
* sandhi edge cases
* failure modes
* where grammar-first breaks down vs stats-first
Happy to be told where this falls apart. | 2025-12-20T17:43:53 | https://www.reddit.com/r/LocalLLaMA/comments/1prk3ge/panini_a_grammarfirst_sanskrit_tokenizer_24_fewer/ | arthalabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prk3ge | false | null | t3_1prk3ge | /r/LocalLLaMA/comments/1prk3ge/panini_a_grammarfirst_sanskrit_tokenizer_24_fewer/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Af8CN7PreNDQJ-hbpku9_lZ0xvqzUHFYxlcKuGC1eT8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Af8CN7PreNDQJ-hbpku9_lZ0xvqzUHFYxlcKuGC1eT8.png?width=108&crop=smart&auto=webp&s=2de34eec7bb8bf3cf0ae399dc26ae2322801aa67', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Af8CN7PreNDQJ-hbpku9_lZ0xvqzUHFYxlcKuGC1eT8.png?width=216&crop=smart&auto=webp&s=abd5e3452514d2925efc3a35c3d599196f2ba92e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Af8CN7PreNDQJ-hbpku9_lZ0xvqzUHFYxlcKuGC1eT8.png?width=320&crop=smart&auto=webp&s=6dc10b9bf700ca5a38437a1d77b672bde25ee9d8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Af8CN7PreNDQJ-hbpku9_lZ0xvqzUHFYxlcKuGC1eT8.png?width=640&crop=smart&auto=webp&s=685e8cdeb6bbd18ce807560958911f71ab0dd924', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Af8CN7PreNDQJ-hbpku9_lZ0xvqzUHFYxlcKuGC1eT8.png?width=960&crop=smart&auto=webp&s=4fe07c242e20ec083c3bf846b430d303712d124b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Af8CN7PreNDQJ-hbpku9_lZ0xvqzUHFYxlcKuGC1eT8.png?width=1080&crop=smart&auto=webp&s=5db66ab4fc1a3ac8e6c5a442195821e24892b573', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Af8CN7PreNDQJ-hbpku9_lZ0xvqzUHFYxlcKuGC1eT8.png?auto=webp&s=de1e91029baf0be59560cd6d2e10b1d05243aeaf', 'width': 1200}, 'variants': {}}]} |
Xiaomi’s MiMo-V2-Flash (309B model) jumping straight to the big leagues | 414 | 2025-12-20T17:39:17 | 98Saman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1prjzoh | false | null | t3_1prjzoh | /r/LocalLLaMA/comments/1prjzoh/xiaomis_mimov2flash_309b_model_jumping_straight/ | false | false | 414 | {'enabled': True, 'images': [{'id': '9jCthiIZfjo7qu3sTn-cE3ePPpYxWrl9C2mCdz-KrNg', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/uew3kkt2be8g1.jpeg?width=108&crop=smart&auto=webp&s=d2f6970be937dcfe3a547abd7eb2c590ad0a65af', 'width': 108}, {'height': 63, 'url': 'https://preview.redd.it/uew3kkt2be8g1.jpeg?width=216&crop=smart&auto=webp&s=05da47b351f739be342b628f678a8a02421ed2cd', 'width': 216}, {'height': 94, 'url': 'https://preview.redd.it/uew3kkt2be8g1.jpeg?width=320&crop=smart&auto=webp&s=b74eb7ca436a707809201ee4eb909ebf9183f8a2', 'width': 320}, {'height': 189, 'url': 'https://preview.redd.it/uew3kkt2be8g1.jpeg?width=640&crop=smart&auto=webp&s=c74cf3336896a9944ca78303c3bc0e948e805302', 'width': 640}, {'height': 284, 'url': 'https://preview.redd.it/uew3kkt2be8g1.jpeg?width=960&crop=smart&auto=webp&s=1da9f92d7342c87e26a835094d7d5bc47f015f47', 'width': 960}, {'height': 319, 'url': 'https://preview.redd.it/uew3kkt2be8g1.jpeg?width=1080&crop=smart&auto=webp&s=f0f48e8fa2ba15959f29d8aa98f1953550639741', 'width': 1080}], 'source': {'height': 606, 'url': 'https://preview.redd.it/uew3kkt2be8g1.jpeg?auto=webp&s=9fd8cc0ea9babdd9aa0d8cb2c786d51201cbc889', 'width': 2048}, 'variants': {}}]} | |||
What's the realistic "entry point" for a good local LLM experience going into 2026? | 62 | I notice a lot of questions from people asking if they can run LLMs on their 8GB or 12GB GPUs.
But I've noticed most builds fall into two camps: the 16–24GB crowd making it work with quantized models, and the absolute madlads running 96GB+ setups.
There's an interesting middle ground around 24–32GB that doesn't get talked about as much.
So I'm curious what this community thinks: **If someone's getting into local LLMs today, wants a genuinely usable experience (not just "it technically runs"), but still has budget constraints—what's the minimum VRAM you'd actually recommend?**
Excluding Macs here since they're a whole different value proposition with unified memory.
My take: 24GB feels like the sweet spot for accessibility right now. You can snag a used 3090 for reasonable money, and it opens up a lot of models that just aren't practical at 16GB. If you're willing to go AMD like me, RX 7900 XTXs can be had for under a grand.
But I'm curious if I'm off base. Are people having legitimately good experiences at 16GB with the right model choices? Or is the jump to 24GB as game-changing as it seems?
What's your "minimum viable VRAM" for someone who wants to actually *use* local LLMs, not just experiment? | 2025-12-20T17:21:59 | https://www.reddit.com/r/LocalLLaMA/comments/1prjldr/whats_the_realistic_entry_point_for_a_good_local/ | alphatrad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1prjldr | false | null | t3_1prjldr | /r/LocalLLaMA/comments/1prjldr/whats_the_realistic_entry_point_for_a_good_local/ | false | false | self | 62 | null |
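For a rough back-of-the-envelope on questions like this: a model's weight footprint is roughly parameters × bits-per-weight ÷ 8, plus headroom for the KV cache and runtime buffers. A minimal sketch — the ~4.5 bits for a Q4-class quant and the 20% overhead factor are illustrative assumptions, not measurements:

```python
# Rough VRAM estimate: quantized weights + headroom for KV cache/buffers.
def vram_gib(params_billions: float, bits_per_weight: float = 4.5,
             overhead: float = 0.20) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1024**3

for name, params in [("7B", 7), ("13B", 13), ("32B", 32), ("70B", 70)]:
    print(f"{name}: ~{vram_gib(params):.0f} GiB at a Q4-class quant")
```

By that arithmetic, 16GB tops out around 13B-class dense models with modest context, while 24GB comfortably fits 32B-class quants — roughly the jump people describe as game-changing.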
Nvidia Introduces 'NitroGen': A Foundation Model for Generalist Gaming Agents | "This research effectively validates a scalable pipeline for building general-purpose agents that can operate in unknown environments, moving the field closer to universally capable AI." | 85 | ####TL;DR:
**NitroGen demonstrates that we can accelerate the development of generalist AI agents by scraping internet-scale data rather than relying on slow, expensive manual labeling.**
**This research effectively validates a scalable pipeline for building general-purpose agents that can operate in unknown environments, moving the field closer to universally capable AI.**
---
####Abstract:
>We introduce NitroGen, a vision-action foundation model for generalist gaming agents that is trained on 40,000 hours of gameplay videos across more than 1,000 games. We incorporate three key ingredients:
>- (1) An internet-scale video-action dataset constructed by automatically extracting player actions from publicly available gameplay videos,
>- (2) A multi-game benchmark environment that can measure cross-game generalization, and
>- (3) A unified vision-action model trained with large-scale behavior cloning.
>
>NitroGen exhibits strong competence across diverse domains, including combat encounters in 3D action games, high-precision control in 2D platformers, and exploration in procedurally generated worlds. **It transfers effectively to unseen games, achieving up to 52% relative improvement in task success rates over models trained from scratch.** We release the dataset, evaluation suite, and model weights to advance research on generalist embodied agents.
---
####Layman's Explanation:
NVIDIA researchers bypassed the data bottleneck in embodied AI by identifying 40,000 hours of gameplay videos where streamers displayed their controller inputs on-screen, effectively harvesting free, high-quality action labels across more than 1,000 games. This approach proves that the "scale is all you need" paradigm, which drove the explosion of Large Language Models, is viable for training agents to act in complex virtual environments using noisy internet data.
The resulting model **verifies that large-scale pre-training creates transferable skills; the AI can navigate, fight, and solve puzzles in games it has never seen before, performing significantly better than models trained from scratch.**
By open-sourcing the model weights and the massive video-action dataset, the team has removed a major barrier to entry, allowing the community to immediately fine-tune these foundation models for new tasks instead of wasting compute on training from the ground up.
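To make "large-scale behavior cloning" concrete: the objective is plain supervised learning — predict the logged player action from the current frame and minimize cross-entropy against the label mined from the video. A minimal single-frame sketch in PyTorch (the shapes, layer sizes, and 32-action space are illustrative assumptions, not NitroGen's actual architecture):

```python
import torch
import torch.nn as nn

# Toy behavior cloning: map a game frame to one of N discrete actions.
# Real systems condition on video context with far larger encoders;
# this only illustrates the training objective.
class PolicyNet(nn.Module):
    def __init__(self, num_actions: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened feature size
            feat_dim = self.encoder(torch.zeros(1, 3, 128, 128)).shape[1]
        self.head = nn.Linear(feat_dim, num_actions)

    def forward(self, frames):
        return self.head(self.encoder(frames))

policy = PolicyNet()
opt = torch.optim.AdamW(policy.parameters(), lr=3e-4)
frames = torch.randn(16, 3, 128, 128)   # stand-in for video frames
actions = torch.randint(0, 32, (16,))   # labels mined from input overlays

loss = nn.functional.cross_entropy(policy(frames), actions)
opt.zero_grad(); loss.backward(); opt.step()
```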
---
#####Link to the Paper: https://nitrogen.minedojo.org/assets/documents/nitrogen.pdf
---
#####Link to the Project Website: https://nitrogen.minedojo.org/
---
#####Link to the HuggingFace: https://huggingface.co/nvidia/NitroGen
---
#####Link to the Open-Sourced Dataset: https://huggingface.co/datasets/nvidia/NitroGen | 2025-12-20T17:21:48 | https://v.redd.it/zdp80umr7e8g1 | 44th--Hokage | /r/LocalLLaMA/comments/1prjl7z/nvidia_introduces_nitrogen_a_foundation_model_for/ | 1970-01-01T00:00:00 | 0 | {} | 1prjl7z | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/zdp80umr7e8g1/DASHPlaylist.mpd?a=1768972914%2CM2ZlODc0ODRlYTU4Y2MyZGI1NWVjMWU5Y2RmYzQ4OTQ5MGUwZmNlYzcwMzhlZGYwODViODdiYzVmMWQ3Y2E5MQ%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/zdp80umr7e8g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/zdp80umr7e8g1/HLSPlaylist.m3u8?a=1768972914%2CNDZkZDA3ZDQ4NjVjOWI4YjJiNjM3ZjAwYjA2NmM4ZTg3NTVjM2FjNjQ3ZmE0NmNhMzZiNzFmODM4NWZmYjBmOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/zdp80umr7e8g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1prjl7z | /r/LocalLLaMA/comments/1prjl7z/nvidia_introduces_nitrogen_a_foundation_model_for/ | false | false | 85 | {'enabled': False, 'images': [{'id': 'aTRkb2lybnI3ZThnMdFL3UUz04QdHLBdqdlbFHYzvAvsN9wNsENNDP9FjT2f', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aTRkb2lybnI3ZThnMdFL3UUz04QdHLBdqdlbFHYzvAvsN9wNsENNDP9FjT2f.png?width=108&crop=smart&format=pjpg&auto=webp&s=eeb4dfe9182f62227e19e9da69cf4c510b558193', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aTRkb2lybnI3ZThnMdFL3UUz04QdHLBdqdlbFHYzvAvsN9wNsENNDP9FjT2f.png?width=216&crop=smart&format=pjpg&auto=webp&s=bccefa83c780e4a3e0d6cba76d90f78a0efd3b61', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/aTRkb2lybnI3ZThnMdFL3UUz04QdHLBdqdlbFHYzvAvsN9wNsENNDP9FjT2f.png?width=320&crop=smart&format=pjpg&auto=webp&s=76117e446f86446ddbae0cfde101234d5f0c41f2', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/aTRkb2lybnI3ZThnMdFL3UUz04QdHLBdqdlbFHYzvAvsN9wNsENNDP9FjT2f.png?width=640&crop=smart&format=pjpg&auto=webp&s=c46ce30978d6ae31784c6eb4c226690f1b495978', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/aTRkb2lybnI3ZThnMdFL3UUz04QdHLBdqdlbFHYzvAvsN9wNsENNDP9FjT2f.png?width=960&crop=smart&format=pjpg&auto=webp&s=841cfdc399c66fc889a655e9b34b5bccaf6515e0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aTRkb2lybnI3ZThnMdFL3UUz04QdHLBdqdlbFHYzvAvsN9wNsENNDP9FjT2f.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2a473aaa005da4d75f19cc409380896391968f29', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/aTRkb2lybnI3ZThnMdFL3UUz04QdHLBdqdlbFHYzvAvsN9wNsENNDP9FjT2f.png?format=pjpg&auto=webp&s=8f30cbe7c948570bcdcfbd8d2fb966de2a4c5bfd', 'width': 1080}, 'variants': {}}]} | |
Nvidia Introduces 'NitroGen': A Foundation Model for Generalist Gaming Agents | "This research effectively validates a scalable pipeline for building general-purpose agents that can operate in unknown environments, moving the field closer to universally capable AI." | 1 | [deleted] | 2025-12-20T17:20:21 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1prjk3e | false | null | t3_1prjk3e | /r/LocalLLaMA/comments/1prjk3e/nvidia_introduces_nitrogen_a_foundation_model_for/ | false | false | default | 1 | null | ||
Automating Subtitles For Videos using Whisper? | 1 | Not sure if Whisper is the best tool for this, so I wanted to ask the community. I'm currently working with a full text document that's usually broken into 15-word phrases, which I run through a TTS one at a time, but I also want to generate subtitles for that TTS audio without having to manually fit them in through a video editor.
Is there a better tool (or method) for what I'm trying to accomplish? Or is Whisper my best shot? | 2025-12-20T16:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pril1x/automating_subtitles_for_videos_using_whisper/ | Head-Investigator540 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pril1x | false | null | t3_1pril1x | /r/LocalLLaMA/comments/1pril1x/automating_subtitles_for_videos_using_whisper/ | false | false | self | 1 | null |
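One workable pipeline for this: run the rendered TTS audio through Whisper, take the segment timestamps, and write an .srt directly. A minimal sketch with the openai-whisper package (the model size and file names are placeholders):

```python
import whisper  # pip install openai-whisper

def to_srt_time(t: float) -> str:
    """Convert seconds to the SRT timestamp format HH:MM:SS,mmm."""
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02}:{int(m):02}:{int(s):02},{int((s % 1) * 1000):03}"

model = whisper.load_model("base")
result = model.transcribe("tts_output.wav")  # segments carry timestamps

with open("subtitles.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{to_srt_time(seg['start'])} --> {to_srt_time(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```

Since the source text is already known, a forced-alignment tool would give tighter timings than re-transcribing, but plain Whisper segments are usually close enough for clean TTS audio.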