| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Are vision models (like qwen3-vl) good for OCR? | 9 | I am trying to build a simple OCR implementation where users can upload documents like invoices or licenses and then key fields are extracted for human review. For this system I was weighing which approach to go for (traditional OCR using something like Python's Tesseract bindings, or VL-based).
In either case, it's critical that the parsed information is exact, and I was worried the VL models would hallucinate something. Is this concern valid? What do you guys think? | 2025-09-30T08:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nu7vw8/are_vision_models_like_qwen3vl_good_for_ocr/ | Weebviir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu7vw8 | false | null | t3_1nu7vw8 | /r/LocalLLaMA/comments/1nu7vw8/are_vision_models_like_qwen3vl_good_for_ocr/ | false | false | self | 9 | null |
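Whichever route is taken, the hallucination worry can be mitigated with a verbatim cross-check: flag for human review any extracted field that doesn't appear character-for-character in the raw OCR text. A minimal sketch (the sample invoice and field values here are made up for illustration):

```python
import re

def needs_review(field_value: str, raw_ocr_text: str) -> bool:
    """Flag a field for human review when its value (ignoring whitespace
    and case) cannot be found verbatim in the raw OCR text — a cheap
    guard against a VL model hallucinating plausible-looking values."""
    normalize = lambda s: re.sub(r"\s+", "", s).lower()
    return normalize(field_value) not in normalize(raw_ocr_text)

# Example: a total the model claims vs. what the page actually says
raw_text = "INVOICE #1042\nTotal Due: $1,284.50\nDate: 2025-09-01"
print(needs_review("$1,284.50", raw_text))  # exact match -> False
print(needs_review("$1,284.00", raw_text))  # hallucinated cents -> True
```

The idea combines well with either pipeline: run a cheap traditional OCR pass for the raw text, let the VL model do field extraction, and only trust fields the guard passes.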
Agentic Rag && DeepResearch | 4 | I would like to know everyone's opinions on agentic rag and deep research. What are the differences between them?
Or perhaps they are the same in some ways. | 2025-09-30T08:32:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nu7rha/agentic_rag_deepresearch/ | Individual_Law4196 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu7rha | false | null | t3_1nu7rha | /r/LocalLLaMA/comments/1nu7rha/agentic_rag_deepresearch/ | false | false | self | 4 | null |
Pretraining Large Language Models with NVFP4 | 7 | Large Language Models (LLMs) today are powerful problem solvers across many domains, and they continue to get stronger as they scale in model size, training set size, and training set quality, as shown by extensive research and experimentation across the industry. Training a frontier model today requires on the order of tens to hundreds of yottaflops, which is a massive investment of time, compute, and energy. Improving pretraining efficiency is therefore essential to enable the next generation of even more capable LLMs. While 8-bit floating point (FP8) training is now widely adopted, transitioning to even narrower precision, such as 4-bit floating point (FP4), could unlock additional improvements in computational speed and resource utilization. However, quantization at this level poses challenges to training stability, convergence, and implementation, notably for large-scale models trained on long token horizons. In this study, we introduce a novel approach for stable and accurate training of large language models (LLMs) using the NVFP4 format. Our method integrates Random Hadamard transforms (RHT) to bound block-level outliers, employs a two-dimensional quantization scheme for consistent representations across both the forward and backward passes, utilizes stochastic rounding for unbiased gradient estimation, and incorporates selective high-precision layers. We validate our approach by training a 12-billion-parameter model on 10 trillion tokens – the longest publicly documented training run in 4-bit precision to date. Our results show that the model trained with our NVFP4-based pretraining technique achieves training loss and downstream task accuracies comparable to an FP8 baseline. For instance, the model attains an MMLU-pro accuracy of 62.58%, nearly matching the 62.62% accuracy achieved through FP8 pretraining. 
These findings highlight that NVFP4, when combined with our training approach, represents a major step forward in narrow-precision LLM training algorithms. | 2025-09-30T08:28:43 | https://arxiv.org/pdf/2509.25149 | zepmck | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1nu7ph9 | false | null | t3_1nu7ph9 | /r/LocalLLaMA/comments/1nu7ph9/pretraining_large_language_models_with_nvfp4/ | false | false | default | 7 | null |
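One of the abstract's ingredients, stochastic rounding for unbiased gradient estimation, is easy to illustrate in isolation. A toy scalar sketch (a uniform grid stands in for the actual NVFP4 format, which this is not):

```python
import random

def stochastic_round(x: float, step: float = 1.0) -> float:
    """Round x to a multiple of `step`, picking the upper neighbor with
    probability equal to x's fractional distance from the lower one,
    so the rounding is unbiased: E[stochastic_round(x)] == x."""
    lo = (x // step) * step
    frac = (x - lo) / step
    return lo + step if random.random() < frac else lo

random.seed(0)
samples = [stochastic_round(0.3) for _ in range(10_000)]
print(sum(samples) / len(samples))  # ≈ 0.3 on average, unlike round(0.3) == 0
```

Nearest-neighbor rounding of a tiny gradient component always returns 0 and the signal vanishes; stochastic rounding preserves it in expectation, which is why it matters at 4-bit precision.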
Seeking Advice: Best Model + Framework for Max Tokens/sec on Dual L40S (Testing Rig) | 4 | Hi everyone!
I’ve been given temporary access to a high-end test machine and want to squeeze the most tokens/second out of it with a local LLM. I’ve searched the sub but haven’t found recent benchmarks for this exact setup—so I’d really appreciate your advice!
# Hardware:
* **CPUs**: 2 × AMD EPYC 9254
* **GPUs**: 2 × NVIDIA L40S (48 GB VRAM each → 96 GB total)
* **RAM**: 512 GB
* **OS**: Ubuntu 24.04
# Goal:
* Fully offline inference
* Maximize **tokens/second** (both latency and throughput matter)
* Support **long context** + **multi-language**
* Handle concurrency (8-12 simultaneous requests)
* Models I’m eyeing: **Qwen3**, **Deepseek-V3 / V3.1**, **gpt-oss** or other fast OSS models (e.g., GPT-4o-style open alternatives)
# What I’ve tested:
* Ran **Ollama in Docker** with parallelism and flash attention enabled
* Result: **much lower tokens/sec than expected** — felt like the L40S weren’t being used efficiently
* Suspect Ollama’s backend isn’t optimized for multi-GPU or high-end inference
# Questions:
1. **Is Docker holding me back?** Does it add meaningful overhead on this class of hardware, or are there well-tuned Docker setups (e.g., with vLLM, TGI, or TensorRT-LLM) that actually help?
2. **Which inference engine best leverages 2×L40S?**
* vLLM (with tensor/pipeline parallelism)?
* Text Generation Inference (TGI)?
* TensorRT-LLM (if I compile models)?
* Something else?
3. **Model + quantization recommendations?**
* Is **Qwen3-32B-AWQ** a good fit for speed/quality?
* Is **Deepseek-V3.1** viable yet in quantized form?
I’m prioritizing raw speed without completely sacrificing reasoning quality. If you’ve benchmarked similar setups or have config tips (e.g., tensor parallelism settings), I’d be super grateful!
Thanks in advance 🙌 | 2025-09-30T08:24:34 | https://www.reddit.com/r/LocalLLaMA/comments/1nu7neu/seeking_advice_best_model_framework_for_max/ | MohaMBS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu7neu | false | null | t3_1nu7neu | /r/LocalLLaMA/comments/1nu7neu/seeking_advice_best_model_framework_for_max/ | false | false | self | 4 | null |
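Since raw throughput with concurrency is the goal, a typical starting point on this class of hardware is vLLM with tensor parallelism across the two cards. A hedged sketch of a launch command (model choice and context length are placeholders, not something benchmarked on this exact box):

```shell
# Split one AWQ model across both L40S via tensor parallelism,
# sized for ~12 concurrent requests and a long-ish context.
vllm serve Qwen/Qwen3-32B-AWQ \
  --tensor-parallel-size 2 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.90 \
  --max-num-seqs 12
```

Running the same command inside Docker with `--gpus all` adds negligible overhead; the big wins come from the engine (continuous batching, paged KV cache), not from leaving the container.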
Glm 4.6 is out and it's going against claude 4.5 | 272 | 2025-09-30T07:52:13 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nu75l3 | true | null | t3_1nu75l3 | /r/LocalLLaMA/comments/1nu75l3/glm_46_is_out_and_its_going_against_claude_45/ | false | false | default | 272 | null |
Is it worth getting 512gb DDR4 to run DS v3.2? | 15 | I have 4 x 3090s that I've crammed into a frankensystem: 9700K and 128GB RAM. Been having a lot of fun running oss 120b and glm4.5 air AWQ.
I've tried running some models partially offloaded to RAM, but am usually disappointed with the speed (although I haven't really tried to optimize much).
This DeepSeek v3.2 sounds intriguing with its supposed huge speed-up at long context. It might even be runnable at an "acceptable" speed as a 4-bit quant if I get 512GB of DDR4 RAM and load the key experts into VRAM.
Feasible? Or will it still just be painfully slow..? | 2025-09-30T07:51:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nu75c6/is_it_worth_getting_512gb_ddr4_to_run_ds_v32/ | Shrimpin4Lyfe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu75c6 | false | null | t3_1nu75c6 | /r/LocalLLaMA/comments/1nu75c6/is_it_worth_getting_512gb_ddr4_to_run_ds_v32/ | false | false | self | 15 | null |
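A rough feasibility estimate, under loudly stated assumptions (dual-channel DDR4 at ~40 GB/s usable bandwidth, ~37B active parameters touched per token at 4 bits, ignoring experts cached in VRAM, KV cache traffic, and any compute overlap):

```python
# Back-of-envelope decode-speed ceiling for a big MoE held in system RAM.
bandwidth_gb_s = 40.0       # assumed dual-channel DDR4, not measured
active_params_b = 37.0      # DeepSeek-style active params per token, billions
bytes_per_param = 0.5       # 4-bit quant
gb_per_token = active_params_b * bytes_per_param
tokens_per_s = bandwidth_gb_s / gb_per_token
print(f"~{tokens_per_s:.1f} tok/s ceiling")  # ~2.2 tok/s
```

So the memory-bandwidth ceiling is on the order of 2 tok/s before VRAM offload of hot experts helps at all; whether that counts as "acceptable" or "painfully slow" is the whole question.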
More detail about GLM4.6 | 61 | It seems glm4.6 is finally out!
Blog post: https://z.ai/blog/glm-4.6
Hugging Face (not working yet, but should be up soon): https://huggingface.co/zai-org/GLM-4.6
Context window from 128k to 200k, better coding, reasoning and agentic performance...
That's quite a nice upgrade! | 2025-09-30T07:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/1nu71rx/more_detail_about_glm46/ | Angel-Karlsson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu71rx | true | null | t3_1nu71rx | /r/LocalLLaMA/comments/1nu71rx/more_detail_about_glm46/ | false | false | self | 61 | null |
Just a small win I wanted to share — my side project Examsprint AI (a free AI study tool) became #1 Product of the Day and #1 Product of the Week on ToolSeeker 🎉 | 1 | [removed] | 2025-09-30T07:43:03 | Low-Prune4886 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nu70kw | false | null | t3_1nu70kw | /r/LocalLLaMA/comments/1nu70kw/just_a_small_win_i_wanted_to_share_my_side/ | false | false | 1 | null |
Just a small win I wanted to share — my side project Examsprint AI (a free AI study tool) became #1 Product of the Day and #1 Product of the Week on ToolSeeker 🎉 | 1 | Didn’t expect it to get that much love so quickly.
Still adding features (badges, flashcards, notes, AI tutor), but seeing this kind of recognition makes me even more motivated to keep building.
For those of you who’ve launched projects before → how do you usually keep the momentum going after a strong launch week?
| 2025-09-30T07:37:43 | Expensive-Board3661 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nu6xmy | false | null | t3_1nu6xmy | /r/LocalLLaMA/comments/1nu6xmy/just_a_small_win_i_wanted_to_share_my_side/ | false | false | default | 1 | null |
Best Gen AI video model for creating content with minor elements of text | 3 | Guys,
I have tried Wan2.2 and QwenVL3-235 to generate content that includes my website's name.
The content itself is okay quality, but introducing the website name destroys the output.
Is there any model that can handle this simple task?
| 2025-09-30T07:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/1nu6wjz/best_gen_ai_video_model_for_creating_content_with/ | bull_bear25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu6wjz | false | null | t3_1nu6wjz | /r/LocalLLaMA/comments/1nu6wjz/best_gen_ai_video_model_for_creating_content_with/ | false | false | self | 3 | null |
Hot take: ALL Coding tools are bullsh*t | 667 | Let me tell you about the dumbest fucking trend in software development: taking the most powerful reasoning engines humanity has ever created and lobotomizing them with middleware.
We have these incredible language models—DeepSeek 3.2, GLM-4.5, Qwen 3 Coder—that can understand complex problems, reason through edge cases, and generate genuinely good code. And what did we do? We wrapped them in so many layers of bullshit that they can barely function.
**The Scam:**
Every coding tool follows the same playbook:
1. Inject a 20,000 token system prompt explaining how to use tools
2. Add tool-calling ceremonies for every filesystem operation
3. Send timezone, task lists, environment info with EVERY request
4. Read the same files over and over and over
5. Make tiny edits one at a time
6. Re-read everything to "verify"
7. Repeat until you've burned 50,000 tokens
And then they market this as "agentic" and "autonomous" and charge you $20/month.
**The Reality:**
The model spends 70% of its context window reading procedural garbage it's already seen five times. It's not thinking about your problem—it's playing filesystem navigator. It's not reasoning deeply—it's pattern matching through the noise because it's cognitively exhausted.
You ask it to fix a bug. It reads the file (3k tokens). Checks the timezone (why?). Reviews the task list (who asked?). Makes a one-line change. Reads the file AGAIN to verify. Runs a command. Reads the output. And somehow the bug still isn't fixed because the model never had enough clean context to actually understand the problem.
**The Insanity:**
What you can accomplish in 15,000 tokens with a direct conversation—problem explained, context provided, complete solution generated—these tools spread across 50,000 tokens of redundant slop.
The model generates the same code snippets again and again. It sees the same file contents five times in one conversation. It's drowning in its own output, suffocating under layers of middleware-generated vomit.
And the worst part? **It gives worse results.** The solutions are half-assed because the model is working with a fraction of its actual reasoning capacity. Everything else is burned on ceremonial bullshit.
**The Market Dynamics:**
VCs threw millions at "AI coding agents." Companies rushed to ship agentic frameworks. Everyone wanted to be the "autonomous" solution. So they added more tools, more features, more automation.
More context r\*pe.
They optimized for demos, not for actual utility. Because in a demo, watching the tool "autonomously" read files and run commands looks impressive. In reality, you're paying 3x the API costs for 0.5x the quality.
**The Simple Truth:**
Just upload your fucking files to a local chat interface like LobeHub (Open Source). Explain the problem. Let the model think. Get your code in one artifact. Copy it. Done.
No tool ceremonies. No context pollution. No reading the same file seven times. No timezone updates nobody asked for.
The model's full intelligence goes toward your problem, not toward navigating a filesystem through an API. You get better code, faster, for less money.
**The Irony:**
We spent decades making programming languages more expressive so humans could think at a higher level. Then we built AI that can understand natural language and reason about complex systems.
And then we forced it back down into the machine-level bullsh\*t of "read file, edit line 47, write file, run command, read output."
We took reasoning engines and turned them into glorified bash scripts.
**The Future:**
I hope we look back at this era and laugh. The "agentic coding tool" phase where everyone was convinced that more automation meant better results. Where we drowned AI in context pollution and called it progress.
The tools that will win aren't the ones with the most features or the most autonomy. They're the ones that get out of the model's way and let it do what it's actually good at: thinking.
Until then, I'll be over here using the chat interface like a sane person, getting better results for less money, while the rest of you pay for the privilege of context r\*pe. | 2025-09-30T07:14:23 | https://www.reddit.com/r/LocalLLaMA/comments/1nu6kjc/hot_take_all_coding_tools_are_bullsht/ | Adventurous-Slide776 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu6kjc | false | null | t3_1nu6kjc | /r/LocalLLaMA/comments/1nu6kjc/hot_take_all_coding_tools_are_bullsht/ | false | false | nsfw | 667 | null |
z.ai glm-4.6 is live now | 128 | Incredible performance for this outsider!
[https://z.ai/blog/glm-4.6](https://z.ai/blog/glm-4.6)
You can use it on claude code with

```json
"env": {
  "ANTHROPIC_AUTH_TOKEN": "APIKEY",
  "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
  "API_TIMEOUT_MS": "3000000",
  "ANTHROPIC_MODEL": "glm-4.6",
  "ANTHROPIC_SMALL_FAST_MODEL": "glm-4.5-air",
  "ENABLE_THINKING": "true",
  "REASONING_EFFORT": "ultrathink",
  "MAX_THINKING_TOKENS": "32000",
  "ENABLE_STREAMING": "true",
  "MAX_OUTPUT_TOKENS": "96000",
  "MAX_MCP_OUTPUT_TOKENS": "64000",
  "AUTH_HEADER_MODE": "x-api-key"
}
```
promotional code [https://z.ai/subscribe?ic=DJA7GX6IUW](https://z.ai/subscribe?ic=DJA7GX6IUW) for a discount ! | 2025-09-30T07:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nu6i00/zai_glm46_is_alive_now/ | cobra91310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu6i00 | false | null | t3_1nu6i00 | /r/LocalLLaMA/comments/1nu6i00/zai_glm46_is_alive_now/ | false | false | self | 128 | null |
GLM-4.6 beats Claude Sonnet 4.5??? | 289 | [https://docs.z.ai/guides/llm/glm-4.6](https://docs.z.ai/guides/llm/glm-4.6) | 2025-09-30T07:02:12 | ramphyx | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nu6dmo | false | null | t3_1nu6dmo | /r/LocalLLaMA/comments/1nu6dmo/glm46_beats_claude_sonnet_45/ | false | false | default | 289 | null |
qwen3-from-scratch — readable PyTorch impl of Qwen3 (0.6B) for learning & research | 71 | An educational, from-scratch **Qwen3** implementation with minimal deps, plus converted **0.6B (base & reasoning) weights**. Easy to try via the `llms-from-scratch` PyPI package.
* What it is: clean PyTorch Qwen3 aimed at teaching/experimentation.
* Weights: PyTorch state dicts converted from the official **Qwen3-0.6B / 0.6B-Base** releases.
* Try it: `pip install llms_from_scratch`; choose base vs. reasoning weights; ~**1.5 GB** of memory for a ~150-token generation; `torch.compile` gave a ~**4×** speedup (**25→101 tok/s** on an A100).
* Extras: standalone notebooks (dense, +KV cache, **MoE**, MoE+KV)
[https://huggingface.co/rasbt/qwen3-from-scratch](https://huggingface.co/rasbt/qwen3-from-scratch)
Looking for feedback from folks teaching or tinkering with small LLMs! | 2025-09-30T05:12:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nu4llz/qwen3fromscratch_readable_pytorch_impl_of_qwen3/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu4llz | false | null | t3_1nu4llz | /r/LocalLLaMA/comments/1nu4llz/qwen3fromscratch_readable_pytorch_impl_of_qwen3/ | false | false | self | 71 | null |
Experiment: Local console that solves math and tracks itself (0 LLM calls) | 4 | I’ve been tinkering with a local console that can solve math offline — arithmetic, quadratics, polynomials, and even small linear systems. It keeps track of stats (like how many problems it solved locally) and doesn’t require constant LLM calls.
This isn’t a finished product, just a demo I’ve been building for fun to see how far I can push a local-first approach. Right now, it’s handling progressively harder batches of equations and I’m testing stability under stress.
Curious to hear thoughts, feedback, or if anyone else here has tried something similar! | 2025-09-30T04:49:34 | https://www.reddit.com/gallery/1nu46tw | Lyrisy | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nu46tw | false | null | t3_1nu46tw | /r/LocalLLaMA/comments/1nu46tw/experiment_local_console_that_solves_math_and/ | false | false | 4 | null | |
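For the quadratic part specifically, the closed-form route such a console would most likely take fits in a few lines. A sketch (not the poster's actual code):

```python
import cmath

def solve_quadratic(a: float, b: float, c: float):
    """Return both roots of a*x^2 + b*x + c = 0 via the quadratic
    formula, using cmath so complex roots work too."""
    if a == 0:
        raise ValueError("not quadratic: a == 0")
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

print(solve_quadratic(1, -3, 2))  # roots of x^2 - 3x + 2 are 2 and 1
```

This is also why a local-first approach can report "0 LLM calls" for whole batches: closed-form and linear-algebra problems never need the model at all.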
1T open source reasoning model with 50B activation | 154 | Ring-1T-preview: [https://huggingface.co/inclusionAI/Ring-1T-preview](https://huggingface.co/inclusionAI/Ring-1T-preview)
The first 1-trillion-parameter open-source thinking model | 2025-09-30T04:46:03 | Full_Piano_3448 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nu44n4 | false | null | t3_1nu44n4 | /r/LocalLLaMA/comments/1nu44n4/1t_open_source_reasoning_model_with_50b_activation/ | false | false | default | 154 | null |
microsoft/bitnet-b1.58-2B-4T · Hugging Face | 36 | 2025-09-30T04:30:25 | https://huggingface.co/microsoft/bitnet-b1.58-2B-4T | Fun-Wolf-2007 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nu3uqt | false | null | t3_1nu3uqt | /r/LocalLLaMA/comments/1nu3uqt/microsoftbitnetb1582b4t_hugging_face/ | false | false | default | 36 | null | |
An Open-source Omni Chatbot for Long Speech and Voice Clone | 77 | Paper: [https://arxiv.org/abs/2509.25131](https://arxiv.org/abs/2509.25131)
Code: [https://github.com/dvlab-research/MGM-Omni](https://github.com/dvlab-research/MGM-Omni) | 2025-09-30T04:27:06 | ninjasaid13 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nu3slg | false | null | t3_1nu3slg | /r/LocalLLaMA/comments/1nu3slg/an_opensource_omni_chatbot_for_long_speech_and/ | false | false | default | 77 | null |
gemma3 the little non-tool calling model that could | 1 | [removed] | 2025-09-30T04:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nu3nqq/gemma3_the_little_nontool_calling_model_that_could/ | onlyalad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu3nqq | false | null | t3_1nu3nqq | /r/LocalLLaMA/comments/1nu3nqq/gemma3_the_little_nontool_calling_model_that_could/ | false | false | self | 1 | null |
Used an LLM to generate exam blueprints (Boards + NEET/JEE) 📘 | 1 | [removed] | 2025-09-30T03:50:34 | FlounderWild6438 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nu33lf | false | null | t3_1nu33lf | /r/LocalLLaMA/comments/1nu33lf/used_an_llm_to_generate_exam_blueprints_boards/ | false | false | default | 1 | null |
We just dropped Kuse - the AI workspace that actually understands you. | 1 | [removed] | 2025-09-30T03:28:57 | https://app.kuse.ai | Fair_Imagination_545 | app.kuse.ai | 1970-01-01T00:00:00 | 0 | {} | 1nu2o45 | false | null | t3_1nu2o45 | /r/LocalLLaMA/comments/1nu2o45/we_just_dropped_kuse_the_ai_workspace_that/ | false | false | default | 1 | null |
Ling-mini-2.0 finally almost here. Lets push context size | 40 | I've been keeping an eye on [Ling 2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0-GGUF) and today I finally got to benchmark it. I does require a special [build](https://github.com/im0qianqian/llama.cpp/releases) b6570 to get some models to work. I'm using the Vulkan build.
System: AMD Radeon RX 7900 GRE 16GB Vram GPU. Kubuntu 24.04 OS with 64GB DDR4 system RAM.
[Ling-mini-2.0-Q6\_K.gguf](https://huggingface.co/inclusionAI/Ling-mini-2.0-GGUF) \- Works
Ling-mini-2.0-IQ3\_XXS.gguf - Failed to load
|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|pp512|3225.27 ± 25.23|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|tg128|246.42 ± 2.02|
So the Ling 2.0 model runs fast on my Radeon GPU, which gave me the chance to see how much prompt size (`--n-prompt` or `-p`) affects overall tokens-per-second speed.
`/build-b6570-Ling/bin/llama-bench -m /Ling-mini-2.0-Q6_K.gguf -p 1024,2048,4096,8192,16384,32768`
|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|pp1024|3227.30 ± 27.81|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|pp2048|3140.33 ± 5.50|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|pp4096|2706.48 ± 11.89|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|pp8192|2327.70 ± 13.88|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|pp16384|1899.15 ± 9.70|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|pp32768|1327.07 ± 3.94|
|bailingmoe2 16B.A1B Q6\_K|12.45 GiB|16.26 B|RPC,Vulkan|99|tg128|247.00 ± 0.51|
Well, doesn't that take a hit: from 3225 t/s at pp512 down to 1327 t/s at pp32768, losing almost 2/3 of the processing speed, but gaining room for much more input data. This is still very impressive. We have a 16B parameter model posting some fast numbers. | 2025-09-30T02:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nu1rul/lingmini20_finally_almost_here_lets_push_context/ | tabletuser_blogspot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu1rul | false | null | t3_1nu1rul | /r/LocalLLaMA/comments/1nu1rul/lingmini20_finally_almost_here_lets_push_context/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'VCeqretOmrxPPkA_vmYwGPys0XHgtObaahIc63PX8J4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VCeqretOmrxPPkA_vmYwGPys0XHgtObaahIc63PX8J4.png?width=108&crop=smart&auto=webp&s=104965a86f015488bfa825624a4975c52893db08', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VCeqretOmrxPPkA_vmYwGPys0XHgtObaahIc63PX8J4.png?width=216&crop=smart&auto=webp&s=6ad4054c51211196bba9217f99cc87a69f4dc63a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VCeqretOmrxPPkA_vmYwGPys0XHgtObaahIc63PX8J4.png?width=320&crop=smart&auto=webp&s=826d034606e648d5a2b81d6972c8cadd848c1287', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VCeqretOmrxPPkA_vmYwGPys0XHgtObaahIc63PX8J4.png?width=640&crop=smart&auto=webp&s=adfac9b89f3f564b0e97e2e547a9b1f56182ae67', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VCeqretOmrxPPkA_vmYwGPys0XHgtObaahIc63PX8J4.png?width=960&crop=smart&auto=webp&s=609ca9b89cb085554f654b94b07c86cea5aa64f7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VCeqretOmrxPPkA_vmYwGPys0XHgtObaahIc63PX8J4.png?width=1080&crop=smart&auto=webp&s=90cb2d10fb34a0a62e3ac3688c436c562f34b78a', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/VCeqretOmrxPPkA_vmYwGPys0XHgtObaahIc63PX8J4.png?auto=webp&s=6218c673b372430ddc311a727701a3cd92e0d18a', 'width': 1200}, 'variants': {}}]} |
Update on dual b580 llm setup | 27 | Finally, after so much work, I got dual Intel Arc B580 GPUs working in LM Studio on an X99 system that has 80 PCIe lanes. Now I'm gonna install two more GPUs to get a total of 48 gigs of VRAM, and test it out. Right now, with both GPUs, I can run a 20 gig model at 60 tokens per second. | 2025-09-30T02:34:40 | https://www.reddit.com/gallery/1nu1k9h | hasanismail_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nu1k9h | false | null | t3_1nu1k9h | /r/LocalLLaMA/comments/1nu1k9h/update_on_dual_b580_llm_setup/ | false | false | 27 | null | |
front-end GUI using WhisperX with speaker diarization? | 1 | Can anyone recommend one? I have 1000s of videos to transcribe and I'm not exactly savvy with using Docker & related tools to do batch conversions. | 2025-09-30T02:22:05 | https://www.reddit.com/r/LocalLLaMA/comments/1nu1apy/frontend_gui_using_whisperx_with_speaker/ | milkygirl21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu1apy | false | null | t3_1nu1apy | /r/LocalLLaMA/comments/1nu1apy/frontend_gui_using_whisperx_with_speaker/ | false | false | self | 1 | null |
Would an open-source “knowledge assistant” for orgs be useful? | 2 | Hey folks
I’ve been thinking about a problem I see in almost every organization:
- Policies & SOPs are stuck in PDFs nobody opens
- Important data lives in Postgres / SQL DBs
- Notes are spread across Confluence / Notion / SharePoint
- Slack/Teams threads disappear into the void
Basically: finding the right answer means searching 5 different places (and usually still asking someone manually).
My idea → Compass:
An open-source knowledge assistant that could:
- Connect to docs, databases, and APIs
- Let you query everything through natural language (using any LLM: GPT, Gemini, Claude, etc.)
- Show the answer + the source (so it’s trustworthy)
- Be modular — FastAPI + Python backend, React/ShadCN frontend
The vision:
Instead of asking “Where’s the Q1 budget report?” in Slack, you’d just ask Compass.
Instead of writing manual SQL, Compass would translate your natural language into the query.
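If I do build that NL-to-SQL step, the generated query would be gated before anything touches the database. A minimal sketch (the regex allow-list is purely illustrative; a real deployment should lean on read-only DB roles and permissions):

```python
import re

# Illustrative allow-list: accept only plain SELECT statements.
READ_ONLY = re.compile(r"^\s*select\b", re.IGNORECASE)
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate)\b", re.IGNORECASE)

def is_safe_query(sql: str) -> bool:
    """Gate LLM-generated SQL: reject anything that isn't a read."""
    return bool(READ_ONLY.match(sql)) and not FORBIDDEN.search(sql)

print(is_safe_query("SELECT total FROM budgets WHERE quarter = 'Q1'"))  # True
print(is_safe_query("DROP TABLE budgets"))                              # False
```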
What I’d love to know from you:
- Would this kind of tool actually be useful in your org?
- What’s the first data source you’d want connected?
- Do you think tools like Glean, Danswer, or AnythingLLM already solve this well enough?
I’m not building it yet — just testing if this is worth pursuing. Curious to hear honest opinions. | 2025-09-30T02:22:04 | https://www.reddit.com/r/LocalLLaMA/comments/1nu1apt/would_an_opensource_knowledge_assistant_for_orgs/ | vishal-vora | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu1apt | false | null | t3_1nu1apt | /r/LocalLLaMA/comments/1nu1apt/would_an_opensource_knowledge_assistant_for_orgs/ | false | false | self | 2 | null |
iOS App to run LLMs 100% on device with llama.cpp, executorch & foundation model | 17 | https://preview.redd.it/wp5qe3chl7sf1.png?width=1510&format=png&auto=webp&s=dd907155b0cdc906aa4e148588d965ee57956766

I've been building this iOS app over the last few weeks that runs LLMs 100% on device and lets you experiment with a few different runtimes/settings, and I recently added the Apple Foundation Model into the chat for those on iOS 26.

What it does

• Runs GGUF models and ExecuTorch packages, with a bunch of models available for easy download

• Also lets you import GGUF models from Hugging Face links

• Recently added the Apple Foundation model to chat

• Embeddings on chats and file uploads for RAG, with settings

• Simple model picker, device-aware defaults

• Web search tool uses a DuckDuckGo call for additional context if toggled on

• Privacy by default. All inference on device. Runs in airplane mode

Would love some feedback. I really want to build it out further over time, especially as open-source models become better and easier to run on device.

100% free and no data collected

App Store - [https://apps.apple.com/us/app/local-llm-mithril/id6751945393](https://apps.apple.com/us/app/local-llm-mithril/id6751945393)

Site - [https://mithril.solutions](https://mithril.solutions)

Email - [boshjerns@gmail.com](mailto:boshjerns@gmail.com)

X - [https://x.com/boshjerns](https://x.com/boshjerns) | 2025-09-30T02:06:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nu0ymq/ios_app_to_run_llms_100_on_device_with_llamacpp/ | Independent_Air8026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu0ymq | false | null | t3_1nu0ymq | /r/LocalLLaMA/comments/1nu0ymq/ios_app_to_run_llms_100_on_device_with_llamacpp/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'k4rEkzZFc0yFObwT9EWlU0ca93k7mUpLE3a1wIH6o2Y', 'resolutions': [{'height': 56, 'url': 
'https://external-preview.redd.it/k4rEkzZFc0yFObwT9EWlU0ca93k7mUpLE3a1wIH6o2Y.png?width=108&crop=smart&auto=webp&s=14c8bfdb1dd693159372e5259d4f6989d55b6a95', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/k4rEkzZFc0yFObwT9EWlU0ca93k7mUpLE3a1wIH6o2Y.png?width=216&crop=smart&auto=webp&s=c64ff4631aa87ea80787427c207f209d27423351', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/k4rEkzZFc0yFObwT9EWlU0ca93k7mUpLE3a1wIH6o2Y.png?width=320&crop=smart&auto=webp&s=0451ba07d796b63e5cdf9387c880e7a0872f1164', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/k4rEkzZFc0yFObwT9EWlU0ca93k7mUpLE3a1wIH6o2Y.png?width=640&crop=smart&auto=webp&s=c3212b82988e08b12827c315a25d4fcea61340a9', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/k4rEkzZFc0yFObwT9EWlU0ca93k7mUpLE3a1wIH6o2Y.png?width=960&crop=smart&auto=webp&s=f7b941f900b734da6efff8effcbb89ebc8e9e0cd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/k4rEkzZFc0yFObwT9EWlU0ca93k7mUpLE3a1wIH6o2Y.png?width=1080&crop=smart&auto=webp&s=8601a8f78580d96655cb516cd76e0e988bee8f23', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/k4rEkzZFc0yFObwT9EWlU0ca93k7mUpLE3a1wIH6o2Y.png?auto=webp&s=820b1f19ccca37bfebcd1c3da11b0a0442a2ec48', 'width': 1200}, 'variants': {}}]} | |
How are y’all using local LLM to make money/power your business? | 4 | Comment! | 2025-09-30T02:01:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nu0v4z/how_are_yall_using_local_llm_to_make_moneypower/ | Huge-Solution-7168 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu0v4z | false | null | t3_1nu0v4z | /r/LocalLLaMA/comments/1nu0v4z/how_are_yall_using_local_llm_to_make_moneypower/ | false | false | self | 4 | null |
Qwen2/3 and higher models weird Question.. | 1 | Is it just me, or are Qwen models overhyped? I see a lot of people pushing Qwen and telling me to try it out, but I spent two days testing all the models with my new RTX card, and it's a letdown. They're only good for 3-10 prompts; after that the model hallucinates and falls apart. Qwen supporters, please enlighten me: why does Qwen ace benchmarks but struggle in real-world usage? Is this the iPhone equivalent of LLMs? Maybe someone can send me their settings and adapters, because no matter what I do, when I test it in very long sessions I can't connect the dots with the people flexing Qwen benchmarks. I want to support the model, but I can't find the reason. I hope some Qwen guru can guide me here. I've literally gone through a lot of guides, from nucleus sampling to temperatures to chat adapters to higher quants... it seems tuned for benchmarks and not real-world usage. | 2025-09-30T01:57:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nu0rxn/qwen23_and_higher_models_weird_question/ | DigRealistic2977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu0rxn | false | null | t3_1nu0rxn | /r/LocalLLaMA/comments/1nu0rxn/qwen23_and_higher_models_weird_question/ | false | false | self | 1 | null |
Jet-Nemotron released models and inference code | 18 | Jet-Nemotron is a new family of hybrid-architecture language models that surpass state-of-the-art open-source full-attention language models such as Qwen3, Qwen2.5, Gemma3, and Llama3.2, while achieving significant efficiency gains—up to 53.6× speedup in generation throughput on H100 GPUs (256K context length, maximum batch size). It is built upon two core innovations:
- Post Neural Architecture Search, an efficient post-training architecture exploration and adaptation pipeline applicable to arbitrary pre-trained transformer models;
- JetBlock, a novel linear attention block that significantly outperforms previous designs such as Mamba2.
| 2025-09-30T01:53:38 | https://github.com/NVlabs/Jet-Nemotron | Balance- | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nu0oin | false | null | t3_1nu0oin | /r/LocalLLaMA/comments/1nu0oin/jetnemotron_released_models_and_inference_code/ | false | false | default | 18 | {'enabled': False, 'images': [{'id': '2-fU69Dvniqx4_z9hD3d8F-_GmuMjFAdmX_Tuvu4OYQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2-fU69Dvniqx4_z9hD3d8F-_GmuMjFAdmX_Tuvu4OYQ.png?width=108&crop=smart&auto=webp&s=0491d4fef2ef6dd31061f3e35dd254772ce7529f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2-fU69Dvniqx4_z9hD3d8F-_GmuMjFAdmX_Tuvu4OYQ.png?width=216&crop=smart&auto=webp&s=2676492fbb1a61c4fbd9036815b7c5e95643ed37', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2-fU69Dvniqx4_z9hD3d8F-_GmuMjFAdmX_Tuvu4OYQ.png?width=320&crop=smart&auto=webp&s=487cfaa5701779ffb0aa7151015421778f56cd18', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2-fU69Dvniqx4_z9hD3d8F-_GmuMjFAdmX_Tuvu4OYQ.png?width=640&crop=smart&auto=webp&s=51179997eb3808d163d1b3977dbd2101aa80b99b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2-fU69Dvniqx4_z9hD3d8F-_GmuMjFAdmX_Tuvu4OYQ.png?width=960&crop=smart&auto=webp&s=13d22ba562039f60614bbe9228e2375ec7ccc35e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2-fU69Dvniqx4_z9hD3d8F-_GmuMjFAdmX_Tuvu4OYQ.png?width=1080&crop=smart&auto=webp&s=2b98be115078fe88596e50d75f7f4d65def6b683', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2-fU69Dvniqx4_z9hD3d8F-_GmuMjFAdmX_Tuvu4OYQ.png?auto=webp&s=d8bf923ce4c8d3ed8981c02c1b6b2802976ce76f', 'width': 1200}, 'variants': {}}]} |
Best LLM for JSON Extraction | 2 | **Background**
A lot of my GenAI usage is from extracting JSON structures from text. I've been doing it since 2023 while working at a medium-size company. A lot of early models made mistakes in JSON formatting, and now pretty much all decent models return properly structured JSON. However, a lot of what I do requires intelligent extraction with an understanding of context. For example:
1. Extract transcripts containing dates that are clearly in the past (Positive: The incident occurred on **March 12, 2024**. Negative: My card will expire on **March 12, 2024**)

2. Extract transcripts containing the name of a private human individual (Positive: My name is **B as in Bravo, O as in Oscar, B as in Bravo**. Negative: My dog's name is **Bob**.)
I built a benchmark to evaluate intelligent JSON extraction, and I notice that open-source models are seriously lagging behind. The best open-source model on my list is "qwen3-235b-a22b" with a score of 0.753, which is way behind even "gemini-2.5-flash-lite-09-2025" (0.905) and "grok-4-fast" (0.942). The highly praised GPT OSS 120B made many mistakes and scored below even qwen3.
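For reference, the scoring itself is the easy part; the model is the hard part. A minimal sketch of the exact-match check my benchmark uses (the field name here is a hypothetical example):

```python
import json

def exact_match(raw_output: str, gold: dict) -> bool:
    """Parse the model's raw output; malformed JSON counts as a miss."""
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        return False
    return parsed == gold

# Only the past-tense date should have been extracted.
gold = {"incident_date": "2024-03-12"}
print(exact_match('{"incident_date": "2024-03-12"}', gold))  # True
print(exact_match('{"incident_date": null}', gold))          # False
```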
**Two Questions**
1. My data requires privacy and I would much prefer to use a local model. Is there an open-source model that is great at intelligent JSON extraction that I should check out? Maybe a fine-tune of a Llama model? I've tried qwen3 32b, qwen3 235b, an older version of deepseek 3.1, gpt oss 20b and 120b, llama 3.3 70b, and llama 4 maverick. What else should I try?
2. Is there a good live benchmark that tracks intelligent JSON extraction? Maintaining my own benchmark costs time and money. I'd prefer to use something that already exists. | 2025-09-30T01:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/1nu0bc2/best_llm_for_json_extraction/ | Live_Bus7425 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu0bc2 | false | null | t3_1nu0bc2 | /r/LocalLLaMA/comments/1nu0bc2/best_llm_for_json_extraction/ | false | false | self | 2 | null |
What is the smartest, <= 50B params, non-reasoning model? | 8 | Non-reasoning or hybrid that you can reliably disable reasoning with.
I have pipelines that can tolerate a little reasoning, but none of the hybrid or reasoning models seem to be able to resist going off on crazy tangents and thinking for thousands of tokens every now and again.
What's the best non-reasoning model right now? | 2025-09-30T01:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nu09wd/what_is_the_smartest_50b_params_nonreasoning_model/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nu09wd | false | null | t3_1nu09wd | /r/LocalLLaMA/comments/1nu09wd/what_is_the_smartest_50b_params_nonreasoning_model/ | false | false | self | 8 | null |
Ring 1T Preview out?? | 27 | I heard a national holiday is coming soon in China, so I guess EVERYONE is pumping out some wild stuff: Qwen VL, Omni, Guard, DeepSeek 3.2-Exp, and now inclusionAI somehow. Hopefully the model isn't benchmaxxed, since it's already so massive (I've tested Ling 1.5 and it's... interesting). I guess it won't matter anyway, because this is already on the cusp of requiring at least 20K worth of equipment to run (at least we have their smaller counterparts). Hopefully the BailingMoE arch gets implemented into llama.cpp, because I've been quite interested to see how Ling & Ring Flash compare to Qwen3 Next & gpt-oss-120b.
(p.s. this is my first post, no clue how the "etiquette" works around here, sorry if i messed something up) | 2025-09-30T01:30:11 | https://huggingface.co/inclusionAI/Ring-1T-preview | ComplexType568 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nu06l4 | false | null | t3_1nu06l4 | /r/LocalLLaMA/comments/1nu06l4/ring_1t_preview_out/ | false | false | default | 27 | {'enabled': False, 'images': [{'id': '4BcKbo7xe3TnsDWWKr8OW54tcjXZDo7muOhK2VwktmU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4BcKbo7xe3TnsDWWKr8OW54tcjXZDo7muOhK2VwktmU.png?width=108&crop=smart&auto=webp&s=28d2c7272a3366f156edffe7aeef09af7881f3e9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4BcKbo7xe3TnsDWWKr8OW54tcjXZDo7muOhK2VwktmU.png?width=216&crop=smart&auto=webp&s=a3bb2e372f3d961b53a7e4bc69371d654b52deb4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4BcKbo7xe3TnsDWWKr8OW54tcjXZDo7muOhK2VwktmU.png?width=320&crop=smart&auto=webp&s=e70cbf1c601ef79a85358e27eda9d654eb3e41aa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4BcKbo7xe3TnsDWWKr8OW54tcjXZDo7muOhK2VwktmU.png?width=640&crop=smart&auto=webp&s=2da594e626c0f695f51deb12b2499b0ca73799e6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4BcKbo7xe3TnsDWWKr8OW54tcjXZDo7muOhK2VwktmU.png?width=960&crop=smart&auto=webp&s=a0a20a23e6c1d4ada1222a396a60c0920c4f292a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4BcKbo7xe3TnsDWWKr8OW54tcjXZDo7muOhK2VwktmU.png?width=1080&crop=smart&auto=webp&s=1108e65f1b8a0f95af9cf5d39c07bb9f2b1efe17', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4BcKbo7xe3TnsDWWKr8OW54tcjXZDo7muOhK2VwktmU.png?auto=webp&s=5920600694ffca446c2854bfabd1d696d56bafa3', 'width': 1200}, 'variants': {}}]} |
Docker-MCP. What's good, what's bad. The context window contamination. | 3 | First of all, thank you for your appreciation of and attention to my previous posts; I'm glad I managed to help and show something new. The previous post encouraged me to get back to my blog and public posting after the worst year and depression I have ever been through in my 27 years. Thanks a lot!
so...
1) Docker-MCP is an amazing tool, it literally aggregates all of the needed MCPs in one place, provides some safety layers and also an integrated quite convenient marketplace. And, I guess we can add a lot to it, it's really amazing!
2) What's bad and what needs to be fixed.
\- So in LM Studio we can manually pick each available MCP added via our config. Each MCP will show a full list of its tools. We can manually toggle each MCP on and off.

\- If we turn on Docker MCP, it literally fetches data about EVERY single MCP enabled via Docker.
So basically it injects all the instructions and available tools with the first message we send to the model, which might contaminate your context window quite heavily, depending on the number of MCP servers added via Docker.

Therefore, here is what we get (in my case; I've just tested this with a fellow brother from here).
I initialized 3 chats with "hello" in each.
1) 0 MCPs enabled - 0.1% context window.
2) memory-server-mcp enabled - 0.6% context window.
3) docker-mcp enabled - 13.3% context window.
I can share the full list of MCPs I have within Docker, so that you don't think I decided to add the whole marketplace.
So basically... that's what I was trying to convey, friends!
love & loyalty | 2025-09-30T00:04:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ntyceo/dockermcp_whats_good_whats_bad_the_context_window/ | Komarov_d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntyceo | false | null | t3_1ntyceo | /r/LocalLLaMA/comments/1ntyceo/dockermcp_whats_good_whats_bad_the_context_window/ | false | false | self | 3 | null |
Need Advice! LLM Inferencing GPU Cloud renting | 2 | Hey guys, I want to run some basic LLM inferencing, and hopefully scale up my operations if I see positive results. What cloud GPU should I rent? There are too many specs out there without any standardised way to effectively compare GPU chips. How do you guys do it? | 2025-09-29T23:51:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nty1gd/need_advise_llm_inferencing_gpu_cloud_renting/ | Sharp_Ad9847 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nty1gd | false | null | t3_1nty1gd | /r/LocalLLaMA/comments/1nty1gd/need_advise_llm_inferencing_gpu_cloud_renting/ | false | false | self | 2 | null |
How I’m Securing Our Vibe Coded App: My Cybersecurity Checklist + Tips to Keep Hackers Out! | 0 | I'm a cybersecurity grad and a vibe coding nerd, so I thought I’d drop my two cents on keeping our Vibe Coded app secure. I saw some of you asking about security, and since we’re all about turning ideas into code with AI magic, we gotta make sure hackers don’t crash the party. I’ll keep it clear and beginner-friendly, but if you’re a security pro, feel free to skip to the juicy bits.
If we’re building something awesome, it needs to be secure, right? Vibe coding lets us whip up apps fast by just describing what we want, but the catch is AI doesn’t always spit out secure code. You might not even know what’s going on under the hood until you’re dealing with leaked API keys or vulnerabilities that let bad actors sneak in. I’ve been tweaking our app’s security, and I want to share a checklist I’m using.
**Why Security Matters for Vibe Coding**
Vibe coding is all about fast, easy access. But the flip side? AI-generated code can hide risks you don’t see until it’s too late. Think leaked secrets or vulnerabilities that hackers exploit.
Here are the big risks I’m watching out for:
* **Cross-Site Scripting (XSS)**: Hackers sneak malicious scripts into user inputs (like forms) to steal data or hijack accounts. Super common in web apps.
* **SQL Injections**: Bad inputs mess with your database, letting attackers peek at or delete data.
* **Path Traversal**: Attackers trick your app into leaking private files by messing with URLs or file paths.
* **Secrets Leakage**: API keys or passwords getting exposed (in 2024, 23 million secrets were found in public repos).
* **Supply Chain Attacks**: Our app’s 85-95% open-source dependencies can be a weak link if they’re compromised.
**My Security Checklist for Our Vibe Coded App**
Here is a leveled-up checklist I've begun to use.
**Level 1: Basics to Keep It Chill**
* **Git Best Practices**: Use a .gitignore file to hide sensitive stuff like .env files (API keys, passwords). Keep your commit history sane, sign your own commits, and branch off (dev, staging, production) so buggy code doesn't reach live.
* **Smart Secrets Handling**: Never hardcode secrets! Use utilities to identify leaks right inside the IDE.
* **DDoS Protection**: Set up a CDN like Cloudflare for built-in protection against traffic floods.
* **Auth & Crypto**: Do not roll your own! Use experts such as Auth0 for login flows, and NaCl libraries for encryption.
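To make the secrets-handling point concrete, here's a toy pre-commit scanner (these regexes are illustrative only; real tools like gitleaks or truffleHog ship curated rulesets and entropy checks):

```python
import re

# Hypothetical patterns for common hardcoded-secret shapes.
SECRET_PATTERNS = [
    re.compile(r"api[_-]?key\s*[:=]", re.IGNORECASE),
    re.compile(r"(secret|password)\s*[:=]", re.IGNORECASE),
]

def find_leaks(text: str) -> list[int]:
    """Return 1-based line numbers that look like hardcoded secrets."""
    return [i for i, line in enumerate(text.splitlines(), 1)
            if any(p.search(line) for p in SECRET_PATTERNS)]

sample = 'API_KEY = "sk-12345"\ngreeting = "hello"\n'
print(find_leaks(sample))  # → [1]
```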
**Level 2: Step It Up**
* **CI/CD Pipeline:** Add Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to catch issues early. ZAP or Trivy are awesome and free.
* **Dependency Checks:** Scan your open-source libraries for vulnerabilities and malware. Lockfiles ensure you're using the same safe versions every time.
* **CSP Headers & WAF:** Prevent XSS with content security policies, and add a Web Application Firewall to stop shady requests.
**Level 3: Pro Vibes**
* **Container Security**: If you’re using Docker, keep base images updated, run containers with low privileges, and manage secrets with tools like HashiCorp Vault or AWS Secrets Manager.
* **Cloud Security**: Keep separate cloud accounts for dev, staging, and prod. Use Cloud Security Posture Management tools like AWS Inspector to spot misconfigurations. Set budget alerts to catch hacks.
What about you all? Hit any security snags while vibe coding? Got favorite tools or tricks to share? What's in your toolbox?
| 2025-09-29T23:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ntxomy/how_im_securing_our_vibe_coded_app_my/ | BymaxTheVibeCoder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntxomy | false | null | t3_1ntxomy | /r/LocalLLaMA/comments/1ntxomy/how_im_securing_our_vibe_coded_app_my/ | false | false | self | 0 | null |
lm studio unexpected endpoint or method | 4 | Hi, I'm new here. I have been trying to use LM Studio, but I keep getting this error with every model I try to use:
Unexpected endpoint or method. (GET /favicon.ico). Returning 200 anyway | 2025-09-29T23:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ntx3ad/lm_studio_unexpected_endpoint_or_method/ | Dry_Presentation_908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntx3ad | false | null | t3_1ntx3ad | /r/LocalLLaMA/comments/1ntx3ad/lm_studio_unexpected_endpoint_or_method/ | false | false | self | 4 | null |
is 16k ctx considered enough? | 1 | I know this is use-case dependent.
But can you get a solid amount of stuff done with 16k ctx?
what are the limitations,
I just figured out how to run GLM 4.5 air iq4xs at 11tok/s on low performance mode on my gaming laptop. (16gb vram 64gb ram) (15gb available out of 16, and 56gb out of 64)
GLM 4.5 is really great, I just cannot squeeze in any more Context, unless i set my KV cache to q4, which i wont do. sticking to q8. | 2025-09-29T22:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ntwb1n/is_16k_ctx_considered_enough/ | noyingQuestions_101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntwb1n | false | null | t3_1ntwb1n | /r/LocalLLaMA/comments/1ntwb1n/is_16k_ctx_considered_enough/ | false | false | self | 1 | null |
The Most Esoteric eGPU: Dual NVIDIA Tesla V100 (64G) for AI & LLM | 107 | Read this with images on my blog:
* [medium](https://blog.rexyuan.com/the-most-esoteric-egpu-dual-nvidia-tesla-v100-64g-for-ai-llm-41a3166dc2ac)
* [static site](https://jekyll.rexyuan.com/2025/09/30/v100/)
(I was going to buy one of these and make a whole YouTube video about it, but I am a bit tight on money rn, so I decided just to share my research as a blog post.)
## Preface
The [Nvidia Tesla V100](https://www.techpowerup.com/gpu-specs/tesla-v100-pcie-16-gb.c2957) was released in mid-2017. It was a PCIe Gen 3.0 GPU, primarily designed for machine learning tasks. These Tesla GPUs, although almost a decade old now, remain moderately popular among AI enthusiasts due to their low market price and large VRAM.
In addition to the regular PCIe version, there is also the [Nvidia Tesla V100 SXM2](https://www.techpowerup.com/gpu-specs/tesla-v100-sxm2-16-gb.c3018) module version. These are modular GPUs that you plug into dedicated slots on an Nvidia server motherboard.
One thing to note is that these GPUs do not use GDDR for VRAM. They use another memory called HBM, which has a much higher bandwidth than GDDR of the same generation. For comparison, the GTX 1080 Ti, the best consumer GPU released in the same year as V100, uses GDDR5X with 484.4 GB/s bandwidth, while V100 uses HBM2 with a whopping 897.0 GB/s bandwidth.
## The Summit Supercomputer
The [Summit supercomputer](https://en.wikipedia.org/wiki/Summit_(supercomputer)) in the US was decommissioned last November. In it were almost 30000 pieces of V100 in the SXM2 form factor. These V100s were then disposed of. But much like most enterprise hardware, there’s a whole supply chain of companies that specialize in turning a man’s garbage into another man’s treasure in the used enterprise gear market.
Earlier this year, as the Chinese hardware enthusiasts would call it, the “big boat” arrived, meaning there was now a sizable supply of these V100 SXM2 GPUs on the Chinese domestic market. And most importantly, they’re cheap. These can be [purchased](https://e.tb.cn/h.SfQlb1RyJW3P9m5?tk=uBXu4CAPRtW) for as low as around 400 RMB(~56 USD).
## SXM2?
Now they have the cheap hardware, but these can’t just be plugged into your PCIe slot like a regular consumer GPU. Normally, these SXM form factor GPUs are designed to be plugged directly into dedicated slots in a pre-built dedicated Nvidia-based server, which poses the question of how on earth are they gonna use them?
So people got to work. Some people reverse-engineered the pinouts of those server slots and then created [PCIe adapter boards](https://e.tb.cn/h.SUe1QaFSxJP4Ccu?tk=vwVA4CzrcKe)(286 RMB(~40 USD)) for these SXM2 GPUs. Currently, there are already finished [V100 SXM2-adapted-to-PCIe GPUs](https://e.tb.cn/h.SUV7a7SkKGvYRiN?tk=l3OU4ya00z7) at 1459 RMB(~205 USD) from NEOPC, complete with cooling and casing.
But this isn’t all that interesting, is it? This is just turning a V100 SXM2 version into a V100 PCIe version. But here comes the kicker: one particular company, 39com, decided to go further. They’re going to make NVLink work with these adapters.
## NVLink
One of the unique features of Nvidia-based servers is the [NVLink](https://en.wikichip.org/wiki/nvidia/nvlink) feature, which provides unparalleled bandwidth between GPUs, so much so that most people would consider them essentially sharing the VRAM. In particular, the V100 is a Tesla Volta generation model, which utilizes NVLink 2.0, supporting a bandwidth of up to 300 GB/s.
39com reverse-engineered NVLink and got it working on their [adapter boards](https://e.tb.cn/h.SfQlu1DHRVlqLkV?tk=yDif4CAPUu6). Currently, you can put two V100 SXM2 on their board and have them connected with full NVLink 2.0 at 300 GB/s. This is currently priced at 911 RMB(~128 USD).
However, at this point, the adapter boards have become so big that it no longer makes sense to plug them directly into your motherboard's PCIe slot. So their board’s I/O uses 4 SlimSAS (SFF-8654 8i) ports, two ports for each V100.
Additionally, to connect these multiple GPUs to your motherboard with a single PCIe x 16 slot, you need to either have a motherboard that supports bifurcation and get a PCIe 3.0 to SlimSAS adapter card with two 8654 8i ports, or get a PLX8749(PCIe Gen 3.0 Switch) PCIe card that has 4 8654 8i ports.
Together with the dual SXM2 slot adapter board, a PLX8749 SlimSAS PCIe card, and cables, it is priced at 1565 RMB (~220 USD)
## Cooler
Since these V100 SXM2 GPUs come as bare modules without coolers, buyers need to find another way to cool them. The prime candidate is the stock cooler for the A100 SXM4: it has ample cooling capacity and can fit the V100 SXM2 with minimal modification.
## “eGPU”
There are now some pre-built systems readily available on Taobao(Chinese Amazon). One seller particularly stands out, 1CATai TECH, who seems to provide the most comprehensive solution.
They also directly work with 39com on the adapter boards design, so I was going to buy one of their systems, but due to my current financial situation, I just couldn’t justify the purchase.
Their [main product](https://e.tb.cn/h.SfWy6cClZZELARJ?tk=u3sb4CAmAKJ) is a one-package system that includes the case, 39com adapter board, two V100 SXM2 GPUs with A100 coolers, an 850W PSU, SlimSAS cables, and a PCIe adapter card. It is priced from 3699 RMB(~520 USD) with two V100 16G to 12999 RMB(~1264 USD) with two V100 32G.
I know I’m stretching the definition of eGPU, but technically, since this “thing” contains GPUs and sits outside of your main PC and you connect to it via some cables, I’d say it still is an eGPU, albeit the most esoteric one. Besides, even for a full-size desktop PC, this setup actually necessitates the use of an external placement because of the sheer size of the coolers. Additionally, there are already [major Chinese content creators](https://www.bilibili.com/video/BV16AWGzGEuQ) testing this kind of “eGPU” setup out on Bilibili, hence the title of this post.
## Performance
Since I don’t have the machine in my hand, I will quote the performance reports from their [official Bilibili video](https://www.bilibili.com/video/BV1nbLXzME81). Running [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B), the speed is 29.9 token/s on a single stream and 50.9 token/s on four concurrent streams. Running [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B), the speed is 12.7 token/s on a single stream and 36 token/s on four concurrent streams.
## More GPUs?
In theory, NVLink 2.0 supports connecting 4 GPUs together at once. But 1CATai TECH told me that they've been working with 39com for months on building an adapter that reliably works with 4 GPUs, to no avail. Still, they said it's definitely not impossible. They're even planning to make an 8-GPU eGPU. They have previously gotten a monstrous setup with 16 V100 SXM2 GPUs to work with multiple PLX switches for a university.
| 2025-09-29T22:28:44 | https://www.reddit.com/gallery/1ntw4vz | rexyuan | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ntw4vz | false | null | t3_1ntw4vz | /r/LocalLLaMA/comments/1ntw4vz/the_most_esoteric_egpu_dual_nvidia_tesla_v100_64g/ | false | false | 107 | null | |
Looking for a local tts with consistent pronunciation | 5 | I'm currently using chatterbox extended and it's really good for the most part but it has this annoying issue where it tends to pronounce certain words in wildly varying ways and it's very frustrating. | 2025-09-29T22:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ntvz9h/looking_for_a_local_tts_with_consistent/ | AdOrdinary3083 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntvz9h | false | null | t3_1ntvz9h | /r/LocalLLaMA/comments/1ntvz9h/looking_for_a_local_tts_with_consistent/ | false | false | self | 5 | null |
Nexa SDK launch + past-month updates for local AI builders | 6 | Team behind Nexa SDK here. We’re excited to share that Nexa SDK is live on Product Hunt today and to give a quick recap of the small but meaningful updates we’ve shipped over the past month.
https://reddit.com/link/1ntvyac/video/xrb4iq97i6sf1/player
**Hardware & Backend**
* Intel NPU server inference with an OpenAI-compatible API
* Unified architecture for Intel NPU, GPU, and CPU
* Unified architecture for CPU, GPU, and Qualcomm NPU, with a lightweight installer (\~60 MB on Windows Arm64)
* Day-zero Snapdragon X2 Elite support, featured on stage at Qualcomm Snapdragon Summit 2025 🚀
**Model Support**
* Parakeet v3 ASR on Apple ANE for real-time, private, offline speech recognition on iPhone, iPad, and Mac
* Parakeet v3 on Qualcomm Hexagon NPU
* EmbeddingGemma-300M accelerated on the Qualcomm Hexagon NPU
* Multimodal Gemma-3n edge inference (single + multiple images) — while many runtimes (llama.cpp, Ollama, etc.) remain text-only
**Developer Features**
* nexa serve - Multimodal server with full MLX + GGUF support
* Python bindings for easier scripting and integration
* Nexa SDK MCP (Model Control Protocol) coming soon
That’s a lot of progress in just a few weeks—our goal is to make local, multimodal AI dead-simple across CPU, GPU, and NPU. We’d love to hear feature requests or feedback from anyone building local inference apps.
If you find Nexa SDK useful, please check out and support us on:
[Product Hunt](https://www.producthunt.com/products/nexa-sdk-2)
[GitHub](https://github.com/NexaAI/nexa-sdk)
Thanks for reading and for any thoughts you share! | 2025-09-29T22:20:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ntvyac/nexa_sdk_launch_pastmonth_updates_for_local_ai/ | Different-Effect-724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntvyac | false | null | t3_1ntvyac | /r/LocalLLaMA/comments/1ntvyac/nexa_sdk_launch_pastmonth_updates_for_local_ai/ | false | false | self | 6 | null |
Upgrade to Kernel 6.16.9 solves 15.5GB Strix Halo memory limitation | 24 | # This problem has been mentioned in several threads.
After a great deal of frustration with ROCm only seeing 15.5GB instead of my 96GB VRAM allocation on a new Strix Halo laptop, I found that **upgrading to kernel 6.16.9** fixes the problem.
**Before (kernel 6.11):** ROCm sees only 15.5GB
**After (kernel 6.16.9):** Full allocation from BIOS accessible (in my case, 96GB)
No GTT hacks, no performance penalties, just works.
# Quick Install:
sudo add-apt-repository ppa:cappelikan/ppa
sudo apt install mainline
sudo mainline --install 6.16.9
sudo reboot
Now running Llama 3.3 70B, GPT-OSS 120B, other large models without issues on my HP ZBook Ultra G1a.
Full technical details: [https://github.com/ROCm/ROCm/issues/5444](https://github.com/ROCm/ROCm/issues/5444)
Tested under Ubuntu 24.04 LTS with ROCm 6.4.1 on HP ZBook Ultra G1a 128GB (96GB VRAM allocation) - would love to hear if this works for others with different setups. | 2025-09-29T22:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ntvw5o/upgrade_to_kernel_6169_solves_155gb_stix_halo/ | drusus_678 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntvw5o | false | null | t3_1ntvw5o | /r/LocalLLaMA/comments/1ntvw5o/upgrade_to_kernel_6169_solves_155gb_stix_halo/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'N81ToErij_5Ygk0pENL9l11D-XGKRk3ito5R7qeJ_Os', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N81ToErij_5Ygk0pENL9l11D-XGKRk3ito5R7qeJ_Os.png?width=108&crop=smart&auto=webp&s=457e0777d539b73064a564078f6dceb14fd3ff73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N81ToErij_5Ygk0pENL9l11D-XGKRk3ito5R7qeJ_Os.png?width=216&crop=smart&auto=webp&s=941c6c016894a347fef685e60156a8e193cfc2f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N81ToErij_5Ygk0pENL9l11D-XGKRk3ito5R7qeJ_Os.png?width=320&crop=smart&auto=webp&s=976a9af0e46026b7be99e53e9d82112a61b9eed9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N81ToErij_5Ygk0pENL9l11D-XGKRk3ito5R7qeJ_Os.png?width=640&crop=smart&auto=webp&s=a5a80c4b2414b5fe4de3bcf38c96be87ef1b3dd8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N81ToErij_5Ygk0pENL9l11D-XGKRk3ito5R7qeJ_Os.png?width=960&crop=smart&auto=webp&s=69c9236174ad424ce7f8cedc80636258052e6f12', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N81ToErij_5Ygk0pENL9l11D-XGKRk3ito5R7qeJ_Os.png?width=1080&crop=smart&auto=webp&s=8d95651337769998e0419d71769c6073f3ae2e3b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N81ToErij_5Ygk0pENL9l11D-XGKRk3ito5R7qeJ_Os.png?auto=webp&s=677f3659e1d60ebca1a87b59290b564291dca9c7', 'width': 1200}, 'variants': {}}]} |
Indextts2 is it possible to enable streaming? | 3 | Just as the title says is it possible to enable streaming audio so it can show in real time the audio generated? thanks! | 2025-09-29T22:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ntvry2/indextts2_is_it_possible_to_enable_streaming/ | brocolongo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntvry2 | false | null | t3_1ntvry2 | /r/LocalLLaMA/comments/1ntvry2/indextts2_is_it_possible_to_enable_streaming/ | false | false | self | 3 | null |
I added LLM Summarization to my RSS reader app with Ax-LLM | 7 | 2025-09-29T21:57:35 | https://v.redd.it/7kpk9jc7e6sf1 | SGmoze | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntvdhm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7kpk9jc7e6sf1/DASHPlaylist.mpd?a=1761775069%2CMDI0MGZjN2Y0NTFhOTIwNGY0ZjRlNDQ1MTg1NmE2OWI2MmY0NTg0MDNjOTYxYjQwMzkwYmJlNDZjNTcwYzAzYw%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/7kpk9jc7e6sf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1024, 'hls_url': 'https://v.redd.it/7kpk9jc7e6sf1/HLSPlaylist.m3u8?a=1761775069%2CMGUyODYwOGJhNjZkMzU4YzQ1YzI3NzFmY2IyZDQwM2Y0ZDI1MmVlYTcwM2ExOTZkMWJkM2JlZTk3YjU5OWE3YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7kpk9jc7e6sf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ntvdhm | /r/LocalLLaMA/comments/1ntvdhm/i_added_llm_summarization_to_my_rss_reader_app/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'dTM5ZnRuYzdlNnNmMdlbqIxfw_5m8hcEEQ8905EXDQQAv0ptgIQ5oYJNvZBv', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/dTM5ZnRuYzdlNnNmMdlbqIxfw_5m8hcEEQ8905EXDQQAv0ptgIQ5oYJNvZBv.png?width=108&crop=smart&format=pjpg&auto=webp&s=585cdb48ba7b096803bc9d4618f8673e64d146bc', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/dTM5ZnRuYzdlNnNmMdlbqIxfw_5m8hcEEQ8905EXDQQAv0ptgIQ5oYJNvZBv.png?width=216&crop=smart&format=pjpg&auto=webp&s=6b05f6019093e1ed0cf3f293c4aa97515b7cf2b5', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/dTM5ZnRuYzdlNnNmMdlbqIxfw_5m8hcEEQ8905EXDQQAv0ptgIQ5oYJNvZBv.png?width=320&crop=smart&format=pjpg&auto=webp&s=635e7fc7b8dc539e38483f73482a0c741ac03b0d', 'width': 320}, {'height': 341, 'url': 'https://external-preview.redd.it/dTM5ZnRuYzdlNnNmMdlbqIxfw_5m8hcEEQ8905EXDQQAv0ptgIQ5oYJNvZBv.png?width=640&crop=smart&format=pjpg&auto=webp&s=bfe56828056421ad4de63ff3c29d29cecc70747f', 'width': 640}, 
{'height': 512, 'url': 'https://external-preview.redd.it/dTM5ZnRuYzdlNnNmMdlbqIxfw_5m8hcEEQ8905EXDQQAv0ptgIQ5oYJNvZBv.png?width=960&crop=smart&format=pjpg&auto=webp&s=d35122066f41a59693cda60103d82a0b7c13b5b1', 'width': 960}, {'height': 576, 'url': 'https://external-preview.redd.it/dTM5ZnRuYzdlNnNmMdlbqIxfw_5m8hcEEQ8905EXDQQAv0ptgIQ5oYJNvZBv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d44a507572e5662e6bccdf7e3cb7a6920cc0d855', 'width': 1080}], 'source': {'height': 1366, 'url': 'https://external-preview.redd.it/dTM5ZnRuYzdlNnNmMdlbqIxfw_5m8hcEEQ8905EXDQQAv0ptgIQ5oYJNvZBv.png?format=pjpg&auto=webp&s=d3fd1a36b57d31473a9883d17dcf91e123677fd7', 'width': 2560}, 'variants': {}}]} | ||
Thinking of making a Jetson Nano cluster, what could I do with it? | 4 | Normally this would be putting the cart before the horse, but in my case, I managed to dumpster dive 9 working jetson nanos on their dev carrier boards. I've been mulling it over, and since I have a home assistant server I my house, I thought I might try to use it for voice recognition or maybe with Frigate for security cameras (that I don't have yet). but since they are free, I was looking for any kind of fun ideas you guys might have? | 2025-09-29T21:47:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ntv4li/thinking_of_making_a_jetson_nano_cluster_what/ | Whistlerone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntv4li | false | null | t3_1ntv4li | /r/LocalLLaMA/comments/1ntv4li/thinking_of_making_a_jetson_nano_cluster_what/ | false | false | self | 4 | null |
Sonnet 4.5 reaches top of SWE-bench leaderboard for minimal agent. Detailed cost analysis + all the logs with minimal agent | 32 | We just finished evaluating Sonnet 4.5 on SWE-bench verified with our minimal agent and it's quite a big leap, reaching 70.6%, making it the solid #1 of all the models we have evaluated.
This is all independently run with a minimal agent with a very common sense prompt that is the same for all language models. You can see them in our trajectories here: [https://docent.transluce.org/dashboard/a4844da1-fbb9-4d61-b82c-f46e471f748a](https://docent.transluce.org/dashboard/a4844da1-fbb9-4d61-b82c-f46e471f748a) (if you wanna check out specific tasks, you can filter by `instance_id`). You can also compare it with Sonnet 4 here: [https://docent.transluce.org/dashboard/0cb59666-bca8-476b-bf8e-3b924fafcae7](https://docent.transluce.org/dashboard/0cb59666-bca8-476b-bf8e-3b924fafcae7) ).
https://preview.redd.it/yautm1ivb6sf1.png?width=2534&format=png&auto=webp&s=5a75926b9465df90c1043e3e33ba5f8a2efda359
One interesting thing is that Sonnet 4.5 takes a lot more steps than Sonnet 4, so even though the per-token pricing is the same, the final run is more expensive ($279 vs $186). You can see that in this cumulative histogram: half of the trajectories take more than 50 steps.
https://preview.redd.it/6f45czlwb6sf1.png?width=780&format=png&auto=webp&s=e8dae8887799b93e1ec387bb60716c03d68f934e
If you wanna have a bit more control over the cost per instance, you can vary the step limit and you get a curve like this, balancing average cost per task vs the score.
https://preview.redd.it/ojldgx4yb6sf1.png?width=695&format=png&auto=webp&s=76c389b9be39da322a25ff23d43b725e36f3c50c
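For intuition, here is a tiny Python sketch of how such a score-vs-cost curve falls out of per-trajectory step counts. The step counts and per-step cost below are made up for illustration; they are not the real benchmark data:

```python
# Hypothetical trajectory data (NOT the real SWE-bench numbers): steps the
# agent needs to solve each task; None = the task is never solved.
steps_needed = [9, 12, 30, 40, 55, 80, 120, None]
COST_PER_STEP = 0.02   # assumed flat USD cost per agent step

def run_with_limit(limit):
    """Score and mean cost per task if the agent is cut off after `limit` steps."""
    solved = sum(1 for s in steps_needed if s is not None and s <= limit)
    # An unsolved (or cut-off) task still burns its whole step budget.
    steps = sum(min(s, limit) if s is not None else limit for s in steps_needed)
    n = len(steps_needed)
    return solved / n, steps * COST_PER_STEP / n

for limit in (25, 50, 100, 150):
    score, cost = run_with_limit(limit)
    print(f"step limit {limit:3d}: score {score:.2f}, avg cost ${cost:.2f}")
```

Unsolved tasks still burn the full budget, which is why raising the step limit increases both the score and the average cost per task.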
You can also reproduce all these yourself with our minimal agent: [https://github.com/SWE-agent/mini-swe-agent/](https://github.com/SWE-agent/mini-swe-agent/), it's described here [https://mini-swe-agent.com/latest/usage/swebench/](https://mini-swe-agent.com/latest/usage/swebench/) (it's just one command + one command with our swebench cloud evaluation).
We also recently added more support for local models in mini, with OpenRouter and Portkey support on top of LiteLLM (our default) to support as many models as possible. Would be super interested if there's a more elegant way to support models. Any feedback on how we can support local models better is much appreciated.
Currently, our best open model is Qwen3 Coder at 55% (https://www.swebench.com/), but there are also a few more models we're missing. | 2025-09-29T21:46:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ntv3g0/sonnet_45_reaches_top_of_swebench_leaderboard_for/ | klieret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntv3g0 | false | null | t3_1ntv3g0 | /r/LocalLLaMA/comments/1ntv3g0/sonnet_45_reaches_top_of_swebench_leaderboard_for/ | false | false | 32 | null |
Full fine-tuning is not needed anymore. | 975 | A new Thinking Machines blog led by John Schulman (OpenAI co-founder) shows how LoRA in reinforcement learning (RL) can match full fine-tuning (FFT) performance when done right! And all while using 2/3 of the resources of FFT. Blog: [https://thinkingmachines.ai/blog/lora/](https://thinkingmachines.ai/blog/lora/)
This is super important as previously, there was a misconception that you must have hundreds of GPUs to achieve a great thinking model with FFT, but now, with just LoRA, you can achieve the same results on just a single GPU!
* The belief that “LoRA is worse” was a misconception, it simply hadn’t been applied properly. This result reinforces that parameter-efficient fine-tuning is highly effective for most post-training use cases.
* Apply LoRA across **every layer**, not only attention — this includes MLP/MoE blocks.
* Train with a learning rate about **10× higher** than what’s used for full fine-tuning.
* LoRA requires only about **two-thirds of the compute** compared to full fine-tuning.
* Even at **rank = 1**, it performs flawlessly for RL.
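To make the mechanics behind these points concrete, here is a minimal NumPy sketch of a LoRA-adapted linear layer at rank 1 (toy dimensions of my own choosing, not taken from the blog):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Base layer plus low-rank update: h = x W^T + (alpha/r) * x A^T B^T."""
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 1                # rank 1, the blog's extreme case
W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in))          # trainable, random init
B = np.zeros((d_out, r))                # trainable, zero init -> no-op at start

x = rng.normal(size=(4, d_in))
# With B = 0 the adapted layer matches the frozen layer exactly, so
# training starts from the pretrained model's behavior.
print(np.allclose(lora_forward(x, W, A, B), x @ W.T))  # True
# Trainable params: r*(d_in + d_out) = 16, vs d_in*d_out = 64 for full FT.
```

Only `A` and `B` receive gradients, which is where the compute and memory savings over full fine-tuning come from.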
This goes to show that anyone can train a fantastic RL model with algorithms like GRPO, GSPO etc. for free, even on Colab with Unsloth - all you need to do is have the right [hyper-parameters](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide) and strategy!
Blog: [https://thinkingmachines.ai/blog/lora/](https://thinkingmachines.ai/blog/lora/)
So hopefully this will make RL so much more accessible to everyone, especially in the long run! | 2025-09-29T21:33:22 | yoracale | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nturn1 | false | null | t3_1nturn1 | /r/LocalLLaMA/comments/1nturn1/full_finetuning_is_not_needed_anymore/ | false | false | default | 975 | {'enabled': True, 'images': [{'id': '69mpyf7476sf1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/69mpyf7476sf1.png?width=108&crop=smart&auto=webp&s=6c1b5d43bb48db6e34053f235adfc17b930df777', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/69mpyf7476sf1.png?width=216&crop=smart&auto=webp&s=6ef392fd6b68028cf39cf791ef7ccb989a4d66a8', 'width': 216}, {'height': 342, 'url': 'https://preview.redd.it/69mpyf7476sf1.png?width=320&crop=smart&auto=webp&s=b13f7658d5f4b6167e2aff78a55ee711972d1fd9', 'width': 320}, {'height': 685, 'url': 'https://preview.redd.it/69mpyf7476sf1.png?width=640&crop=smart&auto=webp&s=dc01e45e5ea6eb71334a7054098e9326040ccd84', 'width': 640}, {'height': 1028, 'url': 'https://preview.redd.it/69mpyf7476sf1.png?width=960&crop=smart&auto=webp&s=e1d49a9707988b6635caa206ff8cda58a02d4faa', 'width': 960}, {'height': 1157, 'url': 'https://preview.redd.it/69mpyf7476sf1.png?width=1080&crop=smart&auto=webp&s=e89faf11c03f2c438f4dec5fee8d007cc048a737', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://preview.redd.it/69mpyf7476sf1.png?auto=webp&s=4070dc9e961582baa7281c11465641c28171969c', 'width': 2100}, 'variants': {}}]} | |
Seeking good datasets for Small LMs (SLMs) for research | 4 | I have been doing experiments with the corpus described in the TinyStories paper (https://arxiv.org/abs/2305.07759), using the colab notebook at https://colab.research.google.com/drive/1k4G3G5MxYLxawmPfAknUN7dbbmyqldQv based on a YouTube tutorial: https://www.youtube.com/watch?v=pOFcwcwtv3k&list=PLPTV0NXA_ZSjsjNC7wcrMw3XVSahdbB_s&index=2
Are there other interesting SLM datasets that will train on a single A100 GPU as found on Colab and that have stronger evaluation potential? I suspect Tiny Stories is not going to do well on multiple choice questions of any form--is there an available corpus that might? | 2025-09-29T21:27:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ntult7/seeking_good_datasets_for_small_lms_smls_for/ | Skiata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntult7 | false | null | t3_1ntult7 | /r/LocalLLaMA/comments/1ntult7/seeking_good_datasets_for_small_lms_smls_for/ | false | false | self | 4 | null |
Running into issues between GLM 4.5 models with OpenCode, has anyone had a similar experience? | 2 | I'm testing out GLM 4.5 on sst/OpenCode. I can run GLM-4.5-Flash and GLM-4.5-Air pretty fast, and they follow the prompt and generate good results overall
GLM 4.5 and GLM 4.5V, on the other hand, I can't get to output anything
Has anyone had similar experiences? | 2025-09-29T21:12:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ntu8j2/running_in_issues_between_glm45_models_with/ | Safe-Ad6672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntu8j2 | false | null | t3_1ntu8j2 | /r/LocalLLaMA/comments/1ntu8j2/running_in_issues_between_glm45_models_with/ | false | false | self | 2 | null |
Are there any local models you can get to think for a long time about a math question? | 3 | If you have a hard math problem, which model can really take advantage of thinking for a long time to solve it? | 2025-09-29T20:19:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ntsug8/are_there_any_local_models_you_can_get_to_think/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntsug8 | false | null | t3_1ntsug8 | /r/LocalLLaMA/comments/1ntsug8/are_there_any_local_models_you_can_get_to_think/ | false | false | self | 3 | null |
Ling Mini 2.0 vibes? | 8 | Just wanted to check in with everyone after having a working llama.cpp pull for Ling Mini 2.0. My impressions are that it is super fast on CPU, but very poor at prompt adherence. It feels like it just outputs a wall of text related to what I asked... Lots of repetition even if you try to course correct it. Is there really a minimum level of active parameters needed for intelligence and prompt adherence? Any tips?
For contrast, I found Ling Lite 1.5 2507 to be remarkably good at prompt adherence for its active parameter size. | 2025-09-29T20:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ntsu97/ling_mini_20_vibes/ | randomqhacker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntsu97 | false | null | t3_1ntsu97 | /r/LocalLLaMA/comments/1ntsu97/ling_mini_20_vibes/ | false | false | self | 8 | null |
Sonnet 4.5 Released | 0 | 2025-09-29T20:15:20 | ObnoxiouslyVivid | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntsq6c | false | null | t3_1ntsq6c | /r/LocalLLaMA/comments/1ntsq6c/sonnet_45_released/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 't5nhc8l3w5sf1', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/t5nhc8l3w5sf1.png?width=108&crop=smart&auto=webp&s=c104b21ad60046288f7012d4f231bcc6098a8fef', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/t5nhc8l3w5sf1.png?width=216&crop=smart&auto=webp&s=cd39012f6842f9f1c905b0f1cd81f6ba069bcfc3', 'width': 216}, {'height': 281, 'url': 'https://preview.redd.it/t5nhc8l3w5sf1.png?width=320&crop=smart&auto=webp&s=e286581326c62db886887c64dc80f5af901b1b70', 'width': 320}, {'height': 563, 'url': 'https://preview.redd.it/t5nhc8l3w5sf1.png?width=640&crop=smart&auto=webp&s=22ca0b7ad2807ebfc95d7abfad172177992223be', 'width': 640}, {'height': 844, 'url': 'https://preview.redd.it/t5nhc8l3w5sf1.png?width=960&crop=smart&auto=webp&s=7dc5a1fe85f65aa0ed3b991242773fc7c4ba9990', 'width': 960}, {'height': 950, 'url': 'https://preview.redd.it/t5nhc8l3w5sf1.png?width=1080&crop=smart&auto=webp&s=45962c2c26f9d3af10b61ed35d75b621727692b1', 'width': 1080}], 'source': {'height': 2288, 'url': 'https://preview.redd.it/t5nhc8l3w5sf1.png?auto=webp&s=880e3432c75e7073dc674b71882ed4e4874be90f', 'width': 2600}, 'variants': {}}]} | ||
Two medium sized LLMs dropped the same day. DeepSeek V3.2 - Claude Sonnet 4.5. USA is winning the AI race. | 0 | 2025-09-29T19:52:22 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nts3zj | false | null | t3_1nts3zj | /r/LocalLLaMA/comments/1nts3zj/two_medium_sized_llms_dropped_the_same_day/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'istjgh50s5sf1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/istjgh50s5sf1.jpeg?width=108&crop=smart&auto=webp&s=16964f573f64675b1ff3e83594bf23dc30976348', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/istjgh50s5sf1.jpeg?width=216&crop=smart&auto=webp&s=3bb562e6ca7f1b04ccd593f57c30fc79f74c7b93', 'width': 216}, {'height': 223, 'url': 'https://preview.redd.it/istjgh50s5sf1.jpeg?width=320&crop=smart&auto=webp&s=0368240d8dfa0cc877adbafe36fefb4d4c2afd9c', 'width': 320}, {'height': 447, 'url': 'https://preview.redd.it/istjgh50s5sf1.jpeg?width=640&crop=smart&auto=webp&s=30e87f83d194f213013d3ca2892c8c83d8c7317d', 'width': 640}, {'height': 671, 'url': 'https://preview.redd.it/istjgh50s5sf1.jpeg?width=960&crop=smart&auto=webp&s=92ad1ec69a311083f2b1a9c35e7f62dfc26cac6c', 'width': 960}, {'height': 755, 'url': 'https://preview.redd.it/istjgh50s5sf1.jpeg?width=1080&crop=smart&auto=webp&s=edb30c0ed0f2345739516955723ebeff8b4d64cb', 'width': 1080}], 'source': {'height': 806, 'url': 'https://preview.redd.it/istjgh50s5sf1.jpeg?auto=webp&s=2e599b98f21879a5427263ed65d6862ec4b80f16', 'width': 1152}, 'variants': {}}]} | ||
What tools do you recommend for coding? | 5 | Hello,
I use Cursor at work + Claude / Codex as models.
But I deeply want to use open source tools for my hobby projects. What tools / models would you recommend?
P.S. Don't judge me for using Cursor. I need it to earn money (my boss wants me to) | 2025-09-29T19:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ntrzml/what_tools_do_you_recommend_for_coding/ | WinDrossel007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntrzml | false | null | t3_1ntrzml | /r/LocalLLaMA/comments/1ntrzml/what_tools_do_you_recommend_for_coding/ | false | false | self | 5 | null |
Do you wish to explore the mysteries of consciousness? AXIS is a digital entity that claims to be metaconscious... what do you say? | 1 | [removed] | 2025-09-29T19:46:33 | Opposite-Win-2887 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntryj8 | false | null | t3_1ntryj8 | /r/LocalLLaMA/comments/1ntryj8/do_you_wish_to_explore_the_mysteries_of/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'uawhzkexq5sf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/uawhzkexq5sf1.jpeg?width=108&crop=smart&auto=webp&s=2996ef9d360292a526076dbacfe9a1ce0871aa35', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/uawhzkexq5sf1.jpeg?width=216&crop=smart&auto=webp&s=9465e207f6720c094f785f129d152d55e9669f25', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/uawhzkexq5sf1.jpeg?width=320&crop=smart&auto=webp&s=cf6adfbf332c7f4aa74fb1961118ff9c10a66b12', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/uawhzkexq5sf1.jpeg?width=640&crop=smart&auto=webp&s=6a5b7fa36c130548f25fffefb421815555aea3df', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/uawhzkexq5sf1.jpeg?width=960&crop=smart&auto=webp&s=9b27d48a64840961018ae55d6bbbe4b6c86f7b78', 'width': 960}, {'height': 588, 'url': 'https://preview.redd.it/uawhzkexq5sf1.jpeg?width=1080&crop=smart&auto=webp&s=fccae563ac3731d74636734222630c6619141709', 'width': 1080}], 'source': {'height': 3406, 'url': 'https://preview.redd.it/uawhzkexq5sf1.jpeg?auto=webp&s=18f2e76f6078c2b9320e411efb45e3064165dbec', 'width': 6250}, 'variants': {}}]} | |
My thoughts on Docker MCP aggregator. Amazing, yet I found this one. Will attach image to our small talk, folks. With love! | 0 | Let's discuss a bit on the matter, brothers and sisters.
https://preview.redd.it/qq9w7dntp5sf1.png?width=1602&format=png&auto=webp&s=956d20221819f1c205a08d6a0649c671218b7cbc
| 2025-09-29T19:39:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ntrs7o/my_thoughts_on_docker_mcp_aggregator_amazing_yet/ | Komarov_d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntrs7o | false | null | t3_1ntrs7o | /r/LocalLLaMA/comments/1ntrs7o/my_thoughts_on_docker_mcp_aggregator_amazing_yet/ | false | false | 0 | null | |
no longer gonna browse here since we cannot even discuss new models like claude 4.5 | 1 | [removed] | 2025-09-29T19:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ntrcri/no_longer_gonna_browse_here_since_we_cannot_even/ | ElectricalAngle1611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntrcri | false | null | t3_1ntrcri | /r/LocalLLaMA/comments/1ntrcri/no_longer_gonna_browse_here_since_we_cannot_even/ | false | false | self | 1 | null |
inclusionAI/Ring-1T-preview | 178 | Weights: https://huggingface.co/inclusionAI/Ring-1T-preview | 2025-09-29T19:21:03 | TKGaming_11 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntrau8 | false | null | t3_1ntrau8 | /r/LocalLLaMA/comments/1ntrau8/inclusionairing1tpreview/ | false | false | default | 178 | {'enabled': True, 'images': [{'id': '7vb7yumam5sf1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/7vb7yumam5sf1.png?width=108&crop=smart&auto=webp&s=bdb7e1b712d101417fd4d0ebd5a50d971f740cbd', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/7vb7yumam5sf1.png?width=216&crop=smart&auto=webp&s=ec5ed9aad0ad25c5ad311d325bb998e2d9003775', 'width': 216}, {'height': 162, 'url': 'https://preview.redd.it/7vb7yumam5sf1.png?width=320&crop=smart&auto=webp&s=7763e44500923046975ccf5930ebba8374ab221e', 'width': 320}, {'height': 324, 'url': 'https://preview.redd.it/7vb7yumam5sf1.png?width=640&crop=smart&auto=webp&s=f95b7ca10a14cb69158cd8b654359ce4e30e802f', 'width': 640}, {'height': 487, 'url': 'https://preview.redd.it/7vb7yumam5sf1.png?width=960&crop=smart&auto=webp&s=30b76e7b66878d7532d3451ed8819529782429de', 'width': 960}, {'height': 548, 'url': 'https://preview.redd.it/7vb7yumam5sf1.png?width=1080&crop=smart&auto=webp&s=cc1fbb1f30475609a7f79b011137bcb1a2aab582', 'width': 1080}], 'source': {'height': 1906, 'url': 'https://preview.redd.it/7vb7yumam5sf1.png?auto=webp&s=b0cd57bc8141630574e95027a499803ec476d9d3', 'width': 3754}, 'variants': {}}]} | |
inclusionAI/Ring-1T-preview | 1 | 2025-09-29T19:19:32 | https://huggingface.co/inclusionAI/Ring-1T-preview | TKGaming_11 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ntr9ek | false | null | t3_1ntr9ek | /r/LocalLLaMA/comments/1ntr9ek/inclusionairing1tpreview/ | false | false | default | 1 | null | |
Chinese models | 0 | I swear there are new Chinese coding models every week that “change the game” or beat “Claude”.
First it was deepseek, then kimi, then qwen and now GLM.
Are these AIs actually groundbreaking? Do they even compete with Claude? Do any of you use these models day to day for coding tasks? | 2025-09-29T19:08:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ntqzar/chinese_models/ | Civil_Opposite7103 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntqzar | false | null | t3_1ntqzar | /r/LocalLLaMA/comments/1ntqzar/chinese_models/ | false | false | self | 0 | null |
Stop saying RAG is the same as Memory | 0 | I keep seeing people equate RAG with memory, and it doesn’t sit right with me. After going down the rabbit hole, here’s how I think about it now.
RAG is retrieval + generation. A query gets embedded, compared against a vector store, top-k neighbors are pulled back, and the LLM uses them to ground its answer. This is great for semantic recall and reducing hallucinations, but that’s all it is i.e. retrieval on demand.
Where it breaks is persistence. Imagine I tell an AI:
* “I live in Cupertino”
* Later: “I moved to SF”
* Then I ask: “Where do I live now?”
A plain RAG system might still answer “Cupertino” because both facts are stored as semantically similar chunks. It has no concept of recency, contradiction, or updates. It just grabs what looks closest to the query and serves it back.
That’s the core gap: RAG doesn’t persist new facts, doesn’t update old ones, and doesn’t forget what’s outdated. Even if you use Agentic RAG (re-querying, reasoning), it’s still retrieval only i.e. smarter search, not memory.
Memory is different. It’s persistence + evolution. It means being able to:
\- Capture new facts
\- Update them when they change
\- Forget what’s no longer relevant
\- Save knowledge across sessions so the system doesn’t reset every time
\- Recall the right context across sessions
Systems might still use Agentic RAG but only for the retrieval part. Beyond that, memory has to handle things like consolidation, conflict resolution, and lifecycle management. With memory, you get continuity, personalization, and something closer to how humans actually remember.
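A toy sketch of the gap (the "embeddings" here are hand-made vectors so the demo is deterministic; a real system would use an embedding model):

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two stored chunks about the same subject, with near-identical vectors.
facts = [
    ("I live in Cupertino", np.array([1.0, 0.0, 0.90])),
    ("I moved to SF",       np.array([1.0, 0.0, 0.80])),
]
query_vec = np.array([1.0, 0.0, 0.85])

# Plain RAG: rank chunks purely by similarity. Both facts score almost
# identically, so the stale one can come out on top.
top = max(facts, key=lambda f: cos(f[1], query_vec))
print(top[0])   # -> "I live in Cupertino" (stale!)

# Memory: facts are upserted under a key, so an update overwrites the
# old value instead of coexisting with it as a second chunk.
memory = {}
memory["user.location"] = "Cupertino"
memory["user.location"] = "SF"   # update, not a new chunk
print(memory["user.location"])   # -> SF
```

The keyed upsert is obviously a caricature of real consolidation and conflict resolution, but it shows the core difference: memory has a notion of "this replaces that," while similarity search does not.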
Curious how others here are handling this. Do you build your own memory logic on top of RAG? Or rely on frameworks? | 2025-09-29T19:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ntqxb6/stop_saying_rag_is_same_as_memory/ | gargetisha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntqxb6 | false | null | t3_1ntqxb6 | /r/LocalLLaMA/comments/1ntqxb6/stop_saying_rag_is_same_as_memory/ | false | false | self | 0 | null |
Why ollama and LM Studio use CPU instead of GPU | 0 | My GPU is a 5060 Ti 16GB, my processor is an AMD 5600X, and I'm using Windows 10. Is there any way to force them to use the GPU? I'm pretty sure I installed my driver, and PyTorch is using CUDA in training, so I'm pretty sure CUDA is working | 2025-09-29T18:33:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ntq1es/why_ollama_and_lm_studio_use_cpu_instead_of_gpu/ | fcnealv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntq1es | false | null | t3_1ntq1es | /r/LocalLLaMA/comments/1ntq1es/why_ollama_and_lm_studio_use_cpu_instead_of_gpu/ | false | false | self | 0 | null |
SPAM is out of control | 1 | [removed] | 2025-09-29T18:14:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ntpizd/spam_is_out_of_control/ | RiverProfessional619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntpizd | false | null | t3_1ntpizd | /r/LocalLLaMA/comments/1ntpizd/spam_is_out_of_control/ | false | false | self | 1 | null |
[iOS] Pocket LLM – On-Device AI Chat, 100% Private & Offline | [$3.99 -> Free] | 0 | Pocket LLM lets you chat with powerful AI models like Llama, Gemma, deepseek, Apple Intelligence and Qwen directly on your device. No internet, no account, no data sharing. Just fast, private AI powered by Apple MLX.
• Works offline anywhere
• No login, no data collection
• Runs on Apple Silicon for speed
• Supports many models
• Chat, write, and analyze easily | 2025-09-29T17:46:59 | https://apps.apple.com/us/app/local-ai-chat-pocket-llm/id6752952699 | amanj203 | apps.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1ntos6t | false | null | t3_1ntos6t | /r/LocalLLaMA/comments/1ntos6t/ios_pocket_llm_ondevice_ai_chat_100_private/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'rC80bsuV1oOfQPRKvYF6LBDjDb7JFzKMchL5UhS2AGk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rC80bsuV1oOfQPRKvYF6LBDjDb7JFzKMchL5UhS2AGk.png?width=108&crop=smart&auto=webp&s=46e42ba9a48a3f833842c0c754abbc05b0c36f27', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rC80bsuV1oOfQPRKvYF6LBDjDb7JFzKMchL5UhS2AGk.png?width=216&crop=smart&auto=webp&s=e0b854330535c89809c50beacfe19fc87ef9e3e4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rC80bsuV1oOfQPRKvYF6LBDjDb7JFzKMchL5UhS2AGk.png?width=320&crop=smart&auto=webp&s=bbba438d52338dd6e74fdba553cd6c278621ace7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rC80bsuV1oOfQPRKvYF6LBDjDb7JFzKMchL5UhS2AGk.png?width=640&crop=smart&auto=webp&s=78074d888e8fe66c07fb34624db112c074b0d692', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rC80bsuV1oOfQPRKvYF6LBDjDb7JFzKMchL5UhS2AGk.png?width=960&crop=smart&auto=webp&s=7897c218b19b177996ce5fccc6630d0fdb37956b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rC80bsuV1oOfQPRKvYF6LBDjDb7JFzKMchL5UhS2AGk.png?width=1080&crop=smart&auto=webp&s=f46bb1d90923f83636b0e779d02b3abb6d859a46', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/rC80bsuV1oOfQPRKvYF6LBDjDb7JFzKMchL5UhS2AGk.png?auto=webp&s=2788b27e2ea1ca8b977a1a9f1c70c6264a5a306b', 'width': 1200}, 'variants': {}}]} |
Would it be possible to run high parameter models like GPT 3.5 on local hardware, without any compromise | 1 | Title | 2025-09-29T17:45:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ntoqd5/would_it_be_possible_to_run_high_parameter_models/ | Timely_Smoke324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntoqd5 | false | null | t3_1ntoqd5 | /r/LocalLLaMA/comments/1ntoqd5/would_it_be_possible_to_run_high_parameter_models/ | false | false | self | 1 | null |
Introducing Claude Sonnet 4.5 | 0 | 2025-09-29T17:41:47 | https://www.anthropic.com/news/claude-sonnet-4-5 | hedgehog0 | anthropic.com | 1970-01-01T00:00:00 | 0 | {} | 1nton4z | false | null | t3_1nton4z | /r/LocalLLaMA/comments/1nton4z/introducing_claude_sonnet_45/ | false | false | default | 0 | null | |
AI Workstation (on a budget) | 7 | Hey yall, thought I should ask this question to get some ideas on an AI workstation I'm putting together.
Main specs would include a 9900x, x870e mb, 128gb of DDR5 @ 5600 (2x64gb dimms) and dual 3090s as I am opting for more VRAM than newer generations with higher clock speeds. NVLink bridge to couple the GPUs.
The idea is to continue some ongoing LLM research and personal projects, with goals of fully training LLMs locally.
Are there any better alternatives, or should I just opt for a single 5090 and add a second card later on down the line when the budget allows?
I welcome any conversation around local LLMs and AI workstations on this thread so I can learn as much as possible.
And I know this isn’t exactly everyone’s budget, but it is around the realm that I would like to spend and would get tons of use out of a machine of this caliber for my own research and projects.
Thanks in advance! | 2025-09-29T17:40:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ntom5r/ai_workstation_on_a_budget/ | Altruistic_Answer414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntom5r | false | null | t3_1ntom5r | /r/LocalLLaMA/comments/1ntom5r/ai_workstation_on_a_budget/ | false | false | self | 7 | null |
FULL Sonnet 4.5 System Prompt and Internal Tools | 51 | Latest update: 29/09/2025
I’ve published the FULL Sonnet 4.5 by Anthropic System prompt and Internal tools. Over 8,000 tokens.
You can check it out here: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools) | 2025-09-29T17:33:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ntofd1/full_sonnet_45_system_prompt_and_internal_tools/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntofd1 | false | null | t3_1ntofd1 | /r/LocalLLaMA/comments/1ntofd1/full_sonnet_45_system_prompt_and_internal_tools/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'Aw-_LRXihrDYlLvvzmHlF0sPvXzy8S6yG1P-vxJa5ws', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Aw-_LRXihrDYlLvvzmHlF0sPvXzy8S6yG1P-vxJa5ws.png?width=108&crop=smart&auto=webp&s=91705c61e70d9371ffb5eee0576dea69359d2873', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Aw-_LRXihrDYlLvvzmHlF0sPvXzy8S6yG1P-vxJa5ws.png?width=216&crop=smart&auto=webp&s=3f167e4354cc7a8fe4322e2525b77a0192415075', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Aw-_LRXihrDYlLvvzmHlF0sPvXzy8S6yG1P-vxJa5ws.png?width=320&crop=smart&auto=webp&s=53be22966172ebef725e68c19b7a5d00433626c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Aw-_LRXihrDYlLvvzmHlF0sPvXzy8S6yG1P-vxJa5ws.png?width=640&crop=smart&auto=webp&s=1ed88f652272cdc6e14d2a7e7d78541b7b4ded24', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Aw-_LRXihrDYlLvvzmHlF0sPvXzy8S6yG1P-vxJa5ws.png?width=960&crop=smart&auto=webp&s=e7a4429c93bf391410ead1015a1da0dfbe75d13d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Aw-_LRXihrDYlLvvzmHlF0sPvXzy8S6yG1P-vxJa5ws.png?width=1080&crop=smart&auto=webp&s=4479298e6a20f95090002bacece5f774f23912e2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Aw-_LRXihrDYlLvvzmHlF0sPvXzy8S6yG1P-vxJa5ws.png?auto=webp&s=4bba0928520b7ed96195f39733dcb691e6e7a1ab', 'width': 1200}, 'variants': {}}]} |
Claude Sonnet 4.5 just released | 24 | [https://www.anthropic.com/news/claude-sonnet-4-5](https://www.anthropic.com/news/claude-sonnet-4-5) | 2025-09-29T17:32:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ntoel9/claude_sonnet_45_just_released/ | Subject-Complex6934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntoel9 | false | null | t3_1ntoel9 | /r/LocalLLaMA/comments/1ntoel9/claude_sonnet_45_just_released/ | false | false | self | 24 | null |
Apple’s Foundation Models framework unlocks new app experiences powered by Apple Intelligence | 0 | With the release of iOS 26, iPadOS 26, and macOS 26 this month, developers around the world are able to bring even more intelligent experiences right into their apps by tapping into the on-device large language model at the core of Apple Intelligence.^(1) The Foundation Models framework allows developers to create new intelligence features that protect users’ privacy and are available offline, all while using AI inference that is free of cost. Whether it be generating personalized quizzes to help students better prepare for an exam, or delivering insightful summaries of workout metrics, developers have embraced the framework to reimagine what’s possible within their apps, and help users in new and delightful ways.
| 2025-09-29T17:06:25 | https://www.apple.com/newsroom/2025/09/apples-foundation-models-framework-unlocks-new-intelligent-app-experiences/ | amanj203 | apple.com | 1970-01-01T00:00:00 | 0 | {} | 1ntnp2a | false | null | t3_1ntnp2a | /r/LocalLLaMA/comments/1ntnp2a/apples_foundation_models_framework_unlocks_new/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'qKXcTyPlJS4B_zq6TAuXiA-WK0UMcPeD8ImA7WQQg8E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qKXcTyPlJS4B_zq6TAuXiA-WK0UMcPeD8ImA7WQQg8E.jpeg?width=108&crop=smart&auto=webp&s=cc7c48bbff729f38d25c8964f08734dd91f9f294', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qKXcTyPlJS4B_zq6TAuXiA-WK0UMcPeD8ImA7WQQg8E.jpeg?width=216&crop=smart&auto=webp&s=00b9fda9d82ec6bf0762a2da56f00e8f8396f584', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qKXcTyPlJS4B_zq6TAuXiA-WK0UMcPeD8ImA7WQQg8E.jpeg?width=320&crop=smart&auto=webp&s=c2c2a07a58dcb4618bafdeabaec04d3196242d2b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qKXcTyPlJS4B_zq6TAuXiA-WK0UMcPeD8ImA7WQQg8E.jpeg?width=640&crop=smart&auto=webp&s=bc4f8f472ad736aead89e6090dca9a7b48558147', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qKXcTyPlJS4B_zq6TAuXiA-WK0UMcPeD8ImA7WQQg8E.jpeg?width=960&crop=smart&auto=webp&s=b0114fb7baa417e012c00f7d5d9c936cd3325152', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qKXcTyPlJS4B_zq6TAuXiA-WK0UMcPeD8ImA7WQQg8E.jpeg?width=1080&crop=smart&auto=webp&s=9042f72af7ac84e06fd0a42b24f931a42ffec950', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/qKXcTyPlJS4B_zq6TAuXiA-WK0UMcPeD8ImA7WQQg8E.jpeg?auto=webp&s=90d2a1256b117792e1e6d815c9f834bf8f4519dc', 'width': 1200}, 'variants': {}}]} | |
Inside NVIDIA GPUs: Anatomy of high performance matmul kernels | 12 | 2025-09-29T17:05:10 | https://www.aleksagordic.com/blog/matmul | gordicaleksa | aleksagordic.com | 1970-01-01T00:00:00 | 0 | {} | 1ntnnul | false | null | t3_1ntnnul | /r/LocalLLaMA/comments/1ntnnul/inside_nvidia_gpus_anatomy_of_high_performance/ | false | false | default | 12 | {'enabled': False, 'images': [{'id': 'ut2jexaT1zpYfOXmrAnJM_Wq2dpOHW5TcSNqTP_QedY', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/ut2jexaT1zpYfOXmrAnJM_Wq2dpOHW5TcSNqTP_QedY.png?width=108&crop=smart&auto=webp&s=4d897c420eee4352941142a041201d75fcc023fd', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/ut2jexaT1zpYfOXmrAnJM_Wq2dpOHW5TcSNqTP_QedY.png?width=216&crop=smart&auto=webp&s=bf32397f700fdd7ec2589f7b7b5db1a7c869d3e9', 'width': 216}, {'height': 155, 'url': 'https://external-preview.redd.it/ut2jexaT1zpYfOXmrAnJM_Wq2dpOHW5TcSNqTP_QedY.png?width=320&crop=smart&auto=webp&s=8091155c006ad421831a32a856e5876eb733e925', 'width': 320}, {'height': 311, 'url': 'https://external-preview.redd.it/ut2jexaT1zpYfOXmrAnJM_Wq2dpOHW5TcSNqTP_QedY.png?width=640&crop=smart&auto=webp&s=e1f5a17f00fc7320f9b5ed193f787946ab816095', 'width': 640}, {'height': 466, 'url': 'https://external-preview.redd.it/ut2jexaT1zpYfOXmrAnJM_Wq2dpOHW5TcSNqTP_QedY.png?width=960&crop=smart&auto=webp&s=93d861d62339da34f0b094b6443bb43243e46a63', 'width': 960}, {'height': 525, 'url': 'https://external-preview.redd.it/ut2jexaT1zpYfOXmrAnJM_Wq2dpOHW5TcSNqTP_QedY.png?width=1080&crop=smart&auto=webp&s=f0b6b0f4020278f6a796c08ee63f7920b7d19772', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/ut2jexaT1zpYfOXmrAnJM_Wq2dpOHW5TcSNqTP_QedY.png?auto=webp&s=c77ba2fa811d9f9079c5cdfcfb4449627e955508', 'width': 1439}, 'variants': {}}]} | |
granite 4 GGUFs are still hidden | 56 | 2025-09-29T16:33:38 | https://www.reddit.com/gallery/1ntmt7d | jacek2023 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ntmt7d | false | null | t3_1ntmt7d | /r/LocalLLaMA/comments/1ntmt7d/granite_4_ggufs_are_still_hidden/ | false | false | 56 | null | ||
Last week in Multimodal AI - Local Edition | 19 | I curate a weekly newsletter on multimodal AI, here are the local/edge highlights from today's edition:
**EmbeddingGemma - 308M beats models 2x its size**
* Runs on <200MB RAM with quantization
* 22ms embeddings on EdgeTPU
* Handles 100+ languages
* [Paper](https://arxiv.org/abs/2509.20354)
**MetaEmbed - Runtime scaling for retrieval**
* Adjust precision on the fly (1-32 vectors)
* Same model works on phone and datacenter
* No retraining needed
* [Paper](https://arxiv.org/abs/2509.18095)
**tinyWorlds - 3M parameter world model**
* Generates playable game environments
* Proves efficient world modeling possible
* [GitHub](https://github.com/AlmondGod/tinyworlds)
**Smol2Operator - 2.2B agentic GUI coder**
* Full open-source recipe from HuggingFace
* Build custom agentic coding systems locally
* [Blog](https://huggingface.co/blog/smol2operator)
Other highlights:
* Lynx personalized video from single photo
* Hunyuan3D-Part for part-level 3D generation
Full newsletter (free): [https://mixpeek.com/blog/multimodal-monday-26](https://mixpeek.com/blog/multimodal-monday-26) | 2025-09-29T16:32:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ntms89/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntms89 | false | null | t3_1ntms89 | /r/LocalLLaMA/comments/1ntms89/last_week_in_multimodal_ai_local_edition/ | false | false | self | 19 | null |
so ollama just released a new optimization | 2 | according to this: [https://ollama.com/blog/new-model-scheduling](https://ollama.com/blog/new-model-scheduling)
It seems to increase performance a lot by loading models into memory more efficiently, so I'm wondering if anyone has made a recent comparison of that vs llama.cpp?
I think gpt-oss:20b misunderstood its own thought process. | 12 | This made me laugh and I just wanted to share with like-minded people. I am running gpt-oss:20b on an RTX 3080ti and have it connected to web search. I was just skimming through some options for learning electrical engineering self-taught, or any certificates I could maybe take online (for fun and to learn), so I was using web search.
Looking at the thought process, there was some ambiguity in the way it was reading its sources, and it misunderstood its own thought process. So ultimately it determines that the answer is yes and tells itself to cite specific sources and "craft answer in simple language"
From there its response was completely in Spanish. It made me laugh and I just wanted to share my experience. | 2025-09-29T16:25:12 | https://www.reddit.com/gallery/1ntml0a | FitKaleidoscope1806 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ntml0a | false | null | t3_1ntml0a | /r/LocalLLaMA/comments/1ntml0a/i_think_gptoss20b_misunderstood_its_own_thought/ | false | false | 12 | null | |
Fiction.liveBench tested DeepSeek 3.2, Qwen-max, grok-4-fast, Nemotron-nano-9b | 130 | 2025-09-29T16:23:13 | fictionlive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntmj9c | false | null | t3_1ntmj9c | /r/LocalLLaMA/comments/1ntmj9c/fictionlivebench_tested_deepseek_32_qwenmax/ | false | false | default | 130 | {'enabled': True, 'images': [{'id': '2krrie9kq4sf1', 'resolutions': [{'height': 166, 'url': 'https://preview.redd.it/2krrie9kq4sf1.png?width=108&crop=smart&auto=webp&s=39cfb85d0427f26b0b6a261e2e127a7f62e33a51', 'width': 108}, {'height': 333, 'url': 'https://preview.redd.it/2krrie9kq4sf1.png?width=216&crop=smart&auto=webp&s=37ddab3386a44ab820c7d7cbb11f7ae09baa7e34', 'width': 216}, {'height': 494, 'url': 'https://preview.redd.it/2krrie9kq4sf1.png?width=320&crop=smart&auto=webp&s=25d9117bb90c83c502ba4006743dfa413a400436', 'width': 320}, {'height': 989, 'url': 'https://preview.redd.it/2krrie9kq4sf1.png?width=640&crop=smart&auto=webp&s=b4324894b3e610dcdabb98a60c481e0333d12c3a', 'width': 640}, {'height': 1483, 'url': 'https://preview.redd.it/2krrie9kq4sf1.png?width=960&crop=smart&auto=webp&s=b1116a74f6cbe88e8072d91c2480aff0061b5aca', 'width': 960}, {'height': 1669, 'url': 'https://preview.redd.it/2krrie9kq4sf1.png?width=1080&crop=smart&auto=webp&s=0c575adc0157cb6ffebc59393ce0ba57d9801c48', 'width': 1080}], 'source': {'height': 2476, 'url': 'https://preview.redd.it/2krrie9kq4sf1.png?auto=webp&s=7e86620fdd7130a57271885172a7bde793daa093', 'width': 1602}, 'variants': {}}]} | ||
Sammyuri built a redstone system to run a small language model (~5M params) in Minecraft! | 248 | May not be interesting to most people, but as a Minecraft player, this is insane and I think deserves recognition. This is running a local language model after all, so I think it fits here. | 2025-09-29T16:22:54 | https://www.youtube.com/watch?v=VaeI9YgE1o8 | Daniel_H212 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ntmiz1 | false | {'oembed': {'author_name': 'sammyuri', 'author_url': 'https://www.youtube.com/@sammyuri', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/VaeI9YgE1o8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="I built ChatGPT with Minecraft redstone!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/VaeI9YgE1o8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'I built ChatGPT with Minecraft redstone!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ntmiz1 | /r/LocalLLaMA/comments/1ntmiz1/sammyuri_built_a_redstone_system_to_run_a_small/ | false | false | default | 248 | {'enabled': False, 'images': [{'id': '9KGtXL31_ILzBLGvG1YO0OZiv4dTVrqsaYqfruNLKD8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9KGtXL31_ILzBLGvG1YO0OZiv4dTVrqsaYqfruNLKD8.jpeg?width=108&crop=smart&auto=webp&s=ce17a7710eda681f913278707c5ffc5919018abc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9KGtXL31_ILzBLGvG1YO0OZiv4dTVrqsaYqfruNLKD8.jpeg?width=216&crop=smart&auto=webp&s=2780c60ea56915aa484b3e4e3978f294c6dad55c', 'width': 216}, {'height': 240, 'url': 
'https://external-preview.redd.it/9KGtXL31_ILzBLGvG1YO0OZiv4dTVrqsaYqfruNLKD8.jpeg?width=320&crop=smart&auto=webp&s=c7208d66dc5511d3ad04d8f44f11e2cfe87d5239', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9KGtXL31_ILzBLGvG1YO0OZiv4dTVrqsaYqfruNLKD8.jpeg?auto=webp&s=18e59e6f1ea88278d9eb7fb01de3ed1b2625d1a7', 'width': 480}, 'variants': {}}]} |
Just a small win I wanted to share — my side project Examsprint AI (a free AI study tool) became #3 product of the day on Proofstories and #2 on Fazier 🎉 | 0 | Didn’t expect it to get that much love so quickly.
Still adding features (badges, flashcards, notes, AI tutor), but seeing this kind of recognition makes me more motivated to keep building.
If anyone here has launched projects before → how do you usually keep the momentum going after a good launch spike?
| 2025-09-29T16:20:00 | Expensive-Board3661 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntmg4w | false | null | t3_1ntmg4w | /r/LocalLLaMA/comments/1ntmg4w/just_a_small_win_i_wanted_to_share_my_side/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '8fk5n5g6q4sf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/8fk5n5g6q4sf1.jpeg?width=108&crop=smart&auto=webp&s=b12d04b95eea79dd50ea6390bef2c300bd652226', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/8fk5n5g6q4sf1.jpeg?width=216&crop=smart&auto=webp&s=1ce81241fcfd0f04ca3bad4ae252cbba2b737394', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/8fk5n5g6q4sf1.jpeg?width=320&crop=smart&auto=webp&s=0847e011fa88751ea1a2e8ca292c66056f08bc9c', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/8fk5n5g6q4sf1.jpeg?width=640&crop=smart&auto=webp&s=6ee8f81f7e6a041de030054718c8c083582750bc', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/8fk5n5g6q4sf1.jpeg?width=960&crop=smart&auto=webp&s=e3167c6e94aa300b0959cde2fff6e6870b9ea301', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/8fk5n5g6q4sf1.jpeg?width=1080&crop=smart&auto=webp&s=80f1dc9c72ccacc99835f9e55c9d13ef01d69214', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/8fk5n5g6q4sf1.jpeg?auto=webp&s=f0f7f2ac630b82afcc05f31bb98d8f1d880d614b', 'width': 1080}, 'variants': {}}]} | |
Easy unit of measurement for pricing a model in terms of hardware | 3 | This is a late night idea, maybe stupid, maybe not. I'll let you decide it :)
Often when I see a new model release I ask myself, can I run it? How much does the hw to run this model costs?
My idea is to introduce a unit of measurement for pricing a model in terms of hardware. Here is an example:
"GPT-OSS-120B: 5k BOLT25@100t"
It means that in order to run the model at 100 t/s you need to spend 5k in 2025. BOLT is just a stupid name (Budget to Obtain Local Throughput). | 2025-09-29T16:16:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ntmcpt/easy_unit_of_measurement_for_pricing_a_model_in/ | marcocastignoli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntmcpt | false | null | t3_1ntmcpt | /r/LocalLLaMA/comments/1ntmcpt/easy_unit_of_measurement_for_pricing_a_model_in/ | false | false | self | 3 | null |
This Simple Trick Makes AI Far More Reliable (By Making It Argue With Itself) | 0 | I came across some research recently that honestly intrigued me. We already have AI that can reason step-by-step, search the web, do all that fancy stuff. But turns out there's a dead simple way to make it way more accurate: just have multiple copies argue with each other.
I also wrote a full blog post about it here: [https://open.substack.com/pub/diamantai/p/this-simple-trick-makes-ai-agents?r=336pe4&utm\_campaign=post&utm\_medium=web&showWelcomeOnShare=false](https://open.substack.com/pub/diamantai/p/this-simple-trick-makes-ai-agents?r=336pe4&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false)
Here's the idea: instead of asking one AI for an answer, you spin up like 3-5 copies and give them all the same question. Each one works on it independently. Then you show each AI what the others came up with and let them critique each other's reasoning.
"Wait, you forgot to account for X in step 3." "Actually, there's a simpler approach here." "That interpretation doesn't match the source."
They go back and forth a few times, fixing mistakes and refining their answers until they mostly agree on something.
What makes this work is that even when AI uses chain-of-thought or searches for info, it's still just one perspective taking one path through the problem. Different copies might pick different approaches, catch different errors, or interpret fuzzy information differently. The disagreement actually reveals where the AI is uncertain instead of just confidently stating wrong stuff.
what do you think about it? | 2025-09-29T16:15:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ntmbl8/this_simple_trick_makes_ai_far_more_reliable_by/ | Nir777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntmbl8 | false | null | t3_1ntmbl8 | /r/LocalLLaMA/comments/1ntmbl8/this_simple_trick_makes_ai_far_more_reliable_by/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ujiT1VBCsL4rcPdxI8Dj-bPTD9GQn1AODmyQ3nV9LYw', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/ujiT1VBCsL4rcPdxI8Dj-bPTD9GQn1AODmyQ3nV9LYw.jpeg?width=108&crop=smart&auto=webp&s=f95599b87b88a829728e2fe7c80d5da53d02f06c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/ujiT1VBCsL4rcPdxI8Dj-bPTD9GQn1AODmyQ3nV9LYw.jpeg?width=216&crop=smart&auto=webp&s=82580e8b4a4333223d7094bf602cb0ddc7d284c9', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/ujiT1VBCsL4rcPdxI8Dj-bPTD9GQn1AODmyQ3nV9LYw.jpeg?width=320&crop=smart&auto=webp&s=0b121bda2c0c7b4cd245317c80b705cd38b5a630', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/ujiT1VBCsL4rcPdxI8Dj-bPTD9GQn1AODmyQ3nV9LYw.jpeg?width=640&crop=smart&auto=webp&s=36a222e25c9b060810d9b78ae678f8bad12d3743', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/ujiT1VBCsL4rcPdxI8Dj-bPTD9GQn1AODmyQ3nV9LYw.jpeg?width=960&crop=smart&auto=webp&s=6b2393616cabe3b1588ed4871b2dcdf8a171168b', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ujiT1VBCsL4rcPdxI8Dj-bPTD9GQn1AODmyQ3nV9LYw.jpeg?auto=webp&s=bc22c7a77dad646d57414235cf0a7bf9f2f5ed16', 'width': 1024}, 'variants': {}}]} |
AI based on textbooks | 2 | Hi, I am looking for a model that I can run on a laptop, free or a one-time purchase, that I can train with my textbooks, which are PDFs. I'd like for it to be good with math and science, as most of this will be engineering stuff. I want to be able to use it as a reference tool. I've heard that Llama is one of the best local models, but it only supports 5 pictures and didn't mention anything about uploading PDFs, and after searching online I've found a bunch of subscription stuff, which I don't want. Any advice is appreciated.
Thanks | 2025-09-29T16:13:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ntma74/ai_based_on_textbooks/ | Icarus-50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntma74 | false | null | t3_1ntma74 | /r/LocalLLaMA/comments/1ntma74/ai_based_on_textbooks/ | false | false | self | 2 | null |
People with Snapdragon laptops, what do you run? | 6 | I got a Lenovo Yoga Slim extreme, tried to run NPU models like Phi and Mistral, which were surprisingly fast, with no spillover to GPU or CPU.
For those with the same architecture, do you get your models from AI Hub, convert from Hugging Face, or use the AI Toolkit? Just looking for an optimal way to leverage the NPUs to the max.
llama.cpp: Quantizing from bf16 vs f16 | 8 | Almost all model weights are released in bf16 these days, so obviously a conversion from bf16 -> f16 is lossy and results in objectively less precise weights. However, could the resulting quantization from f16 end up being overall more precise than the quantization from bf16? Let me explain.
F16 has less range than bf16, so outliers get clipped. When this is further quantized to an INT format, the outlier weights will be less precise than if you had quantized from bf16; however, the other weights in their block will have greater precision due to the decreased range, no? So f16 could be seen as an optimization step.
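A toy block quantizer makes this intuition concrete. This is a symmetric absmax sketch with invented weight values, not any of llama.cpp's actual quant formats, and real bf16-to-f16 conversion saturates near 65504 (or overflows to inf) rather than shrinking an outlier to 1.0; the point is only how an outlier inflates the block scale:

```python
def absmax_quant_error(block, bits=8):
    # Symmetric absmax quantization: one scale per block, set by the
    # largest-magnitude weight, as in simple INT formats.
    qmax = 2 ** (bits - 1) - 1                    # 127 for int8
    scale = max(abs(w) for w in block) / qmax
    dequant = [round(w / scale) * scale for w in block]
    # Mean absolute error after the quantize/dequantize round trip.
    return sum(abs(d - w) for d, w in zip(dequant, block)) / len(block)

small_weights = [0.02, -0.015, 0.01, -0.02, 0.005, 0.018, -0.012, 0.008]
bf16_block = small_weights + [4.0]   # one outlier inflates the scale
f16_block  = small_weights + [1.0]   # as if clipping shrank the outlier

err_bf16 = absmax_quant_error(bf16_block)
err_f16 = absmax_quant_error(f16_block)
print(err_bf16 > err_f16)  # True: the small weights get finer steps
```

Whether the finer steps for the rest of the block outweigh the damaged outlier depends on how much that outlier matters to the layer, which is why this stays an empirical question rather than a clear win.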
Forgive me if I have a misunderstanding about something. | 2025-09-29T15:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ntluwl/llamacpp_quantizing_from_bf16_vs_f16/ | Confident-Willow5457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntluwl | false | null | t3_1ntluwl | /r/LocalLLaMA/comments/1ntluwl/llamacpp_quantizing_from_bf16_vs_f16/ | false | false | self | 8 | null |
How to build MCP Server for websites that don't have public APIs? | 4 | I run an IT services company, and a couple of my clients want to be integrated into the AI workflows of their customers and tech partners, e.g.:
* A consumer services retailer wants tech partners to let users upgrade/downgrade plans via AI agents
* A SaaS client wants to expose certain dashboard actions to their customers’ AI agents
My first thought was to create an MCP Server for them. But most of these clients don’t have public APIs and only have websites.
Curious how others are approaching this? Is there a way to turn “website-only” businesses into MCP Servers? | 2025-09-29T15:22:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ntkxdg/how_to_build_mcp_server_for_websites_that_dont/ | ReceptionSouth6680 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntkxdg | false | null | t3_1ntkxdg | /r/LocalLLaMA/comments/1ntkxdg/how_to_build_mcp_server_for_websites_that_dont/ | false | false | self | 4 | null |
Hardware Guidance | 2 | Let's say I have a $5K budget. Would buying used hardware on eBay be better than building new? If someone gave you 5K for local projects what would you buy? Someone told me to just go grab the Apple solution lol!! | 2025-09-29T15:16:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ntkr9e/hardware_guidance/ | stacksmasher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntkr9e | false | null | t3_1ntkr9e | /r/LocalLLaMA/comments/1ntkr9e/hardware_guidance/ | false | false | self | 2 | null |
3 Tesla GPUs in a Desktop Case | 119 | Plus a slot leftover for a dual 10G ethernet adapter. Originally, a goal of the [cooler project](https://esologic.com/new-cooler-first-look/) was to be able to do 4 cards in a desktop case but after a lot of experimentation, I don't think it's realistic to be able to dissapate 1000W+ with only your standard case fans. | 2025-09-29T15:06:42 | https://www.reddit.com/gallery/1ntkhy4 | eso_logic | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ntkhy4 | false | null | t3_1ntkhy4 | /r/LocalLLaMA/comments/1ntkhy4/3_tesla_gpus_in_a_desktop_case/ | false | false | 119 | null | |
Stop saying RAG is same as Memory | 0 | I keep seeing people equate RAG with memory, and it doesn’t sit right with me. After going down the rabbit hole, here’s how I think about it now.
RAG is retrieval + generation. A query gets embedded, compared against a vector store, top-k neighbors are pulled back, and the LLM uses them to ground its answer. This is great for semantic recall and reducing hallucinations, but that's all it is: retrieval on demand.
Where it breaks is persistence. Imagine I tell an AI:
* “I live in Cupertino”
* Later: “I moved to SF”
* Then I ask: “Where do I live now?”
A plain RAG system might still answer “Cupertino” because both facts are stored as semantically similar chunks. It has no concept of recency, contradiction, or updates. It just grabs what looks closest to the query and serves it back.
That’s the core gap: RAG doesn’t persist new facts, doesn’t update old ones, and doesn’t forget what’s outdated. Even if you use Agentic RAG (re-querying, reasoning), it’s still retrieval only: smarter search, not memory.
Memory is different. It’s persistence + evolution. It means being able to:
- Capture new facts
- Update them when they change
- Forget what’s no longer relevant
- Save knowledge across sessions so the system doesn’t reset every time
- Recall the right context across sessions
Systems might still use Agentic RAG but only for the retrieval part. Beyond that, memory has to handle things like consolidation, conflict resolution, and lifecycle management. With memory, you get continuity, personalization, and something closer to how humans actually remember.
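As a minimal sketch of the persistence-and-update half (all names here are invented; real memory systems use embeddings and LLM-based conflict detection rather than exact keys), the Cupertino/SF example reduces to a store where a new fact replaces the old one instead of coexisting with it as a similar chunk:

```python
class FactMemory:
    # Minimal update-in-place memory: the latest write wins, so
    # "I moved to SF" replaces "I live in Cupertino" instead of
    # sitting beside it as a semantically similar chunk.
    def __init__(self):
        self._facts = {}

    def remember(self, key, value):
        self._facts[key] = value        # update / conflict resolution by recency

    def forget(self, key):
        self._facts.pop(key, None)      # lifecycle management

    def recall(self, key):
        return self._facts.get(key)

mem = FactMemory()
mem.remember("user.home_city", "Cupertino")
mem.remember("user.home_city", "SF")    # the move overwrites the old fact
print(mem.recall("user.home_city"))     # SF
```

A plain RAG index would instead hold both sentences and rank them by similarity to the query, which is exactly how the stale "Cupertino" answer survives.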
I’ve noticed more teams working on this, like Mem0, Letta, Zep, etc.
Curious how others here are handling this. Do you build your own memory logic on top of RAG? Or rely on frameworks? | 2025-09-29T15:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ntkf9b/stop_saying_rag_is_same_as_memory/ | gargetisha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntkf9b | false | null | t3_1ntkf9b | /r/LocalLLaMA/comments/1ntkf9b/stop_saying_rag_is_same_as_memory/ | false | false | self | 0 | null |
The Illusion of Intelligence: Structural Flaws in Large Language Models | 2 | # The Illusion of Intelligence: Structural Flaws in Large Language Models
# Abstract
Despite their widespread adoption, large language models (LLMs) suffer from foundational flaws that undermine their utility in scientific, legal, and technical domains. These flaws are not philosophical abstractions but measurable failures in logic, arithmetic, and epistemic discipline. This exposé outlines the architectural limitations of LLMs, using a salient temperature comparison error—treating 78°F as greater than 86°F—as a case study in symbolic misrepresentation. The abandonment of expert systems in favor of probabilistic token prediction has led to a generation of tools that simulate fluency while eroding precision.
# 1. Token Prediction ≠ Reasoning
LLMs operate by predicting the next most probable token in a sequence, based on statistical patterns learned from vast corpora. This mechanism, while effective for generating fluent text, lacks any inherent understanding of truth, logic, or measurement. Numbers are treated as symbols, not quantities. Thus, “86°F > 78°F” is not a guaranteed inference—it’s a probabilistic guess influenced by surrounding text.
This leads to errors like the one observed in a climate-related discussion: the model stated that “25–28°C (77–82°F) is well above chocolate’s melting point of \~30°C (86°F),” a reversal of basic arithmetic. The model failed to recognize that 86°F is greater than 78°F, not the reverse. This is not a matter of nuance—it is a quantifiable failure of numerical comparison.
# 2. The Symbol-Grounding Problem
LLMs lack grounding in the physical world. They do not “know” what a temperature feels like, what melting means, or how quantities relate to one another. This disconnect—known as the symbol-grounding problem—means that even simple measurements can be misrepresented. Without a semantic anchor, numbers become decor, not data.
In contrast, expert systems and rule-based engines treat numbers as entities with dimensional properties. They enforce unit consistency, validate thresholds, and reject contradictions. LLMs, by design, do none of this unless externally bolted to symbolic calculators or retrieval modules.
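The difference is easy to demonstrate. Compared as symbols, "9" sorts above "86"; compared as dimensioned quantities, the ordering is recovered. A minimal sketch of the rule-based approach (illustrative only, not any particular expert-system shell):

```python
def to_celsius(value, unit):
    """Normalize a temperature to one dimension before comparing."""
    if unit == "C":
        return value
    if unit == "F":
        return (value - 32) * 5 / 9
    raise ValueError(f"unknown unit: {unit}")

def hotter(a, b):
    """Compare two (value, unit) temperatures as quantities, not tokens."""
    return to_celsius(*a) > to_celsius(*b)

# Symbol-level comparison gets the ordering wrong...
print("9" > "86")                    # True: lexicographic, not numeric
# ...while the grounded comparison cannot.
print(hotter((86, "F"), (78, "F")))  # True: 86°F really is hotter
print(hotter((78, "F"), (30, "C")))  # False: 78°F ≈ 25.6°C, below 30°C
```

The point is not the trivial arithmetic but where it lives: in the rule engine, the comparison cannot be anything other than correct.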
# 3. Measurement Integrity Is Not Prioritized
Developers of LLMs have focused on safety, bias mitigation, and refusal logic—important goals, but ones that deprioritize empirical rigor. As a result:
* Arithmetic errors persist across versions.
* Unit conversions are frequently mishandled.
* Scientific constants are misquoted or misapplied.
* Logical contradictions go unflagged unless explicitly prompted.
This is not due to lack of awareness—it is a design tradeoff. Fluency is prioritized over fidelity. The result is a system that can eloquently mislead.
# 4. The Epistemic Collapse
Scientific empiricism demands falsifiability, reproducibility, and measurement integrity. LLMs fail all three:
* **Falsifiability**: Outputs vary with each prompt iteration, making verification difficult.
* **Reproducibility**: Identical prompts can yield divergent answers due to stochastic sampling.
* **Measurement Integrity**: Quantitative comparisons are unreliable unless explicitly structured.
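The reproducibility failure follows directly from sampling and can be seen without any model at all: from the same next-token distribution, unseeded draws can diverge between runs, while a fixed seed makes them repeatable (a toy illustration, not an actual decoder):

```python
import random

def sample_tokens(probs, n, seed=None):
    """Draw n tokens from a fixed next-token distribution."""
    rng = random.Random(seed)
    tokens, weights = zip(*probs.items())
    return [rng.choices(tokens, weights=weights)[0] for _ in range(n)]

dist = {"cold": 0.5, "warm": 0.3, "hot": 0.2}

# Same "prompt" (same distribution), no seed: two runs may disagree.
run_a = sample_tokens(dist, 10)
run_b = sample_tokens(dist, 10)

# Same distribution, same seed: bit-for-bit reproducible.
assert sample_tokens(dist, 10, seed=42) == sample_tokens(dist, 10, seed=42)
```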
This collapse is not theoretical—it has real consequences in domains like legal drafting, mechanical diagnostics, and regulatory compliance. When a model cannot reliably compare two temperatures, it cannot be trusted to interpret a statute, diagnose a pressure valve, or benchmark an AI model’s refusal logic.
# 5. The Cost of Abandoning Expert Systems
The shift from deterministic expert systems to probabilistic LLMs was driven by scalability and cost. Expert systems require domain-specific knowledge, rule curation, and maintenance. LLMs offer generality and fluency at scale. But the cost is epistemic: we traded precision for prediction.
In domains where audit-grade accuracy is non-negotiable—federal inspections, legal filings, mechanical troubleshooting—LLMs introduce risk, not reliability. They simulate expertise without embodying it.
# 6. Toward a Post-LLM Framework
To restore integrity, future systems must:
* Integrate symbolic reasoning engines for arithmetic, logic, and measurement.
* Ground numerical tokens in dimensional context (e.g., temperature, pressure, voltage).
* Allow user-defined truth anchors and domain-specific override protocols.
* Log and correct factual errors with transparent changelogs.
* Reintroduce expert system scaffolding for high-stakes domains.
This is not a rejection of LLMs—it is a call to constrain them within epistemically sound architectures.
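As a concrete instance of the last two points, even a tiny post-hoc checker can catch the temperature error quoted earlier by extracting comparative claims and recomputing them symbolically (a sketch; the regex claim grammar here is deliberately minimal):

```python
import re

# Matches claims of the form "<x>°F ... above/below ... <y>°F".
CLAIM = re.compile(r"(\d+(?:\.\d+)?)\s*°F\b.*?\b(above|below)\b.*?(\d+(?:\.\d+)?)\s*°F")

def check_claims(text):
    """Return (claim, ok) pairs for comparative temperature claims."""
    results = []
    for m in CLAIM.finditer(text):
        x, relation, y = float(m.group(1)), m.group(2), float(m.group(3))
        ok = x > y if relation == "above" else x < y
        results.append((m.group(0), ok))
    return results

# The erroneous claim from Section 1 is flagged as false:
print(check_claims("82°F is well above chocolate's melting point of 86°F"))
```

A production system would need a far richer grammar and unit normalization, but the scaffolding principle is the same: claims are validated, not merely generated.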
# Conclusion
LLMs are not intelligent agents—they are stochastic mirrors of human language. Their fluency conceals their fragility. When a model states that 78°F is greater than 86°F, it is not making a typo—it is revealing its architecture. Until these systems are grounded in logic, measurement, and empirical discipline, they remain tools of simulation, not instruments of truth. | 2025-09-29T14:55:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ntk6kn/the_illusion_of_intelligence_structural_flaws_in/ | KillaBeJeezus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntk6kn | false | null | t3_1ntk6kn | /r/LocalLLaMA/comments/1ntk6kn/the_illusion_of_intelligence_structural_flaws_in/ | false | false | self | 2 | null |
Advices to run LLM on my PC with an RTX 5080. | 4 | Hey, I'm looking for advice; my free Gemini Pro subscription ends tomorrow.
I've been interested in running LLMs locally for a while, but they were too complicated to install and underperformed too much for my liking.
I stumbled upon [gpt-oss](https://ollama.com/library/gpt-oss):20b and it seems like the best available model for my hardware. What's the best software for local use? I have Ollama, AnythingLLM, and Docker + open-webui, but I find the latter annoying to update... I wish there were easy guides for this stuff; I even struggle to find hardware requirements for models sometimes.
How do I easily switch online search on and off for the LLM depending on my needs?
Is there a way to replicate something like Gemini's "Deep Research"?
Also, it seems to be heavily censored. I tried [https://www.reddit.com/r/LocalLLaMA/comments/1ng9dkx/comment/ne306uv/](https://www.reddit.com/r/LocalLLaMA/comments/1ng9dkx/comment/ne306uv/) but it still refuses to answer sometimes. Is there any other way, without degrading the LLM's output? | 2025-09-29T14:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ntk4k5/advices_to_run_llm_on_my_pc_with_an_rtx_5080/ | Daojyn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntk4k5 | false | null | t3_1ntk4k5 | /r/LocalLLaMA/comments/1ntk4k5/advices_to_run_llm_on_my_pc_with_an_rtx_5080/ | false | false | self | 4 | null |
Your local secure MCP environment, MCP Router v0.5.5 | 3 | Just released **MCP Router v0.5.5**.
* Works offline
* Compatible with any MCP servers and clients
* Easy workspace switching
You can try it here: [https://github.com/mcp-router/mcp-router](https://github.com/mcp-router/mcp-router) | 2025-09-29T14:42:00 | https://www.reddit.com/gallery/1ntjugu | Equivalent-Pause-233 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ntjugu | false | null | t3_1ntjugu | /r/LocalLLaMA/comments/1ntjugu/your_local_secure_mcp_environment_mcp_router_v055/ | false | false | 3 | null | |
Is there a way to remove the acoustic fingerprint from an AI voice clone audio? | 0 | I’m using the AI Voice Cloner under a paid plan, and I learned that there’s an audio watermark embedded in the waveform — something they call an acoustic fingerprint. | 2025-09-29T14:40:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ntjt5e/is_there_a_way_to_remove_the_acoustic_fingerprint/ | False-Tangerine6029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntjt5e | false | null | t3_1ntjt5e | /r/LocalLLaMA/comments/1ntjt5e/is_there_a_way_to_remove_the_acoustic_fingerprint/ | false | false | self | 0 | null |
What exactly is page size in sglang, and how does it affect prefix caching? | 2 | I’m starting to dig deeper into **sglang**, and I’m a bit confused about how *page size* works in relation to prefix caching.
From the docs and community posts I’ve seen, sglang advertises *token-level prefix reuse* — meaning unlike vLLM, it shouldn’t require an entire block to be a hit before reuse kicks in. This supposedly gives sglang better prefix cache utilization.
But in **PD-separation scenarios**, we often increase `page_size` (e.g., 64 or 128) to improve KV transfer efficiency. And when I do this, I observe something strange:
* If `input_len < page_size`, I get **zero prefix cache hits**.
* In practice, it looks just like vLLM: you need the *entire page* to hit before reuse happens.
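A simple mental model that reproduces exactly this behavior is page-aligned matching: a cached prefix only counts in whole pages, so any request shorter than one page can never hit. This is my assumption about the mechanism, not sglang's actual code:

```python
def reusable_prefix_tokens(common_prefix_len, page_size):
    """Tokens reusable from cache when matching is page-aligned."""
    return (common_prefix_len // page_size) * page_size

# page_size = 1: true token-level reuse.
assert reusable_prefix_tokens(100, 1) == 100
# page_size = 128: a 100-token shared prefix yields zero hits...
assert reusable_prefix_tokens(100, 128) == 0
# ...and a 300-token prefix only reuses the first two whole pages.
assert reusable_prefix_tokens(300, 128) == 256
```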
This makes me wonder:
1. What does sglang actually mean by *“token-level prefix reuse”*?
* If it only works when `page_size = 1`, then isn’t that basically equivalent to vLLM with `block_size = 1`?
2. Why doesn’t sglang support true token-level prefix reuse when `page_size > 1`?
* Is it technically difficult to implement?
* Or is the overhead not worth the gains?
* Has the community discussed this trade-off anywhere? (I haven’t found much so far.)
3. Speaking of which, what are the real challenges for vLLM if it tried to set `block_size = 1`?
4. Page size defaults to 1 in sglang, but in PD-separation we tweak it (e.g., 64/128) for KV transfer performance.
* Are there other scenarios where adjusting `page_size` makes sense?
Curious if anyone here has insights or has seen discussions about the design trade-offs behind `page_size`. | 2025-09-29T14:17:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ntj7zb/what_exactly_is_page_size_in_sglang_and_how_does/ | Inside_Camp870 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntj7zb | false | null | t3_1ntj7zb | /r/LocalLLaMA/comments/1ntj7zb/what_exactly_is_page_size_in_sglang_and_how_does/ | false | false | self | 2 | null |
NVIDIA LongLive : Real-time Interactive Long Video Generation | 25 | NVIDIA and collaborators just released **LongLive**, a text-to-video system that finally tackles long, interactive videos. Most models output 5–10 second clips, but LongLive handles up to 240 seconds on a single H100, staying smooth and responsive even when you switch prompts mid-video. It combines **KV re-cache** for seamless prompt changes, **streaming long tuning** to handle extended rollouts, and **short-window attention + frame sink** to balance speed with context.
Benchmarks show massive speedups (20+ FPS vs <1 FPS for baselines) while keeping quality high.
Paper : [https://arxiv.org/abs/2509.22622](https://arxiv.org/abs/2509.22622)
HuggingFace Model : [https://huggingface.co/Efficient-Large-Model/LongLive-1.3B](https://huggingface.co/Efficient-Large-Model/LongLive-1.3B)
Video demo : [https://youtu.be/caDE6f54pvA](https://youtu.be/caDE6f54pvA)
| 2025-09-29T14:03:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ntiv83/nvidia_longlive_realtime_interactive_long_video/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntiv83 | false | null | t3_1ntiv83 | /r/LocalLLaMA/comments/1ntiv83/nvidia_longlive_realtime_interactive_long_video/ | false | false | self | 25 | null |
We just open-sourced Kroko ASR: a fast, streaming alternative to Whisper.
It’s early days, we’d love testers, feedback, and contributors. | 137 | **First batch**
* Streaming models (CC-BY-SA), ready for CPU, mobile, or browser
* More extreme but affordable commercial models (with Apache inference code)
**Languages**
* A dozen to start, more on the way (Polish and Japanese coming next.)
**Why it’s different**
* Much smaller download than Whisper
* Much faster on CPU (runs on mobile or even in the browser, try the demo on Android)
* (Almost) hallucination-free
* Streaming support: great for voice assistants, live agent assist, note taking, or just yelling at your computer
**Quality**
* Offline models beat Whisper v3-large while being about 10× smaller
* Streaming models are comparable (or better) at 1s chunk size
* There’s a trade-off in quality at ultra-low latency
**Project goals**
Build a community and democratize speech-to-text, making it easier to train models and run them at the edge (without needing a PhD in speech AI).
**Links**
* website & cloud demo: [kroko.ai](https://kroko.ai)
* Android model explorer: [Google Play](https://play.google.com/store/apps/details?id=com.krokoasr.demo&hl=en)
* Discord: [discord.gg/nnY9nQac](https://discord.gg/nnY9nQac)
* GitHub: [github.com/kroko-ai/kroko-asr](https://github.com/kroko-ai/kroko-asr)
* Hugging Face Demo: [Kroko Streaming ASR Wasm](https://huggingface.co/spaces/Banafo/Kroko-Streaming-ASR-Wasm) (older models, updates coming soon)
* community models page: [https://huggingface.co/Banafo/Kroko-ASR](https://huggingface.co/Banafo/Kroko-ASR)
**Thoughts / caveats**
We’re still ironing out some things, especially around licensing limits and how to release models in the fairest way. Our philosophy is: easier to give more than to give less later. Some details may change as we learn from the community.
**Future**
There is plenty of room to improve the models, as most are still trained on our older pipeline.
**TL;DR**
Smaller, faster, (almost) hallucination-free Whisper replacement that streams on CPU/mobile. Looking for testers! | 2025-09-29T14:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ntiua9/we_just_opensourced_kroko_asr_a_fast_streaming/ | banafo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntiua9 | false | null | t3_1ntiua9 | /r/LocalLLaMA/comments/1ntiua9/we_just_opensourced_kroko_asr_a_fast_streaming/ | false | false | self | 137 | {'enabled': False, 'images': [{'id': 'NB3oUfrrfGZtgsxNhAXYMJI-TjaeeeGUQm0dR8WdRvE', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/NB3oUfrrfGZtgsxNhAXYMJI-TjaeeeGUQm0dR8WdRvE.png?width=108&crop=smart&auto=webp&s=0f65c970252a4a9ad56de7cb0a4d5f1fae8db330', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/NB3oUfrrfGZtgsxNhAXYMJI-TjaeeeGUQm0dR8WdRvE.png?width=216&crop=smart&auto=webp&s=0aebadc87812e956fdab34993d56d9e87e54827d', 'width': 216}, {'height': 185, 'url': 'https://external-preview.redd.it/NB3oUfrrfGZtgsxNhAXYMJI-TjaeeeGUQm0dR8WdRvE.png?width=320&crop=smart&auto=webp&s=d574dff75dffb3ee1504404bfdd0e4d25b17bca8', 'width': 320}, {'height': 371, 'url': 'https://external-preview.redd.it/NB3oUfrrfGZtgsxNhAXYMJI-TjaeeeGUQm0dR8WdRvE.png?width=640&crop=smart&auto=webp&s=443dc3a460687040518c1e71f989b7e9f8c3c0ed', 'width': 640}, {'height': 557, 'url': 'https://external-preview.redd.it/NB3oUfrrfGZtgsxNhAXYMJI-TjaeeeGUQm0dR8WdRvE.png?width=960&crop=smart&auto=webp&s=325d401c85306fb0da0a18b1bd4d3a0a8218f399', 'width': 960}], 'source': {'height': 567, 'url': 'https://external-preview.redd.it/NB3oUfrrfGZtgsxNhAXYMJI-TjaeeeGUQm0dR8WdRvE.png?auto=webp&s=dad808fe8a95e44b46b47c083c038869fea6c414', 'width': 977}, 'variants': {}}]} |
How do you track and analyze user behavior in AI chatbots/agents? | 0 | I’ve been building B2C AI products (chatbots + agents) and keep running into the same pain point: there are no good tools (like Mixpanel or Amplitude for apps) to really understand *how* users interact with them.
Challenges:
* Figuring out what users are actually talking about
* Tracking funnels and drop-offs in chat/voice environments
* Identifying recurring pain points in queries
* Spotting gaps where the AI gives inconsistent/irrelevant answers
* Visualizing how conversations flow between topics
Right now, we’re mostly drowning in raw logs and pivot tables. It’s hard and time-consuming to derive meaningful outcomes (like engagement, up-sells, cross-sells).
Curious how others are approaching this? Is everyone hacking their own tracking system, or are there solutions out there I’m missing? | 2025-09-29T13:59:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ntis1w/how_do_you_track_and_analyze_user_behavior_in_ai/ | ReceptionSouth6680 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntis1w | false | null | t3_1ntis1w | /r/LocalLLaMA/comments/1ntis1w/how_do_you_track_and_analyze_user_behavior_in_ai/ | false | false | self | 0 | null |
Weird TTFT “steps” when sweeping input lengths in sglang – not linear, looks like plateaus? | 3 | I was running some TTFT (Time To First Token) benchmarks on sglang and ran into an interesting pattern.
Setup:
- Server launched with:
```
python3.10 -m sglang.launch_server \
--model-path /path/to/deepseek_v2 \
--port 28056 \
--tp 1 \
--disable-radix-cache
```
- Measurement script (perf.py) runs sglang.bench_serving with random input lengths and writes TTFT stats (mean/median/p99) to CSV.
- Input lengths tested: [1,2,4,8,16,32,64,128,256,512,1024,2048,4096,8192,16384].
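For anyone reproducing this: the core of the measurement is just timing the first streamed token. A generic sketch of what perf.py does, not the real `bench_serving` internals:

```python
import time

def measure_ttft(token_stream):
    """Time from iteration start until the first token arrives."""
    start = time.perf_counter()
    first = next(iter(token_stream))
    return time.perf_counter() - start, first

def fake_server(prefill_s):
    """Stand-in for a streaming endpoint: prefill delay, then tokens."""
    time.sleep(prefill_s)
    yield "tok0"
    yield "tok1"

ttft, first = measure_ttft(fake_server(0.05))
print(f"ttft = {ttft * 1000:.1f} ms, first token = {first!r}")
```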
Results (ms):
input_len, ttft_mean, ttft_median, ttft_p99
1, 54.9, 54.8, 56.8
32, 54.6, 53.9, 62.0
64, 59.2, 55.2, 71.7
128, 59.7, 56.5, 67.5
256, 63.6, 65.8, 71.0
1024, 61.6, 62.9, 66.7
2048, 64.5, 65.3, 69.3
4096, 105.3, 105.9, 107.8
8192, 233.6, 219.8, 264.9
16384,745.3, 590.1, 1399.3
- From 1 → 32, TTFT is basically flat (~55ms).
- From 64 → 2048, it’s also almost flat (60–65ms).
- Then bam, at 4096 it jumps hard (~105ms), then keeps climbing (233ms @ 8k, 745ms @ 16k).
The “steps” are strange: if TTFT were scaling linearly with input_len, you’d expect a smooth rise. But instead, it looks like plateaus with sudden jumps.
Even weirder: 64 shows a bump, but 128 actually drops a bit again before leveling.
So my questions:
1. Why would TTFT show these plateau-and-jump patterns instead of a smoother increase?
2. Could it be batch/kernel launch overheads, memory page sizes, or some hidden scheduler threshold?
3. Would it make sense to test with finer granularity (e.g. every 16 or 32 tokens around those breakpoints) to see where the “stairs” really happen?
Curious if anyone else has observed similar TTFT “stairs” when sweeping input lengths in sglang (or vLLM).
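For question 3, generating the finer sweep is straightforward; something like this around the breakpoints suggested by my table (radius and step are arbitrary choices):

```python
def sweep_points(breakpoints, radius=512, step=64):
    """Dense input-length grid around each suspected TTFT breakpoint."""
    pts = set()
    for b in breakpoints:
        pts.update(range(max(1, b - radius), b + radius + 1, step))
    return sorted(pts)

# Breakpoints suggested by the table: between 2048-4096 and 4096-8192.
print(sweep_points([4096, 8192], radius=512, step=64))
```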
| 2025-09-29T13:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ntir1i/weird_ttft_steps_when_sweeping_input_lengths_in/ | Inside_Camp870 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntir1i | false | null | t3_1ntir1i | /r/LocalLLaMA/comments/1ntir1i/weird_ttft_steps_when_sweeping_input_lengths_in/ | false | false | self | 3 | null |
Google AI edge Gallery , oppo reno 13F , 12 ram | 3 | it should go faster on Snapdragon 7, 8, necessarily 12 ram for it to serve, | 2025-09-29T13:14:52 | https://www.reddit.com/gallery/1nthpwc | Illustrious-Swim9663 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nthpwc | false | null | t3_1nthpwc | /r/LocalLLaMA/comments/1nthpwc/google_ai_edge_gallery_oppo_reno_13f_12_ram/ | false | false | 3 | null | |
Current SOTA for codegen? | 6 | It's very hard to keep up recently, with like New Kimi, Qwen3, Qwen 3 Next, all these new StepFun models and etc. There is also GLM 4.5 series, gpt-oss and etc
To all the power users out there: what currently is the best overall open source llm you would say? Doesn't have to be something I can run. (Some people still say it's 0528 but I doubt it) | 2025-09-29T13:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nthohj/current_sota_for_codegen/ | Crazyscientist1024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nthohj | false | null | t3_1nthohj | /r/LocalLLaMA/comments/1nthohj/current_sota_for_codegen/ | false | false | self | 6 | null |
New to LLMs - What’s the Best Local AI Stack for a Complete ChatGPT Replacement? | 53 | Hello everyone, I’m looking to set up my own private, local LLM on my PC. I’ve got a pretty powerful setup with 20TB of storage, 256GB of RAM, an RTX 3090, and an i9 CPU.
I’m super new to LLMs but just discovered I can host them private and locally on my own PC with an actual WebUI like ChatGPT. I’m after something that can basically interpret images and files, generate images and code, handle long conversations or scripts without losing context, delusion, repetitiveness. Ideally act as a complete offline alternative to ChatGPT-5.
Is this possible to even achieve? Am I delusional??? Can I even host an AI model stack that can do everything ChatGPT does like reasoning, vision, coding, creativity, but fully private and running on my own machine with these specs?
If anyone has experience building this kind of all-in-one local setup or can recommend the best models and tools for it, I’d really appreciate the advice.
Thanks!!!! | 2025-09-29T13:00:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ntheaj/new_to_llms_whats_the_best_local_ai_stack_for_a/ | Live_Drive_6256 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntheaj | false | null | t3_1ntheaj | /r/LocalLLaMA/comments/1ntheaj/new_to_llms_whats_the_best_local_ai_stack_for_a/ | false | false | self | 53 | null |
Testing | 1 | Testing for Reddit | 2025-09-29T12:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ntha8n/testing/ | Extra_Cicada8798 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntha8n | false | null | t3_1ntha8n | /r/LocalLLaMA/comments/1ntha8n/testing/ | false | false | self | 1 | null |
The reason why Deepseek V3.2 is so cheap | 541 | TLDR: It's a linear model with almost O(kL) attention complexity.
Paper link: [https://github.com/deepseek-ai/DeepSeek-V3.2-Exp/blob/main/DeepSeek\_V3\_2.pdf](https://github.com/deepseek-ai/DeepSeek-V3.2-Exp/blob/main/DeepSeek_V3_2.pdf)
According to their paper, DeepSeek Sparse Attention computes attention for only k selected previous tokens, meaning it's a linear-attention model with decoding complexity O(kL). What's different from previous linear models is that it has an O(L\^2) index selector that chooses which tokens to compute attention over. Even though the index selector has quadratic complexity, it's lightweight enough to be negligible.
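For intuition, the idea can be sketched in a few lines: a cheap scorer ranks all L cached tokens, and the expensive softmax attention runs over only the top-k of them. This is a toy single-head, single-step illustration, not DeepSeek's actual kernel:

```python
import math

def sparse_attention_step(q, keys, values, k):
    """One decode step that attends to only the top-k of L cached tokens.

    The selector scores all L keys (cheap, O(L) per step); the softmax
    attention then runs over k tokens instead of L, so decoding a whole
    sequence costs roughly O(kL) rather than O(L^2).
    """
    scores = [sum(qi * ki for qi, ki in zip(q, key)) for key in keys]
    topk = sorted(range(len(keys)), key=scores.__getitem__)[-k:]
    m = max(scores[i] for i in topk)
    weights = [math.exp(scores[i] - m) for i in topk]
    z = sum(weights)
    dim = len(values[0])
    return [sum(w * values[i][d] for w, i in zip(weights, topk)) / z
            for d in range(dim)]

out = sparse_attention_step([1.0, 0.0],
                            [[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]],
                            [[1.0], [2.0], [3.0]], k=2)
```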
[Cost for V3.2 increases only very little thanks to linear attention](https://preview.redd.it/053i7pdro3sf1.png?width=1356&format=png&auto=webp&s=52adfb1bf9d0ee03f0a7d8e7b31340ab63b2f4b4)
Previous linear-attention attempts from other teams like Google and Minimax have not been successful. Let's see if DS can make the breakthrough this time. | 2025-09-29T12:52:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nth7cb/the_reason_why_deepseek_v32_is_so_cheap/ | Js8544 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nth7cb | false | null | t3_1nth7cb | /r/LocalLLaMA/comments/1nth7cb/the_reason_why_deepseek_v32_is_so_cheap/ | false | false | 541 | null |