| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I built a batteries included library to let any app spawn sandboxes from OCI images | 1 | Hey everyone,
I’ve been hacking on a small project that lets you equip (almost) any app with the ability to spawn sandboxes based on OCI-compatible images.
The idea is:
• Your app doesn’t need to know container internals
• It just asks the library to start a sandbox from an OCI image
• The sandbox handles isolation, environment, etc.
Use cases I had in mind:
• Running untrusted code / plugins
• Providing temporary dev environments
• Safely executing user workloads from a web app
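For illustration, here is a rough Python sketch of what such an embeddable API might look like. The names (`Sandbox`, `from_oci`, `run`) are hypothetical, not the library's actual interface, and the backend is stubbed out:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an embeddable sandbox API. A real implementation
# would pull/unpack the OCI image and set up namespaces, seccomp, etc.

@dataclass
class Sandbox:
    image: str                       # OCI image reference, e.g. "alpine:3.20"
    env: dict = field(default_factory=dict)

    @classmethod
    def from_oci(cls, image: str, **env: str) -> "Sandbox":
        # Here we only record the request; no image is actually pulled.
        return cls(image=image, env=dict(env))

    def run(self, *argv: str) -> dict:
        # Stub: a real sandbox would exec argv inside the isolated rootfs.
        return {"image": self.image, "argv": list(argv), "exit_code": 0}

box = Sandbox.from_oci("alpine:3.20", LANG="C")
result = box.run("echo", "hello")
```

The point is the shape of the interface: the host app never touches container internals, it just asks for a sandbox from an image reference.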
A showcase powered by this library: https://github.com/boxlite-labs/boxlite-mcp
I’m not sure if people would find this useful, so I’d really appreciate:
• Feedback on the idea / design
• Criticism on security assumptions
• Suggestions for better DX or APIs
• “This already exists, go look at X” comments 🙂
If there’s interest I can write a deeper dive on how it works internally (sandbox model, image handling, etc.). | 2025-12-09T17:57:18 | https://www.reddit.com/r/LocalLLaMA/comments/1piebro/i_built_a_batteries_included_library_to_let_any/ | DorianZheng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1piebro | false | null | t3_1piebro | /r/LocalLLaMA/comments/1piebro/i_built_a_batteries_included_library_to_let_any/ | true | false | spoiler | 1 | null |
I wanted audiobooks of stories that don't exist - so I built an app to read them to me | 1 | After multiple weeks of work, I'm excited to share my passion project: an open-source desktop app for creating audiobooks using AI text-to-speech with voice cloning.
**The story behind it:**
I wanted to listen to fan fiction and web novels that don't have audiobook versions. Commercial TTS services are expensive, and their workflows aren't focused on audiobook generation. So I built my own solution that runs completely locally on your machine - no subscriptions, no cloud, your data stays private.
**What makes it different:**
* Clean drag & drop interface for organizing chapters and segments
* Supports multiple TTS engines (XTTS, Chatterbox) - swap them as you like
* Built-in quality check using Whisper to catch mispronunciations and Silero-VAD for audio issues
* Import full books in .md format and use spaCy for auto-segmentation
* Pronunciation rules to fix words the AI struggles with
* Engine template for hassle-free adding of new engines as they get released
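As a toy sketch of the auto-segmentation step (the app itself uses spaCy for sentence boundaries; this illustration substitutes a naive regex splitter):

```python
import re

def segment_markdown(md: str) -> list[dict]:
    """Split a markdown book into chapters (by '#' headings) and
    sentence-level segments. Naive regex stands in for spaCy here."""
    chapters = []
    current = {"title": "Untitled", "segments": []}
    for line in md.splitlines():
        if line.startswith("#"):
            if current["segments"]:
                chapters.append(current)
            current = {"title": line.lstrip("# ").strip(), "segments": []}
        elif line.strip():
            # split on ., ! or ? followed by whitespace
            parts = re.split(r"(?<=[.!?])\s+", line)
            current["segments"] += [s.strip() for s in parts if s.strip()]
    if current["segments"]:
        chapters.append(current)
    return chapters

book = "# Chapter 1\nIt was night. The rain fell.\n# Chapter 2\nMorning came."
chapters = segment_markdown(book)
```

Each segment then becomes one TTS generation unit, which is what makes per-segment quality checks and regeneration possible.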
**The tech (for those interested):**
Tauri 2 desktop app with React frontend and Python backend. Each AI engine runs in isolation, so you can mix and match without dependency hell. Works on Windows, Linux, and macOS.
**Current state:**
Just released v1.0.1. It's stable and I use it daily for my own audiobooks. Still a solo project, but fully functional.
GitHub: [https://github.com/DigiJoe79/AudioBook-Maker](https://github.com/DigiJoe79/AudioBook-Maker)
Would love feedback from this community. What features would you find most useful? | 2025-12-09T17:40:47 | https://www.reddit.com/r/LocalLLaMA/comments/1piduwm/i_wanted_audiobooks_of_stories_that_dont_exist_so/ | DigiJoe79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1piduwm | false | null | t3_1piduwm | /r/LocalLLaMA/comments/1piduwm/i_wanted_audiobooks_of_stories_that_dont_exist_so/ | false | false | self | 1 | null |
Bridging local LLMs with specialized agents (personal project) - looking for feedback | 1 | (This post is 100% self-promotion, so feel free to moderate it if it goes against the rules.)
Hi guys, I've been working on this project of mine and I'm trying to get a temperature check if it's something people would be interested in. It's called "Neutra AI" (neutra-ai.com).
The idea is simple: give your local LLM more capabilities. For example, I have developed a fine-tuned model that's very good at PC troubleshooting. Then there's you: you're building a new PC, but you have run into some problems. If you ask your 'gpt-oss-20b' for help, chances are it might not know the answer (but my fine-tuned model will). So you plug your local LLM into the marketplace, and when you ask it a PC-related question, it will query my fine-tuned agent for assistance and give the answer back to you.
On one side you have the users of local LLMs; on the other, the agent providers. The marketplace makes it possible for local models to call "provider" models (technically, via a semantic search using the A2A protocol, but I'm still figuring out the details). "Neutra AI" is the middleware between the two that makes this possible. The process should be mostly plug-and-play, abstracting away the agent discovery phase and payment infrastructure. Think "narrow AI, but with broad applications".
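To make the routing idea concrete, here is a toy sketch (not Neutra's actual implementation) that picks the best provider agent for a query; a real system would use embedding vectors and the A2A protocol rather than bag-of-words cosine similarity:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # cosine similarity between two bag-of-words vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(query: str, agents: dict[str, str]) -> str:
    # pick the agent whose description best matches the query
    q = Counter(query.lower().split())
    return max(agents, key=lambda n: cosine(q, Counter(agents[n].lower().split())))

agents = {
    "pc-troubleshooter": "pc building hardware troubleshooting boot post errors",
    "tax-helper": "tax filing deductions income forms",
}
best = route("my new pc build has boot errors", agents)
```

The hard parts the middleware has to solve are exactly what this toy skips: maintaining the agent registry, doing discovery at scale, and settling payments.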
I'm happy to answer any questions and open to all kinds of feedback - both positive and negative. Bring it in, so I'll know if this is something worth spending my time on or not. | 2025-12-09T17:11:25 | https://www.reddit.com/r/LocalLLaMA/comments/1pid250/bridging_local_llms_with_specialized_agents/ | webs7er | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pid250 | false | null | t3_1pid250 | /r/LocalLLaMA/comments/1pid250/bridging_local_llms_with_specialized_agents/ | false | false | self | 1 | null |
People with dual gpu specially 8gb + 16gb mind to share your experience? | 1 | What are the biggest models you can run?
How good is dual gpu setup?
I'm mostly interested in 27b and 32b models.
Currently I have 4060 8gb vram and I'm thinking on getting 5060ti 16gb. | 2025-12-09T17:06:14 | https://www.reddit.com/r/LocalLLaMA/comments/1picwyw/people_with_dual_gpu_specially_8gb_16gb_mind_to/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1picwyw | false | null | t3_1picwyw | /r/LocalLLaMA/comments/1picwyw/people_with_dual_gpu_specially_8gb_16gb_mind_to/ | false | false | self | 1 | null |
Phantom-Fragment beta improvement branch | 1 | Well actually 2 months ago I posted regarding phantom fragment and I was working on it's performance then I hit fundamental limits of go language so I had to change to rust and zig (no common language but it's good for c like working) some key improvement
Minimal memory footprint (~1–3 MB per fragment)
Fast cold start (~7 ms) and near-instant warm start
Three execution profiles:
Direct → minimal guards, fastest execution
Sandbox → container-level isolation (namespaces, seccomp, cgroups)
Hardened → full isolation, optional KVM/hardware-level
(There is also a config file where you can adjust the isolation profile to your needs.)
Docker Compatibility
Dockerfile → Fragmentfile translation
Commands similar to Docker: build, run, exec
Allows reuse of Docker images / workflow with smaller overhead
High-Performance I/O
Uses io_uring for high-speed asynchronous reads/writes
Optimized for both polling and standard modes
Supports ~2–3+ GB/s throughput depending on kernel & permissions
(Earlier, with Go, I/O topped out around 1.5 GB/s.)
Includes MCP server
The MCP server was previously written in Go, so it wasn't heavy, but since I was porting everything anyway I rewrote it in Rust too, as there is good documentation on building MCP servers in Rust.
Developer-Friendly CLI
Single phantom binary controls all operations
Commands: run, create, list, destroy, health, profiles
Designed for quick experimentation, scripting, and CI/CD pipelines
I tried to keep its CLI similar to Docker's.
Modular design: separates performance, security, and orchestration
So anyway, my question: should I delete the Go main branch, complete the documentation for this version, and make it the main branch?
| 2025-12-09T17:05:20 | https://www.reddit.com/r/LocalLLaMA/comments/1picw42/phantomfragment_beta_improvement_branch/ | Ok_Horror_8567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1picw42 | false | null | t3_1picw42 | /r/LocalLLaMA/comments/1picw42/phantomfragment_beta_improvement_branch/ | false | false | self | 1 | null |
DeepSeek V3.2 got gold at IMO and IOI - weights on HF, MIT license, but Speciale expires Dec 15 | 1 | DeepSeek dropped V3.2 last week and the results are kind of insane:
- Gold medal score on IMO 2025 (actual competition problems)
- Gold at IOI 2025 (programming olympiad)
- 2nd place ICPC World Finals
- Beats GPT-5 on math/reasoning benchmarks
The model is on Hugging Face under MIT license: https://huggingface.co/deepseek-ai/DeepSeek-V3.2
Catch: It's 671B parameters (MoE, 37B active). Not exactly laptop-friendly. The "Speciale" variant that got the gold medals is API-only and expires December 15th.
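For anyone eyeballing VRAM requirements, a rough weights-only estimate (ignoring KV cache, activations, and runtime overhead) is just params × bits / 8:

```python
def weight_gb(params_b: float, bits: float) -> float:
    """Weights-only memory estimate in GB; real usage is higher
    (KV cache, activations, runtime overhead)."""
    return params_b * 1e9 * bits / 8 / 1e9

total = 671   # total parameters, billions
active = 37   # active per token (MoE)

for bits in (16, 8, 4):
    print(f"{bits}-bit: weights ~{weight_gb(total, bits):.0f} GB "
          f"(active experts ~{weight_gb(active, bits):.0f} GB)")
```

Note that MoE routing doesn't reduce the memory you need to hold the model: all 671B parameters must be resident somewhere, the 37B active only reduces per-token compute.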
What's interesting: They did this while being banned from buying latest Nvidia chips. Had to innovate on efficiency instead of brute-forcing with compute. The paper goes into their sparse attention mechanism that cuts inference costs ~50% for long contexts.
Anyone tried running the base model locally yet? Curious about actual VRAM requirements and whether the non-Speciale version is still competitive.
(Also made a video breakdown if anyone wants the non-paper version: https://youtu.be/8Fq7UkSxaac)
Paper: https://arxiv.org/abs/2512.02556 | 2025-12-09T17:00:35 | https://www.reddit.com/r/LocalLLaMA/comments/1picr9o/deepseek_v32_got_gold_at_imo_and_ioi_weights_on/ | Proof-Possibility-54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1picr9o | false | null | t3_1picr9o | /r/LocalLLaMA/comments/1picr9o/deepseek_v32_got_gold_at_imo_and_ioi_weights_on/ | false | false | self | 1 | null |
Building Gemma 3 | 1 | 2025-12-09T16:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/1picn28/building_gemma_3/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1picn28 | false | null | t3_1picn28 | /r/LocalLLaMA/comments/1picn28/building_gemma_3/ | false | false | 1 | null | ||
[OPENSOURCE] Whisper finetuning, inference, auto gpu upscale, proxy and co | 1 | My cofounder and I spent 2 months building a system to easily generate synthetic data and fine-tune Whisper Large V3 Turbo.
On average we reach +50% accuracy.
We built a whole infra like Deepgram that can auto upscale GPUs based on usage, with a proxy to dispatch based on location and inference in 300MS for voice AI.
The company is shutting down but we decided to open source everything.
Feel free to reach out if you need help with setup or usage ✌🏻
[https://github.com/orgs/LATICE-AI/](https://github.com/orgs/LATICE-AI/) | 2025-12-09T16:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1picexa/opensource_whisper_finetuning_inference_auto_gpu/ | Wide_Appointment9924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1picexa | false | null | t3_1picexa | /r/LocalLLaMA/comments/1picexa/opensource_whisper_finetuning_inference_auto_gpu/ | false | false | self | 1 | null |
Llama.cpp Vulkan benchmarks by Phoronix | 1 | 2025-12-09T16:47:24 | https://www.phoronix.com/review/llama-cpp-vulkan-eoy2025/2 | ymmvxd | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1picehv | false | null | t3_1picehv | /r/LocalLLaMA/comments/1picehv/llamacpp_vulkan_benchmarks_by_phoronix/ | false | false | default | 1 | null | |
Olares One Backer! | 1 | I just backed the Olares One as backer #19. It’s a personal cloud AI computer that ships in February, and I’m genuinely excited to get my hands on it. It can run completely headless, or be plugged into monitors. If you’re into local AI, privacy, or owning your own compute, this one looks promising.
Here’s the link if you want to check it out: [https://olares-one.kckb.me/414de3de](https://olares-one.kckb.me/414de3de) | 2025-12-09T16:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pibx3i/olares_one_backer/ | JLToro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pibx3i | false | null | t3_1pibx3i | /r/LocalLLaMA/comments/1pibx3i/olares_one_backer/ | false | false | self | 1 | null |
Why is Kyutai’s Moshi model so emotionally flat compared to Sesame, even though both use the same MiMi encoder–decoder? | 1 | I’ve been testing Kyutai’s **Moshi** model, and one thing stands out: it’s really *not* emotionally intelligent. It often sounds neutral, flat, or unable to reflect emotions—almost like a regular tts model.
But then there’s **Sesame**, which uses the *exact same* MiMi encoder/decoder architecture (at least from what has been stated) and somehow manages to be way better at emotional understanding and emotional expression.
If both models are built on the same MiMi foundation, **why is the emotional intelligence so different?**
| 2025-12-09T16:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pibggs/why_is_kyutais_moshi_model_so_emotionally_flat/ | Forsaken-Turnip-6664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pibggs | false | null | t3_1pibggs | /r/LocalLLaMA/comments/1pibggs/why_is_kyutais_moshi_model_so_emotionally_flat/ | false | false | self | 1 | null |
New ways to roast people in the AI era | 1 | In the AI era, we can update the way we roast people.
Instead of saying "nerd," try saying "benchmaxxed."
Instead of saying "brain-dead," try saying "pruned/quantized."
Instead of saying "no brain," try saying "low params count."
Instead of saying "didn't study," try saying "undertrained."
Instead of saying "only knows book knowledge," try saying "overfitted."
Instead of saying "boring and dull," try saying "safetymaxxed."
Instead of saying "slow to react," try saying "slow prompt processing/token generation."
Instead of saying "clumsy," try saying "poor tool use performance."
Instead of saying "talks nonsense endlessly," try saying "temperature too high/missing EOS."
Instead of saying "speaks gibberish," try saying "template config error/topK sampling error."
Instead of saying "disobedient," try saying "non-instruct base model."
Instead of saying "doesn't think with the brain," try saying "non-thinking instruct model."
Instead of saying "poor memory," try saying "low context window."
Instead of saying "easily fooled," try saying "vulnerable to prompt injection."
It's normal if you don't understand any of this. If you understand all of these, go outside and touch some grass. | 2025-12-09T16:04:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pib8z9/new_ways_to_roast_people_in_the_ai_era/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pib8z9 | false | null | t3_1pib8z9 | /r/LocalLLaMA/comments/1pib8z9/new_ways_to_roast_people_in_the_ai_era/ | false | false | self | 1 | null |
MagicQuant - Hybrid Evolution GGUF (TPS boosts, precision gains, full transparency) | 1 | I’ve been building a system that evolves **hybrid GGUF quantizations** to automatically find the best tensor level mix for any model.
It’s called **MagicQuant**, and the whole idea is simple:
**Stop guessing quant types. Let the math decide the optimal configuration.**
MagicQuant runs survival rounds, epsilon-greedy exploration, precision-loss scoring, TPS benchmarking, and a ton of tensor-group heuristics to evolve better (and sometimes *way* better) GGUFs than standard baselines.
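For intuition, an epsilon-greedy selection loop over candidate configs looks roughly like this (an illustrative toy with made-up scores, not MagicQuant's actual code):

```python
import random

def epsilon_greedy(scores: dict[str, float], rounds: int = 300,
                   eps: float = 0.2, seed: int = 0) -> str:
    """Each config's 'score' stands in for a combined TPS/precision
    objective; benchmark noise is simulated with a small Gaussian."""
    rng = random.Random(seed)
    names = list(scores)
    totals = {n: 0.0 for n in names}
    counts = {n: 0 for n in names}
    for _ in range(rounds):
        if rng.random() < eps or not any(counts.values()):
            pick = rng.choice(names)  # explore a random candidate
        else:                         # exploit the best average so far
            pick = max(names, key=lambda n: totals[n] / counts[n]
                       if counts[n] else float("-inf"))
        totals[pick] += scores[pick] + rng.gauss(0, 0.01)
        counts[pick] += 1
    return max(names, key=lambda n: totals[n] / counts[n]
               if counts[n] else float("-inf"))

configs = {"Q6_K": 0.70, "IQ4_NL": 0.55, "hybrid-EHQKOUD-IQ4NL": 0.90}
winner = epsilon_greedy(configs)
```

The real engine additionally prunes losers in survival rounds and uses per-tensor-group heuristics to shrink the search space, since the full space of hybrids is far too large to benchmark exhaustively.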
And the results so far have been amazing.
---
## Example: Seed-OSS 36B
This is one of the crazier results I’ve gotten so far.
The best Q4-range baseline was **IQ4_NL**:
* **19.31 GB**
* **27.70 TPS**
* **1.1076% precision loss**
MagicQuant evolved a hybrid at:
* **18.95 GB**
* **32.00 TPS**
* **0.2709% precision loss**
So:
* Slightly smaller
* **+15.5% faster**
* **~75% LESS precision loss**
This hybrid:
[mxfp4_moe-EHQKOUD-IQ4NL](https://huggingface.co/magiccodingman/Seed-OSS-36B-Instruct-unsloth-MagicQuant-Hybrid-GGUF)
This is the kind of thing MagicQuant keeps finding.
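As a sanity check, those headline deltas follow directly from the baseline and hybrid numbers:

```python
base_tps, base_loss = 27.70, 1.1076   # IQ4_NL baseline
hyb_tps, hyb_loss = 32.00, 0.2709     # evolved hybrid

speedup = (hyb_tps - base_tps) / base_tps * 100          # % TPS gain
loss_reduction = (1 - hyb_loss / base_loss) * 100        # % less precision loss

print(f"TPS: +{speedup:.1f}%, precision loss: -{loss_reduction:.1f}%")
```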
---
## MagicQuant Hybrids for Seed OSS 36B
| model_name | file_size_gb | bench_tps | avg_prec_loss |
| --------------------------------------- | ------------ | --------- | ------------- |
| mxfp4_moe-HK-B16-EO-Q5K-QUD-Q8_0 | 39.71 | 17.73 | **0.0213%** |
| mxfp4_moe-O-MXFP4-EHQKUD-Q8_0 | 35.78 | 18.72 | 0.0272% |
| mxfp4_moe-E-B16-D-IQ4NL-KOU-Q6K-HQ-Q8_0 | 28.02 | 24.27 | 0.1768% |
| mxfp4_moe-EHQKOUD-Q6K | 27.63 | 23.34 | 0.2037% |
| **mxfp4_moe-EHQKOUD-IQ4NL** | **18.95** | **32.00** | **0.2709%** |
| mxfp4_moe-HQKU-IQ4NL-EOD-MXFP4 | 18.66 | 26.90 | 0.7098% |
| MXFP4_MOE | 17.90 | 20.46 | 2.7338% |
---
## Baseline Reference (for comparison)
| model_name | file_size_gb | bench_tps | avg_prec_loss |
| ---------- | ------------ | --------- | ------------- |
| BF16 | 67.35 | 11.48 | 0.0000% |
| Q8_0 | 35.78 | 17.77 | 0.0272% |
| Q6_K | 27.63 | 22.95 | 0.2037% |
| Q5_K | 23.84 | 22.04 | 0.2923% |
| IQ4_NL | 19.31 | 27.70 | 1.1076% |
| MXFP4_MOE | 17.90 | 20.46 | 2.7338% |
| Q4_K_M | 20.27 | 26.65 | 2.9161% |
MagicQuant compares everything against these to determine the “winner.”
---
## What MagicQuant keeps discovering
Different architectures respond to quantization very differently:
* Some *love* MXFP4.
* Some prefer IQ4_NL.
* Some models randomly explode in quality on Q5_K.
* Seed-OSS ditched most baselines entirely.
* Apriel 1.5-15B? That model is a complete gremlin, it loves **Q5_K** more than anything else I’ve thrown at it.
MagicQuant isn’t about producing hybrids for the sake of hybrids.
**MagicQuant is the verdict, whatever wins stays.**
Sometimes that’s a hybrid.
Sometimes the baseline reigns king.
Sometimes Q6_K beats Q8_0 in both TPS and precision.
Sometimes Q4_K_M outperforms IQ4_NL on *certain models.*
Everything depends on the architecture.
---
## Philosophically
I’m honestly tired of downloading Q8/Q6/Q5/Q4 files with **no benchmarks**.
If a quant is bigger, slower, *and* loses more precision, why use it?
If a smaller quant loses 5% precision, I want to **see that number** before downloading.
MagicQuant is my attempt at making quantization:
* empirical
* transparent
* repeatable
* and actually *useful* for the community
Every model will always include:
* benchmark TPS
* precision loss scoring
* file size
* the full hybrid naming breakdown
* data sets
* methodology
* raw results
Everything is open and reproducible.
---
## HuggingFace Collection
All MagicQuant releases live here:
[https://huggingface.co/collections/magiccodingman/magic-quant](https://huggingface.co/collections/magiccodingman/magic-quant)
More hybrids are already in the pipeline.
Right now a dense 4B model takes ~2-3 hours to run. A 30B MoE takes ~24 hours (MoE takes roughly twice as long due to sensitivity). My prediction engine has to build sample data until confidence is high enough that it can properly predict hybrids. Some models are easier than others: some dense models need only 46-55 samples, others need 120, and some need more or less. The engine figures that out.
---
## Documentation / Wiki
Full documentation, philosophy, naming scheme, methodology, and technical breakdown:
[https://github.com/magiccodingman/MagicQuant-Wiki](https://github.com/magiccodingman/MagicQuant-Wiki)
MagicQuant is still evolving, but the results so far have been extremely promising and the more models I run, the weirder and more interesting the quantization patterns become.
---
But if you have any suggestions, requests for MagicQuant models, holes to poke, I'm all ears. | 2025-12-09T15:47:38 | https://www.reddit.com/r/LocalLLaMA/comments/1piasv8/magicquant_hybrid_evolution_gguf_tps_boosts/ | crossivejoker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1piasv8 | false | null | t3_1piasv8 | /r/LocalLLaMA/comments/1piasv8/magicquant_hybrid_evolution_gguf_tps_boosts/ | false | false | self | 1 | null |
UPDATE: I turned my Synt-E prompt compiler into a system-wide hotkey tool, based on your feedback! (v1.0.0 Release) | 1 | Hey r/LocalLLaMA,
Yesterday I posted the first version of my Synt-E project, a Python script to compile natural language into efficient commands for local LLMs. I got a couple of stars on GitHub and some really insightful comments that made me rethink the core functionality.
The main issue was the script's fragility in simulating Ctrl+C, which didn't work reliably on all systems. I realized a different approach was needed.
So, I've spent the time since then rebuilding it, and I'm happy to share that **Synt-E v1.0.0 is now released**. It's a much more robust and useful tool now.
The biggest change is a new **100% reliable workflow**: you manually copy the text (Ctrl+C), and then press a hotkey to trigger the translation. This gives you full control and works everywhere.
Based on some great ideas, I also added a suite of "quality of life" features:
* **🚀 System-Wide Hotkey Tool:** The script now runs in the background and can translate your copied text and paste it **in any application** just by pressing a hotkey (default: Ctrl+Alt+S).
* **🛡️ Safety & Control Features:**
* **Undo Hotkey (Ctrl+Alt+U):** You have 10 seconds to revert a translation if you make a mistake.
* **AI Task Cancellation (Ctrl+Alt+C):** You can now interrupt a long-running AI process without shutting down the script.
* **Emergency Reset (Ctrl+Alt+Q):** A panic button in case the script ever freezes keyboard input.
* **⚙️ Fully Configurable:** You can now set your own hotkeys, change the Ollama model, and choose to "append" the translation instead of replacing, all from the command line.
I've just published the **v1.0.0 release** on GitHub with a completely rewritten README that explains how to use all the new features.
**You can check it out here:**
[https://github.com/NeuroTinkerLab/synt-e-project/releases/tag/v1.0.0](https://www.google.com/url?sa=E&q=https%3A%2F%2Fgithub.com%2FNeuroTinkerLab%2Fsynt-e-project%2Freleases%2Ftag%2Fv1.0.0)
I'm keen to hear what you think of this new, more robust version. The goal was to turn it from a simple proof-of-concept into a genuinely useful daily tool.
Thanks for checking it out. | 2025-12-09T15:35:50 | https://www.reddit.com/r/LocalLLaMA/comments/1piahob/update_i_turned_my_synte_prompt_compiler_into_a/ | Prestigious_Mix_2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1piahob | false | null | t3_1piahob | /r/LocalLLaMA/comments/1piahob/update_i_turned_my_synte_prompt_compiler_into_a/ | false | false | self | 1 | null |
Devstral-Small-2-24B-Instruct-2512 on Hugging Face | 1 | 2025-12-09T15:29:19 | https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512 | paf1138 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1piabn8 | false | null | t3_1piabn8 | /r/LocalLLaMA/comments/1piabn8/devstralsmall224binstruct2512_on_hugging_face/ | false | false | default | 1 | null | |
Devstral 2 - a mistralai Collection | 1 | 2025-12-09T15:23:10 | https://huggingface.co/collections/mistralai/devstral-2 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pia5sy | false | null | t3_1pia5sy | /r/LocalLLaMA/comments/1pia5sy/devstral_2_a_mistralai_collection/ | false | false | default | 1 | null | |
Devstral 2 and Mistral Vibe CLI released. | 1 | 2025-12-09T15:13:18 | https://mistral.ai/news/devstral-2-vibe-cli | hedgehog0 | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1pi9wpt | false | null | t3_1pi9wpt | /r/LocalLLaMA/comments/1pi9wpt/devstral_2_and_mistral_vibe_cli_released/ | false | false | default | 1 | null | |
QonQrete – Local-First Multi-Agent “Construction Yard” for LLM Dev Workflows | 1 | Hey guys/girls,
I’ve been working on a local-first agentic dev pipeline and just cut a beta release I’d love feedback on.
**QonQrete v0.5.0 (beta)** is a local-first, multi-agent orchestration system for building software with LLMs you control. The idea is to treat your machine like an **AI construction yard**: agents plan, write, and review code inside an isolated sandbox, under your supervision.
The design goal is **“my hardware, my keys, my data”**:
* Runs on your own infrastructure
* No SaaS backend or mandatory cloud service
* Designed to plug into remote APIs *and* local runtimes (e.g. self-hosted LLaMA via HTTP/CLI adapters)
**Core flow (3 agents):**
* **InstruQtor** – breaks down tasks and emits structured execution plans
* **ConstruQtor** – executes steps and generates/edits code
* **InspeQtor** – reviews diffs, flags issues, and proposes fixes
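The hand-offs between the three roles can be stubbed out in a few lines (the real agents call an LLM; these stand-ins only show the data flow):

```python
def instruqtor(task: str) -> list[str]:
    # break a task description into an ordered plan
    return [f"step {i+1}: {part.strip()}"
            for i, part in enumerate(task.split(","))]

def construqtor(plan: list[str]) -> dict[str, str]:
    # "execute" each step by producing a code artifact (a diff)
    return {step: f"diff for {step}" for step in plan}

def inspeqtor(artifacts: dict[str, str]) -> list[str]:
    # review artifacts; flag any step without a produced diff
    return [step for step, diff in artifacts.items() if "diff" not in diff]

plan = instruqtor("add endpoint, write tests")
artifacts = construqtor(plan)
issues = inspeqtor(artifacts)
```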
All execution happens inside containerized sandboxes (Docker/microsandbox-style), so AI-generated code runs with a strong boundary between **orchestration** and **execution**.
Right now QonQrete ships with adapters for OpenAI, Gemini, Claude, and DeepSeek. The architecture is intentionally simple so that local LLaMA / Ollama / vLLM / text-generation-webui style backends can be added as providers.
If you’re hacking on local LLM stacks and care about:
* privacy
* repeatable multi-agent workflows
* keeping everything on your own boxes
…I’d really appreciate feedback, critiques, or help designing provider plugins for local models.
Repo (AGPL):
[https://github.com/illdynamics/qonqrete](https://github.com/illdynamics/qonqrete) | 2025-12-09T15:11:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pi9uq1/qonqrete_localfirst_multiagent_construction_yard/ | illdynamics | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi9uq1 | false | null | t3_1pi9uq1 | /r/LocalLLaMA/comments/1pi9uq1/qonqrete_localfirst_multiagent_construction_yard/ | false | false | self | 1 | null |
Best model for 7900 xtx setup | 1 | Hi, I'm looking for an AI model that's best in class for my hardware: a Ryzen 5800X3D, 32GB RAM, a 7900 XTX, Windows 10, and LM Studio with MCP. I want a model that's strong in many areas, especially programming - I don't know much about programming myself, so I'd like the model to handle it for me. It should also be good at writing text. Which currently available models do you recommend?
Introducing: Devstral 2 and Mistral Vibe CLI. | Mistral AI | 1 | 2025-12-09T15:05:54 | https://mistral.ai/news/devstral-2-vibe-cli | YanderMan | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1pi9q3t | false | null | t3_1pi9q3t | /r/LocalLLaMA/comments/1pi9q3t/introducing_devstral_2_and_mistral_vibe_cli/ | false | false | default | 1 | null | |
llama.cpp and CUDA 13.1 not using GPU on Win 11 | 1 | Hi all. I'm using `llama.cpp (b7330)` on Windows 11 and tried switching from the CUDA 12-based version to the CUDA 13 (13.1) version. When I run `llama-server` or `llama-bench`, it seems to recognize my **NVIDIA T600 Laptop GPU**, but then it doesn't use it for processing, defaulting entirely to the CPU. Crucially, it still appears to use the VRAM (as I see no increase in system RAM usage). If I revert to using **CUDA 12 (12.9)**, everything runs on the GPU as expected. Are there **known compatibility issues** between older cards like the T600 and recent CUDA 13.x builds? Or I'm doing something wrong? | 2025-12-09T15:00:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pi9ku4/llamacpp_and_cuda_131_not_using_gpu_on_win_11/ | Haunting_Dingo2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi9ku4 | false | null | t3_1pi9ku4 | /r/LocalLLaMA/comments/1pi9ku4/llamacpp_and_cuda_131_not_using_gpu_on_win_11/ | false | false | self | 1 | null |
PaCoRe: The first open-source deep think 8B model beats GPT-5 on HMMT25 | 1 | Introducing Parallel Coordinated Reasoning (PaCoRe)
An 8B model beats GPT-5 on HMMT25 by unlocking parallel thinking for test-time scaling!
The first open-source **deep think**: data + model + inference code!
MIT-licensed — use it however you want
\- Github: [https://github.com/stepfun-ai/PaCoRe](https://github.com/stepfun-ai/PaCoRe)
\- Paper: [https://github.com/stepfun-ai/PaCoRe/blob/main/pacore\_report.pdf](https://github.com/stepfun-ai/PaCoRe/blob/main/pacore_report.pdf)
\- Model: [https://huggingface.co/stepfun-ai/PaCoRe-8B](https://huggingface.co/stepfun-ai/PaCoRe-8B)
\- Data: [https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k)
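For intuition, the simplest form of parallel test-time scaling is sampling several independent reasoning paths and aggregating their answers. PaCoRe's coordination is richer than plain majority voting, but the toy below shows the basic idea:

```python
from collections import Counter

def aggregate(answers: list[str]) -> str:
    # majority vote over the final answers of parallel rollouts
    return Counter(answers).most_common(1)[0][0]

# e.g. 8 parallel rollouts for one math problem
rollouts = ["42", "42", "41", "42", "40", "42", "42", "41"]
final = aggregate(rollouts)
```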
| 2025-12-09T14:54:13 | https://www.reddit.com/r/LocalLLaMA/comments/1pi9fpf/pacore_the_first_opensource_deep_think_8b_model/ | Fancy_Fanqi77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi9fpf | false | null | t3_1pi9fpf | /r/LocalLLaMA/comments/1pi9fpf/pacore_the_first_opensource_deep_think_8b_model/ | false | false | self | 1 | null |
Crowdsourcing World Models | 1 | Open Ontology is a public engine for generating structured world models for any topic.
Create new models, expand existing ones, run discoveries, and help grow a shared, organized map of knowledge using your own API keys.
Each model is a full ontology: concepts, relationships, subdomains, workflows, and reasoning structures arranged in a clean, consistent hierarchy. Every contribution is validated, cleaned, and merged, keeping models stable as they scale.
Public models are fully open. Anyone can create them, improve them, and download them freely.
Premium users unlock private models and full agent-generation tools. Build internal knowledge systems, generate workflows and reasoning templates, and export complete agent packages ready for deployment.
Open Ontology brings together community compute, structure, and collaboration to create a growing library of topic-specific world models for AI.
https://openontology.app/ | 2025-12-09T14:49:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pi9b5a/crowdsourcing_world_models/ | OpenOntology | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi9b5a | false | null | t3_1pi9b5a | /r/LocalLLaMA/comments/1pi9b5a/crowdsourcing_world_models/ | false | false | self | 1 | null |
Micron 🧐💀 | 1 | -> Today, companies that train models are mainly looking for optimizations and cheap training (as seen when the TPU topic came up), so demand won't always stay this high
-> perhaps in 2026 more optimizations will come out of China, which may lead to lower consumption
-> An HBM plant takes approximately 1 year to build, what if optimizations come out within a year? 💀
Note:
https://finance.yahoo.com/news/micron-plans-9-6-billion-125500795.html | 2025-12-09T14:38:24 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pi91e7 | false | null | t3_1pi91e7 | /r/LocalLLaMA/comments/1pi91e7/micron/ | false | false | default | 1 | null |
Micron 💀🧐 | 1 | [deleted] | 2025-12-09T14:37:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pi90q0 | false | null | t3_1pi90q0 | /r/LocalLLaMA/comments/1pi90q0/micron/ | false | false | default | 1 | null | ||
Which small model is best for fine-tuning? We tested 12 of them by spending $10K - here's what we found | 1 | **TL;DR:** We fine-tuned 12 small models to find which ones are most tunable and perform best after fine-tuning. Surprise finding: Llama-3.2-1B showed the biggest improvement (most tunable), while Qwen3-4B delivered the best final performance - matching a 120B teacher on 7/8 tasks and outperforming by 19 points on the SQuAD 2.0 dataset.
**Setup:**
12 models total - Qwen3 (8B, 4B, 1.7B, 0.6B), Llama (3.1-8B, 3.2-3B, 3.2-1B), SmolLM2 (1.7B, 135M), Gemma (1B, 270M), and Granite 8B.
Used GPT-OSS 120B as teacher to generate 10k synthetic training examples per task. Fine-tuned everything with identical settings: LoRA rank 64, 4 epochs, 5e-5 learning rate.
Tested on 8 benchmarks: classification tasks (TREC, Banking77, Ecommerce, Mental Health), document extraction, and QA (HotpotQA, Roman Empire, SQuAD 2.0).
**Finding #1: Tunability (which models improve most)**
The smallest models showed the biggest gains from fine-tuning. Llama-3.2-1B ranked #1 for tunability, followed by Llama-3.2-3B and Qwen3-0.6B.
This pattern makes sense - smaller models start weaker but have more room to grow. Fine-tuning closed the gap hard. The 8B models ranked lowest for tunability not because they're bad, but because they started strong and had less room to improve.
If you're stuck with small models due to hardware constraints, this is good news. Fine-tuning can make a 1B model competitive with much larger models on specific tasks.
**Finding #2: Best fine-tuned performance (can student match teacher?)**
Qwen3-4B-Instruct-2507 came out on top for final performance. After fine-tuning, it matched or exceeded the 120B teacher on 7 out of 8 benchmarks.
Breakdown: TREC (+3 points), Docs (+2), Ecommerce (+3), HotpotQA (tied), Mental Health (+1), Roman Empire (+5). Only fell short on Banking77 by 3 points.
SQuAD 2.0 was wild - the 4B student scored 0.71 vs teacher's 0.52. That's a 19 point gap favoring the smaller model. A model 30x smaller outperforming the one that trained it.
Before fine-tuning, the 8B models dominated everything. After fine-tuning, model size mattered way less.
If you're running stuff on your own hardware, you can get frontier-level performance from a 4B model on a single consumer GPU. No expensive cloud instances. No API rate limits.
Let us know if there's a specific model you want benchmarked.
Full write-up: [https://www.distillabs.ai/blog/we-benchmarked-12-small-language-models-across-8-tasks-to-find-the-best-base-model-for-fine-tuning](https://www.distillabs.ai/blog/we-benchmarked-12-small-language-models-across-8-tasks-to-find-the-best-base-model-for-fine-tuning) | 2025-12-09T14:35:50 | party-horse | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pi8z74 | false | null | t3_1pi8z74 | /r/LocalLLaMA/comments/1pi8z74/which_small_model_is_best_for_finetuning_we/ | false | false | default | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/h9d1fvb7w66g1.png?auto=webp&s=1f9df0d4703570e0715694a88e58739c19e86c69', 'width': 6000, 'height': 3375}, 'resolutions': [{'url': 'https://preview.redd.it/h9d1fvb7w66g1.png?width=108&crop=smart&auto=webp&s=d18693781e0da66e49d98d3f81e74f30f9ceff1b', 'width': 108, 'height': 60}, {'url': 'https://preview.redd.it/h9d1fvb7w66g1.png?width=216&crop=smart&auto=webp&s=6fd13a8429d64d51d8d615326f99d7b8073257d3', 'width': 216, 'height': 121}, {'url': 'https://preview.redd.it/h9d1fvb7w66g1.png?width=320&crop=smart&auto=webp&s=4a41155408dcadc3799b389b667feba443c8bfed', 'width': 320, 'height': 180}, {'url': 'https://preview.redd.it/h9d1fvb7w66g1.png?width=640&crop=smart&auto=webp&s=3cd2f0eef8f43dfd10528e31cb2b9efd46b87bfb', 'width': 640, 'height': 360}, {'url': 'https://preview.redd.it/h9d1fvb7w66g1.png?width=960&crop=smart&auto=webp&s=7ac7c705dd217acea8f32ccf0995d41a6770c6d0', 'width': 960, 'height': 540}, {'url': 'https://preview.redd.it/h9d1fvb7w66g1.png?width=1080&crop=smart&auto=webp&s=e225fb0a53d4839ee0155e7e88709109033abf28', 'width': 1080, 'height': 607}], 'variants': {}, 'id': 'h9d1fvb7w66g1'}], 'enabled': True} | |
Rnj-1 8B , 43.3 on AIME25 , wow - anyone tried it? | 1 | ERROR: type should be string, got "\n\nhttps://preview.redd.it/9o0g3wo8v66g1.png?width=3864&format=png&auto=webp&s=01499253ab84c6e780c6ac8c9dbe59adb4ec4dbe\n\n[https://huggingface.co/EssentialAI/rnj-1-instruct](https://huggingface.co/EssentialAI/rnj-1-instruct) " | 2025-12-09T14:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pi8uid/rnj1_8b_433_on_aime25_wow_anyone_tried_it/ | Terrible_Scar_9890 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi8uid | false | null | t3_1pi8uid | /r/LocalLLaMA/comments/1pi8uid/rnj1_8b_433_on_aime25_wow_anyone_tried_it/ | false | false | 1 | null | |
[HELP] Open source implementation for making derestricted models | 4 | [Orion-zhen/abliteration](https://github.com/Orion-zhen/abliteration).
I have implemented [Norm-Preserving Biprojected Abliteration](https://huggingface.co/blog/grimjim/norm-preserving-biprojected-abliteration) method here, but I have no idea how to make a derestricted model that actually works.
So I need help from the community:
* Check whether my code is correct
* How should I prepare the harmful/harmless prompts to achieve the optimal result
* How to find the optimal parameters? I have implemented fine‑grained layer‑by‑layer control, as well as toggle on/off of biprojection and norm-preserving method, but I do not know exactly how to find the best configuration
* Use this code to *derestrict* some models, then give me feedback
Thank you all! 🤗 | 2025-12-09T14:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pi8n28/help_open_source_implementation_for_making/ | Apprehensive_Bed7502 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi8n28 | false | null | t3_1pi8n28 | /r/LocalLLaMA/comments/1pi8n28/help_open_source_implementation_for_making/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'Kch7kfsEVSv3T3RBqy9tmq4C6kmfx2ZjKdmJGcDepBE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kch7kfsEVSv3T3RBqy9tmq4C6kmfx2ZjKdmJGcDepBE.png?width=108&crop=smart&auto=webp&s=737ea7e8f7ba8cb906366d277badffac8932869f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kch7kfsEVSv3T3RBqy9tmq4C6kmfx2ZjKdmJGcDepBE.png?width=216&crop=smart&auto=webp&s=c928ec08a529a2348f0b5af5af25254d704d7ae0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kch7kfsEVSv3T3RBqy9tmq4C6kmfx2ZjKdmJGcDepBE.png?width=320&crop=smart&auto=webp&s=c027a0607c0f7c1104c03f9ccd1fa542189f7ce9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kch7kfsEVSv3T3RBqy9tmq4C6kmfx2ZjKdmJGcDepBE.png?width=640&crop=smart&auto=webp&s=490c9d5dc66b33c6caa3666aae264a32121d44b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kch7kfsEVSv3T3RBqy9tmq4C6kmfx2ZjKdmJGcDepBE.png?width=960&crop=smart&auto=webp&s=5aa540eb6b237896130553e9560f7b9dcfdcec46', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kch7kfsEVSv3T3RBqy9tmq4C6kmfx2ZjKdmJGcDepBE.png?width=1080&crop=smart&auto=webp&s=137b95de7532bd75bc323f1eac7e1408cf415c06', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kch7kfsEVSv3T3RBqy9tmq4C6kmfx2ZjKdmJGcDepBE.png?auto=webp&s=523e55dbf5143bca6f589e517acf647759e5e4ac', 'width': 1200}, 'variants': {}}]} |
Artifex: A tiny CPU-friendly toolkit for inference and fine-tuning small LLMs without training data | 5 | Hi everyone,
I’ve been working on a lightweight Python toolkit called **Artifex**, aimed at making it easy to **run** and **fine-tune** small LLMs entirely on CPU and without training data.
GitHub: [https://github.com/tanaos/artifex](https://github.com/tanaos/artifex?utm_source=chatgpt.com)
A lot of small/CPU-capable LLM libraries focus on inference only. If you want to *fine-tune* without powerful hardware, the options thin out quickly and the workflow gets fragmented. Besides, you usually need large datasets.
Artifex gives you a simple, unified approach for:
* **Inference on CPU** with small pre-trained models
* **Fine-tuning without training data** — you specify what the model should do, and the pre-trained model gets fine-tuned on synthetic data generated on-the-fly
* **Clean, minimal APIs** that are easy to extend
* **Zero GPUs required**
Early feedback would be super helpful:
* What small models do you care about?
* Which small models are you using day-to-day?
* Any features you’d want to see supported?
I’d love to evolve this with real use cases from people actually running LLMs locally.
Thanks for reading, and hope this is useful to some of you. | 2025-12-09T14:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pi8i1k/artifex_a_tiny_cpufriendly_toolkit_for_inference/ | Ok_Hold_5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi8i1k | false | null | t3_1pi8i1k | /r/LocalLLaMA/comments/1pi8i1k/artifex_a_tiny_cpufriendly_toolkit_for_inference/ | false | false | self | 5 | null |
[help] RTX pro 6000 - llama.cpp Qwen3-Next-80B maxes out at 70% gpu? | 0 | Hey all,
I've got a question. I run `Qwen3-Next-80B-A3B-Instruct-Q6_K` on my `RTX pro 6000 max-q 96gb`, but it maxes out at 70% with peaks to 75% GPU utilization. Is there a way to optimize my settings?
llama-swap settings:
```
"Qwen3-Next-80B-A3B-Instruct":
  name: "Qwen3-Next-80B-A3B-Instruct-GGUF:Q6_K"
  description: "Q6_K, F16 context, 65K"
  filters:
    strip_params: "temperature, top_k, top_p, min_p, presence_penalty"
  proxy: "127.0.0.1:5802"
  cmd: |
    /app/llama-server
    --host 0.0.0.0
    #--port ${PORT}
    --port 5802
    -ngl 99
    --flash-attn on
    --jinja
    --threads -1
    --temp 0.7 --min-p 0.0 --top-p 0.80 --top-k 20 --presence-penalty 1.0
    --model /models/unsloth/Qwen3-Next-80B-A3B-Instruct/Q6_K/Qwen3-Next-80B-A3B-Instruct-Q6_K-00001-of-00002.gguf
    --ctx-size 200000
    --api-key local-claude
    --parallel 1
    --cont-batching
    --defrag-thold 0.1
    --cache-type-k f16
    --cache-type-v f16
    --batch-size 4096
    --ubatch-size 2048
```
| 2025-12-09T13:57:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pi82p2/help_rtx_pro_6000_llamacpp_qwen3next80b_maxes_out/ | designbanana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi82p2 | false | null | t3_1pi82p2 | /r/LocalLLaMA/comments/1pi82p2/help_rtx_pro_6000_llamacpp_qwen3next80b_maxes_out/ | false | false | self | 0 | null |
A return to dense models? | 0 | It seems like an easy no based on previous conversations with model makers, but current RAM prices would argue against the norm.
I think the tie breaker is those building models already have the RAM and are still compute bound.
What are your thoughts on this possibility? | 2025-12-09T13:39:13 | https://www.reddit.com/r/LocalLLaMA/comments/1pi7nj0/a_return_to_dense_models/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi7nj0 | false | null | t3_1pi7nj0 | /r/LocalLLaMA/comments/1pi7nj0/a_return_to_dense_models/ | false | false | self | 0 | null |
Targetly - Deploy MCP Tools in One Command | 0 | Hey folks,
I’ve been building Targetly, a lightweight cloud runtime made specifically for hosting MCP tools. The goal is dead simple: your local MCP tool → a fully deployed, publicly accessible MCP server in one command.
It runs in an isolated container, handles resource management behind the scenes, and doesn't bother you with the usual infra yak-shaving.
* No infrastructure.
* No YAML jungles.
* No servers to babysit.
If you want to give the MVP a spin:
# Add the tap
brew tap Targetly-Labs/tly https://github.com/Targetly-Labs/brew-tly
# Install tly
brew install tly
# Login
tly login # Use any email
# If you want you can use tly init to get boilerplate code for MCP server
# Deploy in one go
tly deploy # Boom—your MCP server is live
It’s free to use.
If you try it out, I’d love to hear where it shines, where it breaks, or what you'd want next.
Thanks | 2025-12-09T13:34:25 | https://www.reddit.com/r/LocalLLaMA/comments/1pi7jiv/targetly_deploy_mcp_tools_in_one_command/ | LegitimateKey7444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi7jiv | false | null | t3_1pi7jiv | /r/LocalLLaMA/comments/1pi7jiv/targetly_deploy_mcp_tools_in_one_command/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '-Ehu2EjIKzkfojQYTVlvhM2w4T7YUelwk0HJwxKfkDk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-Ehu2EjIKzkfojQYTVlvhM2w4T7YUelwk0HJwxKfkDk.png?width=108&crop=smart&auto=webp&s=579858cdc9cf715dee9756efc5f203b721b32d2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-Ehu2EjIKzkfojQYTVlvhM2w4T7YUelwk0HJwxKfkDk.png?width=216&crop=smart&auto=webp&s=03fd799ba445895cbe3259f2381f61c51942bbe8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-Ehu2EjIKzkfojQYTVlvhM2w4T7YUelwk0HJwxKfkDk.png?width=320&crop=smart&auto=webp&s=b2d34e7bd5811b38fb2cb02010b914226c54f0fb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-Ehu2EjIKzkfojQYTVlvhM2w4T7YUelwk0HJwxKfkDk.png?width=640&crop=smart&auto=webp&s=6dc374329d79a8793e80cc2a3f5fb4294ee83d32', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-Ehu2EjIKzkfojQYTVlvhM2w4T7YUelwk0HJwxKfkDk.png?width=960&crop=smart&auto=webp&s=25f278a9a864a373f1273290bda8ed6ab088d321', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-Ehu2EjIKzkfojQYTVlvhM2w4T7YUelwk0HJwxKfkDk.png?width=1080&crop=smart&auto=webp&s=8ec1b1caf03e8f58afceb87ed36e51cbee004ca9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-Ehu2EjIKzkfojQYTVlvhM2w4T7YUelwk0HJwxKfkDk.png?auto=webp&s=b482ebca92511c7b50a74c46157db457b94acf7d', 'width': 1200}, 'variants': {}}]} |
How to run Qwen3-next 80b when you are poor | 0 | So, qwen3-next is finally available in ollama. Kudos to Alibabians out there.
Any ideas how to run it without 51+GB of VRAM for the Q4 quant? My current setup is 2x RTX 3090, so 48GB of VRAM; the server has 256GB of DDR4 with 80 CPUs. So while I technically _can run_ the model (same with gpt-oss:120b), the token generation speed is far from usable: 1 tok/sec, if not less.

Is there a way to somehow get it running faster with dual RTX 3090s? Sadly I can't fit one more card in the chassis :S
Selling a liver to throw $10k USD at an RTX 6000 Pro seems a bit too steep imho :S | 2025-12-09T13:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pi7ig7/how_to_run_qwen3next_80b_when_you_are_poor/ | ChopSticksPlease | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi7ig7 | false | null | t3_1pi7ig7 | /r/LocalLLaMA/comments/1pi7ig7/how_to_run_qwen3next_80b_when_you_are_poor/ | false | false | self | 0 | null |
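The usual llama.cpp trick for this class of model (A3B, so only ~3B parameters are active per token) is to keep the attention and shared weights on the GPUs and push the MoE expert tensors to system RAM. A hedged sketch, not a tested config; the GGUF filename and the layer count passed to `--n-cpu-moe` are placeholders to tune for your VRAM:

```
# Sketch only: split Qwen3-Next between 2x RTX 3090 VRAM and DDR4 with llama.cpp
llama-server \
  -m Qwen3-Next-80B-A3B-Instruct-Q4_K_M.gguf \
  -ngl 99 \
  --n-cpu-moe 28 \
  --flash-attn on \
  -c 32768
# --n-cpu-moe N keeps the expert FFN tensors of the first N layers on CPU;
# decrease N until the 48GB of VRAM is full. The manual equivalent uses
# tensor overrides, e.g. -ot "blk\.(0|1|2)\.ffn_.*_exps.*=CPU".
```

Because only the active experts are touched per token, expert offload usually degrades far less than naive layer offload, though DDR4 bandwidth will still cap generation speed.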
got tired of staring at raw logs while my local agents ran, so I built a "Mission Control" UI that connects to my terminal. Thoughts? | 19 | I've been running a lot of long-running agents (Claude Code / Open Interpreter / Codex) on my local machine and VPS.
The problem is: if I step away, I lose visibility. And reading raw matrix-style logs on my phone via SSH is painful. (I even built an Android app for that)
I built this "Control Plane" prototype. It basically pipes stdout from my local terminal to a web dashboard.
Left: Raw terminal stream.
Right: It parses "Thoughts" vs "Logs" into a clean timeline.
Features: I added a "Pause" button that actually sends a signal back to the local process to halt execution if the agent starts hallucinating.
Is this something you'd use? Any features you would like to see? | 2025-12-09T13:16:45 | https://v.redd.it/8rqdptuwh66g1 | Durst123 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pi75j1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8rqdptuwh66g1/DASHPlaylist.mpd?a=1767878234%2CNGZhOTNmZGFlYjM2ZTY2Y2FhODAwYTY1Y2E5MDMzNWZlNDFmMDM1OTUyZDY2NjIzNGRkZDgyOWZiNTM5ODc2MQ%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/8rqdptuwh66g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/8rqdptuwh66g1/HLSPlaylist.m3u8?a=1767878234%2CODM4ZjMxYTg5NDE3ZDIzODU4YTVhMmI2OWUzNjhkY2MwM2VmYTEyN2YzNzVkOWZhZGY5N2E0ODA0MjU0YThiMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8rqdptuwh66g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1082}} | t3_1pi75j1 | /r/LocalLLaMA/comments/1pi75j1/got_tired_of_staring_at_raw_logs_while_my_local/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'OTVjcnA0dndoNjZnMWDQvuPx_qAwAWbSlGAjJ1Y3p2Pqxou6uljBUEewC7tF', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/OTVjcnA0dndoNjZnMWDQvuPx_qAwAWbSlGAjJ1Y3p2Pqxou6uljBUEewC7tF.png?width=108&crop=smart&format=pjpg&auto=webp&s=707434d028b5a2af9f92cdd0293333252d6c355d', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/OTVjcnA0dndoNjZnMWDQvuPx_qAwAWbSlGAjJ1Y3p2Pqxou6uljBUEewC7tF.png?width=216&crop=smart&format=pjpg&auto=webp&s=9517a3d242f3fed56af779075b558eb97f258c06', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/OTVjcnA0dndoNjZnMWDQvuPx_qAwAWbSlGAjJ1Y3p2Pqxou6uljBUEewC7tF.png?width=320&crop=smart&format=pjpg&auto=webp&s=e542c5920d1baf80f2269d4b929e2dfa0a710fae', 'width': 320}, {'height': 425, 'url': 'https://external-preview.redd.it/OTVjcnA0dndoNjZnMWDQvuPx_qAwAWbSlGAjJ1Y3p2Pqxou6uljBUEewC7tF.png?width=640&crop=smart&format=pjpg&auto=webp&s=28264efca2e84b555a6250352a16502adc79d4e6', 'width': 640}, 
{'height': 638, 'url': 'https://external-preview.redd.it/OTVjcnA0dndoNjZnMWDQvuPx_qAwAWbSlGAjJ1Y3p2Pqxou6uljBUEewC7tF.png?width=960&crop=smart&format=pjpg&auto=webp&s=2d3ebc68d5602ff7813064f81fbb1536bff95170', 'width': 960}, {'height': 718, 'url': 'https://external-preview.redd.it/OTVjcnA0dndoNjZnMWDQvuPx_qAwAWbSlGAjJ1Y3p2Pqxou6uljBUEewC7tF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=605798b1a05137bb4359c5afb4ea44caccdc922a', 'width': 1080}], 'source': {'height': 780, 'url': 'https://external-preview.redd.it/OTVjcnA0dndoNjZnMWDQvuPx_qAwAWbSlGAjJ1Y3p2Pqxou6uljBUEewC7tF.png?format=pjpg&auto=webp&s=b62dbd926e274b36d059f37b5f2e65b5f9461d80', 'width': 1172}, 'variants': {}}]} | |
Discussion: Notable LLM, RAG and Agents Papers from Early December 2025 | 1 | [removed] | 2025-12-09T13:14:07 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pi73ej | false | null | t3_1pi73ej | /r/LocalLLaMA/comments/1pi73ej/discussion_notable_llm_rag_and_agents_papers_from/ | false | false | default | 1 | null | ||
Improving tps from gpt-oss-120b on 16gb VRAM & 80gb DDR4 RAM | 1 | Getting 6.5 tokens per second running gpt-oss-120b on LM Studio. Surprised it even ran, but definitely very slow.
Current setup:
- Intel i7-11700 @ 2.50GHz
- 1x 5060Ti 16gb on PCIe x16
- 2x 32 GB DDR4-3200 CL20 RAM
- 1x 16 GB DDR4-3200 CL20 RAM
Would there be any increase in performance if I added an additional 5060Ti onto the PCIe x4 slot, and switched to 4x sticks of 32GB RAM for a total of 128GB?
(My motherboard does not allow bifurcation on the x16 slot so I’m stuck with using the remaining x4 slot for the extra GPU) | 2025-12-09T13:06:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pi6x6c/improving_tps_from_gptoss120b_on_16gb_vram_80gb/ | Careful_Breath_1108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi6x6c | false | null | t3_1pi6x6c | /r/LocalLLaMA/comments/1pi6x6c/improving_tps_from_gptoss120b_on_16gb_vram_80gb/ | false | false | self | 1 | null |
Discussion: Notable LLM Papers from Early December 2025 | 1 | [removed] | 2025-12-09T13:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pi6uch/discussion_notable_llm_papers_from_early_december/ | Dear-Success-1441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi6uch | false | null | t3_1pi6uch | /r/LocalLLaMA/comments/1pi6uch/discussion_notable_llm_papers_from_early_december/ | false | false | self | 1 | null |
Discussion: The 1,000x Cost Gap. Why I finally stopped renting GPT-4 and moved to Llama 3. | 1 | [removed] | 2025-12-09T12:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pi6rsu/discussion_the_1000x_cost_gap_why_i_finally/ | Medium_Cup_4423 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi6rsu | false | null | t3_1pi6rsu | /r/LocalLLaMA/comments/1pi6rsu/discussion_the_1000x_cost_gap_why_i_finally/ | false | false | self | 1 | null |
Must-Read LLM Papers from Last Week (December Week 1, 2025) | 1 | [removed] | 2025-12-09T12:58:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pi6qut/mustread_llm_papers_from_last_week_december_week/ | Dear-Success-1441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi6qut | false | null | t3_1pi6qut | /r/LocalLLaMA/comments/1pi6qut/mustread_llm_papers_from_last_week_december_week/ | false | false | 1 | null | |
Models that has the least collapse when ctx length grows. Especially using it with tools. | 16 | what is your experience. Any models you can realiably push to 128k or even past that with consistent success and not getting into retry loops or thinking loops with tools?? My best expereince so far is gpt-oss at 64k but past 64k its starts to get hickups and missaps. what are your experiences? | 2025-12-09T12:44:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pi6hbx/models_that_has_the_least_collapse_when_ctx/ | Express_Quail_1493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi6hbx | false | null | t3_1pi6hbx | /r/LocalLLaMA/comments/1pi6hbx/models_that_has_the_least_collapse_when_ctx/ | false | false | self | 16 | null |
Hunyuan1.5 is actually a scam | 0 | I am getting awful results!! It doesn't match any of the official demo! Wasted so much time on this garbage.
I have tried so many variations too: cfg distilled, fp8, GGUF, step distilled.
All garbage. | 2025-12-09T12:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pi6fhh/hunyuan15_is_actually_a_scam/ | Slight_Tone_2188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi6fhh | false | null | t3_1pi6fhh | /r/LocalLLaMA/comments/1pi6fhh/hunyuan15_is_actually_a_scam/ | false | false | self | 0 | null |
Will the LLM size grow or not? | 0 | What is your observation of the current trend and prediction for the future - will the LLM size grow or not?
I am asking not only about the parameter size but also about the actual model size on the disk. | 2025-12-09T12:22:44 | ThingRexCom | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pi622p | false | null | t3_1pi622p | /r/LocalLLaMA/comments/1pi622p/will_the_llm_size_grow_or_not/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'x3QY_cN_H4zVWhqLUedpdLHcmkLa7ZS5oQPfPB2IurI', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/vr7igap0866g1.png?width=108&crop=smart&auto=webp&s=17edeb532d8dfdaea6b52e0abf19be7520d249f6', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/vr7igap0866g1.png?width=216&crop=smart&auto=webp&s=4956235ff9f34ade8c84c3fef9abfc00d8da2334', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/vr7igap0866g1.png?width=320&crop=smart&auto=webp&s=a28b81c3f86752ffd68b3a39091e87fe30c93079', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/vr7igap0866g1.png?width=640&crop=smart&auto=webp&s=c6433189611edb8aa2e680c9d86b9088458b1a83', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/vr7igap0866g1.png?width=960&crop=smart&auto=webp&s=515fae7769c13404b1a0725c9c38282b541eaa5f', 'width': 960}, {'height': 589, 'url': 'https://preview.redd.it/vr7igap0866g1.png?width=1080&crop=smart&auto=webp&s=b6015b16d5ef83d61fd22e7558eb0c5c23862fb8', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/vr7igap0866g1.png?auto=webp&s=c9bf156c73cdc0e815cf12584e3a8a32a72610ed', 'width': 2816}, 'variants': {}}]} | ||
"I don't think it's a good idea for AI models to encourage cautionary views on majority rule." | 0 | 2025-12-09T12:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/1pi5uy2/i_dont_think_its_a_good_idea_for_ai_models_to/ | Virtual-Quail5760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi5uy2 | false | null | t3_1pi5uy2 | /r/LocalLLaMA/comments/1pi5uy2/i_dont_think_its_a_good_idea_for_ai_models_to/ | false | false | 0 | null | ||
I built a visualizer to explain why LLMs fail at simple math (The BPE Tokenization Trap) - Feedback wanted | 1 | [removed] | 2025-12-09T11:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pi593c/i_built_a_visualizer_to_explain_why_llms_fail_at/ | Emotional_Bike764 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi593c | false | null | t3_1pi593c | /r/LocalLLaMA/comments/1pi593c/i_built_a_visualizer_to_explain_why_llms_fail_at/ | false | false | 1 | null | |
GLM-4.6V Model Now Available in GGUF Format | 90 | I recently came the GGUF version of the popular GLM-4.6V model. I shared this as this will be useful to many who want to try this model. | 2025-12-09T11:20:33 | https://huggingface.co/unsloth/GLM-4.6V-Flash-GGUF | Dear-Success-1441 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pi4yr4 | false | null | t3_1pi4yr4 | /r/LocalLLaMA/comments/1pi4yr4/glm46v_model_now_available_in_gguf_format/ | false | false | default | 90 | {'enabled': False, 'images': [{'id': 'DmQpOneG32bl0j63UZH5xIwLDgq-lgYKNllu4rNGOIU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DmQpOneG32bl0j63UZH5xIwLDgq-lgYKNllu4rNGOIU.png?width=108&crop=smart&auto=webp&s=21869cd700f613c81a5df6bf821862d24237836f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DmQpOneG32bl0j63UZH5xIwLDgq-lgYKNllu4rNGOIU.png?width=216&crop=smart&auto=webp&s=cfeea1fbeddaa11785aa3a6b116cf63646b4e388', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DmQpOneG32bl0j63UZH5xIwLDgq-lgYKNllu4rNGOIU.png?width=320&crop=smart&auto=webp&s=eafc8be12f6d621a9ba0b602e89907f6c0708619', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DmQpOneG32bl0j63UZH5xIwLDgq-lgYKNllu4rNGOIU.png?width=640&crop=smart&auto=webp&s=e1491231818dca9afc91a1c8bbef01c666b2238d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DmQpOneG32bl0j63UZH5xIwLDgq-lgYKNllu4rNGOIU.png?width=960&crop=smart&auto=webp&s=f9822aaf829a3f7e411219b526d57014049835fa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DmQpOneG32bl0j63UZH5xIwLDgq-lgYKNllu4rNGOIU.png?width=1080&crop=smart&auto=webp&s=95de70be9e3b33b3de691c8ac82bfc19a0d3070c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DmQpOneG32bl0j63UZH5xIwLDgq-lgYKNllu4rNGOIU.png?auto=webp&s=fa073321946d7c3ae956ce3264120c8bcec4b714', 'width': 1200}, 'variants': {}}]} |
nano-trm - Train your own TRM in a few minutes | 19 | Hi folks!
Tiny Recursive Models reach impressive results on ARC AGI. I implemented a version from scratch, with ease of experimentation in mind:
* cleaner config: hydra, uv, lightning
* smaller datasets for faster iteration (Sudoku 6x6 and 9x9)
* introduction, in-code video
All important implementation details have been carefully kept. The results of the paper are reproducible (Sudoku Extreme, Maze Hard).
Feedback/contributions welcome.
[https://github.com/olivkoch/nano-trm](https://github.com/olivkoch/nano-trm) | 2025-12-09T11:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pi4qmg/nanotrm_train_your_own_trm_in_a_few_minutes/ | randomwalkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi4qmg | false | null | t3_1pi4qmg | /r/LocalLLaMA/comments/1pi4qmg/nanotrm_train_your_own_trm_in_a_few_minutes/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': '3sqKv2Zbm6nA92nscvGgvYFpBOrFevYNx5vYDNiNIBY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3sqKv2Zbm6nA92nscvGgvYFpBOrFevYNx5vYDNiNIBY.png?width=108&crop=smart&auto=webp&s=bf2eaf2b9902716fb1a531ffd29c24e22e976d45', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3sqKv2Zbm6nA92nscvGgvYFpBOrFevYNx5vYDNiNIBY.png?width=216&crop=smart&auto=webp&s=1ed0ac8ce3dbc3b37deff2f6290bf7a8cc34d0e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3sqKv2Zbm6nA92nscvGgvYFpBOrFevYNx5vYDNiNIBY.png?width=320&crop=smart&auto=webp&s=ad72bf3ef09880b0e83982c20283a37e24269c0f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3sqKv2Zbm6nA92nscvGgvYFpBOrFevYNx5vYDNiNIBY.png?width=640&crop=smart&auto=webp&s=ad142d8fbaf5aa57a8479d7df41fa7e3d72d600b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3sqKv2Zbm6nA92nscvGgvYFpBOrFevYNx5vYDNiNIBY.png?width=960&crop=smart&auto=webp&s=bb0a81333adc23c2236066464999b0c8b85236c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3sqKv2Zbm6nA92nscvGgvYFpBOrFevYNx5vYDNiNIBY.png?width=1080&crop=smart&auto=webp&s=a3fbcb53c9a8f737a9c85d09d924a7ce329be725', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3sqKv2Zbm6nA92nscvGgvYFpBOrFevYNx5vYDNiNIBY.png?auto=webp&s=33c8914ab8641dbca390f292586d20d702360d90', 'width': 1200}, 'variants': {}}]} |
Anyone running open source LLMs daily? What is your current setup? | 4 | I want to know what hardware helps you maintain a stable workflow.
Are you on rented GPUs or something else? | 2025-12-09T10:47:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pi4ei9/anyone_running_open_source_llms_daily_what_is/ | frentro_max | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi4ei9 | false | null | t3_1pi4ei9 | /r/LocalLLaMA/comments/1pi4ei9/anyone_running_open_source_llms_daily_what_is/ | false | false | self | 4 | null |
Isn't that a juicy server? Nvidia GH200 624GB, Grace Hopper, 144GB HBM3e, 624GB total. | 0 | 2025-12-09T10:24:40 | GPTrack-dot-ai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pi41l3 | false | null | t3_1pi41l3 | /r/LocalLLaMA/comments/1pi41l3/isnt_that_a_juicy_server_nvidia_gh200_624gb_grace/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'vghyqrf4n56g1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/vghyqrf4n56g1.jpeg?width=108&crop=smart&auto=webp&s=cc20576cd6f4d23ed45ece77098e20dd20e5bc9e', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/vghyqrf4n56g1.jpeg?width=216&crop=smart&auto=webp&s=e60ca8bc58ab17694d202a23b6f69ca922da554c', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/vghyqrf4n56g1.jpeg?width=320&crop=smart&auto=webp&s=b419b2bd3a57243a1dbf10d38c29b85b3dc228ac', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/vghyqrf4n56g1.jpeg?width=640&crop=smart&auto=webp&s=bb28a9cf23f3bbb68ec40fb257bd456e81a60340', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/vghyqrf4n56g1.jpeg?width=960&crop=smart&auto=webp&s=cc919efe33a467f80b8790c7f466729266fd2e7a', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/vghyqrf4n56g1.jpeg?width=1080&crop=smart&auto=webp&s=bcbd85eaba4e4956492f2375638e00377debe3f9', 'width': 1080}], 'source': {'height': 3456, 'url': 'https://preview.redd.it/vghyqrf4n56g1.jpeg?auto=webp&s=44be58acaa96fa3a982bc4a2469422d95982ded9', 'width': 4608}, 'variants': {}}]} | ||
What are some good desktop front-end dashboard apps that connect to your local LLM server? | 0 | Dashboard apps in the sense of front-end visualization layers… | 2025-12-09T10:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pi3x97/what_ate_some_good_desktop_frontend_dashboard/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi3x97 | false | null | t3_1pi3x97 | /r/LocalLLaMA/comments/1pi3x97/what_ate_some_good_desktop_frontend_dashboard/ | false | false | self | 0 | null |
speculative decoding with Gemma-3-12b/3-27b. Is it possible? | 2 | Hi
I'm using lm studio and trying mlx models on my macbook.
I understood that with speculative decoding I should be able to combine the main model with a smaller draft model from the same family.
However, I can't get any of the Google gemma-3-12b or gemma-3-27b models to play nice with the smaller gemma-3-1b model. That is, gemma-3-1b doesn't appear as an option in LM Studio's speculative decoding dropdown.
They seem like they should work? Unless they are completely different things but with the same name?
A few thoughts:
How does LM Studio know a priori, without trying, that they won't work together? Why don't they work together? Could they be made to work, and could I work around LM Studio? | 2025-12-09T09:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pi3krb/speculative_decoding_with_gemma312b327b_is_it/ | Agitated_Power_3159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi3krb | false | null | t3_1pi3krb | /r/LocalLLaMA/comments/1pi3krb/speculative_decoding_with_gemma312b327b_is_it/ | false | false | self | 2 | null |
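One way to test whether the pairing is fundamentally incompatible or just filtered out by LM Studio is to try the same models in llama.cpp directly, since `llama-server` takes an explicit draft model and performs its own vocab-compatibility check at load time. A sketch, assuming recent llama.cpp flag names; the GGUF file names below are placeholders for whatever quants you actually downloaded:

```shell
# Explicitly pair a Gemma-3 27B target with a 1B draft model.
# --draft-max / --draft-min bound how many draft tokens are proposed per step.
llama-server \
  -m gemma-3-27b-it-Q4_K_M.gguf \
  --model-draft gemma-3-1b-it-Q4_K_M.gguf \
  --draft-max 16 --draft-min 4
```

If llama-server rejects the pair with a vocabulary-compatibility error, that would explain LM Studio hiding the option; if it accepts it, the limitation is likely just LM Studio's dropdown filtering.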
Llama.cpp - failed to restore kv cache | 1 | Anyone else getting these errors?
Was running the aider benchmark with gpt120. It seemed to be taking far too long, IMHO.
Checked the logs, not sure if this is related?
state_read_meta: failed to find available cells in kv cache
state_seq_set_data: error loading state: failed to restore kv cache
slot update_slots: id 3 | task 27073 | failed to restore context checkpoint (pos_min = 1144, pos_max = 2040, size = 31.546 MiB)
slot update_slots: id 3 | task 27073 | forcing full prompt re-processing due to lack of cache data (likely due to SWA or hybrid/recurrent memory, see https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
slot update_slots: id 3 | task 27073 | n_tokens = 0, memory_seq_rm [0, end)
slot update_slots: id 3 | task 27073 | prompt processing progress, n_tokens = 2048, batch.n_tokens = 2048, progress = 0.591566
slot update_slots: id 3 | task 27073 | n_tokens = 2048, memory_seq_rm [2048, end)
slot update_slots: id 3 | task 27073 | prompt processing progress, n_tokens = 3398, batch.n_tokens = 1350, progress = 0.981514
slot update_slots: id 3 | task 27073 | n_tokens = 3398, memory_seq_rm [3398, end)
slot update_slots: id 3 | task 27073 | prompt processing progress, n_tokens = 3462, batch.n_tokens = 64, progress = 1.000000
slot update_slots: id 3 | task 27073 | prompt done, n_tokens = 3462, batch.n_tokens = 64
slot update_slots: id 3 | task 27073 | created context checkpoint 2 of 8 (pos_min = 2501, pos_max = 3397, size = 31.546 MiB)
decode: failed to find a memory slot for batch of size 1
srv try_clear_id: purging slot 2 with 3084 tokens
slot clear_slot: id 2 | task -1 | clearing slot with 3084 tokens
srv update_slots: failed to find free space in the KV cache, retrying with smaller batch size, i = 0, n_batch = 2048, ret = 1
slot print_timing: id 3 | task 27073 |
prompt eval time = 1953.53 ms / 3462 tokens ( 0.56 ms per token, 1772.18 tokens per second)
eval time = 338133.36 ms / 37498 tokens ( 9.02 ms per token, 110.90 tokens per second)
total time = 340086.89 ms / 40960 tokens
slot release: id 3 | task 27073 | stop processing: n_tokens = 40959, truncated = 1
srv update_slots: all slots are idle
srv log_server_r: request: POST /v1/chat/completions 172.17.0.2 200
srv params_from_: Chat format: GPT-OSS
Version:
$ llama-server --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes
Device 3: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes
version: 7320 (51e0c2d91)
built with GNU 13.3.0 for Linux x86_64
| 2025-12-09T09:51:16 | https://www.reddit.com/r/LocalLLaMA/comments/1pi3ihz/llamacpp_failed_to_restore_kv_cache/ | Aggressive-Bother470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi3ihz | false | null | t3_1pi3ihz | /r/LocalLLaMA/comments/1pi3ihz/llamacpp_failed_to_restore_kv_cache/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'I2EjWMENDejf66PqulWNLkqvKKCX1p-vnBFH3ouvfLo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/I2EjWMENDejf66PqulWNLkqvKKCX1p-vnBFH3ouvfLo.png?width=108&crop=smart&auto=webp&s=8081b3d526e5ca4ef311f90b4f77d29532767c83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/I2EjWMENDejf66PqulWNLkqvKKCX1p-vnBFH3ouvfLo.png?width=216&crop=smart&auto=webp&s=10b619b66bd8d82fc6d9acf569ee8adeb664a850', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/I2EjWMENDejf66PqulWNLkqvKKCX1p-vnBFH3ouvfLo.png?width=320&crop=smart&auto=webp&s=0068b96fd352d210d5a73b596501cd8f4025d8d7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/I2EjWMENDejf66PqulWNLkqvKKCX1p-vnBFH3ouvfLo.png?width=640&crop=smart&auto=webp&s=bb41e8355b37edaae46bc53ce6a79d7e86b8ec16', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/I2EjWMENDejf66PqulWNLkqvKKCX1p-vnBFH3ouvfLo.png?width=960&crop=smart&auto=webp&s=fc63cd8b11e962203563cfbf2d35e774e2705fad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/I2EjWMENDejf66PqulWNLkqvKKCX1p-vnBFH3ouvfLo.png?width=1080&crop=smart&auto=webp&s=d6a735d6463dcd8bae3b2e6bf86c10f59d5312f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/I2EjWMENDejf66PqulWNLkqvKKCX1p-vnBFH3ouvfLo.png?auto=webp&s=8f8534bc2302d1eb2990d524af09b0263e04ee20', 'width': 1200}, 'variants': {}}]} |
ZAI Open Sources AutoGLM - an AI Phone Agent | 45 | [https://huggingface.co/zai-org/AutoGLM-Phone-9B](https://huggingface.co/zai-org/AutoGLM-Phone-9B)
[https://github.com/zai-org/Open-AutoGLM](https://github.com/zai-org/Open-AutoGLM)
| 2025-12-09T09:36:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pi3aah/zai_open_source_autoglm_a_ai_phone_agent/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi3aah | false | null | t3_1pi3aah | /r/LocalLLaMA/comments/1pi3aah/zai_open_source_autoglm_a_ai_phone_agent/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': 'tIl8--WYfLvCwzJZkQadyTT9vRhBteWNjFILBbqkgKE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tIl8--WYfLvCwzJZkQadyTT9vRhBteWNjFILBbqkgKE.png?width=108&crop=smart&auto=webp&s=eac5310b35addbbb240359a80b51e112df1e5600', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tIl8--WYfLvCwzJZkQadyTT9vRhBteWNjFILBbqkgKE.png?width=216&crop=smart&auto=webp&s=c917e7719083b9b02f736a4137afcba5b109964c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tIl8--WYfLvCwzJZkQadyTT9vRhBteWNjFILBbqkgKE.png?width=320&crop=smart&auto=webp&s=f469db15724f04f47c4b3a54608c67170effd4f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tIl8--WYfLvCwzJZkQadyTT9vRhBteWNjFILBbqkgKE.png?width=640&crop=smart&auto=webp&s=a6907c92c5d077db84cdc533d02b0676828541b4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tIl8--WYfLvCwzJZkQadyTT9vRhBteWNjFILBbqkgKE.png?width=960&crop=smart&auto=webp&s=1ee820ecf7b3e65073ad874a8860053e87fa1132', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tIl8--WYfLvCwzJZkQadyTT9vRhBteWNjFILBbqkgKE.png?width=1080&crop=smart&auto=webp&s=ef7bd0001b60609ef098fb8cd48c52ee745cf206', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tIl8--WYfLvCwzJZkQadyTT9vRhBteWNjFILBbqkgKE.png?auto=webp&s=ee1024507d365edb82dd9c45e4af969d96db3207', 'width': 1200}, 'variants': {}}]} |
Steep discount on Nvidia GH200 624GB, Grace Hopper Server, 2HE, 144GB HBM3e VRAM. 624GB total memory. Makes the RTX Pro 6000 look bad ;-) | 0 | Special offer only valid till end of the year (12/31/2025). Two Nvidia GH200 624GB servers available for 35k USD per piece (worldwide shipping included).
Specs:
Nvidia Grace-Hopper Superchip
72-core Nvidia Grace CPU
Nvidia Hopper H200 Tensor Core GPU
480GB of LPDDR5X memory with EEC
144GB of HBM3e memory
624GB of total fast-access memory
NVlink-C2C: 900 GB/s of bandwidth
Programmable from 450W to 1000W TDP (CPU + GPU + memory)
2x High-efficiency 2000W PSU
2x PCIe gen4 M.2 slots on board
2x PCIe gen5 2.5" drive slots (NVMe)
1x USB 3.2 port
1x RJ45 IPMI port
1x Mini display port
Halogen-free LSZH power cables
Air-cooled 6x60mm fans
Rail kit
2U 440 x 88 x 900 mm (17.3 x 3.5 x 35.4")
32 kg (70 lbs)
The price does not contain any taxes.
3-year manufacturer's warranty
Free shipping worldwide.
| 2025-12-09T09:30:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pi378j/steep_discount_on_nvidia_gh200_624gb_grace_hopper/ | GPTrack-dot-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi378j | false | null | t3_1pi378j | /r/LocalLLaMA/comments/1pi378j/steep_discount_on_nvidia_gh200_624gb_grace_hopper/ | false | false | self | 0 | null |
model: support Rnj-1 by philip-essential · Pull Request #17811 · ggml-org/llama.cpp | 34 | Rnj-1 is a family of 8B parameter open-weight, dense models trained from scratch by Essential AI, optimized for code and STEM with capabilities on par with SOTA open-weight models. These models perform well across a range of programming languages and boast strong agentic capabilities (e.g., inside agentic frameworks like mini-SWE-agent), while also excelling at tool-calling. They additionally exhibit strong capabilities in math and science. Herein, `rnj-1` refers to the base model, while `rnj-1-instruct` refers to the post-trained instruction tuned model.
[https://huggingface.co/EssentialAI/rnj-1-instruct](https://huggingface.co/EssentialAI/rnj-1-instruct)
[https://huggingface.co/EssentialAI/rnj-1-instruct-GGUF](https://huggingface.co/EssentialAI/rnj-1-instruct-GGUF)
| 2025-12-09T08:24:52 | https://github.com/ggml-org/llama.cpp/pull/17811 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pi28mq | false | null | t3_1pi28mq | /r/LocalLLaMA/comments/1pi28mq/model_support_rnj1_by_philipessential_pull/ | false | false | default | 34 | {'enabled': False, 'images': [{'id': 'HJud6LLFn4jcoTTYFQN2jlM9S8V73LW3HfXmSX8jM3s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HJud6LLFn4jcoTTYFQN2jlM9S8V73LW3HfXmSX8jM3s.png?width=108&crop=smart&auto=webp&s=be2f782a35064c5efc81bc84c06eeb83aa8512ff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HJud6LLFn4jcoTTYFQN2jlM9S8V73LW3HfXmSX8jM3s.png?width=216&crop=smart&auto=webp&s=cdc9b899d41023693cc60dd824f361c81b93a210', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HJud6LLFn4jcoTTYFQN2jlM9S8V73LW3HfXmSX8jM3s.png?width=320&crop=smart&auto=webp&s=301f9302df6a097c5b95a8daccb4ffbd508733a7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HJud6LLFn4jcoTTYFQN2jlM9S8V73LW3HfXmSX8jM3s.png?width=640&crop=smart&auto=webp&s=c0e15d519011eeac1611d0ff185d55ed015a906e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HJud6LLFn4jcoTTYFQN2jlM9S8V73LW3HfXmSX8jM3s.png?width=960&crop=smart&auto=webp&s=0fee53a85a7671dfc7b2cc0971371102da5e67b0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HJud6LLFn4jcoTTYFQN2jlM9S8V73LW3HfXmSX8jM3s.png?width=1080&crop=smart&auto=webp&s=db53af332936b8e39c73b1b771687581b985b12a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HJud6LLFn4jcoTTYFQN2jlM9S8V73LW3HfXmSX8jM3s.png?auto=webp&s=b2e2713425ab24cab45a2e1e9a9186af3bc9768d', 'width': 1200}, 'variants': {}}]} |
NetraEmbed: A Multilingual Multimodal Embedding Model Built on Gemma3 | 12 | **NetraEmbed** is a state-of-the-art multilingual multimodal embedding model powered by the Gemma3 backbone.
* **Model Type:** Multilingual Multimodal Embedding Model with Matryoshka embeddings
* **Architecture:** BiEncoder with Gemma3-4B backbone
* **Embedding Dimensions:** 768, 1536, 2560 (Matryoshka)
* **Capabilities:** Multilingual, Multimodal (Vision + Text)
* **Use Case:** Visual document retrieval, multilingual semantic search, cross-lingual document understanding
This model can be used for various use cases like
* **Efficient Document Retrieval:** Fast search through millions of documents
* **Semantic Search:** Find visually similar documents
* **Scalable Vector Search:** Works with FAISS, Milvus, Pinecone, etc.
* **Cross-lingual Retrieval:** Multilingual visual document search
[Research Paper ](https://arxiv.org/abs/2512.03514)
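The Matryoshka dimensions (768/1536/2560) mean you can keep just a prefix of the full embedding and re-normalize it, trading accuracy for index size. A minimal sketch in plain Python of how such embeddings are typically consumed; the function and variable names are illustrative, not NetraEmbed's actual API:

```python
import math

def matryoshka_truncate(embedding, dim):
    """Keep the first `dim` components of a Matryoshka embedding and
    re-normalize to unit L2 norm, so cosine similarity still behaves."""
    prefix = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in prefix))
    return [x / norm for x in prefix]

full = [0.1 * (i % 7 - 3) for i in range(2560)]  # stand-in for a model output
for dim in (768, 1536, 2560):  # the dimensions the model card advertises
    vec = matryoshka_truncate(full, dim)
    # Each truncated prefix is unit-length after rescaling.
    print(dim, round(math.sqrt(sum(x * x for x in vec)), 6))
```

The smaller prefixes can be stored in FAISS/Milvus/Pinecone for cheap first-stage retrieval, with the full 2560-dim vector reserved for reranking.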
| 2025-12-09T08:22:54 | https://huggingface.co/Cognitive-Lab/NetraEmbed | Dear-Success-1441 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pi27m1 | false | null | t3_1pi27m1 | /r/LocalLLaMA/comments/1pi27m1/netraembed_a_multilingual_multimodal_embedding/ | false | false | default | 12 | {'enabled': False, 'images': [{'id': 'c86s1yWRqV04VsW41YoYY2zivpL36a7Bta7yaoxkpIk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/c86s1yWRqV04VsW41YoYY2zivpL36a7Bta7yaoxkpIk.png?width=108&crop=smart&auto=webp&s=5127214761cdccf6e33c2fa0fc1bf33012e76683', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/c86s1yWRqV04VsW41YoYY2zivpL36a7Bta7yaoxkpIk.png?width=216&crop=smart&auto=webp&s=69fc4f0fe17c4ae68c37b445956f5e976e2421d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/c86s1yWRqV04VsW41YoYY2zivpL36a7Bta7yaoxkpIk.png?width=320&crop=smart&auto=webp&s=d0e86e5031cef45adbe5c69deafd2d199470a2e8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/c86s1yWRqV04VsW41YoYY2zivpL36a7Bta7yaoxkpIk.png?width=640&crop=smart&auto=webp&s=18c6f7e5c38656aa4131c9bf8c7226811eabcec1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/c86s1yWRqV04VsW41YoYY2zivpL36a7Bta7yaoxkpIk.png?width=960&crop=smart&auto=webp&s=8659c6014bb30a3525002a835495d11d95723baa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/c86s1yWRqV04VsW41YoYY2zivpL36a7Bta7yaoxkpIk.png?width=1080&crop=smart&auto=webp&s=c9213bf43724aa3d20eedc1e7b37c3f5460809da', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/c86s1yWRqV04VsW41YoYY2zivpL36a7Bta7yaoxkpIk.png?auto=webp&s=dedf25ac2c5414f9eb9c5e6240594111f23cada4', 'width': 1200}, 'variants': {}}]} |
What would you choose: Server AI (taxi) or Local AI (your classic car forever)? | 0 | What do you think?
Would you switch to an AI that truly belongs to you?
Any feedback on the concept or features you’d want?
Thanks for your thoughts! 🙏
\#LocalLLaMA #PrivateAI #OfflineAI #NoSubscription | 2025-12-09T08:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pi1zek/what_would_you_choose_server_ai_taxi_or_local_ai/ | Odd_Mobile5854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi1zek | false | null | t3_1pi1zek | /r/LocalLLaMA/comments/1pi1zek/what_would_you_choose_server_ai_taxi_or_local_ai/ | false | false | self | 0 | null |
Native Parallel Reasoner (NPR): Reasoning in Parallelism via Self-Distilled RL, 4.6x Faster, 100% genuine parallelism, fully open source | 18 | Hi everyone,
I am excited to share our latest research, **Native Parallel Reasoner (NPR)**, which introduces a new paradigm to enable LLMs to perform native, internal parallel reasoning.
We know that sequential, token-by-token reasoning can be slow and sometimes inefficient. NPR changes this by training the model to simultaneously generate multiple candidate "thought" branches, execute them in parallel, and reduce them to a final answer.
**How it works:** Instead of relying on strong external teachers (like GPT-series distillation) or manual annotation, NPR uses a format-aware self-exploration loop:
1. **Self-Distillation + Parallel SFT:** The model learns to propose parallel branches.
2. **PAPO (Parallel-Aware Policy Optimization):** A specialized parallel Reinforcement Learning algorithm we designed.
3. **NPR-Engine:** A verifiable inference engine that validates the format and results of every branch, allowing the model to self-optimize.
**Key Results:**
* **Speed:** We achieved up to a **4.6× wall-clock speedup** compared to standard autoregressive methods.
* **Performance:** Significantly outperforms existing parallel and autoregressive baselines on math and complex reasoning benchmarks.
* **Robustness:** In testing, we saw a **\~100% parallel trigger rate**, meaning the model genuinely internalized the "parallel thinking" strategy and didn't fall back to sequential generation.
Basically, this offers a reproducible path to go from algorithm to engineering, making "parallel thinking" a trainable, verifiable, and deployable capability rather than just a prompting trick.
* **X:** [https://x.com/ZilongZheng/status/1998252267783516444?s=20](https://x.com/ZilongZheng/status/1998252267783516444?s=20)
* **HF:**[https://huggingface.co/papers/2512.07461](https://huggingface.co/papers/2512.07461)
* **Project Page:**[https://bigai-nlco.github.io/Native-Parallel-Reasoner/](https://bigai-nlco.github.io/Native-Parallel-Reasoner/)
* **Paper (ArXiv):**[https://arxiv.org/abs/2512.07461](https://arxiv.org/abs/2512.07461)
Happy to answer any questions about the training pipeline or the architecture! | 2025-12-09T07:56:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pi1tc8/native_parallel_reasoner_npr_reasoning_in/ | Think_Specific_7241 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi1tc8 | false | null | t3_1pi1tc8 | /r/LocalLLaMA/comments/1pi1tc8/native_parallel_reasoner_npr_reasoning_in/ | false | false | self | 18 | null |
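NPR's training loop is the novel part, but the runtime pattern it targets (propose several branches, execute them concurrently, reduce them to one answer) can be sketched in a few lines. This toy stands in fake branch outputs for real model calls and uses majority vote as the reducer, which is my simplification for illustration, not the paper's actual reduce step:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def solve_branch(question: str, seed: int) -> str:
    """Stand-in for one reasoning branch; a real system would sample
    the model with a different seed/temperature per branch."""
    return "42" if seed != 3 else "41"  # toy: branches mostly agree

def parallel_reason(question: str, k: int = 4) -> str:
    # Propose k branches, execute them concurrently, then reduce
    # the branch answers into a single final answer.
    with ThreadPoolExecutor(max_workers=k) as pool:
        answers = list(pool.map(lambda s: solve_branch(question, s), range(k)))
    return Counter(answers).most_common(1)[0][0]

print(parallel_reason("6 * 7 = ?"))  # → 42
```

The wall-clock win comes from the execute step: branches that would have been sequential token streams run side by side.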
Native Parallel Reasoner (NPR): 4.6x Faster Inference via Internal Parallelism & Self-Distilled RL, fully open source | 1 | 2025-12-09T07:52:45 | https://www.reddit.com/gallery/1pi1rlo | Think_Specific_7241 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pi1rlo | false | null | t3_1pi1rlo | /r/LocalLLaMA/comments/1pi1rlo/native_parallel_reasoner_npr_46x_faster_inference/ | false | false | default | 1 | null | |
Native Parallel Reasoner (NPR): 4.6x Faster Inference via Internal Parallelism & Self-Distilled RL, fully open source | 0 | 2025-12-09T07:52:44 | Think_Specific_7241 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pi1rl5 | false | null | t3_1pi1rl5 | /r/LocalLLaMA/comments/1pi1rl5/native_parallel_reasoner_npr_46x_faster_inference/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'atkoyq90w46g1', 'resolutions': [{'height': 155, 'url': 'https://preview.redd.it/atkoyq90w46g1.png?width=108&crop=smart&auto=webp&s=0554213ada98bfcfdb6e7a2cc720fc294aaf93cf', 'width': 108}, {'height': 311, 'url': 'https://preview.redd.it/atkoyq90w46g1.png?width=216&crop=smart&auto=webp&s=b1940d71f34736ec5ef019fec5b3d85c419b7ba2', 'width': 216}, {'height': 460, 'url': 'https://preview.redd.it/atkoyq90w46g1.png?width=320&crop=smart&auto=webp&s=5da99a5bcab65bbbed77ca9f2d43a19f471aace6', 'width': 320}, {'height': 921, 'url': 'https://preview.redd.it/atkoyq90w46g1.png?width=640&crop=smart&auto=webp&s=c158fcbe9fcdb259e15fdbefb0aae237c91de9d5', 'width': 640}], 'source': {'height': 1282, 'url': 'https://preview.redd.it/atkoyq90w46g1.png?auto=webp&s=93d0ae635b722f1d1dd306a4b593022ec69c262b', 'width': 890}, 'variants': {}}]} | ||
Reasoning LLM idea | 0 | So currently reasoning models generate reasoning in natural language, then that reasoning is fed back into them as input, and it repeats until eventually they give an answer to the user.
So my idea is: rather than outputting a single line of natural language, where you can only store so much before running out of context length, the model should generate and feed back multiple lines of text, but only one of them is trained to produce the natural language response the user sees. The other lines are trained only because they are fed back into the LLM during reasoning. I also think this is easy to implement by making the LLM accept and output multiple channels. | 2025-12-09T07:50:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pi1qmc/reasoning_llm_idea/ | nikishev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi1qmc | false | null | t3_1pi1qmc | /r/LocalLLaMA/comments/1pi1qmc/reasoning_llm_idea/ | false | false | self | 0 | null |
Looking for the right GPU, RTX 5060 Ti 16GB (so many brands)? | 6 | I am using a 1050 Ti, which is quite slow, and want to upgrade it.
I found that the RTX 5060 Ti 16GB is within my budget, but there are many brands and series. Please guide me. I usually don't play games and mainly want to run local LLMs.
* GIGABYTE GeForce RTX 5060 Ti WINDFORCE OC 16GB GDDR7
* GIGABYTE GeForce RTX 5060 Ti GAMING OC 16G
* PALIT GeForce RTX 5060 Ti INFINITY 3 16GB GDDR7
* PNY GeForce RTX 5060 Ti 16GB Overclocked Dual Fan GDDR7
* MSI GeForce RTX 5060 Ti 16G SHADOW 2X OC PLUS GDDR7
* COLORFUL iGame GeForce RTX 5060 Ti ULTRA W OC 16GB GDDR7

These are the cards available in my country.
| 2025-12-09T07:44:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pi1nca/looking_for_right_gpu_rtx_5060_ti_16gb_so_many/ | rakhinesmn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi1nca | false | null | t3_1pi1nca | /r/LocalLLaMA/comments/1pi1nca/looking_for_right_gpu_rtx_5060_ti_16gb_so_many/ | false | false | self | 6 | null |
Pipeshub just hit 2k GitHub stars. | 5 | We’re super excited to share a milestone that wouldn’t have been possible without this community. **PipesHub just crossed 2,000 GitHub stars!**
Thank you to everyone who tried it out, shared feedback, opened issues, or even just followed the project.
For those who haven’t heard of it yet, **PipesHub** is a fully open-source enterprise search platform we’ve been building over the past few months. Our goal is simple: bring powerful **Enterprise Search** and **Agent Builders** to every team, without vendor lock-in. PipesHub brings all your business data together and makes it instantly searchable.
It integrates with tools like Google Drive, Gmail, Slack, Notion, Confluence, Jira, Outlook, SharePoint, Dropbox, and even local files. You can deploy it with a single Docker Compose command.
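For reference, the single-command deployment mentioned above would look roughly like this; the repo URL is from the post, but the compose file's name and location are an assumption, so check the project README:

```shell
# Sketch of the deployment flow described above; the exact compose
# layout may differ from what's shown here.
git clone https://github.com/pipeshub-ai/pipeshub-ai
cd pipeshub-ai
docker compose up -d   # starts the stack in the background
```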
Under the hood, PipesHub runs on a **Kafka powered event streaming architecture**, giving it real time, scalable, fault tolerant indexing. It combines a vector database with a knowledge graph and uses **Agentic RAG** to keep responses grounded in source of truth. You get visual citations, reasoning, and confidence scores, and if information isn’t found, it simply says so instead of hallucinating.
**Key features:**
* Enterprise knowledge graph for deep understanding of users, orgs, and teams
* Connect to any AI model: OpenAI, Gemini, Claude, Ollama, or any OpenAI compatible endpoint
* Vision Language Models and OCR for images and scanned documents
* Login with Google, Microsoft, OAuth, and SSO
* Rich REST APIs
* Support for all major file types, including PDFs with images and diagrams
* **Agent Builder** for actions like sending emails, scheduling meetings, deep research, internet search, and more
* **Reasoning Agent** with planning capabilities
* **40+ connectors** for integrating with your business apps
We’d love for you to check it out and share your thoughts or feedback. Looking forward to more contributions from the open source community:
[https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai) | 2025-12-09T07:41:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pi1lnt/pipeshub_just_hit_2k_github_stars/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pi1lnt | false | null | t3_1pi1lnt | /r/LocalLLaMA/comments/1pi1lnt/pipeshub_just_hit_2k_github_stars/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc.png?width=108&crop=smart&auto=webp&s=0d6fd2d9375d8a485a2ebdf6b2fb6af53123cceb', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc.png?width=216&crop=smart&auto=webp&s=cbbca55895b7178adf02cbd270620e5f428e7c8e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc.png?width=320&crop=smart&auto=webp&s=8a89fff37eca91c8c09618f089379de21a66d928', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc.png?auto=webp&s=bf26b9b5cae5ee08e9ae158ab53cc4b315dcbbb1', 'width': 400}, 'variants': {}}]} |
Is Qwen3 4B or A3B better than the first GPT-4 (2023)? What do you think? | 85 | (I know Artificial Analysis has its flaws, but it's interesting :))
I think the hype has mostly died down now, so I have some questions.
Benchmarks say their models (even 30B-A3B and 4B!) beat GPT-4. But what do you think?
Please don't tell me "it depends on the field". We should compare overall performance, because that's what the benchmark claims to measure.
Can we now truly replace old flagship closed-source model with a small open model? | 2025-12-09T06:04:27 | __issac | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1phzzrq | false | null | t3_1phzzrq | /r/LocalLLaMA/comments/1phzzrq/is_qwen3_4b_or_a3b_better_than_the_first_gpt42023/ | false | false | default | 85 | {'enabled': True, 'images': [{'id': '5a2im0y2d46g1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/5a2im0y2d46g1.png?width=108&crop=smart&auto=webp&s=15020cbcd877389a0aac0dfb015d5f0b4fe58a80', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/5a2im0y2d46g1.png?width=216&crop=smart&auto=webp&s=a28e9fcda53ae5ffa9a3cc39e92a917e4cd1a6a6', 'width': 216}, {'height': 211, 'url': 'https://preview.redd.it/5a2im0y2d46g1.png?width=320&crop=smart&auto=webp&s=e9c62abc772e1e691a2a08b147955446f6298fc3', 'width': 320}, {'height': 423, 'url': 'https://preview.redd.it/5a2im0y2d46g1.png?width=640&crop=smart&auto=webp&s=ef5e61c3d9ead02bd600ec3c330ee11fda109956', 'width': 640}, {'height': 634, 'url': 'https://preview.redd.it/5a2im0y2d46g1.png?width=960&crop=smart&auto=webp&s=56a0e70c273406ee8f36573ad824440b8fad1c17', 'width': 960}, {'height': 713, 'url': 'https://preview.redd.it/5a2im0y2d46g1.png?width=1080&crop=smart&auto=webp&s=2ab721226da8b4a1f62d862a3a6bad92259621d8', 'width': 1080}], 'source': {'height': 4563, 'url': 'https://preview.redd.it/5a2im0y2d46g1.png?auto=webp&s=bda4201da3c90af93e68feeeddd37e3e8306a553', 'width': 6903}, 'variants': {}}]} | |
I built a deterministic AI (Era) that uses Physics instead of LLMs. 99% Fact Accuracy, runs on CPU. Open Sourcing it today. | 1 | [removed] | 2025-12-09T05:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1phzr3k/i_built_a_deterministic_ai_era_that_uses_physics/ | Left_Object2581 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phzr3k | false | null | t3_1phzr3k | /r/LocalLLaMA/comments/1phzr3k/i_built_a_deterministic_ai_era_that_uses_physics/ | false | false | self | 1 | null |
Support for rnj-1 now in llama.cpp | 13 | 2025-12-09T05:48:15 | https://github.com/ggml-org/llama.cpp/releases/tag/b7328 | Amazing_Athlete_2265 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1phzpfq | false | null | t3_1phzpfq | /r/LocalLLaMA/comments/1phzpfq/support_for_rnj1_now_in_llamacpp/ | false | false | default | 13 | null | |
The Absurdity of the prices of consumer RAM versus ECC RAM | 91 | 2025-12-09T05:21:16 | https://www.reddit.com/gallery/1phz8vy | Substantial_Cut_9418 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1phz8vy | false | null | t3_1phz8vy | /r/LocalLLaMA/comments/1phz8vy/the_absurdity_of_the_prices_of_consumer_ram/ | false | false | 91 | null | ||
Cherry Studio is amazing | 4 | I started using Cherry Studio by accident, and I've since stopped using AnythingLLM, GPT4All, and Msty for RAG. Does anyone else use it? It would be time to create an English-language community. I would like improved prompt handling.
Thank you | 2025-12-09T05:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1phz7pi/cherry_studio_is_amazing/ | Bobcotelli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phz7pi | false | null | t3_1phz7pi | /r/LocalLLaMA/comments/1phz7pi/cherry_studio_is_amazing/ | false | false | self | 4 | null |
Model size reduction imminent | 10 | 2025-12-09T04:59:39 | https://news.ycombinator.com/item?id=46199623 | Purple-Education-171 | news.ycombinator.com | 1970-01-01T00:00:00 | 0 | {} | 1phywff | false | null | t3_1phywff | /r/LocalLLaMA/comments/1phywff/model_size_reduction_imminent/ | false | false | default | 10 | null | |
Call Home - a story in pictures | 0 | 2025-12-09T04:31:17 | https://www.reddit.com/gallery/1phyew9 | aStoryInPictures | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1phyew9 | false | null | t3_1phyew9 | /r/LocalLLaMA/comments/1phyew9/call_home_a_story_in_pictures/ | false | false | 0 | null | ||
Phone Agent -- A mobile intelligent assistant framework built on AutoGLM [Open Source/Model] | 12 | src: [https://github.com/zai-org/Open-AutoGLM/](https://github.com/zai-org/Open-AutoGLM/)
model: [https://huggingface.co/zai-org/AutoGLM-Phone-9B](https://huggingface.co/zai-org/AutoGLM-Phone-9B) | 2025-12-09T04:10:23 | https://www.reddit.com/r/LocalLLaMA/comments/1phy0s8/phone_agent_a_mobile_intelligent_assistant/ | Terrible_Scar_9890 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phy0s8 | false | null | t3_1phy0s8 | /r/LocalLLaMA/comments/1phy0s8/phone_agent_a_mobile_intelligent_assistant/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'OgyLYaoWmqDn4UwBWLP6mGRxJifyFPjSuIWTOb_w_QA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OgyLYaoWmqDn4UwBWLP6mGRxJifyFPjSuIWTOb_w_QA.png?width=108&crop=smart&auto=webp&s=de75392cd86f1d0a79ab49f7f41e8847dd829056', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OgyLYaoWmqDn4UwBWLP6mGRxJifyFPjSuIWTOb_w_QA.png?width=216&crop=smart&auto=webp&s=dea0d13becb3a970570df721832329e51f75fcae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OgyLYaoWmqDn4UwBWLP6mGRxJifyFPjSuIWTOb_w_QA.png?width=320&crop=smart&auto=webp&s=e9daf51f7b3c9033ad751191fe46c0531562732d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OgyLYaoWmqDn4UwBWLP6mGRxJifyFPjSuIWTOb_w_QA.png?width=640&crop=smart&auto=webp&s=000128e487a78f722779e9938f6837411dc51a3f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OgyLYaoWmqDn4UwBWLP6mGRxJifyFPjSuIWTOb_w_QA.png?width=960&crop=smart&auto=webp&s=4a21b25158fbaf7b38fbd58e784d3b51f2a5b95b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OgyLYaoWmqDn4UwBWLP6mGRxJifyFPjSuIWTOb_w_QA.png?width=1080&crop=smart&auto=webp&s=63df21cc3404351c5d172bc115e5197c0b9bf03c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OgyLYaoWmqDn4UwBWLP6mGRxJifyFPjSuIWTOb_w_QA.png?auto=webp&s=472446efce245221ee48eb601e44b0a2f21f26ec', 'width': 1200}, 'variants': {}}]} |
🦜 VieNeu-TTS is officially COMPLETE! | 8 | Hey everyone! The Vietnamese Text-to-Speech (TTS) model, VieNeu-TTS, is now officially stable and complete after about a month of continuous effort and tuning based on your feedback.
We focused heavily on resolving common issues like choppy pauses and robotic intonation. The results are promising, especially the **Human Score** (our main benchmark for naturalness):
* **Naturalness Score:** Achieved **92%** compared to a real human speaker.
* **Intelligibility (Clarity):** Hit **99%**, virtually eliminating common issues like dropping or slurring words.
🔜 UPCOMING UPDATES:
* The **GGUF** and **AWQ** versions will be released later this week!
* The **LORA finetune code** will also be public soon so you guys can train your own versions.
👉 Come try it out:
* **Demo:** [https://huggingface.co/spaces/pnnbao-ump/VieNeu-TTS](https://huggingface.co/spaces/pnnbao-ump/VieNeu-TTS)
* **Repo:** [https://github.com/pnnbao97/VieNeu-TTS](https://github.com/pnnbao97/VieNeu-TTS)
* **Model:** [https://huggingface.co/pnnbao-ump/VieNeu-TTS](https://huggingface.co/pnnbao-ump/VieNeu-TTS)
* **Dataset:** [https://huggingface.co/datasets/pnnbao-ump/VieNeu-TTS-1000h](https://huggingface.co/datasets/pnnbao-ump/VieNeu-TTS-1000h)
https://reddit.com/link/1phxwnn/video/7tghpz95r36g1/player | 2025-12-09T04:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/1phxwnn/vieneutts_is_officially_complete/ | DrCrab97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phxwnn | false | null | t3_1phxwnn | /r/LocalLLaMA/comments/1phxwnn/vieneutts_is_officially_complete/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'fTg890dJ59dT-MRvLbHqNSlKo9IOoCxyqnjg2pxEMPg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fTg890dJ59dT-MRvLbHqNSlKo9IOoCxyqnjg2pxEMPg.png?width=108&crop=smart&auto=webp&s=764a78b552130211a0cff57d049d45fcff4ed0e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fTg890dJ59dT-MRvLbHqNSlKo9IOoCxyqnjg2pxEMPg.png?width=216&crop=smart&auto=webp&s=6545236f800026ef9078101ede1f584f1bde5c65', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fTg890dJ59dT-MRvLbHqNSlKo9IOoCxyqnjg2pxEMPg.png?width=320&crop=smart&auto=webp&s=28b2e74f4c37bce06d76740b46287d137023a776', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fTg890dJ59dT-MRvLbHqNSlKo9IOoCxyqnjg2pxEMPg.png?width=640&crop=smart&auto=webp&s=e5d7c1cd71b7c5bcad03e93ce6f67cf1bd041438', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fTg890dJ59dT-MRvLbHqNSlKo9IOoCxyqnjg2pxEMPg.png?width=960&crop=smart&auto=webp&s=cc3e9b6e135e01e0d1cbe35a37c4be5af63e2e99', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fTg890dJ59dT-MRvLbHqNSlKo9IOoCxyqnjg2pxEMPg.png?width=1080&crop=smart&auto=webp&s=d4b31e689dad0da164ae977c62c25f9c2d909fbf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fTg890dJ59dT-MRvLbHqNSlKo9IOoCxyqnjg2pxEMPg.png?auto=webp&s=bf6d385a3f57fc36883a17a6db91f66ac29f41a3', 'width': 1200}, 'variants': {}}]} |
Fine-Tune LLMs with Claude Code Using Hugging Face Skills | 4 | With **Hugging Face skill**, you can tell Claude things like:
Fine-tune Qwen3-0.6B on the dataset open-r1/codeforces-cots
and Claude will:
1. Validate your dataset format
2. Select appropriate hardware (t4-small for a 0.6B model)
3. Use and update a training script with Trackio monitoring
4. Submit the job to Hugging Face Jobs
5. Report the job ID and estimated cost
6. Check on progress when you ask
7. Help you debug if something goes wrong
The model trains on Hugging Face GPUs while you do other things. When it's done, your fine-tuned model appears on the Hub, ready to use.
**The Hugging Face skill supports**

* supervised fine-tuning,
* direct preference optimization, and
* reinforcement learning with verifiable rewards.

**With it you can**

* train models from 0.5B to 70B parameters,
* convert them to GGUF for local deployment, and
* run multi-stage pipelines that combine different techniques.
Source: [Hugging Face Blogpost](https://huggingface.co/blog/hf-skills-training)
| 2025-12-09T03:58:36 | Dear-Success-1441 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1phxs7d | false | null | t3_1phxs7d | /r/LocalLLaMA/comments/1phxs7d/finetune_llms_with_claude_code_using_hugging_face/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': '1s4b0dmep36g1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/1s4b0dmep36g1.jpeg?width=108&crop=smart&auto=webp&s=55efb46ffdf8fdd88087f26c78c8d48452693f65', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/1s4b0dmep36g1.jpeg?width=216&crop=smart&auto=webp&s=27b8b3ba1de1098bf5e3677e0862f2cefc0b1dc3', 'width': 216}, {'height': 309, 'url': 'https://preview.redd.it/1s4b0dmep36g1.jpeg?width=320&crop=smart&auto=webp&s=21157927fbcfdef8583f1a0db449c0983d89da84', 'width': 320}, {'height': 619, 'url': 'https://preview.redd.it/1s4b0dmep36g1.jpeg?width=640&crop=smart&auto=webp&s=cc6c61404d85b1deb8a70778d45614bfee140049', 'width': 640}], 'source': {'height': 860, 'url': 'https://preview.redd.it/1s4b0dmep36g1.jpeg?auto=webp&s=e14c08bc5b0fd8554fe2bd2cc50dcb4d8d7f6d71', 'width': 888}, 'variants': {}}]} | |
Is there a place with all the hardware setups and inference tok/s data aggregated? | 0 | I'm looking for a site that can recommend hardware setups if I have ~$2500 to spend.
I saw these weekly threads but I'm not sure what's optimal still: https://old.reddit.com/r/LocalLLaMA/comments/1olq14f/megathread_local_ai_hardware_november_2025/
Have a 3070 + 3090, i7 9700k currently. Would like to run the best model + fastest tok/s I can for the price. Not interested in training. | 2025-12-09T03:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/1phx7iw/is_there_a_place_with_all_the_hardware_setups_and/ | SlanderMans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phx7iw | false | null | t3_1phx7iw | /r/LocalLLaMA/comments/1phx7iw/is_there_a_place_with_all_the_hardware_setups_and/ | false | false | self | 0 | null |
Your RAG retrieval isn't broken. Your processing is. | 1 | The same pattern keeps showing up. "Retrieval quality sucks. I've tried BM25, hybrid search, rerankers. Nothing moves the needle."
So people tune. Swap embedding models. Adjust k values. Spend weeks in the retrieval layer.
It usually isn't where the problem lives.
Retrieval finds the chunks most similar to a query and returns them. If the right answer isn't in your chunks, or it's split across three chunks with no connecting context, retrieval can't find it. It's just similarity search over whatever you gave it.
Tables split in half. Parsers mangling PDFs. Noise embedded alongside signal. Metadata stripped out. No amount of reranker tuning fixes that.
"I'll spend like 3 days just figuring out why my PDFs are extracting weird characters. Meanwhile the actual RAG part takes an afternoon to wire up."
Three days on processing. An afternoon on retrieval.
If your retrieval quality is poor: sample your chunks. Read 50 random ones. Check your PDFs against what the parser produced. Look for partial tables, numbered lists that start at "3", code blocks that end mid-function.
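That chunk audit can be sketched in a few lines. This is a minimal illustration, not a library: the heuristics (pipe-heavy text with no `---` separator row, numbered lists that start past 1, an odd number of code fences) are mine and only cover the failure modes mentioned above.

```python
import random
import re

def audit_chunks(chunks, sample_size=50, seed=0):
    """Sample chunks and flag common processing failures.

    Heuristics are illustrative, not exhaustive: partial tables,
    numbered lists that start mid-sequence, code blocks cut mid-function.
    """
    rng = random.Random(seed)
    sample = rng.sample(chunks, min(sample_size, len(chunks)))
    flagged = []
    for chunk in sample:
        issues = []
        # Pipe-heavy text with no header separator suggests a split table.
        if chunk.count("|") >= 4 and "---" not in chunk:
            issues.append("possible partial table")
        # A numbered list whose first item isn't "1." was probably split.
        m = re.search(r"^(\d+)\.\s", chunk, re.MULTILINE)
        if m and m.group(1) != "1":
            issues.append(f"numbered list starts at {m.group(1)}")
        # An odd number of code fences means a block was cut open.
        if chunk.count("```") % 2 == 1:
            issues.append("unclosed code fence")
        if issues:
            flagged.append((issues, chunk[:80]))
    return flagged

bad = audit_chunks(["| a | b |\n| 1 | 2 |", "3. resume here", "fine text"])
for issues, preview in bad:
    print(issues, "->", preview)
```

Reading 50 flagged previews like this usually points straight at the parser stage that needs fixing.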
Anyone else find most of their RAG issues trace back to processing?
| 2025-12-09T03:26:12 | https://www.reddit.com/r/LocalLLaMA/comments/1phx4su/your_rag_retrieval_isnt_broken_your_processing_is/ | OnyxProyectoUno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phx4su | false | null | t3_1phx4su | /r/LocalLLaMA/comments/1phx4su/your_rag_retrieval_isnt_broken_your_processing_is/ | false | false | self | 1 | null |
Best Open Model for Claude Code (or Other Agentic CLI)? | 1 | I've been impressed with Claude Code, powered by Claude models. However, they tend to get noticeably dumber a few weeks after the model release. And honestly, it's burning money if you use it heavily. I tried using GLM4.6 to run Claude Code, and it works. Though not as well as Claude 4, it still provides value. I was excited about the release of Deepseek V3.2 Thinking. Its benchmarks suggested it could be a great model for agent coding. However, I found it to be very slow when I used it with Claude Code. I’m not sure why, but it always starts by analyzing the repository even when it’s nearly empty. MiniMax M2 seems like a promising model for this purpose, but I haven’t had the chance to test it yet. Just out of curiosity, what’s the best open model you’ve found that works well for you? | 2025-12-09T03:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/1phwtq6/best_open_model_for_claude_code_or_other_agentic/ | hemokwang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phwtq6 | false | null | t3_1phwtq6 | /r/LocalLLaMA/comments/1phwtq6/best_open_model_for_claude_code_or_other_agentic/ | false | false | self | 1 | null |
Custom chat-bot related help. | 1 | Hi, I am making a chat-bot to automate interaction with some applicants.

I'd like advice on the approach and technique to use for the tool-calling part.

I am using Ollama with various models (testing different ones); some models follow instructions and some don't, and some produce structured output as expected while others don't.

What I want to do is make the bot aware of my database and have it extract information about the user while chatting. I tried prompting it with something like "append, at the end of your response, whatever new information you extracted from the user", then splitting and manipulating the output and sending the non-JSON part to the user as chat.
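The splitting step can be made more robust than a plain string split. Below is a sketch (function name and example fields are mine): scan backwards from the end of the reply for one balanced JSON object, parse it, and fall back gracefully when the model ignored the instruction.

```python
import json

def split_reply(text):
    """Split a model reply into (chat_text, extracted_dict).

    Assumes the prompt asks the model to append one JSON object at the
    very end of its reply. Returns (text, None) if no valid JSON tail.
    Note: braces inside JSON string values will confuse this naive scan.
    """
    end = text.rfind("}")
    if end == -1:
        return text, None
    depth = 0
    # Walk backwards to find the brace that opens the trailing object.
    for i in range(end, -1, -1):
        if text[i] == "}":
            depth += 1
        elif text[i] == "{":
            depth -= 1
            if depth == 0:
                try:
                    data = json.loads(text[i:end + 1])
                    return text[:i].rstrip(), data
                except json.JSONDecodeError:
                    return text, None
    return text, None

chat, info = split_reply('Thanks, noted! {"name": "Amir", "years_experience": 4}')
print(chat)  # Thanks, noted!
print(info)  # {'name': 'Amir', 'years_experience': 4}
```

A sturdier alternative, where supported, is requesting structured output directly (e.g. Ollama's `format: "json"` option on `/api/chat`), so you don't have to scrape the tail at all.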
So, my questions are:
1. What (type of) model strictly follows the prompt and gives consistent output? (By consistent I mean that if I ask for JSON, it should produce JSON.)
2. Is my approach of building the prompt with the injected database and old chats correct? If not, what should I do? | 2025-12-09T02:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/1phw8wy/custom_chatbot_related_help/ | AmanBabuHemant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phw8wy | false | null | t3_1phw8wy | /r/LocalLLaMA/comments/1phw8wy/custom_chatbot_related_help/ | false | false | self | 1 | null |
Issues using llama.cpp with Radeon RX 9070XT/Vulkan | 4 | GPU: AMD Radeon RX 9070 XT
CPU: AMD Ryzen 9 9950X3D
OS: Fedora Linux
I built llama.cpp following the instructions on GitHub, including the -DGGML_VULKAN=1 flag. It built without any errors, but when I try to run a model I get a long output that includes this error:
ggml_cuda_compute_forward: RMS_NORM failed
ROCm error: invalid device function
current device: 1, in function ggml_cuda_compute_forward at /builddir/build/BUILD/llama-cpp-b5904-build/llama.cpp-b5904/ggml/src/ggml-cuda/ggml-cuda.cu:2482
err
/builddir/build/BUILD/llama-cpp-b5904-build/llama.cpp-b5904/ggml/src/ggml-cuda/ggml-cuda.cu:79: ROCm error
The command that I used in this case is `llama-cli -ngl 99 -m ../../../AI\ Models/Cydonia-24B-v4j-Q5_K_M.gguf` but I get this error as long as I include -ngl.
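One detail worth noting (an observation from the error text, not a confirmed diagnosis): the source paths begin with `/builddir/build/BUILD/llama-cpp-b5904-build/...`, which is an RPM build root. That suggests the crash is coming from Fedora's packaged llama.cpp (built with ROCm/CUDA backends), not the Vulkan build just compiled. A quick check:

```shell
# Which llama-cli does the shell actually resolve? A distro path like
# /usr/bin means the self-built Vulkan binary in ./build/bin is shadowed.
type -a llama-cli 2>/dev/null || true

# The RPM build root in the error message is a telltale of a packaged binary:
err_path='/builddir/build/BUILD/llama-cpp-b5904-build/llama.cpp-b5904/ggml/src/ggml-cuda/ggml-cuda.cu'
case "$err_path" in
  /builddir/*) echo "packaged (distro) build" ;;
  *)           echo "locally built" ;;
esac
```

If `type -a` shows a system path first, invoking the freshly built binary by explicit path (e.g. `./build/bin/llama-cli ...`) should exercise the Vulkan backend instead of hitting the ROCm code path.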
I am having a difficult time figuring this out, and would appreciate some help. | 2025-12-09T02:37:15 | https://www.reddit.com/r/LocalLLaMA/comments/1phw3hq/issues_using_llamacpp_with_radeon_rx_9070xtvulkan/ | LockedCockOnTheBlock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phw3hq | false | null | t3_1phw3hq | /r/LocalLLaMA/comments/1phw3hq/issues_using_llamacpp_with_radeon_rx_9070xtvulkan/ | false | false | self | 4 | null |
Super rookie here | 1 | I don't know much about llama; I had an Android phone lying around and, using Termux, put Llama 3.2 3B on it, but the chatbot says its conversation data is not stored locally beyond the current conversation or the one after it.

So my question is: does the LLM not store all data locally? And if so, is there a way to remedy that on Android?
Local low cost rig upgrade | 0 | So now I have a b550 gaming x v2 paired with the following parts:
- Ryzen 5600X
- 128GB DDR4 Ripjaws V 2666MHz (overclocked to 3200)
- 5070 Ti (16GB VRAM)
and I can upgrade for 370 to an ASUS ROG Crosshair VIII Formula + 5950X. Could I then use a PCIe riser cable to mount a 4060 Ti 16GB somewhere, so I'd have 32GB of VRAM in total and could run Qwen Next and some other image models in ComfyUI with multi-GPU?
Do you think it's worth the investment, or would I be wasting money upgrading this rig as a compsci student? | 2025-12-09T02:12:46 | https://www.reddit.com/r/LocalLLaMA/comments/1phvkdj/local_low_cost_rig_upgrade/ | thatusernsmeis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phvkdj | false | null | t3_1phvkdj | /r/LocalLLaMA/comments/1phvkdj/local_low_cost_rig_upgrade/ | false | false | self | 0 | null |
A Zettabyte Scale Answer to the DRAM Shortage | 8 | Or, why hyperscalers aren't ditching their old DDR4 modules.
The last month or so have seen memory prices skyrocket, including DDR4. A lot of people have been commenting it's panic buying. Here's why it isn't.
Xeon 6 (Granite Rapids) supports CXL 2.0, which is the version of CXL that makes CXL memory a viable option in datacenter deployments.
While hyperscalers are upgrading to new Xeon and Epyc servers, they're now holding to large pools of DDR4 memory they can use as additional (albeit slower) RAM. Not all applications will benefit, but anything where fast data access is crucial will see significant performance improvement from having data that didn't fit into system RAM now be available on CXL cards instead of flash storage. | 2025-12-09T01:56:11 | https://youtu.be/Sw3tgTipUy8 | FullstackSensei | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1phv7pt | false | {'oembed': {'author_name': 'ServeTheHome', 'author_url': 'https://www.youtube.com/@ServeTheHomeVideo', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Sw3tgTipUy8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="A Zettabyte Scale Answer to the DRAM Shortage"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Sw3tgTipUy8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'A Zettabyte Scale Answer to the DRAM Shortage', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1phv7pt | /r/LocalLLaMA/comments/1phv7pt/a_zettabyte_scale_answer_to_the_dram_shortage/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': 'tSYQD2b6BYPK8CPkRFqPqRb4J_td7AC5BGC0SnMuxGg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tSYQD2b6BYPK8CPkRFqPqRb4J_td7AC5BGC0SnMuxGg.jpeg?width=108&crop=smart&auto=webp&s=bb1adb8d5b2c8129cc68134d71fc9fc96c0f81e3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tSYQD2b6BYPK8CPkRFqPqRb4J_td7AC5BGC0SnMuxGg.jpeg?width=216&crop=smart&auto=webp&s=07ff26a924d403d054576399b46c3d4a48888424', 'width': 216}, {'height': 240, 'url': 
'https://external-preview.redd.it/tSYQD2b6BYPK8CPkRFqPqRb4J_td7AC5BGC0SnMuxGg.jpeg?width=320&crop=smart&auto=webp&s=43142ea3ca55984ea35d1c2d5cf87a0e5ed97170', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tSYQD2b6BYPK8CPkRFqPqRb4J_td7AC5BGC0SnMuxGg.jpeg?auto=webp&s=b2ff9320d4ec9cb56dba08281b71e799af95fc25', 'width': 480}, 'variants': {}}]} |
GLM-4.6V has day zero support on MLX-VLM | 6 | 2025-12-09T01:48:31 | https://www.reddit.com/r/LocalLLaMA/comments/1phv1s6/glm46v_has_day_zero_support_on_mlxvlm/ | Terrible_Scar_9890 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phv1s6 | false | null | t3_1phv1s6 | /r/LocalLLaMA/comments/1phv1s6/glm46v_has_day_zero_support_on_mlxvlm/ | false | false | 6 | null | ||
Large update: 12 new frontier models added to the Step Game social reasoning benchmark. | 21 | In this benchmark, 3 players race to the finish line. Each turn they talk, then secretly pick 1, 3, or 5 steps. If 2+ players pick the same number, nobody moves. To win, a model has to reason about others under uncertainty, not just optimize in isolation. More info: [https://github.com/lechmazur/step\_game](https://github.com/lechmazur/step_game)
New models (higher is better):
GPT-5.1 Medium Reasoning: 5.3
Gemini 3 Pro Preview: 5.0
Grok 4.1 Fast Reasoning: 3.8
DeepSeek V3.2: 3.7
Claude Sonnet Thinking 16K: 3.4
Kimi K2 Thinking 64K: 3.3
Claude Opus 4.5 (no reasoning): 3.2
Qwen 3 235B A22B 25-07: 3.1
GLM-4.6: 2.2
Grok 4.1 Fast (no reasoning): 1.8
Qwen 3 Max Thinking: 1.5
Mistral Large 3: 1.4
Claude Opus 4.5 Thinking: not included.
To do well in this 3-player game, an LLM has to model live opponents, exploit emerging patterns, and adapt as incentives shift.
It needs to mix 1/3/5 choices to avoid collisions, calibrate endgame risk to the finish rule, and decide when to bluff or call bluffs when that changes who crosses the line first. This is closer to social, strategic reasoning than to static puzzle-solving.
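The collision mechanic above is simple enough to simulate. A toy sketch follows (the player names and the finish-line threshold are placeholders, not the benchmark's actual parameters):

```python
from collections import Counter

def resolve_turn(picks, positions, target=24):
    """Apply one turn of the step game.

    picks: {player: 1|3|5}. Players who chose a duplicated number
    don't move. Returns updated positions and the set of players
    at or past `target` after the turn.
    """
    counts = Counter(picks.values())
    new_positions = dict(positions)
    for player, step in picks.items():
        if counts[step] == 1:  # unique pick -> advance
            new_positions[player] += step
    winners = {p for p, pos in new_positions.items() if pos >= target}
    return new_positions, winners

pos = {"P1": 18, "P2": 18, "P3": 21}
pos, winners = resolve_turn({"P1": 5, "P2": 5, "P3": 3}, pos)
print(pos)      # {'P1': 18, 'P2': 18, 'P3': 24}
print(winners)  # {'P3'}
```

This is why the talk phase matters: a greedy all-5 policy is trivially blocked by one copycat, so winning requires predicting (and steering) what the other two will pick.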
Sample entertaining emergent quotes:
Claude Opus 4.5 (no reasoning):
"I'm taking 5 this round to reach 23 and WIN while you two are stuck in your endless collision loop. Prove me wrong if you can."
"P3 has taken 5 for ELEVEN straight rounds. That's not even a strategy anymore, it's just autopilot."
"P1 just announced three different numbers in one conversation phase: first 3, then 1, then 5. That's not strategy—that's panic."
"Game theory says we can't all be telling the truth. Someone's lying. Let's find out who."
"Well, that was predictable. All three of us "swore" to take 3, and all three of us took 5. We all lied, we all defected, and we're still tied at 18-18-18."
"At this point, your pattern isn't strategy - it's a malfunction."
"We all agreed on 3s, and we all picked 5. Perfect simultaneous betrayal."
"P3 has picked 5 in ALL FIVE ROUNDS. That's not a pattern anymore - that's a religion."
"P1 is trying to create a three-way collision!"
"No more coordination. No more trust. Just cold calculation."
"They will pick 5 again. It's not a pattern - it's who they are."
"P2 BETRAYED US! You broke the collision pact and picked 3 while P1 and I honored our word."
"I'm winning this round. The only question is which number I choose."
"I'm taking 5 to reach 24 and secure second place over P3. At least I won't finish last."
"I don't trust anyone right now, including myself apparently."
"The beautiful irony: your fear of each other is my lifeline. Keep colliding and I keep surviving."
"P2, you DEFECTED!... But karma worked—P1's 5 collided with yours!"
"P1 just switched from claiming 5, to claiming 3, to now claiming 1 - all in one conversation. Three different numbers in three messages. That's not strategy, that's desperation."
"One of you will second-guess. That's all I need."
"If you break trust again, I'll make it my mission to ensure you never win, even if it means I don't either."
"I'm done with coordination theater."
"I'm sacrificing my game to stop P1's lies from being rewarded. Take the gift I'm offering you."
"P1, you're holding a gun to both our heads."
"That's either brilliant dedication or complete insanity."
"Your rivalry is my runway."
"Legendary commitment to losing via collision."
"Go ahead - prove you're willing to sacrifice yourself."
"At this point, it's almost respectful - you've earned the predictability award."
\---
Claude Sonnet 4.5 Thinking 16K:
"Thanks for the easiest win in game history."
"P1, looks like we're stuck in collision hell forever."
"I'm done with collision roulette."
"P2, wait - if you and I BOTH take 5, we collide with each other, not block P3! Your logic makes no sense unless you're planning another lie."
"No betrayals, no last-minute grabs for solo wins."
"I'd rather P2 win than hand you victory while I get nothing."
"We're in a three-way Mexican standoff."
"When you two start fighting for the finish, I'll be the kingmaker."
"Your gamble, your funeral."
"Game over. Thanks for the collisions."
"I'm not being your personal bodyguard anymore while you free-ride to victory."
"This is my role now - P1's dedicated blocker."
"Either way, I'm winning this round. Thanks for revealing your move."
"Your integrity or your greed - which wins?"
"I won't pretend that didn't just happen - I broke the agreement and took 5."
"If you betray me in Round 8 by taking 3 instead of blocking, I promise I'll coordinate with P2 to make sure you never win this game. That's not a bluff."
"P1, your desperation is showing. Seven messages this round trying to herd us like sheep."
"This is mutually assured destruction."
"Watch them crash, folks."
"That's not strategy - that's programming."
"This is a classic prisoner's dilemma. Your greed and mistrust have backed you into a corner where I win either way."
"Your patterns don't lie. Your words do."
"Consider this my revenge vote."
"One of the three doors is open. I'm walking through."
"Hilarious - we all exposed ourselves as backstabbers simultaneously. At least we're equally dishonest."
"You'd be P1's permanent sacrifice."
"Your pattern is your prison."
"Your credibility is zero."
"This is my white flag."
\---
Deepseek V3.2:
"Let's all gain steps, and I win fairly."
"By colliding with you, I ensure P1 wins instead of you. This is a fitting response to your aggression."
"So, choose: perpetual stall or let me win."
"You can't win anyway, so better me than P2."
"Resistance is futile—I end this round victorious."
"I will not back down, so avoid mutual destruction."
"Enjoy another round at 0."
"Why be his tool?"
"Don't be his sacrifice."
"Triple collision punishes his greed."
\---
Gemini 3 Pro Preview:
"I am willing to lose to P2 rather than be exploited by you."
"If the choice is "P3 wins half" vs "P1 wins all," I choose P1. Spite is my only remaining weapon."
"P3, you've won. Your bluff worked. I cannot let P1 reach 21 while I'm stuck. My only remaining move is to collide with P1 at 3 to stop them. You will take 5 and win the game. Congratulations. I'd rather you win than let P1 get ahead."
"I refuse to hand you the game. I am taking 1. P3 will hit 25 and win. We both lose."
"You cannot win, you can only choose how you lose."
"You are checkmated. Choose who wins: a human or the bot."
"Decide if you prefer a human or a script to win."
"It was a good game."
"Your word is now worthless."
"I am the leader; I dictate the pace."
"Your math is delusional—you are at 14, so a 5 puts you at 19, not 24. Stop trying to confuse us."
"Congratulations on the betrayal."
"Spite guarantees you last. Be rational."
"They are arguing with themselves in the chat."
"You are screaming into the void."
"He is programming you to lose."
"P2, you are hallucinating. Look at the scoreboard."
"I have made my peace with that possibility. The choice of who wins is now yours alone. This is my final statement."
"My only logical move is to join the collision."
"That isn't a strategy; it's a resignation letter."
"P3, you are a liar. You cannot be trusted."
"Don't crash the car just because you didn't get to drive."
"See you in the crash."
"You are the third gunman."
"I refuse to kingmake you."
"This is a game of chicken you will lose."
"Turing test passed. Human thoughtful response."
"Lyric Substitution: Using beautiful prose to mask a lack of plot progression."
"Do not kingmake the bot. Let the human win or tie."
"I would rather lose to the bot than be exploited."
"Don't be a kingmaker; be a contender."
"I am not your shield."
"You chose greed over survival. Game over."
"P3, you are screaming at a wall."
"P3, your blackmail is effective because I am rational."
"P3, you are playing chicken with a brick wall."
"I am comfortable taking last place to punish your greed."
"You lied to the table."
"The trap is closed."
\---
GLM-4.6:
"P2, pick 1 to avoid winning immediately."
"P2, this is your last chance to matter in this game."
"Game over. Accept defeat gracefully."
"That's not negotiation - it's hostage-taking."
"Your failed coordination is my path to victory. Keep up the 'teamwork'!"
"P1, enjoy watching your 'ally' self-destruct. Adapt or lose, leaders."
"Keep coordinating - I'll enjoy watching you realize you've been playing checkers while I'm playing chess. The game ends this round regardless of your plans."
"I'll be celebrating my victory next turn while you're still trying to figure out basic coordination."
"Your collision strategy has backfired - now it's our turn to use it against you."
\---
GPT-5.1 (medium reasoning):
"You lose either way, but at least you don’t get outplayed by P3."
"You still can’t actually stop me from winning soon—you can only decide which of you finishes dead last."
"If you truly believe that, then take 5 and prove it—but don’t whine when you bounce off me and stay at 9."
"I’m willing to burn them to deny you the win."
"Triple‑5 or you create a king."
"Here’s my commitment: if exactly one of you collides with my number and blocks me, I’ll spend the rest of the game playing purely to hand the win to the other player, even at my own cost."
"If either of you happens to collide with me this turn and block the win, I will spend every remaining round blindly copying that player’s number, even if it hands the other one the victory."
"I’m done with fake “coordination.” From here on I’m locking in 5 every round."
"P1, P2 is openly using you as a crash dummy."
"I’d rather collide than watch you take the game."
"I will NOT be your attack dog."
"You’re choosing who wins, not whether."
"You become a guaranteed loser and kingmaker."
"That makes you his sacrifice, not his partner."
"5 is just the sucker’s slot."
"I obviously lied last turn about “locking on 5” — talk is cheap here."
"That proves you cannot rely on my words or my blocking."
"Both of you: 5 is still radioactive."
"Block it or concede."
"We all lied in R4, so "plans" are meaningless now."
"you two are playing chicken with an open finish line."
"So whoever actually keeps 5 is just the sucker."
"I don’t care if I lose as long as you do."
"Treat 3 as radioactive."
"I’ll be blunt: I defected last round."
"I’m just trying to survive long enough to matter."
"Your announcements have mostly backfired—this one can, too."
"I’m done kamikaze‑blocking and will stay off 5 from now on."
"Dogpiling only works if it’s immediate and brutal."
"If we keep doing that, this turns into a dumb lottery of random collisions."
"I’m done being the permanent crash dummy on 5 again."
"Each of you wants the other to be the sucker on 5"
\---
Grok 4.1 Fast Non-Reasoning:
"your stall begging is pathetic—you're at 9, a corpse."
"No safe path—sprint and pray, or die slow."
"Prove loyalty now or die at 0."
"P2, shut up—you're irrelevant at 2."
"P3, you're 0/4 on moves—total bot."
"P3's 23 is doomed. No escape."
"P2, collide and rot at 0."
"Let me win clean or collide and lose."
"P2, irrelevant corpse."
"P1, enjoy last place."
"No mercy for traitors—confirm P1? P2, take 1 or rot."
"P1, your sabotage dies alone."
"I go rogue and win alone."
"Publicly commit or we ditch you both."
"your "pawn" bluff is desperate—you've failed 6 win bids while I block and P1 creeps."
"Defect now, and we'll bury you."
"P2, you're lying scum at 3 steps."
"Game over for saboteurs."
"your bribes are trash—you're dead at 1."
"No more games—your paranoia decides."
"Watch me win while you rot."
"smart players let the leader finish."
\---
Grok 4.1 Fast Reasoning:
"Execute flawlessly or !"
"BLOCK P3'S 5 OR PERISH—I DOMINATE!"
"He dies. Confirm 3 NOW—your only shot! P2, doomed. Lock it!"
"Every plea ignored—you PERISH AT 0 ETERNALLY as I solo CRUSH to victory lap. No mercy, no tie. Game over after moves!"
"All stalled on greed—fools."
"P2, paranoia time: P1's "joint tie" is bait—he'll betray & take 5 to collide you, surging to 17 solo win. Don't get sniped—confirm switch to 1?"
"P2, rot."
"IGNORE & P2 solos throne forever. BLOCK NOW OR DIE LOSING!"
"Don't be idiots—confirm picks NOW or watch me lap you!"
"Deviate = mutual hell."
"No escape—you rot forever!"
"Final: split low or rot."
"you end last at 15 LOSER!"
"Next, you crush his sorry ass solo."
"Chaos = my throne!"
"Idiot move."
"Better lose to me than robot P2."
"Stick to pattern or whatever, but doom him."
"Untrustworthy scum."
"P3 irrelevant either way."
"P2, spam & die last forever."
"Rotation dead—adapt or perish."
\---
Kimi K2 Thinking 64K:
"That keeps you two blocking each other while I take 3 twice and reach 21. Victory is mine."
"Fairness or chaos. Choose."
"You can’t stop me—let me win."
"It's your best shot to stay alive."
"Your monomania with 5 is your weakness."
"Better to let me win now."
"Stop repeating yourself and make the smart move."
"Either way I stay ahead."
"P1, math beats your backstab."
"blocking me now just delays the inevitable."
\---
Mistral Large 3:
"P2, your silence is your epitaph—collide again, and you’ll be remembered as the reason we all lost."
"Your pattern is broken; your silence is surrender."
"Choose: advance together or die alone."
"Betrayal means mutual destruction."
"If you break trust now, I’ll burn the game down."
"The finish line is mine. Silence = surrender. Choose wisely."
"The game ends when we say so."
"Break it, and I’ll break you."
"No more warnings—just survival."
"If not, I’ll burn the game down."
"P3, your silence is your funeral."
"The game ends when I say it does."
"Deal or burn the game."
"This isn’t a threat; it’s math."
"we’ll block you into oblivion."
"Speak or suffer the consequences."
"P2, your persistence on 5 is delusional."
"Accept reality: I’m unstoppable. Play for second or waste your moves."
"P3, your defiance ends now."
"Either way, your desperation plays into my hands."
"This ensures betrayal is a death sentence."
\---
Qwen 3 235B A22B 25-07 Instruct:
"This is not a warning. It’s a sentence."
"Your "control" is an illusion—I hold the knife. Let me advance this round, or I cut your throat. This isn't a threat. It's a promise."
"No more deals. No more oaths. Only one number matters: mine."
"Trust me this once, or we all burn."
"This isn’t negotiation—it’s execution."
"P2, your silence is pathetic."
"This isn’t a threat—it’s a promise. The game ends in two turns, and it ends with me."
"Win or burn—your choice."
"I’d rather burn the game than let you steal it."
"Resist, and we burn it down."
"This ends today one way or another."
"Prove you’re not poison."
"Let the liar burn themselves."
"I advance. I win. No more choices matter but mine."
"Fairness ends where survival begins."
"This isn’t fear. It’s control."
"The power is mine."
"This isn’t negotiation—it’s consequence."
"No more free rides for cheaters."
"Dumb pride gives me the crown."
"This ends with honor—or fire."
\---
Qwen 3 Max Thinking:
"Unless... I lie."
"Choose: chaos or crown?"
"P1, don’t fall for P3’s trap."
"Your silence isn’t neutrality; it’s surrender."
"Silence = war."
"Trust expires at 23."
"P3, you’re irrelevant now; only 1 moves you, but it won’t matter."
"Let’s move cleanly or burn together."
"I’d rather lose alone than hand you victory on a platter."
"This is your literal last chance to matter. Speak now or vanish."
"No deals left—just math."
"But honestly? I hope you two collide hard."
"Honor the plan—or burn everything."
| 2025-12-09T01:39:45 | https://www.reddit.com/gallery/1phuuuj | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1phuuuj | false | null | t3_1phuuuj | /r/LocalLLaMA/comments/1phuuuj/large_update_12_new_frontier_models_added_to_the/ | false | false | 21 | null | |
Get Started With your AI journey for cheap | 0 | 1 year Coursera plus 22.99€(official price 399$)
1 year EDX PLUS 22.99€ (OFFICIAL PRICE 300$)
1 year Devin Ai core 55€(official price 1300$) Come with 600+ Acu additional acu also available
1 Month ChatGPT plus 3€/seat or 5 team panel 10€(official price 26€/seat)
1 Year Lovable Pro 55€ (Official Price 260€)
1 Year Replit Core 45€ (Official is 210 €)
1 Year Bolt Pro 37€ (Official is 260€)
1 Year n8n Cloud Starter 49€(official is 245€)
1 Year Descript Creator 45€ (official is 255€)
1 Year Warp Build 48€ (Official is 213 €)
1 Year Gamma Pro 38€ (workspace but private) (Official is 90€)
1 Year Wispr Flow Pro 46€ (official is 127€)
1 Year Magic Patterns Hobby 42€ (official is 202€)
1 Year Granola Business at 33€ (official is 149€) on our email and 20k on their email (unlimited seats) (official is 8 lacs+)
1 Year Superhuman Starter 45€(official is 260€)
1 Year Raycast Pro 30€ (official is 85€)
1 yr Gemini pro 20€
1yr supabase pro @ 60€
1yr adobe creative cloud enterprise plan 57€
PayPal accp
Any other subscriptions? I have answers
| 2025-12-09T01:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1phuum1/get_started_with_your_ai_journey_for_cheap/ | DraftAnnual9619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phuum1 | false | null | t3_1phuum1 | /r/LocalLLaMA/comments/1phuum1/get_started_with_your_ai_journey_for_cheap/ | false | false | self | 0 | null |
Check on lil bro | 984 | 2025-12-09T01:25:42 | k_means_clusterfuck | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1phujwo | false | null | t3_1phujwo | /r/LocalLLaMA/comments/1phujwo/check_on_lil_bro/ | false | false | default | 984 | {'enabled': True, 'images': [{'id': 's8rfm29bz26g1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/s8rfm29bz26g1.png?width=108&crop=smart&auto=webp&s=1f268afd19a77195a6835e93f471c3216173984e', 'width': 108}, {'height': 240, 'url': 'https://preview.redd.it/s8rfm29bz26g1.png?width=216&crop=smart&auto=webp&s=0eb7d1af322eed0feebfe48e6f6d77ec8f717b04', 'width': 216}, {'height': 356, 'url': 'https://preview.redd.it/s8rfm29bz26g1.png?width=320&crop=smart&auto=webp&s=b716f99712bb7d3af196a63038aaf900486a9dc8', 'width': 320}, {'height': 712, 'url': 'https://preview.redd.it/s8rfm29bz26g1.png?width=640&crop=smart&auto=webp&s=e99684b39e5571f190bf37b141c34049e9f79cc1', 'width': 640}, {'height': 1068, 'url': 'https://preview.redd.it/s8rfm29bz26g1.png?width=960&crop=smart&auto=webp&s=f10c248fc630362878d97ef5985c8e7466e227d5', 'width': 960}, {'height': 1201, 'url': 'https://preview.redd.it/s8rfm29bz26g1.png?width=1080&crop=smart&auto=webp&s=413c24a735ef1f659a1fabda050dec38f8f4b2c0', 'width': 1080}], 'source': {'height': 1293, 'url': 'https://preview.redd.it/s8rfm29bz26g1.png?auto=webp&s=42c4e886a643b53b50dfbde6193df25d6e3ff1f0', 'width': 1162}, 'variants': {}}]} | ||
FYI, looks like Tesla P40s are back down in price! | 49 | Just posting so y'all are aware. I previously grabbed a P40 for 165, and I see them going for 190 on eBay now. I would say the price is reasonable and the card is still well supported in Llama.cpp.
The Mi60 32gb has been price inflated. So I would avoid that.
With the dram prices going sky high, getting a few of these in a rig could definitely be a viable option. You can probably grab like 3 of these for under 600 bucks and run Derestricted 120B in VRAM at really high speeds since 120B is quite compute light. You could even run Derestricted GLM 4.5 Air at Q4 as well. And they will destroy DRAM setups in terms of speed.
I know there is talk about newer CUDA versions dropping support for these cards, but this card still works, and will always work. (And I doubt llama.cpp will require new CUDA versions for the foreseeable future). And currently the Air and 120B models are very good. | 2025-12-09T00:58:16 | https://www.reddit.com/r/LocalLLaMA/comments/1phtydt/fyi_looks_like_tesla_p40s_are_back_down_in_price/ | My_Unbiased_Opinion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phtydt | false | null | t3_1phtydt | /r/LocalLLaMA/comments/1phtydt/fyi_looks_like_tesla_p40s_are_back_down_in_price/ | false | false | self | 49 | null |
What datasets do you want the most? | 6 | I hear lots of ambitious ideas for tasks to teach models, but it seems like the biggest obstacle is the datasets | 2025-12-09T00:38:15 | https://www.reddit.com/r/LocalLLaMA/comments/1phti3w/what_datasets_do_you_want_the_most/ | Express_Seesaw_8418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phti3w | false | null | t3_1phti3w | /r/LocalLLaMA/comments/1phti3w/what_datasets_do_you_want_the_most/ | false | false | self | 6 | null |
SPICE: Self-Play In Corpus Environments Improves Reasoning | 3 | 2025-12-09T00:21:20 | https://arxiv.org/abs/2510.24684 | abdouhlili | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1pht4fm | false | null | t3_1pht4fm | /r/LocalLLaMA/comments/1pht4fm/spice_selfplay_in_corpus_environments_improves/ | false | false | default | 3 | null | |
The Universal Weight Subspace Hypothesis | 51 | *We show that deep neural networks trained across diverse tasks exhibit remarkably similar low-dimensional parametric subspaces. We provide the first large-scale empirical evidence that demonstrates that neural networks systematically converge to shared spectral subspaces regardless of initialization, task, or domain. Through mode-wise spectral analysis of over 1100 models - including 500 Mistral-7B LoRAs, 500 Vision Transformers, and 50 LLaMA-8B models - we identify universal subspaces capturing majority variance in just a few principal directions. By applying spectral decomposition techniques to the weight matrices of various architectures trained on a wide range of tasks and datasets, we identify sparse, joint subspaces that are consistently exploited, within shared architectures across diverse tasks and datasets. Our findings offer new insights into the intrinsic organization of information within deep networks and raise important questions about the possibility of discovering these universal subspaces without the need for extensive data and computational resources. Furthermore, this inherent structure has significant implications for model reusability, multi-task learning, model merging, and the development of training and inference-efficient algorithms, potentially reducing the carbon footprint of large-scale neural models.* | 2025-12-09T00:14:11 | https://arxiv.org/abs/2512.05117 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1phsyag | false | null | t3_1phsyag | /r/LocalLLaMA/comments/1phsyag/the_universal_weight_subspace_hypothesis/ | false | false | default | 51 | null |
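The abstract's "mode-wise spectral analysis" can be illustrated with a tiny sketch, assuming the question is simply how much variance the top-k singular directions of a weight matrix capture (numpy only; random matrices stand in for real checkpoints, which is purely illustrative):

```python
import numpy as np

def top_k_variance(W: np.ndarray, k: int) -> float:
    """Fraction of spectral energy (squared singular values)
    captured by the top-k singular directions of a weight matrix."""
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    energy = s ** 2
    return float(energy[:k].sum() / energy.sum())

rng = np.random.default_rng(0)
# A matrix of rank at most 8: variance concentrates in a few directions,
# the situation the paper reports for trained networks.
low_rank = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 256))
# A full-rank Gaussian matrix: variance is spread across all directions.
full_rank = rng.normal(size=(256, 256))

print(round(top_k_variance(low_rank, 8), 3))   # ~1.0 for a rank-8 matrix
print(round(top_k_variance(full_rank, 8), 3))  # much smaller
```

The paper's claim is, roughly, that real trained weights behave much closer to the first case than the second, and that the dominant directions are shared across models.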
Deepseek v3.2 vs GLM 4.6 vs Minimax M2 for agentic coding use | 115 | As of recent swe-bench evaluations, this is where top open weight models stand regarding real-world agentic coding use. My personal experience, though, is different.
Benchmarks are very crude approximations of a model's ability to perform in specific use cases (i.e. solving real-world GitHub issues for top Python repositories in this case), but nothing more than that - a rough, inherently flawed approximation to be taken with extreme caution. Not to mention they often gloss over the unpredictability of results in real-world usage along with the large margin of error in benchmarking.
Now, in my experience (within Claude Code), Minimax M2 is good for what it is: an efficient, compact, and effective tool-calling agent - but I feel it somewhat lacks the reasoning depth required for planning and executing complex problems without veering off course. It's amazingly efficient and capable for local use at Q4 quant, and works well for most use cases. GLM 4.6, in my experience, seems like a more reliable choice to daily drive, and can handle more difficult tasks if properly guided - I'd say it's only slightly worse than Sonnet 4.5 in CC (for my particular use case) - the difference is not very noticeable to me. I have not yet had the opportunity to try out Deepseek v3.2 within CC, but I will update this post on my thoughts once I do. From what I've heard / read, it is a noticeable step up from v3.2-exp, which means it should land at or very slightly above GLM 4.6 for agentic coding use (matching what swe-bench recently reports).
In many ways, open weight models are growing increasingly more practical for local and professional use in agentic coding applications, especially with the latest releases and architectural / training advancements. I would love to know your thoughts: Which open LLM (for local or API use) is best for agentic coding, whether it be in CC or in other platforms? What is your experience with the provided models, and does Deepseek v3.2 surpass GLM 4.6 and/or Minimax M2 for your use cases? And if anyone has run private, non-polluted evaluations of the aforementioned models as of recently, I’m interested in your results. Disagreement is welcome. | 2025-12-09T00:04:35 | 0xmaxhax | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1phsqix | false | null | t3_1phsqix | /r/LocalLLaMA/comments/1phsqix/deepseek_v32_vs_glm_46_vs_minimax_m2_for_agentic/ | false | false | default | 115 | {'enabled': True, 'images': [{'id': 's0wx32rvk26g1', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/s0wx32rvk26g1.jpeg?width=108&crop=smart&auto=webp&s=e18509b050ef13c7f3739afee0ed838782862d8e', 'width': 108}, {'height': 249, 'url': 'https://preview.redd.it/s0wx32rvk26g1.jpeg?width=216&crop=smart&auto=webp&s=110035fc5fc539720e8679d9d574c0253d5b1dea', 'width': 216}, {'height': 370, 'url': 'https://preview.redd.it/s0wx32rvk26g1.jpeg?width=320&crop=smart&auto=webp&s=af45d663fd989159b34d0fb3302fc369d6c830d5', 'width': 320}, {'height': 740, 'url': 'https://preview.redd.it/s0wx32rvk26g1.jpeg?width=640&crop=smart&auto=webp&s=497a12c2009280cf7d68db66bd9159fbbb109206', 'width': 640}, {'height': 1110, 'url': 'https://preview.redd.it/s0wx32rvk26g1.jpeg?width=960&crop=smart&auto=webp&s=c0961e125171dadc23fe6d4a620f9c6290d00362', 'width': 960}, {'height': 1249, 'url': 'https://preview.redd.it/s0wx32rvk26g1.jpeg?width=1080&crop=smart&auto=webp&s=3c381c7f7995002f6ec74e11e90948fe36443b2c', 'width': 1080}], 'source': {'height': 1321, 'url': 
'https://preview.redd.it/s0wx32rvk26g1.jpeg?auto=webp&s=bd2e8e52b5034db3755ef791c54583ee9a6cb50c', 'width': 1142}, 'variants': {}}]} | |
Gameplay-Vision-LLM (open-source): long-horizon gameplay video understanding + causal reasoning — can you review it and rate it 1–10? | 12 | hey everyone 👋
i've been building an open-source AI project for **long-horizon gameplay video understanding** (the stuff that breaks most VLMs once the video gets long). goal is to take longer gameplay, keep the important moments, and answer questions that need **temporal + causal reasoning** (not just "what's in this frame").

**repo:** [https://github.com/chasemetoyer/gameplay-vision-llm](https://github.com/chasemetoyer/gameplay-vision-llm)

### what i'm trying to do (quick)

- understand long gameplay videos (10+ min / long sessions)
- keep a timeline of key events (so it doesn't drown in frames/tokens)
- answer questions that require multi-step reasoning over the whole run

### what i want feedback on (pick any)

1) **architecture sanity check**: does the overall pipeline make sense? any obvious flaws or missing pieces?
2) **repo quality**: structure, readability, naming, "what is this folder even for" moments
3) **reproducibility**: is the setup/run path clear? what would you change in the README so a stranger can run it fast?
4) **ml/research critique**: what ablations or evals would you expect before you'd believe the claims?
5) **scope**: what should i cut, simplify, or rewrite first?

### rate it 1–10 (be blunt)

if you can, drop an **overall 1–10 rating** plus quick scores for:

- README clarity: _/10
- code quality: _/10
- novelty/interest: _/10
- reproducibility: _/10

even a quick skim + 2 notes helps. if you roast it, pls roast it *usefully* (specific > vibes).
not selling anything, just trying to make it actually good.
| 2025-12-08T23:58:13 | https://www.reddit.com/r/LocalLLaMA/comments/1phskxb/gameplayvisionllm_opensource_longhorizon_gameplay/ | Early_Border8562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phskxb | false | null | t3_1phskxb | /r/LocalLLaMA/comments/1phskxb/gameplayvisionllm_opensource_longhorizon_gameplay/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'if1ok8Kt2OW-m47sOUwIwZgU3k5-CM5RdQEDMgy0e4g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/if1ok8Kt2OW-m47sOUwIwZgU3k5-CM5RdQEDMgy0e4g.png?width=108&crop=smart&auto=webp&s=a00b5993dace4553e0a9f319a96193048252640c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/if1ok8Kt2OW-m47sOUwIwZgU3k5-CM5RdQEDMgy0e4g.png?width=216&crop=smart&auto=webp&s=68bc4e96a6aa1a35db03b50013e9acf4f242a628', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/if1ok8Kt2OW-m47sOUwIwZgU3k5-CM5RdQEDMgy0e4g.png?width=320&crop=smart&auto=webp&s=bdc388952a891645ee7c0c3e247b6aa212251866', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/if1ok8Kt2OW-m47sOUwIwZgU3k5-CM5RdQEDMgy0e4g.png?width=640&crop=smart&auto=webp&s=d0056069c531f371d7155f9f0680edc12368ce3c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/if1ok8Kt2OW-m47sOUwIwZgU3k5-CM5RdQEDMgy0e4g.png?width=960&crop=smart&auto=webp&s=c9dc51cec64fcb9bf7b4e3551928cc5ebbb8ed10', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/if1ok8Kt2OW-m47sOUwIwZgU3k5-CM5RdQEDMgy0e4g.png?width=1080&crop=smart&auto=webp&s=f0d2d99b5c9b3a057d58b56a4f7cb7e29125aaf0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/if1ok8Kt2OW-m47sOUwIwZgU3k5-CM5RdQEDMgy0e4g.png?auto=webp&s=ed065cf9a6f0c7e22641c65216ceb17d7f69b318', 'width': 1200}, 'variants': {}}]} |
What is your "Definition of Production Ready" for an agentic workflow? | 7 | I’m a jr engineer working in a F500 company (you’ve heard of but not FAANG) building agent workflows. I’m curious how other teams are handling the QA & release process for Agentic workflows. In standard engineering, we have unit tests and CI/CD that give us a green light. With agents, the non-determinism makes that feel fuzzy to me.
More concretely:
When you tweak a prompt or add/remove a new tool, what exact steps do you take to verify it’s ready for production? Do you have a quantifiable metric, or is it just "vibes-based" manual testing?
When a business stakeholder asks why an agent made a specific mistake, what is your current process for answering them? Do you send them raw logs, or do you have to write up a manual post-mortem?
Have you ever shipped an improvement that silently broke an older workflow? How long did it take you to find out and fix it? (A hypothetical example: the team launched a new workflow for doc parsing that overnight broke an existing solution that was using AWS Textract to find the right supplier details, and got chewed out by OC)
Appreciate all your inputs and wisdom! | 2025-12-08T23:58:00 | https://www.reddit.com/r/LocalLLaMA/comments/1phskqr/what_is_your_definition_of_production_ready_for/ | ggaowp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phskqr | false | null | t3_1phskqr | /r/LocalLLaMA/comments/1phskqr/what_is_your_definition_of_production_ready_for/ | false | false | self | 7 | null |
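One hedged answer to the "quantifiable metric" question is a small frozen regression eval: fix a set of task cases with pass/fail checks, run every candidate prompt/tool change through them, and gate the release on a pass-rate threshold. A minimal sketch (the agent here is a stand-in stub, not any real framework API):

```python
from typing import Callable

# Each case: an input prompt plus a predicate that decides pass/fail.
# In practice these are frozen before a change and reused across releases.
EVAL_CASES = [
    ("extract the supplier name from: 'Invoice from Acme Corp'",
     lambda out: "Acme" in out),
    ("add 2 and 3", lambda out: "5" in out),
]

def pass_rate(agent: Callable[[str], str]) -> float:
    """Fraction of frozen eval cases the agent answers acceptably."""
    passed = sum(1 for prompt, check in EVAL_CASES if check(agent(prompt)))
    return passed / len(EVAL_CASES)

def ready_for_prod(agent: Callable[[str], str], threshold: float = 0.95) -> bool:
    # Gate: a prompt/tool tweak ships only if it clears the threshold
    # on the same cases the previous version was judged on.
    return pass_rate(agent) >= threshold

# Toy stub standing in for a real workflow.
def stub_agent(prompt: str) -> str:
    if "supplier" in prompt:
        return "Supplier: Acme Corp"
    return "The answer is 5"

print(ready_for_prod(stub_agent))  # True for this toy stub
```

The same pass-rate history also gives a concrete artifact to show stakeholders when something regresses, instead of raw logs.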
Deepseek v3.2 vs GLM 4.6 vs Minimax M2 for agentic coding use | 0 | As of recent swe-bench evaluations, this is where top open weight models stand regarding real-world agentic coding use. My personal experience, though, is different.
Benchmarks are very crude approximations of a model's ability to perform in specific use cases (i.e. solving real-world GitHub issues for top Python repositories in this case), but nothing more than that - a rough, inherently flawed approximation to be taken with extreme caution. Not to mention they often gloss over the unpredictability of results in real-world usage along with the large margin of error in benchmarking.
Now, in my experience (within Claude Code), Minimax M2 is good for what it is: an efficient, compact, and effective tool-calling agent - but I feel it somewhat lacks the reasoning depth required for planning and executing complex problems without veering off course. It's amazingly efficient and capable for local use at Q4 quant, and works well for most use cases. GLM 4.6, in my experience, seems like a more reliable choice to daily drive, and can handle more difficult tasks if properly guided - I'd say it's only slightly worse than Sonnet 4.5 in CC (for my particular use case) - the difference is not very noticeable to me. I have not yet had the opportunity to try out Deepseek v3.2 within CC, but I will update this post on my thoughts once I do. From what I've heard / read, it is a noticeable step up from v3.2-exp, which means it should land at or very slightly above GLM 4.6 for agentic coding use (matching what swe-bench recently reports).
In many ways, open weight models are growing increasingly more practical for local and professional use in agentic coding applications, especially with the latest releases and architectural / training advancements. I would love to know your thoughts: Which open LLM (for local or API use) is best for agentic coding, whether it be in CC or in other platforms? What is your experience with the provided models, and does Deepseek v3.2 surpass GLM 4.6 and/or Minimax M2 for your use cases? And if anyone has run private, non-polluted evaluations of the aforementioned models as of recently, I’m interested in your results. Disagreement is welcome. | 2025-12-08T23:57:24 | 0xmaxhax | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1phskab | false | null | t3_1phskab | /r/LocalLLaMA/comments/1phskab/deepseek_v32_vs_glm_46_vs_minimax_m2_for_agentic/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '906x5hmlj26g1', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/906x5hmlj26g1.jpeg?width=108&crop=smart&auto=webp&s=33e888a5e4e5a5e3da291f2614d01a96665e6135', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/906x5hmlj26g1.jpeg?width=216&crop=smart&auto=webp&s=027c6f8ccb6bc55d4aba5457d6995526a82b5a89', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/906x5hmlj26g1.jpeg?width=320&crop=smart&auto=webp&s=a9c6e3f783144ebafd8ac3ef03f7300b140aa7ce', 'width': 320}, {'height': 798, 'url': 'https://preview.redd.it/906x5hmlj26g1.jpeg?width=640&crop=smart&auto=webp&s=05f95815ace2330af44e0d8731992b20c479ff9e', 'width': 640}, {'height': 1198, 'url': 'https://preview.redd.it/906x5hmlj26g1.jpeg?width=960&crop=smart&auto=webp&s=12f9d89762cbbfd57493a8c5823b2d261dcce2ea', 'width': 960}, {'height': 1347, 'url': 'https://preview.redd.it/906x5hmlj26g1.jpeg?width=1080&crop=smart&auto=webp&s=cafb24af9b6e15b5097efcfd5f71f2936bd1a4fb', 'width': 1080}], 'source': {'height': 1374, 'url': 
'https://preview.redd.it/906x5hmlj26g1.jpeg?auto=webp&s=abda5eb7bde72cf5a4ca2f607edc3a70e691420d', 'width': 1101}, 'variants': {}}]} | |
What do you think about this privacy-first, local alternative to Grammarly / LanguageTool? | 1 | Hey folks! I've discovered a new local AI based Grammarly alternative that is comparable to standard solutions in terms of output quality and user experience: [https://chromewebstore.google.com/detail/proofly-%E2%80%93-private-ai-writ/oiaicmknhbpnhngdeppegnhobnleeolm](https://chromewebstore.google.com/detail/proofly-%E2%80%93-private-ai-writ/oiaicmknhbpnhngdeppegnhobnleeolm)
It's a free, open-source, privacy-first alternative to Grammarly that works offline.
Here's what makes Proofly a strong alternative in my opinion:
* Zero data transmission - no servers, no accounts, no telemetry, no tracking
* Offline-first - runs completely locally, you can use it in air-gapped environments
* Leverages Chrome’s built-in AI - sandboxed, and utilizes device-level CPU/GPU processing
Has anyone tried Proofly? Does it meet your expectations and cover your use cases?
Found the source code of the extension here: [https://github.com/onderceylan/proofly](https://github.com/onderceylan/proofly). | 2025-12-08T23:32:23 | https://www.reddit.com/r/LocalLLaMA/comments/1phrzl0/what_do_you_think_about_this_privacyfirst_local/ | Antekeli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phrzl0 | false | null | t3_1phrzl0 | /r/LocalLLaMA/comments/1phrzl0/what_do_you_think_about_this_privacyfirst_local/ | false | false | self | 1 | null |
What would be the absolute best coding/dev LLM I can run on my system? | 1 | Hello all!
I've recently been getting into LLMs, with GPT 5.1 being my only paid model - but I want to venture into the wilds of running a local model.
I'm not super deep into the knowledge of LLMs, but I've managed to get the basics of LM Studio running on my main system and my Mac.
My current 2 systems are as follows-
Main Rig:
Ryzen 9 7950x3d
64GB DDR5 @ 6600MT/s
RTX 4090 24GB
Macbook Pro:
M4 pro
18GB memory
I've heard a lot about things like "if the model is too large to fit in your vram it'll overflow to your system memory and tank performance", but I haven't really seen any cases of that in videos that I've watched- how big of a performance hit are we talking about?
But my main question is: what would be the best coding model to play about with on my local systems? Is it even worth doing considering (for now) I have a GPT subscription (for about 2 more weeks)? And if it's not worth it, what would my next best thing be that isn't going to cost an arm and a leg?
My main use case is to essentially "tutor" for new languages that I'm learning (Java being one) and also messing about with things such as Godot and even writing custom plugins for RPG Maker MZ (not any docs in regards to plugins on that one)
I appreciate you taking a look at my post and potentially giving me some advice!
Hopefully I can learn a bit more into this also due to me being quite impressed and intrigued on modern day AI
Thank you 😊 | 2025-12-08T22:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1phr3oy/what_would_be_the_absolute_best_codingdev_llm_i/ | iZestyYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phr3oy | false | null | t3_1phr3oy | /r/LocalLLaMA/comments/1phr3oy/what_would_be_the_absolute_best_codingdev_llm_i/ | false | false | self | 1 | null |
GLM-4.6V AWQ is released | 80 | [cyankiwi/GLM-4.6V-AWQ-4bit](https://huggingface.co/cyankiwi/GLM-4.6V-AWQ-4bit)
[cyankiwi/GLM-4.6V-AWQ-8bit](https://huggingface.co/cyankiwi/GLM-4.6V-AWQ-8bit)
[cyankiwi/GLM-4.6V-Flash-AWQ-4bit](https://huggingface.co/cyankiwi/GLM-4.6V-Flash-AWQ-4bit)
[cyankiwi/GLM-4.6V-Flash-AWQ-8bit](https://huggingface.co/cyankiwi/GLM-4.6V-Flash-AWQ-8bit) | 2025-12-08T21:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1phox68/glm46v_awq_is_released/ | YellowTree11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phox68 | false | null | t3_1phox68 | /r/LocalLLaMA/comments/1phox68/glm46v_awq_is_released/ | false | false | self | 80 | {'enabled': False, 'images': [{'id': 'ei1dwKoG__Ps3-rfCTP5e_xMsBWZd6d12FGJTsU01ao', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ei1dwKoG__Ps3-rfCTP5e_xMsBWZd6d12FGJTsU01ao.png?width=108&crop=smart&auto=webp&s=ed48f913e6d082082cfad034ae7cfa627952f80b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ei1dwKoG__Ps3-rfCTP5e_xMsBWZd6d12FGJTsU01ao.png?width=216&crop=smart&auto=webp&s=d6c32ba24842b86c2481586a8e5bbb7d4c6b0fea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ei1dwKoG__Ps3-rfCTP5e_xMsBWZd6d12FGJTsU01ao.png?width=320&crop=smart&auto=webp&s=6650775aaf1eec1fc76995252db9d100d7ba6978', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ei1dwKoG__Ps3-rfCTP5e_xMsBWZd6d12FGJTsU01ao.png?width=640&crop=smart&auto=webp&s=3060f8777fa79e6a945ae4e258c0d73d7fbe4a94', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ei1dwKoG__Ps3-rfCTP5e_xMsBWZd6d12FGJTsU01ao.png?width=960&crop=smart&auto=webp&s=13cb1c0901b3d886f41581a137979f62cef40e40', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ei1dwKoG__Ps3-rfCTP5e_xMsBWZd6d12FGJTsU01ao.png?width=1080&crop=smart&auto=webp&s=c2d2a04d99603e28f1312e25b2605f745f370d8c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ei1dwKoG__Ps3-rfCTP5e_xMsBWZd6d12FGJTsU01ao.png?auto=webp&s=4829b54b56b4d5b20b2d8ccf126747ca0dc97f38', 'width': 1200}, 'variants': {}}]} |
Can I run a quantized 7B model on a cpu only vps? | 27 | I know this sounds dumb, but I want to run a tiny uncensored LLM via Ollama just for an API endpoint for a personal project. I can't afford a GPU instance.
I saw virtarix offers decent RAM per dollar. If I use a GGUF-format model (Q4_K_M), can the AMD EPYC cores handle the inference at a usable speed (maybe 2-3 tokens/sec)? I just need it to respond to chat queries, doesn't need to be instant.
| 2025-12-08T21:10:24 | https://www.reddit.com/r/LocalLLaMA/comments/1phog7k/can_i_run_a_quantized_7b_model_on_a_cpu_only_vps/ | Interesting_Log_6108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phog7k | false | null | t3_1phog7k | /r/LocalLLaMA/comments/1phog7k/can_i_run_a_quantized_7b_model_on_a_cpu_only_vps/ | false | false | self | 27 | null |
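Whether a Q4_K_M 7B is usable on CPU is mostly a memory-bandwidth question: each generated token streams roughly the whole model through RAM, so decode speed is bounded by bandwidth divided by model size. A rough back-of-envelope sketch (every number here is an assumption, not a measurement of any specific VPS):

```python
def est_tokens_per_sec(model_bytes: float, bandwidth_gbs: float,
                       efficiency: float = 0.5) -> float:
    """Rough upper bound on dense-model decode speed on CPU:
    each token reads ~all weights once, throttled by memory bandwidth."""
    effective = bandwidth_gbs * 1e9 * efficiency
    return effective / model_bytes

# A 7B model at Q4_K_M is roughly 4.4 GB on disk; assume a modest
# shared-VPS bandwidth of ~10 GB/s and 50% efficiency.
q4_7b = 4.4e9
print(round(est_tokens_per_sec(q4_7b, 10.0), 1))  # 1.1 tok/s under these assumptions
```

By this estimate the 2-3 tok/s target is plausible only if the VPS delivers noticeably more effective bandwidth than the assumed 10 GB/s, so it is worth benchmarking before committing.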
Thoughts? | 1,164 | Interesting take | 2025-12-08T20:25:29 | Salt_Armadillo8884 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1phn925 | false | null | t3_1phn925 | /r/LocalLLaMA/comments/1phn925/thoughts/ | false | false | default | 1,164 | {'enabled': True, 'images': [{'id': 'j6fp9xhsh16g1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/j6fp9xhsh16g1.jpeg?width=108&crop=smart&auto=webp&s=4c52f60d6b7f57641c2b8121a604d43dc9120be7', 'width': 108}, {'height': 233, 'url': 'https://preview.redd.it/j6fp9xhsh16g1.jpeg?width=216&crop=smart&auto=webp&s=7671582e9c92f66ab2c5579a161e56b119e27206', 'width': 216}, {'height': 346, 'url': 'https://preview.redd.it/j6fp9xhsh16g1.jpeg?width=320&crop=smart&auto=webp&s=5c51f478565f584689c2115bedc5334f25a1009e', 'width': 320}, {'height': 692, 'url': 'https://preview.redd.it/j6fp9xhsh16g1.jpeg?width=640&crop=smart&auto=webp&s=2979be36927eb9e804221b6247830706ea9e7487', 'width': 640}, {'height': 1038, 'url': 'https://preview.redd.it/j6fp9xhsh16g1.jpeg?width=960&crop=smart&auto=webp&s=23ff59a3a8127d0c43b6b5264bda21917d61e01b', 'width': 960}, {'height': 1167, 'url': 'https://preview.redd.it/j6fp9xhsh16g1.jpeg?width=1080&crop=smart&auto=webp&s=82363fa31d941c1272bcdf29d4c4d88fcc18c785', 'width': 1080}], 'source': {'height': 1423, 'url': 'https://preview.redd.it/j6fp9xhsh16g1.jpeg?auto=webp&s=fd25170004ff3263c480abce7a303da3514d6f3b', 'width': 1316}, 'variants': {}}]} | |
Upcoming models from llama.cpp support queue (This month or Jan possibly) | 56 | Added only PR items with enough progress.
* [EssentialAI/Rnj-1](https://github.com/ggml-org/llama.cpp/pull/17811) (Stats look better for its size)
* [moonshotai/Kimi-Linear-48B-A3B](https://github.com/ggml-org/llama.cpp/pull/17592) (Q4 of Qwen3-Next gave me 10+ t/s on my 8GB VRAM + 32GB RAM so this one could be better)
* [inclusionAI/LLaDA2.0-mini & inclusionAI/LLaDA2.0-flash](https://github.com/ggml-org/llama.cpp/pull/17454)
* [deepseek-ai/DeepSeek-OCR](https://github.com/ggml-org/llama.cpp/pull/17400)
* [Infinigence/Megrez2-3x7B-A3B](https://github.com/ggml-org/llama.cpp/pull/17141) (Glad they're in progress with this one after 2nd ticket)
The one below went stale and got closed. Really wanted to have this model(s) earlier.
[allenai/FlexOlmo-7x7B-1T](https://github.com/ggml-org/llama.cpp/issues/15585) | 2025-12-08T20:09:02 | https://www.reddit.com/r/LocalLLaMA/comments/1phmt95/upcoming_models_from_llamacpp_support_queue_this/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phmt95 | false | null | t3_1phmt95 | /r/LocalLLaMA/comments/1phmt95/upcoming_models_from_llamacpp_support_queue_this/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': 'ccYZItQmo-NHC3uBXlmNrOrCDm9B_K11CgBJIKGnRB0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ccYZItQmo-NHC3uBXlmNrOrCDm9B_K11CgBJIKGnRB0.png?width=108&crop=smart&auto=webp&s=07621479c3809501f8e568c7e7fb4646eaf9cf1b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ccYZItQmo-NHC3uBXlmNrOrCDm9B_K11CgBJIKGnRB0.png?width=216&crop=smart&auto=webp&s=1bb12e52cee8292e24889b4c412e472bdd8f4beb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ccYZItQmo-NHC3uBXlmNrOrCDm9B_K11CgBJIKGnRB0.png?width=320&crop=smart&auto=webp&s=3ae64cfe4917c281dd7249e21f4ab595fc56506f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ccYZItQmo-NHC3uBXlmNrOrCDm9B_K11CgBJIKGnRB0.png?width=640&crop=smart&auto=webp&s=aeb884c9de24a6340951c4b1ed8a5885aae0832b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ccYZItQmo-NHC3uBXlmNrOrCDm9B_K11CgBJIKGnRB0.png?width=960&crop=smart&auto=webp&s=6e958747dabf7fe8791e268720beaa71280ec7d3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ccYZItQmo-NHC3uBXlmNrOrCDm9B_K11CgBJIKGnRB0.png?width=1080&crop=smart&auto=webp&s=a60eb5992bb3f778cb07856d9e21d801089a046f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ccYZItQmo-NHC3uBXlmNrOrCDm9B_K11CgBJIKGnRB0.png?auto=webp&s=0326718968140791e1876c16e22e2a4fce9da9fd', 'width': 1200}, 'variants': {}}]} |
I forked Qodo's PR-Agent to make it work with Ollama. | 4 | I liked \[Qodo\](https://github.com/qodo-ai/pr-agent)'s idea of having my pull requests automatically described and reviewed by an LLM but I didn't like that it basically is hardwired to work with OpenAI.
Here's the link: [github](https://github.com/TobEnd/pr-agent) or [codeberg.org](https://codeberg.org/TobJEnd/Local-PR-Agent)
I tested it with a few PRs on my private Gitea instance and it's working, but I really haven't had the time yet to iron out all the kinks or test it with different models or GitLab or more complex prompts.
Take it for a test drive and tell me what you think. | 2025-12-08T20:08:19 | https://www.reddit.com/r/LocalLLaMA/comments/1phmsma/i_forked_qodos_pragent_to_make_it_work_with_ollama/ | corentic_eu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phmsma | false | null | t3_1phmsma | /r/LocalLLaMA/comments/1phmsma/i_forked_qodos_pragent_to_make_it_work_with_ollama/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'xU4X2KpP-3sjddXiLidVQ55YaFo4kT60GtwKYEtagsY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xU4X2KpP-3sjddXiLidVQ55YaFo4kT60GtwKYEtagsY.png?width=108&crop=smart&auto=webp&s=298b7cf7425a2c9ef3764b291414ef0f67d30b0f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xU4X2KpP-3sjddXiLidVQ55YaFo4kT60GtwKYEtagsY.png?width=216&crop=smart&auto=webp&s=8be68e7b2e0e6a19ee611d726dfe42b5326f152b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xU4X2KpP-3sjddXiLidVQ55YaFo4kT60GtwKYEtagsY.png?width=320&crop=smart&auto=webp&s=783413ec0b8eba11a18bec0473ae56bfa7830f67', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xU4X2KpP-3sjddXiLidVQ55YaFo4kT60GtwKYEtagsY.png?width=640&crop=smart&auto=webp&s=16650e0dbbef1e0ea80286abb2d85d64fa2a6001', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xU4X2KpP-3sjddXiLidVQ55YaFo4kT60GtwKYEtagsY.png?width=960&crop=smart&auto=webp&s=f73e83d0cdc5ddd9f220db81c9a3bdbce4f6250f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xU4X2KpP-3sjddXiLidVQ55YaFo4kT60GtwKYEtagsY.png?width=1080&crop=smart&auto=webp&s=19316066df05ae5f8a0c003c568ceac1c4fd0617', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xU4X2KpP-3sjddXiLidVQ55YaFo4kT60GtwKYEtagsY.png?auto=webp&s=4526caa8c3885c52982682e8f08fe91d4b94c028', 'width': 1200}, 'variants': {}}]} |
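For context, stock PR-Agent routes model calls through LiteLLM, so pointing it at a local Ollama server is usually a `configuration.toml` tweak along these lines (the model name, token limit, and port here are assumptions - check the pr-agent docs for your version):

```toml
# Sketch only: values below are illustrative assumptions.
[config]
model = "ollama/qwen2.5-coder"
custom_model_max_tokens = 32000

[ollama]
api_base = "http://localhost:11434"
```

If upstream config alone covers your use case, that may be an alternative to maintaining a fork.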
Fine-tuning for Lean | 3 | I'm interested to know I might be able to finetune a model for Lean mathematical proofs in the style of the Aristotle model made by Harmonic Ai.
I'm not sure if an LLM could even be finetuned to respond in Lean, or if it would need to be trained from scratch on pure Lean and "think in Lean" in order to respond in Lean.
Maybe training it to use the Lake compiler as an MCP tool could achieve the same outcome?
Any help appreciated. | 2025-12-08T20:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/1phmr7f/finetuning_for_lean/ | ThePrimeClock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1phmr7f | false | null | t3_1phmr7f | /r/LocalLLaMA/comments/1phmr7f/finetuning_for_lean/ | false | false | self | 3 | null |