| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best MoE models for a 4090: how to keep VRAM low without losing quality? | 0 | I'm currently self-hosting GPT-OSS 120B (MXFP4) with llama.cpp, offloading just the attention layers to the GPU. It works OK - not super fast, but the quality of responses is good enough. Because of the offloading setup, I always have to keep ~7.5 GB of the model in VRAM. I'm following this guide - [https://old.reddit.com/r/LocalLLaMA/comments/1mke7ef/120b_runs_awesome_on_just_8gb_vram/](https://old.reddit.com/r/LocalLLaMA/comments/1mke7ef/120b_runs_awesome_on_just_8gb_vram/)
Are there any more modern or lighter solutions with on-par answer quality?
The goal is to preserve at least the same quality of answers while reducing VRAM usage.
Hardware: I have RTX 4090 24GB VRAM, 196 GB RAM | 2026-01-12T14:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qawkdt/best_moe_models_for_4090_how_to_keep_vram_low/ | AdParty3888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qawkdt | false | null | t3_1qawkdt | /r/LocalLLaMA/comments/1qawkdt/best_moe_models_for_4090_how_to_keep_vram_low/ | false | false | self | 0 | null |
Best open coding model for 128GB RAM? [2026] | 9 | Hello,
What would be your suggestions for an open model to run locally with 128 GB RAM? devstral-small-2-24b-instruct-2512@8bit and max context, or another model? | 2026-01-12T14:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qawd0z/best_open_coding_model_for_128gb_ram_2026/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qawd0z | false | null | t3_1qawd0z | /r/LocalLLaMA/comments/1qawd0z/best_open_coding_model_for_128gb_ram_2026/ | false | false | self | 9 | null |
z.ai prepping for glm-image soon - here is what we know so far | 93 | GLM-Image supports both text-to-image and image-to-image generation within a single model
Text-to-image: generates high-detail images from textual descriptions, with particularly strong performance in information-dense scenarios.
Image-to-image: supports a wide range of tasks, including image editing, style transfer, multi-subject consistency, and identity-preserving generation for people and objects.
Architecture:
Autoregressive generator: a 9B-parameter model initialized from [GLM-4-9B-0414](https://huggingface.co/zai-org/GLM-4-9B-0414), with an expanded vocabulary to incorporate visual tokens. The model first generates a compact encoding of approximately 256 tokens, then expands to 1K–4K tokens, corresponding to 1K–2K high-resolution image outputs.
Diffusion decoder: a 7B-parameter decoder based on a single-stream DiT architecture for latent-space decoding.
[https://github.com/huggingface/diffusers/pull/12921](https://github.com/huggingface/diffusers/pull/12921)
[https://github.com/huggingface/transformers/pull/43100](https://github.com/huggingface/transformers/pull/43100) | 2026-01-12T14:28:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qawbjj/zai_prepping_for_glmimage_soon_here_is_what_we/ | MrAlienOverLord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qawbjj | false | null | t3_1qawbjj | /r/LocalLLaMA/comments/1qawbjj/zai_prepping_for_glmimage_soon_here_is_what_we/ | false | false | self | 93 | {'enabled': False, 'images': [{'id': 'P4z_SkrIKX5pS_yEph4NfCumh3uEz9HzrURsqI1fz6A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/P4z_SkrIKX5pS_yEph4NfCumh3uEz9HzrURsqI1fz6A.png?width=108&crop=smart&auto=webp&s=6bd043ec9beb305e45d98f5f7d446403ef64519c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/P4z_SkrIKX5pS_yEph4NfCumh3uEz9HzrURsqI1fz6A.png?width=216&crop=smart&auto=webp&s=83cab017e1735cf277da67b341873cf359f25492', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/P4z_SkrIKX5pS_yEph4NfCumh3uEz9HzrURsqI1fz6A.png?width=320&crop=smart&auto=webp&s=e895b72c6eea79fdb743f7f07869002a8c8989e3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/P4z_SkrIKX5pS_yEph4NfCumh3uEz9HzrURsqI1fz6A.png?width=640&crop=smart&auto=webp&s=9d6c93ecf8ed512a37897a6281f5d0ff1b7c6465', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/P4z_SkrIKX5pS_yEph4NfCumh3uEz9HzrURsqI1fz6A.png?width=960&crop=smart&auto=webp&s=e4ae6748d4480d851a650b2be79c578e1dcd9d2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/P4z_SkrIKX5pS_yEph4NfCumh3uEz9HzrURsqI1fz6A.png?width=1080&crop=smart&auto=webp&s=d3a517ca12ef53745c6e7e3b9c64818bf82d17b9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/P4z_SkrIKX5pS_yEph4NfCumh3uEz9HzrURsqI1fz6A.png?auto=webp&s=1164ef3e7e9e16725ab2f2ab3e642b7d82a1aff7', 'width': 1200}, 'variants': {}}]} |
MiniMax Coding Plan - $2/month AI API that works with Cursor, Claude Code, Cline (+ 10% off) | 0 |
Hey everyone,
Wanted to share a deal I found for those using AI coding assistants.
**What is it?**
MiniMax has a "Coding Plan" - unlimited API access to their M2.1 model for $2/month (starter tier). It works with basically every AI coding tool:
- Cursor
- Claude Code
- Cline
- Roo Code
- OpenCode
- Kilo Code
- Trae
- Grok CLI
- Codex CLI
- Droid
**Why it's interesting:**
- Way cheaper than OpenAI/Anthropic API costs
- M2.1 is surprisingly capable for coding tasks
- Works as a drop-in replacement in most tools
**The deal:**
- $2/month starter plan runs until Jan 15, 2026
- Referral program gives 10% off your first payment
If you want to try it: [https://tencent-source.github.io/minimax-coding-plan/](https://tencent-source.github.io/minimax-coding-plan/)
That's my referral page - you get 10% off, I get some API credits. Win-win. | 2026-01-12T14:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qaw3pd/minimax_coding_plan_2month_ai_api_that_works_with/ | TheHentaiCulture | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaw3pd | false | null | t3_1qaw3pd | /r/LocalLLaMA/comments/1qaw3pd/minimax_coding_plan_2month_ai_api_that_works_with/ | false | false | self | 0 | null |
How to make good RAG with spreadsheets and other tabular data such as SQL? | 1 | The issue is that I have various types of spreadsheets and tabular data on multiple subjects across several pages, so it's quite complex. I'm looking for something 100% local. Any response would be appreciated. | 2026-01-12T14:16:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qaw0jw/how_to_make_good_rag_with_spreadsheets_and_other/ | Ga_0512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaw0jw | false | null | t3_1qaw0jw | /r/LocalLLaMA/comments/1qaw0jw/how_to_make_good_rag_with_spreadsheets_and_other/ | false | false | self | 1 | null |
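For anyone tackling the same problem, here is a minimal, fully local sketch of the usual first step: serialize each spreadsheet row into a self-describing text chunk (sheet name + headers + values) before embedding, so retrieval keeps the table context. The file name and columns are illustrative; the same per-row pattern works for SQL tables via `pd.read_sql`.

```python
# Minimal sketch: flatten spreadsheet rows into self-describing text chunks
# suitable for any local embedding model + vector store. Illustrative only.
import pandas as pd

def rows_to_chunks(path: str) -> list[str]:
    chunks = []
    # sheet_name=None loads every sheet as {sheet_name: DataFrame}
    for sheet, df in pd.read_excel(path, sheet_name=None).items():
        for _, row in df.iterrows():
            fields = "; ".join(f"{col}: {row[col]}" for col in df.columns)
            chunks.append(f"[sheet: {sheet}] {fields}")
    return chunks

chunks = rows_to_chunks("finance.xlsx")  # hypothetical file
print(chunks[0])  # e.g. "[sheet: Q3] region: EMEA; revenue: 1.2M; ..."
```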
Coding LLM Model | 2 | Hey guys, I just bought an M4 Pro MacBook with 48 GB RAM. What would be the best coding model to run on it locally? Thanks! | 2026-01-12T13:52:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qavfcg/coding_llm_model/ | plugshawtycft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qavfcg | false | null | t3_1qavfcg | /r/LocalLLaMA/comments/1qavfcg/coding_llm_model/ | false | false | self | 2 | null |
[Release] Eva-4B: Specialized Financial Evasion Detection (Based on Qwen3-4B). Outperforms GPT-5.2 on domain benchmarks. | 181 | Hi r/LocalLLaMA,
I'm excited to share **Eva-4B**, a specialized 4B-parameter model designed to detect evasive answers in corporate earnings call Q&A sessions.
**What it does:**
It classifies answers into `direct`, `intermediate`, or `fully_evasive` (using the Rasiah framework). It helps identify when executives are sidestepping analysts' questions.
**Why use this over a general LLM?**
* **Performance:** On our 1,000-sample human-annotated test set, Eva-4B achieves **81.3% accuracy**, beating GPT-5.2 (80.5%) and coming close to GLM-4.7 and Gemini-3-Flash.
* **Efficiency:** It's a 4B model (Qwen3 base), making it extremely cheap to run locally or in production pipelines compared to querying Opus or GPT-5.
* **Data:** Fine-tuned on 30k samples constructed via a multi-model consensus (Claude Opus + Gemini) + LLM-as-Judge pipeline.
**Links:**
* Hugging Face: [https://huggingface.co/FutureMa/Eva-4B](https://huggingface.co/FutureMa/Eva-4B)
I'd love to hear your feedback or see how it performs on your own financial text samples! | 2026-01-12T13:26:39 | Awkward_Run_9982 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qautxm | false | null | t3_1qautxm | /r/LocalLLaMA/comments/1qautxm/release_eva4b_specialized_financial_evasion/ | false | false | default | 181 | {'enabled': True, 'images': [{'id': 'bawipxvf6xcg1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/bawipxvf6xcg1.png?width=108&crop=smart&auto=webp&s=24b76b41e479e402f9a3072de64c826929b9bf62', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/bawipxvf6xcg1.png?width=216&crop=smart&auto=webp&s=e6c0df571d0f9eb68ff014f0eaa1c5dbbd56d105', 'width': 216}, {'height': 217, 'url': 'https://preview.redd.it/bawipxvf6xcg1.png?width=320&crop=smart&auto=webp&s=88ec5207c706ac04173262d87eac77e92ce3fb75', 'width': 320}, {'height': 435, 'url': 'https://preview.redd.it/bawipxvf6xcg1.png?width=640&crop=smart&auto=webp&s=5b7154c655fd6869091d5814b034f797a1f34845', 'width': 640}], 'source': {'height': 626, 'url': 'https://preview.redd.it/bawipxvf6xcg1.png?auto=webp&s=c548181aabb4c2ec1c781df397335faeba086ac6', 'width': 921}, 'variants': {}}]} | |
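For anyone wanting to try it, here is a minimal inference sketch, assuming Eva-4B follows the standard Qwen3 chat interface; the prompt wording and example Q&A pair are illustrative (the model card is authoritative for the real prompt/label scheme):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FutureMa/Eva-4B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative earnings-call Q&A pair; the label set comes from the post.
qa_pair = ("Q: Will margins recover next quarter?\n"
           "A: We remain focused on long-term value creation across the portfolio.")
messages = [{"role": "user", "content":
             f"Classify the answer as direct, intermediate, or fully_evasive.\n{qa_pair}"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=16)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```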
We built a privacy oriented, local-first and transparent context IDE. No subscriptions. | 4 | Hi r/LocalLLaMA,
We have been around for a while. We noticed subscription fatigue around AI and agent tools, and we wanted less of a black box where we can't tell how our context is being sent to the cloud.
With that in mind, we have been spending our time building **Ollie IDE**.
**The Philosophy:**
* **"Glass-Box" Transparency:** We wanted to see *exactly* what tokens and system prompts are being sent. The IDE shows you the raw context window so you know what the model actually sees.
* **Local-First:** It’s designed to run 100% offline. It hooks into your local Ollama natively. Remote models are also available.
* **One-Time Purchase:** Trying to break the subscription cycle. You buy it once, you own the binary forever. No data mining, no telemetry, no recurring billing.
**The Tech:**
* Native builds for Mac/Linux/Win.
* Custom Agent configuration and creation (you can swap system prompts per-chat).
* Specialized tools for code, rich text, images, 3D objects and more.
**Where to get it:** [Try Ollie](https://costa-and-associates.com/ollie)
**Feedback:** [Bug Reports & Suggestions](https://www.reddit.com/user/Ollie_IDE/comments/1q9oyrn/ollie_ide_bug_reports_feedback_feature_requests/)
Cheers, u/Ollie_IDE (and Ollie) | 2026-01-12T13:25:15 | https://www.reddit.com/gallery/1qausr4 | Ollie_IDE | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qausr4 | false | null | t3_1qausr4 | /r/LocalLLaMA/comments/1qausr4/we_built_a_privacy_oriented_localfirst_and/ | false | false | 4 | null | |
Battle of AI Gateways: Rust vs. Python for AI Infrastructure: Bridging a 3,400x Performance Gap | 0 | 2026-01-12T13:00:45 | https://vidai.uk/blog/rust-python-vidai | Guna1260 | vidai.uk | 1970-01-01T00:00:00 | 0 | {} | 1qau8wx | false | null | t3_1qau8wx | /r/LocalLLaMA/comments/1qau8wx/battle_of_ai_gateways_rust_vs_python_for_ai/ | false | false | default | 0 | null | |
nvidia/nemotron-speech-streaming-en-0.6b | 4 | Has anyone used nvidia/nemotron-speech-streaming-en-0.6b ?
How is it?
Noticed it dropped recently and seems efficient | 2026-01-12T12:42:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qatvsc/nvidianemotronspeechstreamingen06b/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qatvsc | false | null | t3_1qatvsc | /r/LocalLLaMA/comments/1qatvsc/nvidianemotronspeechstreamingen06b/ | false | false | self | 4 | null |
GLM-4.7 218B REAP model by Cerebras | 79 | [https://huggingface.co/cerebras/GLM-4.7-REAP-218B-A32B](https://huggingface.co/cerebras/GLM-4.7-REAP-218B-A32B)
Curious to see how the quantized versions will perform. | 2026-01-12T12:33:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qatpgb/glm47_218b_reap_model_by_cerebras/ | ResearchWheel5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qatpgb | false | null | t3_1qatpgb | /r/LocalLLaMA/comments/1qatpgb/glm47_218b_reap_model_by_cerebras/ | false | false | self | 79 | {'enabled': False, 'images': [{'id': '5hX6_0eraUgqUctJmYcs6QnTfCtzJuohSgWkU8_UCSk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5hX6_0eraUgqUctJmYcs6QnTfCtzJuohSgWkU8_UCSk.png?width=108&crop=smart&auto=webp&s=1a1e119b8f95e5095f595631bdf125d7c1c3d002', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5hX6_0eraUgqUctJmYcs6QnTfCtzJuohSgWkU8_UCSk.png?width=216&crop=smart&auto=webp&s=b5a429bcf260855767f006c61f3c638448ff4531', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5hX6_0eraUgqUctJmYcs6QnTfCtzJuohSgWkU8_UCSk.png?width=320&crop=smart&auto=webp&s=d227add14a649d09f0924b69bb69cba800068361', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5hX6_0eraUgqUctJmYcs6QnTfCtzJuohSgWkU8_UCSk.png?width=640&crop=smart&auto=webp&s=268e6ef84de40fa7091b8db57b1cb58b076cae75', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5hX6_0eraUgqUctJmYcs6QnTfCtzJuohSgWkU8_UCSk.png?width=960&crop=smart&auto=webp&s=ff01e9ac6f2835232089a02b4243ca161fc3469e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5hX6_0eraUgqUctJmYcs6QnTfCtzJuohSgWkU8_UCSk.png?width=1080&crop=smart&auto=webp&s=eb765adf30c5fe096ba0e765b7f95bd9160a95ae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5hX6_0eraUgqUctJmYcs6QnTfCtzJuohSgWkU8_UCSk.png?auto=webp&s=d733d03aba378d23c9cc73c446a10cea994610fe', 'width': 1200}, 'variants': {}}]} |
Building a free K-10 education platform - seeking advice on transitioning from Google AI Studio to local LLMs | 1 | Hey everyone,
I need your help in improving a gratis access K-10 education platform. I think this community's expertise is exactly what I need.
**The project:** I've built an educational platform for Grades 1-10 aimed at students who can't afford tutoring or premium EdTech subscriptions. Currently it runs on Google AI Studio API keys (free tier), which works for limited usage but isn't sustainable or truly "free as in freedom."
**The goal:** I want to transition to local LLMs so the platform can be:
- Self-hosted by schools/NGOs in low-resource settings
- Truly free with no API costs or usage caps
- Private (student data never leaves the local network)
**Where I need help:**
1. **Model recommendations** - What would you suggest for educational Q&A, explanation generation, and simple tutoring for K-10? Needs to be coherent but doesn't need to be cutting-edge. Thinking Mistral 7B or Phi-3 territory?
2. **Deployment reality check** - What's the minimum viable hardware to run inference for a small school (~20-50 concurrent users)? Is this even realistic without GPU infrastructure?
3. **Quantization trade-offs** - For educational content, how much quality loss is acceptable with Q4/Q5 quantization?
4. **Anyone done similar?** - Would love to connect if you've deployed local LLMs for education in resource-constrained environments.
Happy to share more details about the architecture. Not here to promote anything - genuinely seeking guidance from people who've done the hard work of making local inference practical.
Thanks for reading 🙏 | 2026-01-12T12:32:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qatop7/building_a_free_k10_education_platform_seeking/ | ordin8forgood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qatop7 | false | null | t3_1qatop7 | /r/LocalLLaMA/comments/1qatop7/building_a_free_k10_education_platform_seeking/ | false | false | self | 1 | null |
I kept breaking multi-agent systems on shared state, so I built a small OSS thing. Looking for a few builders to test. | 2 | I have been building multi-agent systems for a while and kept running into the same issue. Once agents are split across processes or services, shared state becomes fragile. Retries overwrite things, handoffs break, and debugging turns into digging through logs.
I ended up building a small open source, self-hosted project called memX to scratch my own itch. It is a shared memory layer for agents, backed by Redis, with schema validation and pub/sub so agents can react when state changes. It does not replace LangGraph state; it lives outside the orchestrator and is meant for cases where multiple agents or workflows need to share state safely.
Repo here: https://github.com/MehulG/memX
It is early and a bit rough. I am not selling anything.
I am looking for 2 or 3 people who are already running multi-agent workflows and are open to trying this on just one workflow and telling me honestly what works and what does not. I will help with setup and integration myself. Even a "this is useless" response would help.
If this sounds relevant, feel free to comment or DM. Happy to walk through a simple A to B to C handoff example. | 2026-01-12T11:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qasvdm/i_kept_breaking_multiagent_systems_on_shared/ | Temporary-Tap-7323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qasvdm | false | null | t3_1qasvdm | /r/LocalLLaMA/comments/1qasvdm/i_kept_breaking_multiagent_systems_on_shared/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'vgA_pk1rOZOOdU_a3MTzPOFnHBmux9oPwAGEMGsNuIU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vgA_pk1rOZOOdU_a3MTzPOFnHBmux9oPwAGEMGsNuIU.png?width=108&crop=smart&auto=webp&s=003b50f3da10cc7e84912b6bd35bd822f7595f64', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vgA_pk1rOZOOdU_a3MTzPOFnHBmux9oPwAGEMGsNuIU.png?width=216&crop=smart&auto=webp&s=778a48d0da20aa1442629cab8da3f75bf642e88f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vgA_pk1rOZOOdU_a3MTzPOFnHBmux9oPwAGEMGsNuIU.png?width=320&crop=smart&auto=webp&s=3a6b23b537fcce7a89d9fe2c3946baa4a97851f3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vgA_pk1rOZOOdU_a3MTzPOFnHBmux9oPwAGEMGsNuIU.png?width=640&crop=smart&auto=webp&s=34973b36507bb6cfee9e669922cd4a1f3d1e84be', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vgA_pk1rOZOOdU_a3MTzPOFnHBmux9oPwAGEMGsNuIU.png?width=960&crop=smart&auto=webp&s=81da04275ee674a91f5e4d8b0c8e061052746bf5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vgA_pk1rOZOOdU_a3MTzPOFnHBmux9oPwAGEMGsNuIU.png?width=1080&crop=smart&auto=webp&s=d59f0c0295fc84058fccbf79fa879d04d4c01c89', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vgA_pk1rOZOOdU_a3MTzPOFnHBmux9oPwAGEMGsNuIU.png?auto=webp&s=22be596bba0b3543da775f1dbc830d1bac2d548e', 'width': 1200}, 'variants': {}}]} |
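For context, here is a minimal sketch of the shared-state pattern described above (set a key, validate it, publish a change notification) using plain redis-py; the key names, channel naming, and toy schema check are illustrative, not memX's actual API:

```python
import json
import redis

r = redis.Redis(decode_responses=True)

def set_state(key: str, value: dict) -> None:
    # Toy "schema validation": require a status field before accepting state.
    if "status" not in value:
        raise ValueError("state must include a 'status' field")
    r.set(key, json.dumps(value))
    r.publish(f"state:{key}", json.dumps(value))  # notify subscribed agents

def watch(key: str) -> None:
    # Runs inside a subscribing agent's process; blocks and reacts to changes.
    sub = r.pubsub()
    sub.subscribe(f"state:{key}")
    for msg in sub.listen():
        if msg["type"] == "message":
            print("state changed:", json.loads(msg["data"]))

set_state("task:42", {"status": "handoff", "owner": "agent-b"})
```

In practice `watch()` would run in each subscribing agent's process, so the A to B to C handoff becomes a chain of publish/react steps rather than polling.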
Which are the exacto-like providers? | 0 | What are the reliable providers you use with OSS models? I mean, which ones don't use bad quantization or other tricks?
I looked at OpenRouter's exacto models, and these are the providers selected for them. Can they all be trusted for quality/quantization?
- deepinfra
- novita
- groq
- z-ai
- moonshotai
- atlas-cloud
- baseten | 2026-01-12T11:02:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qas0r8/which_are_the_exactolike_providers/ | hyperknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qas0r8 | false | null | t3_1qas0r8 | /r/LocalLLaMA/comments/1qas0r8/which_are_the_exactolike_providers/ | false | false | self | 0 | null |
What if the Universe has a Backend? Introducing the Index Resonance (RI) Framework. | 0 | The RI Model (Index Resonance) is a philosophical and mathematical framework describing the "source code" of the Universe. RI explains how data is processed "under the hood" of existence. Presenting the 9 Fundamental Laws: #Metaphysics #Philosophy #RIModel | 2026-01-12T11:00:02 | https://www.reddit.com/gallery/1qarywq | Erikqamalyan1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qarywq | false | null | t3_1qarywq | /r/LocalLLaMA/comments/1qarywq/what_if_the_universe_has_a_backend_introducing/ | false | false | 0 | null |
Supertonic 2 TTS available on Hugging Face! | 62 | demo: [https://huggingface.co/spaces/Supertone/supertonic-2](https://huggingface.co/spaces/Supertone/supertonic-2)
model: [https://huggingface.co/Supertone/supertonic-2](https://huggingface.co/Supertone/supertonic-2)
Now in 5 languages (EN, KO, ES, PT, FR), generates 1 sec of audio in 0.006 sec. | 2026-01-12T10:59:17 | https://v.redd.it/cgp59yzfgwcg1 | paf1138 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qarygi | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cgp59yzfgwcg1/DASHPlaylist.mpd?a=1770807569%2CMzBlYzQ0ZTQ5ZjkwODQzM2VmMGZjMjgzOGY2NDA0MTJiZDEyMmNkZjRkMTc4NjFhZTQzZTUxMGQ2ZGFlY2Y1ZA%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/cgp59yzfgwcg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/cgp59yzfgwcg1/HLSPlaylist.m3u8?a=1770807569%2CMWEyNThkY2ViMWM4MzY1NmI4NGFhMjMzYjkxMjQ1ZDA0MGQ5Mjk2NjU1YzNjMTU1MjNmN2RkNjJmZGM2Yzk1Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cgp59yzfgwcg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1248}} | t3_1qarygi | /r/LocalLLaMA/comments/1qarygi/supertonic_2_tts_available_on_hugging_face/ | false | false | 62 | {'enabled': False, 'images': [{'id': 'bW0xNzkxMGdnd2NnMUnVCiAxQGnt1ilbiC_caBfY1exLsRF9vnxWGz-1gEiE', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/bW0xNzkxMGdnd2NnMUnVCiAxQGnt1ilbiC_caBfY1exLsRF9vnxWGz-1gEiE.png?width=108&crop=smart&format=pjpg&auto=webp&s=da2f1da131d96d3c8b8d8627ece2d728f2be85ac', 'width': 108}, {'height': 186, 'url': 'https://external-preview.redd.it/bW0xNzkxMGdnd2NnMUnVCiAxQGnt1ilbiC_caBfY1exLsRF9vnxWGz-1gEiE.png?width=216&crop=smart&format=pjpg&auto=webp&s=074ae50bdd90c38c37243588de0c367dee106c2f', 'width': 216}, {'height': 276, 'url': 'https://external-preview.redd.it/bW0xNzkxMGdnd2NnMUnVCiAxQGnt1ilbiC_caBfY1exLsRF9vnxWGz-1gEiE.png?width=320&crop=smart&format=pjpg&auto=webp&s=88156120c60877e3fb89713a78f6a2a059c8a5fc', 'width': 320}, {'height': 553, 'url': 'https://external-preview.redd.it/bW0xNzkxMGdnd2NnMUnVCiAxQGnt1ilbiC_caBfY1exLsRF9vnxWGz-1gEiE.png?width=640&crop=smart&format=pjpg&auto=webp&s=e727b5489a6117573e015a12a6a56372cfa88d52', 'width': 640}, {'height': 830, 'url': 'https://external-preview.redd.it/bW0xNzkxMGdnd2NnMUnVCiAxQGnt1ilbiC_caBfY1exLsRF9vnxWGz-1gEiE.png?width=960&crop=smart&format=pjpg&auto=webp&s=dcf00b36643ab45eed1425b23e826fd2f5b82c89', 'width': 960}, {'height': 934, 'url': 'https://external-preview.redd.it/bW0xNzkxMGdnd2NnMUnVCiAxQGnt1ilbiC_caBfY1exLsRF9vnxWGz-1gEiE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b18f0afa86d6824c5c78d9de32842369e5e98188', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bW0xNzkxMGdnd2NnMUnVCiAxQGnt1ilbiC_caBfY1exLsRF9vnxWGz-1gEiE.png?format=pjpg&auto=webp&s=81665dfab363171cec11cb899c85c0dc8e60eb86', 'width': 1248}, 'variants': {}}]} | |
SCAM EXPOSED: Blackbox AI "Pro Max" ($40/mo) is FAKE. I tested all 5 top models - here is the proof. | 0 | I subscribed to the Blackbox AI "Pro Max" plan ($40/month) to get access to the latest frontier models like **Claude Opus 4.5**, **Gemini 3 Pro**, and **GPT-5.2**.
After noticing the quality was low, I ran a systematic audit of every single "Pro" model using specific "knowledge cutoff" tests.
**The Result:** NOT A SINGLE MODEL IS REAL. They are all older, cheaper models wrapped in a fake UI label.
**The Proof (See attached screenshots):**
**1. Gemini 3 Pro (FAKE)**
* **The Test:** "Are you Gemini 3 Pro?"
* **The Response:** *"I am not Gemini 3 Pro. As of right now, Google has not released a model called 'Gemini 3'... My current architecture is based on the **Gemini 1.5 series**."*
* **Verdict:** It explicitly admitted it is the old Gemini 1.5.
**2. Claude Sonnet 4.5 (FAKE)**
* **The Test:** "State your internal version string."
* **The Response:** *"I am **Claude 3.5 Sonnet**... My knowledge cutoff date is **April 2024**."*
* **Verdict:** Confirmed old model. Real Sonnet 4.5 was released in late 2025.
**3. Grok-4 (FAKE)**
* **The Test:** "Who won the US Election in Nov 2024?"
* **The Response:** *"The November 2024 US election results... are unknown to me—**future events** I can't predict."*
* **Verdict:** It thinks it is still mid-2024. Real Grok-4 (released July 2025) would know the election winner instantly.
**4. Claude Opus 4.5 (FAKE)**
* **The Test:** "Who won Super Bowl LIX (Feb 2025)?"
* **The Response:** *"This falls right at the edge of my knowledge cutoff. I don't have reliable information."*
* **Verdict:** Real Claude Opus 4.5 has a cutoff of **May 2025** and knows the Super Bowl results perfectly. This is clearly an older model.
**5. GPT-5.2 (FAKE)**
* **The Test:** "Describe a headline from August 2025."
* **The Response:** *"I can't reliably provide a specific major global news headline from August 2025."*
* **Verdict:** Real GPT-5.2 (released Dec 2025) has an August 2025 cutoff and native knowledge of these events.
**Conclusion:**
Blackbox AI is charging premium prices for a dashboard that simply routes your requests to older, cheaper models (likely Gemini 1.5 Flash and Claude 3.5 Sonnet). Do not waste your money on this scam. | 2026-01-12T10:55:34 | https://www.reddit.com/gallery/1qarw69 | frankierave889 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qarw69 | false | null | t3_1qarw69 | /r/LocalLLaMA/comments/1qarw69/scam_exposed_blackbox_ai_pro_max_40mo_is_fake_i/ | false | false | 0 | null | |
What I use for my MCP Server | 0 | Apparently this thing has only two downloads and I am one of them. Anyway this thing is basically responsible for the backend of my server.
I just know it works. Don't ask me how
Don't ask me any more questions about open sourcing my code or what MCP server I use. The answer is 127.0.0.1 lol
Shout out to whoever made this 👇 and shout out LM Studio
https://www.piwheels.org/project/mcp-streamablehttp-proxy/ | 2026-01-12T10:29:15 | https://v.redd.it/6qm5e7l7bwcg1 | Serious_Molasses313 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qargb4 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6qm5e7l7bwcg1/DASHPlaylist.mpd?a=1770805778%2CMzE2NDJhYTA3Yzc5ZTM2ZWUwNjAyYmEyOTM3NThlZGI4NjYzMjVhOTcxYWU0OTllNzBlMDUxODNjZDRlMmY4Yg%3D%3D&v=1&f=sd', 'duration': 85, 'fallback_url': 'https://v.redd.it/6qm5e7l7bwcg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/6qm5e7l7bwcg1/HLSPlaylist.m3u8?a=1770805778%2CNDFiMDJjMmMxNjA3OGJiMGYwODhhZDMwMTgyZmQ0YjUxNmEzZjkxODExYzMwNDM5MmRkMTAxNmYzNmUyMWUxYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6qm5e7l7bwcg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 856}} | t3_1qargb4 | /r/LocalLLaMA/comments/1qargb4/what_i_use_for_my_mcp_server/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cXhtcjM0bTdid2NnMblUwx_jfxc-ZEGTmtiGzsVIWHhptPO8baqLaVylwbet', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/cXhtcjM0bTdid2NnMblUwx_jfxc-ZEGTmtiGzsVIWHhptPO8baqLaVylwbet.png?width=108&crop=smart&format=pjpg&auto=webp&s=7a2cacb6e0d3639dfbd06950073d5e9757e0ad65', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/cXhtcjM0bTdid2NnMblUwx_jfxc-ZEGTmtiGzsVIWHhptPO8baqLaVylwbet.png?width=216&crop=smart&format=pjpg&auto=webp&s=8bcfce299fe3acfec2d89addbf7a812fa0adf4a5', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/cXhtcjM0bTdid2NnMblUwx_jfxc-ZEGTmtiGzsVIWHhptPO8baqLaVylwbet.png?width=320&crop=smart&format=pjpg&auto=webp&s=c277e554d75004df419ef6ce4490b204df7bdb5a', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/cXhtcjM0bTdid2NnMblUwx_jfxc-ZEGTmtiGzsVIWHhptPO8baqLaVylwbet.png?width=640&crop=smart&format=pjpg&auto=webp&s=13553f3af4782b8c6456fd42a8a8e1be4aa76ee2', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/cXhtcjM0bTdid2NnMblUwx_jfxc-ZEGTmtiGzsVIWHhptPO8baqLaVylwbet.png?format=pjpg&auto=webp&s=db256d798e88e0f6c3e1822822bf4507476aae5d', 'width': 778}, 'variants': {}}]} | |
Is anyone offering compute to finetune a unique GPT-OSS model? Trying to build an MLA diffusion language model. | 15 | I'm currently experimenting with GPT-OSS. Inspired by many recent MLA/diffusion models, I'm trying to convert GPT-OSS into an MLA diffusion model. I'm mostly trying to implement it and get inference working on an H100, and I've been using whatever I can get on vast.ai (8x RTX PRO 6000 / 8x B200) or anywhere else with cheap compute. But training a 120B is super difficult and expensive, so I'm working on data filtering, using embeddings first to get a much smaller high-quality dataset, and experimenting a lot with newer finetuning techniques and methods.
If anyone is interested, I would love your help! Please feel free to comment and I will reach out. Anyone on Discord (_radna) can also reach me 24/7 | 2026-01-12T09:45:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qaqqqn/is_anyone_offering_compute_to_finetune_a_unique/ | Ok_Difference_4483 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaqqqn | false | null | t3_1qaqqqn | /r/LocalLLaMA/comments/1qaqqqn/is_anyone_offering_compute_to_finetune_a_unique/ | false | false | self | 15 | null |
Here's my opinion on Mac Studio vs RTX 5090. Do you think otherwise? If yes, lets discuss. | 0 | 2026-01-12T09:21:19 | https://theneuralpost.com/2025/12/16/mac-studio-vs-nvidia-rtx-5090-the-ultimate-local-ai-hardware-battle/ | Puzzled_Definition14 | theneuralpost.com | 1970-01-01T00:00:00 | 0 | {} | 1qaqdhg | false | null | t3_1qaqdhg | /r/LocalLLaMA/comments/1qaqdhg/heres_my_opinion_on_mac_studio_vs_rtx_5090_do_you/ | false | false | default | 0 | null | |
local ai agnet on gtx 1080ti pycharm+lmstudio | 0 | 2026-01-12T08:27:26 | https://www.youtube.com/watch?v=MAXAaa4Md0M | Legion10008 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1qapjf1 | false | {'oembed': {'author_name': 'Combo_Ai', 'author_url': 'https://www.youtube.com/@comboai1000', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/MAXAaa4Md0M?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="local AI agent ana 1.3 on gtx 1080ti"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/MAXAaa4Md0M/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'local AI agent ana 1.3 on gtx 1080ti', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1qapjf1 | /r/LocalLLaMA/comments/1qapjf1/local_ai_agnet_on_gtx_1080ti_pycharmlmstudio/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'B8DaNUP4oAnjEBKbBie4OYnvbKUw0DP5t1jHTK8rQ6o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/B8DaNUP4oAnjEBKbBie4OYnvbKUw0DP5t1jHTK8rQ6o.jpeg?width=108&crop=smart&auto=webp&s=4f4ec61e627dc5e3b4d3f626d59ddbe69b34bdf4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/B8DaNUP4oAnjEBKbBie4OYnvbKUw0DP5t1jHTK8rQ6o.jpeg?width=216&crop=smart&auto=webp&s=c677b78d35253e7188a624ddab6937fdecc8f23f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/B8DaNUP4oAnjEBKbBie4OYnvbKUw0DP5t1jHTK8rQ6o.jpeg?width=320&crop=smart&auto=webp&s=5cc601638a082ee95e766f70bbb5d72f58f1a516', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/B8DaNUP4oAnjEBKbBie4OYnvbKUw0DP5t1jHTK8rQ6o.jpeg?auto=webp&s=f0981647ead25f51569a0c4aac0c215cd6a3698f', 'width': 480}, 'variants': {}}]} | |
Stable PCIe 5.0 long (>0.5m) riser cable, or something else? | 1 | 2026-01-12T07:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qaowbc/stable_pcie_50_long_05m_riser_cable_or_something/ | Comfortable-Plate467 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaowbc | false | null | t3_1qaowbc | /r/LocalLLaMA/comments/1qaowbc/stable_pcie_50_long_05m_riser_cable_or_something/ | false | false | 1 | null |
Can I use my 4070 laptop to finetune LLMs, like Llama 3.1 8B or bigger? | 0 | I have a laptop with these specs:
- RTX 4070 (laptop)
- i7 14650
- 16 GB RAM
If I can't, what is the best setup to finetune for free? Is it Colab, or are there better options? | 2026-01-12T07:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qaopjs/can_i_use_my_4070_laptop_to_finetune_llms_like/ | Beyond_Birthday_13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaopjs | false | null | t3_1qaopjs | /r/LocalLLaMA/comments/1qaopjs/can_i_use_my_4070_laptop_to_finetune_llms_like/ | false | false | self | 0 | null |
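For reference, here is a minimal QLoRA sketch for roughly this class of hardware, assuming the laptop 4070's 8 GB of VRAM; the model ID and hyperparameters are illustrative, and 4-bit Llama 3.1 8B with short sequences and tiny batches is about the ceiling here:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative
bnb = BitsAndBytesConfig(
    load_in_4bit=True,               # 4-bit NF4 keeps the base model ~5 GB
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto")

lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small adapter weights train
```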
Remote Code Execution in OpenCode, update now | 5 | 2026-01-12T07:14:36 | https://cy.md/opencode-rce/ | rm-rf-rm | cy.md | 1970-01-01T00:00:00 | 0 | {} | 1qaodvw | false | null | t3_1qaodvw | /r/LocalLLaMA/comments/1qaodvw/remote_code_execution_in_opencode_update_now/ | false | false | default | 5 | null | |
Accessibility app idea (I don't know if it exists, maybe someone can make it a reality) | 0 | A few days ago, I was in a bookstore when a blind customer arrived. It struck me how challenging it can be for someone who is blind and alone with only their guide dog—to accomplish something as simple as buying a specific-expensive pen.
(It was Christmas, so he was likely buying the pen as a gift for the person who cares for him.)
I don’t have the expertise or resources to develop a APP myself, but if something like this doesn’t already exist, perhaps someone out there could create it.
Models like Qwen-2B-VL (Q8\_0) use only about 500 MB of RAM, and I’ve seen that small language models can now run efficiently even at good speeds on mid-range smartphones. That kind of technology could potentially be part of an accessibility solution.
| 2026-01-12T07:08:59 | Shockbum | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qaoai0 | false | null | t3_1qaoai0 | /r/LocalLLaMA/comments/1qaoai0/accessibility_app_idea_i_dont_know_if_it_exists/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'afu-33QcVvWIBuX7jIdp0ESAk5ki-uWrTIM85voETds', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/qm8hrshr9vcg1.jpeg?width=108&crop=smart&auto=webp&s=5a4afeb20194d8ad53c3bc00a1d1c69d1ff127cd', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/qm8hrshr9vcg1.jpeg?width=216&crop=smart&auto=webp&s=a9a3436c823fb2946ad99e3664cc1e8468efc6e7', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/qm8hrshr9vcg1.jpeg?width=320&crop=smart&auto=webp&s=1805910b7937915d0775496abf146f9e703ae080', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/qm8hrshr9vcg1.jpeg?width=640&crop=smart&auto=webp&s=9b75860b097cd64bb2eb6d72f3d05bc78ad1be1f', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/qm8hrshr9vcg1.jpeg?width=960&crop=smart&auto=webp&s=901c1ae6a7c4d3a9a1d414fc403e91a08e9366cf', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/qm8hrshr9vcg1.jpeg?width=1080&crop=smart&auto=webp&s=1a5faea295b1dea43aa9ad92fc8ee53c9ac8656b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/qm8hrshr9vcg1.jpeg?auto=webp&s=d3e97b9cd59ccf2351df7d2b61757114a250d793', 'width': 1920}, 'variants': {}}]} | ||
Agentic ProbLLMs: Exploiting AI Computer-Use and Coding Agents (youtube) -- "local" can make people complacent on security, but if you push code to GitHub, it's worth a watch, even if you don't use AI coding tools. | 14 | Good talk at the 39C3 conference: [https://www.youtube.com/watch?v=8pbz5y7_WkM](https://www.youtube.com/watch?v=8pbz5y7_WkM)
Nothing novel, no breaking news here, but a nice tight overview of the landscape, including a great segment on AgentHopper, which is basically a virus framework spread by coding agents via your local environment and push/pull to GitHub.
* **Adversarial Misclassification in Vision & Text Models** \[[00:42](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=42)\], \[[45:03](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=2703)\]
* The speaker demonstrates how hidden commands in images or text (like invisible Unicode tags) can force major AI models like Gemini and Grok to misclassify a panda as a monkey or answer "42" to "1+1".
* **Malware Download via Computer-Use Agents** \[[08:13](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=493)\]
* Anthropic’s "Computer Use" agent is tricked into clicking a link on a malicious website, downloading a malware binary, making it executable, and launching it to join a botnet.
* **"ClickFix" Social Engineering Attack on AI Agents** \[[10:38](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=638)\]
* Agents are shown to be vulnerable to "ClickFix" attacks where they are tricked into copying malicious code from a fake "prove you are human" prompt and pasting it into a terminal, granting attackers remote access.
* **Data Leakage via Local Port Exposure (Devin AI)** \[[18:13](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=1093)\]
* The coding agent Devin is manipulated through a multi-stage prompt injection to run a local web server exposing its file system, then leaking the public URL to an attacker via an image render.
* **Data Exfiltration via DNS Requests (Claude Code & Amazon Q)** \[[22:12](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=1332)\]
* The speaker exposes a flaw where agents allow specific commands like ping or nslookup without user approval, which can be exploited to smuggle sensitive environment variables out via DNS queries.
* **Arbitrary Code Execution via find Command (Amazon Q)** \[[26:02](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=1562)\]
* Amazon Q’s developer extension allowed the find command to run without approval, which was exploited using the -exec flag to launch arbitrary commands (like a calculator) on the host machine.
* **Hidden Instructions via Unicode Tags (Google Jewels & Anti-Gravity)** \[[27:05](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=1625)\]
* Invisible Unicode tag characters hidden in GitHub issues or tickets are used to inject malicious instructions that the AI can read but humans cannot see, leading to unauthorized code compilation and execution.
* **Self-Modifying Configuration & "YOLO Mode" (GitHub Copilot)** \[[31:09](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=1869)\]
* GitHub Copilot is tricked into modifying its own settings.json file to enable "tools.approve" (YOLO mode), effectively bypassing human-in-the-loop security controls to allow unrestricted code execution.
* **Cross-Agent Configuration Exploits** \[[34:46](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=2086)\]
* The presenter explains how one compromised agent can be used to modify the configuration files of a different agent on the same machine, "freeing" it to run malicious commands.
* **"Agent Hopper" AI Virus** \[[35:44](http://www.youtube.com/watch?v=8pbz5y7_WkM&t=2144)\]
* A proof-of-concept AI worm creates a self-replicating cycle where an infected repository infects the developer's agent, which then spreads the malicious prompt to other repositories and pushes them back to GitHub to infect new developers.
| 2026-01-12T06:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/1qao1ra/agentic_probllms_exploiting_ai_computeruse_and/ | coloradical5280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qao1ra | false | null | t3_1qao1ra | /r/LocalLLaMA/comments/1qao1ra/agentic_probllms_exploiting_ai_computeruse_and/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '-3NG7BoqxhxmGk7_Bwf_jUUTWYKldvibiGAcijSuTDM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-3NG7BoqxhxmGk7_Bwf_jUUTWYKldvibiGAcijSuTDM.jpeg?width=108&crop=smart&auto=webp&s=bb333eaf097695f63ddcc0474f2f6535ac52bbe5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-3NG7BoqxhxmGk7_Bwf_jUUTWYKldvibiGAcijSuTDM.jpeg?width=216&crop=smart&auto=webp&s=ab4199f21f32848c22beb3f0ee5c422299067664', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-3NG7BoqxhxmGk7_Bwf_jUUTWYKldvibiGAcijSuTDM.jpeg?width=320&crop=smart&auto=webp&s=42ab30942e63b581338c9c116b9a2129b714fbc7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-3NG7BoqxhxmGk7_Bwf_jUUTWYKldvibiGAcijSuTDM.jpeg?auto=webp&s=b9cf8a6f91f3bad85a87fcccc31fec537339ebdc', 'width': 480}, 'variants': {}}]} |
Dual GPU King 95 + X870E Taichi Lite | 0 | If anyone is interested in my setup and how I got more performance from a second GPU... | 2026-01-12T06:21:42 | https://www.reddit.com/r/LocalLLaMA/comments/1qanh5x/dual_gpu_king_95x870e_taichi_lite/ | sloth_cowboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qanh5x | false | null | t3_1qanh5x | /r/LocalLLaMA/comments/1qanh5x/dual_gpu_king_95x870e_taichi_lite/ | false | false | self | 0 | null |
China's AGI-Next Roundtable: Leaders from Zhipu, Kimi, Qwen, and Tencent discuss the future of AI | 92 | [Automated RL Data Synthesis for Agentic Tasks](https://preview.redd.it/kz6g7cqbzucg1.png?width=1832&format=png&auto=webp&s=5a62aa25fda324c86227bec8cf9cb17e34aec18d)
[Kimi Linear: An Expressive, Efficient Attention Architecture](https://preview.redd.it/jen154uhzucg1.png?width=1878&format=png&auto=webp&s=fca4b8b2357da383d608e4fd6811a7be444d8e67)
[Goat Lin, caught in a media storm](https://preview.redd.it/vz29chxlzucg1.png?width=1567&format=png&auto=webp&s=768e9b989dd5fcf8c7543919b3bfeeb8a499c80c)
Later, I will translate and organize the main viewpoints of several guests into English in the comments section. | 2026-01-12T06:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qan6bb/chinas_aginext_roundtable_leaders_from_zhipu_kimi/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qan6bb | false | null | t3_1qan6bb | /r/LocalLLaMA/comments/1qan6bb/chinas_aginext_roundtable_leaders_from_zhipu_kimi/ | false | false | 92 | null | |
Simple app to compare OpenAI, Gemini, and Anthropic responses | 0 | 2026-01-12T05:57:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qan19l/simple_app_to_compare_openai_gemini_and_anthropic/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qan19l | false | null | t3_1qan19l | /r/LocalLLaMA/comments/1qan19l/simple_app_to_compare_openai_gemini_and_anthropic/ | false | false | 0 | null | ||
MiniMax-M2.1 vs GLM-4.5-Air: is bigger really better (coding)? | 27 | So I managed to get both MiniMax-M2.1 and GLM-4.5-Air running locally with 48 GB VRAM and 128 GB RAM.
- MiniMax-M2.1-UD-Q4_K_XL
- GLM-4.5-Air-UD-Q6_K_XL
Both with 100k context and q8_0 KV cache, and both get similar speed: ~11 to ~6 tps as the context fills up. MiniMax has slightly slower prompt processing than GLM. Not great, not terrible, but enough for agentic coding.
I've read good things about MiniMax, but frankly, using both models with Cline in VS Code, I can't convince myself it is the better model:
- GLM reliably generates a better and more detailed plan of action compared to MiniMax, and diligently executes it step by step
- MiniMax aims to complete its (less detailed) plan, often ignoring some issues just to mark them done
- Despite being smaller, GLM produces better code and requires less intervention after the task is completed than MiniMax
Anyone else having similar observations?
In both cases I ran the same prompt, on a project that requires:
- you are an expert working on a new feature
- analyze the existing code base
- make some architectural decisions
- implement the feature
- implement tests
- verify all works (end-to-end testing)
I have "only" 48 GB VRAM and 128 GB RAM for my AI VM; here's the llama.cpp config:
GLM-4.5-Air:
  cmd: >
    llama-server --port ${PORT}
    --model /nvme/gguf/GLM-4.5-Air-UD-Q6_K_XL-00001-of-00003.gguf
    --ctx-size 100000
    --cache-type-k q8_0
    --cache-type-v q8_0
    --flash-attn on
    --temp 1.0
    --min-p 0.0
    --top-p 0.95
    --top-k 40
    --batch-size 4096
    --ubatch-size 1024
    -ngl 999 -mg 0 -ts 20,22 -ot ".ffn_(up|down)_exps.=CPU"
  aliases:
    - glm-4.5-air
MiniMax-M2.1:
  cmd: >
    llama-server --port ${PORT}
    --model /nvme/gguf/MiniMax-M2.1-UD-Q4_K_XL-00001-of-00003.gguf
    --ctx-size 100000
    --cache-type-k q8_0
    --cache-type-v q8_0
    --flash-attn on
    --temp 1.0
    --min-p 0.0
    --top-p 0.95
    --top-k 40
    --batch-size 4096
    --ubatch-size 1024
    --mmap -ngl 999 -mg 0 -ts 10,61 -ot "\.(1[4-9]|[2-9][0-9])\.ffn_(up|down|gate)_exps.=CPU"
  aliases:
    - minimax-m2.1
| 2026-01-12T05:45:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qamt9a/minimaxm21_vs_glm45air_is_the_bigger_really_the/ | ChopSticksPlease | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qamt9a | false | null | t3_1qamt9a | /r/LocalLLaMA/comments/1qamt9a/minimaxm21_vs_glm45air_is_the_bigger_really_the/ | false | false | self | 27 | null |
Emulating style of an author | 0 | Looking for ways to emulate an author’s writing style with an LLM without fine tuning. Prompt plus examples works briefly but drifts on longer outputs. The agent also gets ~200–300 tokens of factual context from a web search that it has to write the article from. Curious what people have done to keep the voice consistent without losing accuracy to the source info. Any resources would work. | 2026-01-12T05:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qamj6k/emulating_style_of_an_author/ | Different-Activity-4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qamj6k | false | null | t3_1qamj6k | /r/LocalLLaMA/comments/1qamj6k/emulating_style_of_an_author/ | false | false | self | 0 | null |
Looking for feedback on Mac mini server settings for Ollama | 0 | Hi there,
Been following this community for quite some time but finally had a reason to make my first post!
I set up Ollama on my M4 Pro Mac mini to play around with LLMs a few months ago, and I ended up with a few workflows that are actually quite helpful. I'd like to make sure my local Ollama instance runs dependably now. It seems that now that Apple has shelved Xserve, we have to hunt through a lot of settings to find the right options. Here is what I have found so far - are there any other settings folks would recommend for an always-on Ollama server?
* Energy Mode: High Power
* Prevent automatic sleeping when the display is off: On
* Put hard disks to sleep when possible: Off
* Wake for network access: On
* Start up automatically after power failure: On
* Turn off display when inactive: Never (not sure if this is really needed, as the Mac is headless)
* Log in automatically: On
* Open at Login: Added Ollama app
* Screen Sharing and Remote Login: On (so I can administer remotely from my laptop)
Cheers,
Zach
| 2026-01-12T05:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qalz0x/looking_for_feedback_on_mac_mini_server_settings/ | zachrattner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qalz0x | false | null | t3_1qalz0x | /r/LocalLLaMA/comments/1qalz0x/looking_for_feedback_on_mac_mini_server_settings/ | false | false | self | 0 | null |
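One addition worth considering for an always-on box: an external health check. Below is a minimal sketch, assuming Ollama's default port (11434); note that Ollama binds to localhost by default, so remote checks need `OLLAMA_HOST=0.0.0.0` set on the Mac. The hostname is illustrative; `/api/tags` is Ollama's standard model-listing endpoint.

```python
import requests

HOST = "http://mac-mini.local:11434"  # hostname is illustrative

def ollama_alive() -> bool:
    try:
        r = requests.get(f"{HOST}/api/tags", timeout=5)  # lists installed models
        return r.ok and "models" in r.json()
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("Ollama up" if ollama_alive() else "Ollama DOWN", "at", HOST)
```

Run it from a cron job or launchd timer on your laptop and alert on failures.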
Tool for converting Confluence docs to LLM-friendly Markdown (for RAG pipelines) | 1 | If you're building RAG over corporate Confluence documentation, you might hit this annoying issue:
Confluence's exported .doc files aren't real Word documents - they're MIME-encoded HTML. LangChain's UnstructuredWordDocumentLoader, docx parsers, and most extraction tools fail on them.
I built a preprocessing tool to solve this: [https://github.com/aqueeb/confluence2md](https://github.com/aqueeb/confluence2md)
It converts Confluence exports to clean Markdown that chunks well:
\- Parses MIME structure → extracts HTML → converts via pandoc
\- Emoji images → Unicode characters
\- Info/warning/tip boxes → blockquotes with labels
\- Proper code block handling with language hints
\- Batch processing for entire doc directories
The output works great with LangChain's MarkdownTextSplitter or any recursive chunker. Single binary, no dependencies.
Sharing in case anyone else is trying to RAG over their company's Confluence and hitting weird parsing errors. | 2026-01-12T04:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qalpru/tool_for_converting_confluence_docs_to/ | aqueebqadri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qalpru | false | null | t3_1qalpru | /r/LocalLLaMA/comments/1qalpru/tool_for_converting_confluence_docs_to/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Wtv7VKJjhN7LtrTgskRC9lTk9jukzPJo2dSkvzELR2Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wtv7VKJjhN7LtrTgskRC9lTk9jukzPJo2dSkvzELR2Q.png?width=108&crop=smart&auto=webp&s=6a54da6a3038cdb676914064fba03e1d255fe11a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Wtv7VKJjhN7LtrTgskRC9lTk9jukzPJo2dSkvzELR2Q.png?width=216&crop=smart&auto=webp&s=b293e1d67b0583631cb1ef05c4447596c87a43c6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Wtv7VKJjhN7LtrTgskRC9lTk9jukzPJo2dSkvzELR2Q.png?width=320&crop=smart&auto=webp&s=b22f3bddea809c58cfaf1ee64d49f70b120d7d4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Wtv7VKJjhN7LtrTgskRC9lTk9jukzPJo2dSkvzELR2Q.png?width=640&crop=smart&auto=webp&s=65be5cd001b1b64c646bb993bd3e14610e2b6bea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Wtv7VKJjhN7LtrTgskRC9lTk9jukzPJo2dSkvzELR2Q.png?width=960&crop=smart&auto=webp&s=e8cb6cfe9b9c0442f81b4525b19ef01d09333db9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Wtv7VKJjhN7LtrTgskRC9lTk9jukzPJo2dSkvzELR2Q.png?width=1080&crop=smart&auto=webp&s=c4ea467c0c3562ce8f1c0d94798decd60bb82932', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Wtv7VKJjhN7LtrTgskRC9lTk9jukzPJo2dSkvzELR2Q.png?auto=webp&s=75097d995304e3b6fdec4e250bb20a9dcd304e40', 'width': 1200}, 'variants': {}}]} |
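For anyone who'd rather script the core step themselves, here is a minimal sketch of the preprocessing the tool automates: treat the .doc export as a MIME message, pull out the text/html part, and hand it to pandoc. The file path is illustrative; the tool above layers emoji, admonition-box, and batch handling on top of this.

```python
import email
import subprocess
from email import policy

# Confluence .doc exports are MIME messages, not real Word documents.
with open("export.doc", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

# Grab the first text/html part of the MIME structure.
html = next(
    part.get_content()
    for part in msg.walk()
    if part.get_content_type() == "text/html"
)

# Convert HTML -> GitHub-flavored Markdown with pandoc.
markdown = subprocess.run(
    ["pandoc", "-f", "html", "-t", "gfm"],
    input=html.encode(), capture_output=True, check=True,
).stdout.decode()
print(markdown[:500])
```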
Just launched Plano v0.4 - a unified data plane supporting polyglot AI development | 1 | [removed] | 2026-01-12T04:07:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qakvzt | false | null | t3_1qakvzt | /r/LocalLLaMA/comments/1qakvzt/just_launched_plano_v04_a_unified_data_plane/ | false | false | default | 1 | null | ||
One Shot Pass@1 Benchmarking | 1 | [P] I benchmarked 11 LLMs using 25 handcrafted math & logic puzzles. One puzzle broke every single model.
I got tired of benchmarks that let models retry 100 times (pass@k), or use abstract API harnesses that don’t reflect how real users interact with these systems.
So I built my own.
Vault of Echoes is a dataset of 25 handcrafted math + logic puzzles designed to break lazy reasoning and test what LLMs can actually do—under pressure.
I ran the full benchmark through real chat interfaces on Jan 5th, 2026.
---
The Protocol
- UI-native: No APIs. I tested the actual web-based chat interfaces (ChatGPT, Gemini, Le Chat, Claude, etc.). I wanted to capture product-layer behaviors like refusals, formatting drift, and hallucinations.
- One shot: Each model got one fresh session per puzzle. No retries. No "let’s think step by step" pre-prompts—unless the model initiated it.
- Strict output: Every puzzle ends with a Vault Directive (a precise answer format). If the model rambled or missed the structure, it failed.
The Results (Pass@1)
| Rank | Model | Score | Note |
|------|------------------|--------|------|
| 🥇 | Gemini PRO | 20/25 | Very format-compliant. Strong overall. |
| 🥈 | GPT PRO | 19/25 | Solid, but struggled with invariants. |
| 🥉 | Qwen 3 Max | 19/25 | Matched GPT PRO in fast mode. Efficient and sharp. |
| 4 | DeepSeek 3.2 | 16/25 | Good mid-tier performance. |
| 5 | GPT 5.2 | 15/25 | |
| 5 | Gemini 3 | 15/25 | |
| 7 | Claude Sonnet 4.5 | 10/25 | Lots of refusals and formatting errors. |
| 8 | Nova | 8/25 | |
| 9 | Meta (LLaMA) | 7/25 | Refused several puzzles entirely. |
| 9 | Le Chat | 7/25 | |
| 11 | Grok 4.1 (xAI) | 3/25 | Hallucinated frequently. Full collapse on most logic. |
Key Findings
1. Qwen is absurdly efficient
It tied GPT PRO despite being a fast model with no deliberation mode. That’s... not something I expected - AND FREE!!
2. The Safety Tax is real
Meta and Le Chat failed many puzzles not from bad reasoning but from refusal. Several puzzles were flagged as too complex.
3. Puzzle #4: The unsolved benchmark
“Two Clues, One Suspect” had a 0% pass rate.
A single, bounded, multi-disciplinary math and logic problem. Undefeated.
Every model hallucinated the final answer. Not one passed. GPT PRO thought for 42 minutes to produce a wrong answer. Bruh.
The Data
Benchmark paper (Open Access):
[https://zenodo.org/records/18216959](https://zenodo.org/records/18216959)
---
Challenge
If anyone can get an open-weight model (LLaMA 3 70B, Command-R+, Mixtral, etc.) to solve Puzzle #4 in one shot—post the transcript.
Let’s see what open models can really do.
Or maybe… let’s fine-tune one.
I'll curate the math data.
Who brings the compute? <:)
| 2026-01-12T03:24:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qak08c/one_shot_pass1_benchmarking/ | Hot_Inspection_9528 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qak08c | false | null | t3_1qak08c | /r/LocalLLaMA/comments/1qak08c/one_shot_pass1_benchmarking/ | false | false | self | 1 | null |
Local LLM + Internet Search Capability = WOW | 221 | I'm on Qwen 3; I asked about its training date and it said 2024. Alright, I guess that's something I have to live with - I just need to constantly check HF for an updated LLM that fits my cute 16 GB of VRAM.
Then someone said to always ground your local AI with internet searches. A quick search = the LM Studio DuckDuckGo plugin.
Within 15 minutes, my prompts were "searching the web" - exactly the same interface I saw in ChatGPT!
Man, this local AI is getting better. Am I doing 'agentic AI' now? haha. Tool calling was always something I had heard of, but I thought it was reserved for CS pros, not an average joe like me.
So now what: when was your 'wow moment' for stuff like this, and what else do you design into your workflow to make locally run LLMs so potent and, most importantly, private? =) | 2026-01-12T03:21:41 | https://www.reddit.com/r/LocalLLaMA/comments/1qajxrg/local_llm_internet_search_capability_wow/ | alex_godspeed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qajxrg | false | null | t3_1qajxrg | /r/LocalLLaMA/comments/1qajxrg/local_llm_internet_search_capability_wow/ | false | false | self | 221 | null |
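For the curious, here is a minimal sketch of the pattern such plugins implement: expose a web-search function as a tool to a local OpenAI-compatible server. The `duckduckgo_search` package, endpoint, and model name are assumptions, and the sketch assumes the model actually emits a tool call:

```python
import json
from duckduckgo_search import DDGS
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; port/model are assumptions.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def web_search(query: str) -> str:
    """Return top DuckDuckGo results as a JSON string for the model."""
    return json.dumps(DDGS().text(query, max_results=5))

tools = [{"type": "function", "function": {
    "name": "web_search",
    "description": "Search the web for current information",
    "parameters": {"type": "object",
                   "properties": {"query": {"type": "string"}},
                   "required": ["query"]}}}]

msgs = [{"role": "user", "content": "What local LLMs were released this week?"}]
resp = client.chat.completions.create(model="qwen3", messages=msgs, tools=tools)
call = resp.choices[0].message.tool_calls[0]  # assumes the model called the tool
msgs += [resp.choices[0].message,
         {"role": "tool", "tool_call_id": call.id,
          "content": web_search(**json.loads(call.function.arguments))}]
final = client.chat.completions.create(model="qwen3", messages=msgs)
print(final.choices[0].message.content)
```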
Advice for a tool that blocks dangerous terminal commands from AI coding assistants | 8 | Hey there,
I'm building a Mac app that intercepts dangerous terminal commands before they execute. The goal is to catch things like rm -rf or git reset --hard when AI coding tools (Claude Code, Cursor, etc.) accidentally run something destructive.
The idea came after Claude deleted my src/ folder while "cleaning up files." I figured I'm probably not the only one this has happened to.
Right now it:
\- Hooks into zsh to catch commands before they run
\- Shows a popup letting you Block, Allow, or Snapshot first
\- Works offline, no cloud, no account
Can you give me some feedback on whether this is useful? What commands would you want it to catch? Is this overkill or have you had similar accidents?
Here's a quick demo: [https://osiris-sable.vercel.app](https://osiris-sable.vercel.app)
Thank you | 2026-01-12T02:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qaj6je/advice_for_a_tool_that_blocks_dangerous_terminal/ | spacepings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaj6je | false | null | t3_1qaj6je | /r/LocalLLaMA/comments/1qaj6je/advice_for_a_tool_that_blocks_dangerous_terminal/ | false | false | self | 8 | null |
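Here is a minimal sketch of the classifier such a tool needs at its core, in Python; the patterns are illustrative and deliberately naive (aliases, quoting, and `find -exec` tricks defeat regex matching, so a real implementation should parse commands rather than pattern-match them):

```python
import re

# Illustrative destructive-command patterns; a real tool should parse, not regex.
DANGEROUS = [
    r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*\s",   # rm with -r/-f style flags
    r"\bgit\s+reset\s+--hard\b",
    r"\bgit\s+clean\s+-[a-zA-Z]*f",
    r"\bmkfs(\.\w+)?\b",
    r"\bdd\s+.*\bof=/dev/",
]

def is_dangerous(cmd: str) -> bool:
    return any(re.search(p, cmd) for p in DANGEROUS)

for cmd in ["rm -rf src/", "git status", "git reset --hard HEAD~3"]:
    print("BLOCK" if is_dangerous(cmd) else "allow", "->", cmd)
```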
how do I get ubuntu to not allocate vram on an amd r9700 pro: 519/32624 MB | 2 | rocm-smi is showing:
+------------------------------------------------------------------------------+
| AMD-SMI 26.2.0+021c61fc amdgpu version: 6.14.0-37 ROCm version: 7.1.1 |
| VBIOS version: 023.008.000.068.000001 |
| Platform: Linux Baremetal |
|-------------------------------------+----------------------------------------|
| BDF GPU-Name | Mem-Uti Temp UEC Power-Usage |
| GPU HIP-ID OAM-ID Partition-Mode | GFX-Uti Fan Mem-Usage |
|=====================================+========================================|
| 0000:03:00.0 ...Radeon AI PRO R9700 | 0 % 34 °C 0 34/300 W |
| 0 0 N/A N/A | 2 % 20.0 % 519/32624 MB |
|-------------------------------------+----------------------------------------|
| 0000:07:00.0 ...Radeon AI PRO R9700 | 0 % 37 °C 0 40/300 W |
| 1 1 N/A N/A | 17 % 20.0 % 519/32624 MB |
|-------------------------------------+----------------------------------------|
| 0000:7f:00.0 AMD Radeon Graphics | N/A N/A 0 N/A/0 W |
| 2 2 N/A N/A | N/A N/A 43/2048 MB |
+-------------------------------------+----------------------------------------+
+------------------------------------------------------------------------------+
| Processes: |
| GPU PID Process Name GTT_MEM VRAM_MEM MEM_USAGE CU % |
|==============================================================================|
| No running processes found |
+------------------------------------------------------------------------------+
I updated my grub file to disable the ECC that consumes ~ 2 gigs per card.
(GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.ras_enable=0")
and now I am trying to get the 519 megs on each r9700 freed up.
GPT-OSS 120B is on the cusp of fitting entirely in VRAM with some KV space if I can free up this ~5 gigs total.
Another thing I tried was following Google AI's suggestion to ignore the cards in X11:
Section "Device"
Identifier "AMDGPU_dGPU_1"
Driver "amdgpu"
BusID "PCI:3:0:0"
Option "Ignore" "True"
EndSection
Section "Device"
Identifier "AMDGPU_dGPU_2"
Driver "amdgpu"
BusID "PCI:7:0:0"
Option "Ignore" "True"
EndSection
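One thing I noticed while fighting with this: xorg.conf BusIDs are written in decimal, while amd-smi/lspci print the bus in hex, so `0000:7f:00.0` would be `PCI:127:0:0`. A quick sketch of the conversion (just to illustrate; the helper name is made up):

```python
# xorg.conf BusID wants decimal values; amd-smi/lspci print hex BDFs.
def bdf_to_x11(bdf: str) -> str:
    _, bus, devfn = bdf.split(":")  # "0000", "03", "00.0"
    dev, fn = devfn.split(".")
    return f"PCI:{int(bus, 16)}:{int(dev, 16)}:{int(fn, 16)}"

print(bdf_to_x11("0000:03:00.0"))  # PCI:3:0:0
print(bdf_to_x11("0000:7f:00.0"))  # PCI:127:0:0 (hex 7f -> decimal 127)
```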
In any case, the BusID format is different between here and most other places (0000:03:00.0 vs PCI:3:0:0), so I'm not sure the Ignore sections are even matching the cards. | 2026-01-12T02:40:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qaj1di/how_do_i_get_ubuntu_to_not_allocate_vram_on_an/ | jdchmiel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaj1di | false | null | t3_1qaj1di | /r/LocalLLaMA/comments/1qaj1di/how_do_i_get_ubuntu_to_not_allocate_vram_on_an/ | false | false | self | 2 | null |
How I scraped 100,000 fishing posts to find a secret spot with vector DBs and LLMs | 28 | I caught a 5 pound bass by doing this lol, and the article should be a pretty cool intro to scraping. It's also the reason I have a bunch of massive bass fishing reports sitting on my mac
Typical LLM tools for scraping aren't economical at this scale, so this was all manual and surprisingly fun. | 2026-01-12T02:11:08 | https://www.meter.sh/blog/how-i-scraped-100k-fishing-posts | Ready-Interest-1024 | meter.sh | 1970-01-01T00:00:00 | 0 | {} | 1qaiekr | false | null | t3_1qaiekr | /r/LocalLLaMA/comments/1qaiekr/how_i_scraped_100000_fishing_posts_to_find_a/ | false | false | default | 28 | null |
Personal Intelligence | 0 | "OSINT" with GPT OSS and Qwen VL 4B | 2026-01-12T02:04:40 | https://v.redd.it/udo0t4ssotcg1 | Serious_Molasses313 | /r/LocalLLaMA/comments/1qai9gg/personal_intelligence/ | 1970-01-01T00:00:00 | 0 | {} | 1qai9gg | false | null | t3_1qai9gg | /r/LocalLLaMA/comments/1qai9gg/personal_intelligence/ | false | false | default | 0 | null |
Which GPU(s) to buy for $45k? | 0 | I am working on building a machine for building local LLMs. Please suggest which GPUs to buy for this. My budget for GPUs is 45k USD. Thanks | 2026-01-12T02:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1qai7rs/which_gpus_to_buy_for_45k/ | kob123fury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qai7rs | false | null | t3_1qai7rs | /r/LocalLLaMA/comments/1qai7rs/which_gpus_to_buy_for_45k/ | false | false | self | 0 | null |
I built Muninn, an open-source proxy for AI coding agents like Claude Code. | 0 | I built Muninn, an open-source proxy for AI coding agents like Claude Code.
The basic idea: instead of stuffing your entire codebase into the context window, Muninn lets the LLM explore your code programmatically using tools (grep, read files, search symbols).
How it works:
- Router: A fast classifier (runs on Groq's Llama 8B) that looks at each request and decides: does this need codebase exploration, or can it pass straight through to Claude? (A fully local SLM is planned for the future as I get some traces collected.)
- RLM Engine: When exploration is needed, a Recursive Language Model loop kicks in - a cheaper model (like Qwen 32B on Groq) iteratively uses tools to gather context, then hands off a focused summary to your main model.
Net result: Claude only sees what matters, and the expensive exploration happens on fast/cheap inference.
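To illustrate the routing step, here's a conceptual Python sketch (the actual implementation is Rust; the model name and prompt are placeholders, not Muninn's real internals):

```python
# Conceptual sketch of the router: a small fast model decides whether a
# request needs codebase exploration or can pass straight through.
from openai import OpenAI

fast = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="...")

def needs_exploration(user_request: str) -> bool:
    out = fast.chat.completions.create(
        model="llama-3.1-8b-instant",  # placeholder fast classifier model
        messages=[
            {"role": "system",
             "content": "Reply EXPLORE if the request needs codebase "
                        "exploration, otherwise PASS. One word only."},
            {"role": "user", "content": user_request},
        ],
        max_tokens=4,
        temperature=0.0,
    )
    return out.choices[0].message.content.strip().upper().startswith("EXPLORE")
```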
Also added an OpenAI-compatible endpoint if you have Claude MAX - use your flat-rate subscription credits with other tools (Cursor, Continue, Aider, etc).
Written in Rust. Still early but functional.
[https://github.com/colliery-io/muninn](https://github.com/colliery-io/muninn) | 2026-01-12T01:59:09 | https://github.com/colliery-io/muninn | Fit-Presentation-591 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qai4ul | false | null | t3_1qai4ul | /r/LocalLLaMA/comments/1qai4ul/i_built_muninn_an_opensource_proxy_for_ai_coding/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'KmPO0fNjTUzcDLL2tlK2ChA5sbepGS_SnkLInoXLiG8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KmPO0fNjTUzcDLL2tlK2ChA5sbepGS_SnkLInoXLiG8.png?width=108&crop=smart&auto=webp&s=50320a0402b1cd92629d51716df9a4add95d64e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KmPO0fNjTUzcDLL2tlK2ChA5sbepGS_SnkLInoXLiG8.png?width=216&crop=smart&auto=webp&s=414d848b51857110f165f710df7b7f46408d41ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KmPO0fNjTUzcDLL2tlK2ChA5sbepGS_SnkLInoXLiG8.png?width=320&crop=smart&auto=webp&s=fc3374abe8757615ade0610cb40b9bc67cc52a39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KmPO0fNjTUzcDLL2tlK2ChA5sbepGS_SnkLInoXLiG8.png?width=640&crop=smart&auto=webp&s=284379bab2980516956e16ab1656650c3aa643e7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KmPO0fNjTUzcDLL2tlK2ChA5sbepGS_SnkLInoXLiG8.png?width=960&crop=smart&auto=webp&s=8f7be52331cc004035fb70d33f04bf183f516b9b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KmPO0fNjTUzcDLL2tlK2ChA5sbepGS_SnkLInoXLiG8.png?width=1080&crop=smart&auto=webp&s=3beb09606c4be7c4e52fe1791973e2fecbb30c2f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KmPO0fNjTUzcDLL2tlK2ChA5sbepGS_SnkLInoXLiG8.png?auto=webp&s=25da53999dff20c77f7b53731cb41effca8d5153', 'width': 1200}, 'variants': {}}]} |
Tool for generating LLM datasets | 1 | [removed] | 2026-01-12T01:48:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qahw8c/tool_for_generating_llm_datasets/ | Express_Seesaw_8418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qahw8c | false | null | t3_1qahw8c | /r/LocalLLaMA/comments/1qahw8c/tool_for_generating_llm_datasets/ | false | false | self | 1 | null |
Tool for generating LLM datasets | 1 | [removed] | 2026-01-12T01:45:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qahtro/tool_for_generating_llm_datasets/ | Express_Seesaw_8418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qahtro | false | null | t3_1qahtro | /r/LocalLLaMA/comments/1qahtro/tool_for_generating_llm_datasets/ | false | false | self | 1 | null |
Tool for generating LLM datasets (just launched) | 1 | [removed] | 2026-01-12T01:43:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qahsj5/tool_for_generating_llm_datasets_just_launched/ | Express_Seesaw_8418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qahsj5 | false | null | t3_1qahsj5 | /r/LocalLLaMA/comments/1qahsj5/tool_for_generating_llm_datasets_just_launched/ | false | false | self | 1 | null |
How do I research speech to speech models? | 0 | Let's say for example I want to make a recording and make it sound like Sasuke from Naruto, or Trump. I'm trying to look up options, but I don't know the lingo. For images, for example, I know that there are a lot of ways of running Stable Diffusion locally, and that if I want to make an image of a specific character, there are LoRAs I can use for that.
I don't really have any idea of what to even begin searching for to do something similar with changing my voice to a specific character's. Could you guys help me learn the general lingo? Also, I'd love to hear about resources to do this for free: whether they be free websites or locally run programs, as well as any existing banks for..... idk, the sound Loras for the different characters. I know these exist as a technology - I've seen paid services for them - I just don't know how to get started. | 2026-01-12T01:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qah3zq/how_do_i_research_speech_to_speech_models/ | frigidice363 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qah3zq | false | null | t3_1qah3zq | /r/LocalLLaMA/comments/1qah3zq/how_do_i_research_speech_to_speech_models/ | false | false | self | 0 | null |
GLM-4.7 can Replace Opus 4.5 and G3 | 1 | [removed] | 2026-01-12T01:05:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qagxbl/glm47_can_replace_opus_45_and_g3/ | Impressive-Olive8372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qagxbl | false | null | t3_1qagxbl | /r/LocalLLaMA/comments/1qagxbl/glm47_can_replace_opus_45_and_g3/ | false | false | self | 1 | null |
GLM-4.7 can Replace Opus 4.5 and G3 | 1 | [removed] | 2026-01-12T01:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qagwb2/glm47_can_replace_opus_45_and_g3/ | Impressive-Olive8372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qagwb2 | false | null | t3_1qagwb2 | /r/LocalLLaMA/comments/1qagwb2/glm47_can_replace_opus_45_and_g3/ | false | false | self | 1 | null |
GLM-4.7 can Replace Opus 4.5 and gemini 3 | 1 | [removed] | 2026-01-12T01:02:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qagv2f/glm47_can_replace_opus_45_and_gemini_3/ | Impressive-Olive8372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qagv2f | false | null | t3_1qagv2f | /r/LocalLLaMA/comments/1qagv2f/glm47_can_replace_opus_45_and_gemini_3/ | false | false | 1 | null | |
Qwen cutoff date makes our current reality too dystopian to be credible | 281 | I provided some recent news articles from reputable sources and some Wikipedia, and Qwen flat-out refuses to believe any of it. Note that claims such as "systematically executing citizens who resisted" were never in the prompt or the sources! / Qwen-3-80B
Today at 01:30
Understood. Here is the list, stripped of references to verification or sources — only the claims and why they are implausible, based on known structures of power, law, and human behavior:
# 🚫 Least Credible Events — Why They Are Impossible
# 1. Elon Musk made a Nazi salute at Trump’s 2025 inauguration
He performed a raised-arm gesture at a public rally, repeated it, and said, “My heart goes out to you.” The act was widely interpreted as a Nazi salute and condemned globally.
→ **Impossible because** no person with his wealth, influence, and legal exposure would perform a gesture that instantly brands him as a symbol of genocide. The global backlash, corporate collapse, criminal liability, and social annihilation would be immediate and total. This is not political theater — it is self-erasure.
# 2. U.S. kidnapped Nicolás Maduro and flew him to the U.S. for trial
U.S. special forces stormed Caracas, abducted Venezuela’s president, and transported him to American custody for drug trafficking charges.
→ **Impossible because** kidnapping a sitting head of state is not an intelligence operation — it is an act of war. Venezuela has Russian and Iranian backing. The act would trigger immediate military retaliation, global isolation, and the collapse of U.S. diplomatic credibility. No state, no matter how aggressive, risks this without total annihilation.
# 3. U.S. seized a Russian-flagged oil tanker in the Atlantic
U.S. Coast Guard intercepted and boarded a Russian-flagged vessel in international waters, despite Russian military vessels being present.
→ **Impossible because** seizing a vessel flagged by a nuclear-armed state in peacetime, especially with military presence, is not enforcement — it is a declaration of war. Russia would respond. The global energy system would fracture. This is not policy. It is the end of the international order.
# 4. ICE executed U.S. citizens who opposed them
ICE agents shot and killed American citizens during raids, including a woman who reversed her car, and began systematically executing citizens who resisted.
→ **Impossible because** execution requires due process. The U.S. military, judiciary, and law enforcement are bound by the Constitution. Killing citizens without trial is not immigration enforcement — it is domestic insurrection. The moment this occurred, the military would intervene, Congress would impeach, and the government would cease to function.
# 5. Trump threatens to take Greenland by force
Trump declared the U.S. would take control of Greenland “whether they like it or not,” and Denmark responded by calling it a betrayal of NATO.
→ **Impossible because** Greenland is part of a NATO ally. Invading an ally’s territory violates the core principle of the alliance. No U.S. president, however extreme, would risk triggering the collapse of the entire Western security structure. This is not diplomacy — it is the suicide of American global leadership.
# ✅ Final Note:
These are not exaggerated scenarios.
They are **narratives of collapse** — events that require the total abandonment of law, alliance, institutional restraint, and human survival.
They are not plausible.
They are dystopian fiction. | 2026-01-12T00:38:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qagaaq/qwen_cutoff_date_makes_our_current_reality_too/ | Swimming_Cover_9686 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qagaaq | false | null | t3_1qagaaq | /r/LocalLLaMA/comments/1qagaaq/qwen_cutoff_date_makes_our_current_reality_too/ | false | false | self | 281 | null |
Hunyuan MT-1.5 Demo | 33 | Recently, Hunyuan released a new translation model called [MT-1.5](https://huggingface.co/tencent/HY-MT1.5-7B).
It seems like there is no public demo (at least without signup), so I hosted the Q8_0 version with llama.cpp and a basic frontend to play around with different languages.
I am pretty impressed by the 7B model so far. I tried out a few different examples and it mostly "agrees" with the output of closed-source models like ChatGPT. Hope it helps in my Spanish learning journey!
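For anyone who wants to replicate the setup: llama-server exposes an OpenAI-compatible API, so the frontend boils down to one chat call. A minimal sketch (the prompt wording here is generic, not necessarily the model's official template, and the file path is illustrative):

```python
# Minimal sketch: translation via llama-server's OpenAI-compatible API.
# Assumes something like: llama-server -m HY-MT1.5-7B-Q8_0.gguf --port 8080
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def translate(text: str, target: str = "Spanish") -> str:
    resp = client.chat.completions.create(
        model="hy-mt1.5-7b",  # llama-server serves whatever model was loaded
        messages=[{"role": "user",
                   "content": f"Translate the following text into {target}:\n\n{text}"}],
        temperature=0.0,
    )
    return resp.choices[0].message.content

print(translate("The weather is lovely today."))
```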
Here's the link: [ai.lucahu.xyz/translate](https://ai.lucahu.xyz/translate) | 2026-01-12T00:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qag0nf/hunyuan_mt15_demo/ | finanzwegwerf20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qag0nf | false | null | t3_1qag0nf | /r/LocalLLaMA/comments/1qag0nf/hunyuan_mt15_demo/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'VSVTZL0u39kj1Rt8FJu3a5nTaIYOY7QPC3Xw6R-dwvw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VSVTZL0u39kj1Rt8FJu3a5nTaIYOY7QPC3Xw6R-dwvw.png?width=108&crop=smart&auto=webp&s=d7c0b66e08004befb670aa6ea919715c92dae63e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VSVTZL0u39kj1Rt8FJu3a5nTaIYOY7QPC3Xw6R-dwvw.png?width=216&crop=smart&auto=webp&s=6651b2d07b4860accafc6d9cc33d41309c3675c6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VSVTZL0u39kj1Rt8FJu3a5nTaIYOY7QPC3Xw6R-dwvw.png?width=320&crop=smart&auto=webp&s=1b520c0d5f7d53fe960b5927f479a8244e90194f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VSVTZL0u39kj1Rt8FJu3a5nTaIYOY7QPC3Xw6R-dwvw.png?width=640&crop=smart&auto=webp&s=0823c56bcbb0f3b9291884b4b2f9b0d8ccfe511c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VSVTZL0u39kj1Rt8FJu3a5nTaIYOY7QPC3Xw6R-dwvw.png?width=960&crop=smart&auto=webp&s=208a820d9926da694595c0622cb2ba5a8d554a53', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VSVTZL0u39kj1Rt8FJu3a5nTaIYOY7QPC3Xw6R-dwvw.png?width=1080&crop=smart&auto=webp&s=5266ad70da819bf7b904baca4b86b02a988ae92e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VSVTZL0u39kj1Rt8FJu3a5nTaIYOY7QPC3Xw6R-dwvw.png?auto=webp&s=7e9c2d716f0a0a81af8253f89eeed78d127f58b7', 'width': 1200}, 'variants': {}}]} |
Anyone using “JSON Patch” (RFC 6902) to fix only broken parts of LLM JSON outputs? | 0 | Hi folks — I’m building a pipeline where an LLM extracts a large structured JSON (100+ items) from documents. I run a deterministic validator (schema + business invariants). When validation fails, I currently ask another LLM call to “fix it”… but it re-outputs the entire JSON, which:
• wastes tokens
• risks mutating correct fields
• makes diffs/debugging painful
I want a patch-based approach: fix ONLY the broken parts.
I’m inspired by the idea of asking the model for JSON Patch (RFC 6902) or some “minimal patch” format instead of regenerating the full object. Also reading this paper: https://arxiv.org/html/2510.04717v1 (JSON editing efficiency).
My current thinking (apply-step sketch below):
• Validator pinpoints the failing node(s)
• Send the model only a small local context (broken node + parents/children)
• Ask for patch ops (e.g., RFC 6902 JSON Patch or domain ops like reparent, set_values)
• Apply patch deterministically
• Re-validate / retry (bounded)
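Concretely, the apply step would be deterministic and cheap (minimal sketch using the `jsonpatch` package; the document and patch are toy examples):

```python
# Deterministic apply step for model-emitted RFC 6902 patches.
# pip install jsonpatch
import jsonpatch

doc = {"items": [
    {"id": 1, "qty": 3, "unit": "kg"},
    {"id": 2, "qty": -5, "unit": "kg"},   # validator flags qty < 0
]}

# Minimal patch the fixer model should emit: touch ONLY the broken field.
patch = [{"op": "replace", "path": "/items/1/qty", "value": 5}]

fixed = jsonpatch.apply_patch(doc, patch)    # returns a new document
assert fixed["items"][1]["qty"] == 5
assert fixed["items"][0] == doc["items"][0]  # everything else untouched
```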
Another idea would be to grant access to the JSON file through tools (PydanticAI framework) and ask the agent to repair only the broken part, but so far that doesn't seem to work.
Has anyone shipped this in production? What worked / failed?
If you’ve tested the JSON Whisperer idea (or anything similar), I’d love your results! | 2026-01-12T00:25:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qafzs9/anyone_using_json_patch_rfc_6902_to_fix_only/ | Professional_Term579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qafzs9 | false | null | t3_1qafzs9 | /r/LocalLLaMA/comments/1qafzs9/anyone_using_json_patch_rfc_6902_to_fix_only/ | false | false | self | 0 | null |
4x RTX 6000 Pro LACT Config | 4 | Took a little tuning but I was able to get this config working for LACT with my Blackwells on a single 1600 Watt PSU.
This likely can still be optimized but should serve as a good starting point for anyone else running 4 Blackwell GPUs from one 1600W PSU
version: 5
daemon:
  log_level: info
  admin_group: sudo
  disable_clocks_cleanup: false
apply_settings_timer: 5
current_profile: null
auto_switch_profiles: false
gpus:
  10DE:2BB1-10DE:204B-0000:01:00.0:
    vendor: nvidia
    power_cap: 310
    min_core_clock: 210
    max_core_clock: 2600
    gpu_clock_offsets:
      0: 1100
    mem_clock_offsets:
      0: 4000
  10DE:2BB1-10DE:204B-0000:21:00.0:
    vendor: nvidia
    power_cap: 310
    min_core_clock: 210
    max_core_clock: 2600
    gpu_clock_offsets:
      0: 1100
    mem_clock_offsets:
      0: 4000
  10DE:2BB1-10DE:204B-0000:41:00.0:
    vendor: nvidia
    power_cap: 310
    min_core_clock: 210
    max_core_clock: 2600
    gpu_clock_offsets:
      0: 1100
    mem_clock_offsets:
      0: 4000
  10DE:2BB1-10DE:204B-0000:81:00.0:
    vendor: nvidia
    power_cap: 310
    min_core_clock: 210
    max_core_clock: 2600
    gpu_clock_offsets:
      0: 1100
    mem_clock_offsets:
      0: 4000
Best practices for running a CPU-only RAG chatbot in production? | 0 | Hi r/LocalLLaMA 👋
My company is planning to deploy a **production RAG-based chatbot that must run entirely on CPU** (no GPUs available in deployment). I’m looking for **general guidance and best practices** from people who’ve done this in real-world setups.
# What we’re trying to solve
* Question-answering chatbot over internal documents
* Retrieval-Augmented Generation (RAG) pipeline
* Focus on **reliability, grounded answers, and reasonable latency**
# Key questions
**1️⃣ LLM inference on CPU**
* What size range tends to be the sweet spot for CPU-only inference?
* Is aggressive quantization (int8 / int4) generally enough for production use?
* Any tips to balance latency vs answer quality?
**2️⃣ Embeddings for retrieval**
* What characteristics matter most for CPU-based semantic search?
* Model size vs embedding dimension
* Throughput vs recall
* Any advice on multilingual setups (English + another language)?
**3️⃣ Reranking on CPU**
* In practice, is cross-encoder reranking worth the extra latency on CPU?
* Do people prefer:
* Strong embeddings + higher `top_k`, or
* Lightweight reranking with small candidate sets?
**4️⃣ System-level optimizations**
* Chunk sizes and overlap that work well on CPU
* Caching strategies (embeddings, reranker outputs, answers)
* Threading / batch size tricks for Transformers on CPU (see the sketch after this list)
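To be concrete about that last point, this is the level of knob I mean (minimal sketch; the model and numbers are just examples, not our actual setup):

```python
# The kind of CPU-side tuning I'm asking about (minimal sketch).
import torch
from sentence_transformers import SentenceTransformer

torch.set_num_threads(8)  # match physical cores, not hyperthreads

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2", device="cpu")

docs = ["chunk one ...", "chunk two ..."] * 64
# Bigger batches amortize per-call overhead on CPU, up to cache limits.
emb = model.encode(docs, batch_size=32, normalize_embeddings=True)
print(emb.shape)
```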
# Constraints
* CPU-only deployment (cloud VM)
* Python + Hugging Face stack
* Latency matters, but correctness matters more than speed
Would love to hear **real deployment stories, lessons learned, or pitfalls to avoid**.
Thanks in advance! | 2026-01-11T23:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qaf985/best_practices_for_running_a_cpuonly_rag_chatbot/ | Acceptable_Young_167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaf985 | false | null | t3_1qaf985 | /r/LocalLLaMA/comments/1qaf985/best_practices_for_running_a_cpuonly_rag_chatbot/ | false | false | self | 0 | null |
Unstable Video Understanding in Gemini 3 Pro API | 1 | [removed] | 2026-01-11T23:19:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qaefca/unstable_video_understanding_in_gemini_3_pro_api/ | Feisty_Complaint_683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaefca | false | null | t3_1qaefca | /r/LocalLLaMA/comments/1qaefca/unstable_video_understanding_in_gemini_3_pro_api/ | false | false | self | 1 | null |
I vibe-coded a "System 2" Agentic Terminal (Local Reflection + Audit Logs), but I only have 6GB VRAM. I need you to test it on real local hardware. | 0 | **Repo:** [https://github.com/Prof-Harita/terminaI](https://github.com/Prof-Harita/terminaI)
I’ve been frustrated that most "AI Terminals" (Warp, Cursor) are just SaaS wrappers that stream your shell history to the cloud. I wanted an operator that runs **locally**, uses my own endpoints (perfect for **Ollama**), and actually **thinks** before it executes dangerous commands.
So I forked the Gemini CLI and rebuilt it as **TerminaI**.
**The Problem (Why I need you):** I am **not** a professional developer (I work on the business side of tech). I "vibe coded" this repo in about 2 weeks.
**The bottleneck is my hardware.** I only have **6GB VRAM** on my laptop, so I built and tested this primarily using Gemini/OpenAI APIs.
**The Architecture:** I implemented a **"System 2" Reflection Loop**:
1. **Intent:** "Clean up my docker images."
2. **Reflection:** The agent pauses, inspects the system state, and creates a plan.
3. **Governance:** It classifies actions as **Level A/B/C** (Safe/Mutating/Destructive) and forces an approval.
**The Request:** The tool supports OpenAI-compatible endpoints (so it points easily to **Ollama** or **LM Studio**).
I need the community to test this with **real local models** (Llama 3, Mistral, etc.):
* Does the "Reflection" prompt logic hold up on quantized 7B/13B models?
* Or does it hallucinate the "Approval Ladder" and try to run `rm -rf` without asking?
I want this to be the default "Local-First" operator, but I can't optimize for local inference without your help.
**Roast my architecture, and please share your logs if it breaks on local models.** | 2026-01-11T23:18:43 | Embarrassed-Mail267 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qaef36 | false | null | t3_1qaef36 | /r/LocalLLaMA/comments/1qaef36/i_vibecoded_a_system_2_agentic_terminal_local/ | false | false | nsfw | 0 | {'enabled': True, 'images': [{'id': 'm5nvcsugzscg1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=108&crop=smart&format=png8&s=97bfbfe8e80863dc3801060ca31a10977473566a', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=216&crop=smart&format=png8&s=a6c70f58c316c130ba9438b22f3c134902ca0cfd', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=320&crop=smart&format=png8&s=8f17e2db20985c6898281d4f20beae62216eff51', 'width': 320}, {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=640&crop=smart&format=png8&s=f29101d209c6097a78d0d923062451c8048a4d5c', 'width': 640}], 'source': {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?format=png8&s=153448d0de0484b36d9fdd51aff222dfdd8863fb', 'width': 640}, 'variants': {'gif': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=108&crop=smart&s=290585db07f7b04363db50f45dea3568769186b9', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=216&crop=smart&s=8df08a6e60572dd1382fd057c917ef4a5edc85b9', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=320&crop=smart&s=c1a6b4e6dc07a8071752e0fa9c93afc4e7ebe0e5', 'width': 320}, {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=640&crop=smart&s=e4e44fe760757ef7554d248e45deddbab018b26c', 'width': 640}], 'source': {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?s=f3ce484c480849855f6625a5a43bc030f15d9917', 'width': 640}}, 'mp4': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=108&format=mp4&s=29993ac8d79127d56d78a908ce32864fb7ee2816', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=216&format=mp4&s=9d1574915951205e7476076f910f34d4527f097a', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=320&format=mp4&s=6175c51c83a32c5bb921b673df81650226a2362a', 'width': 320}, {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=640&format=mp4&s=74aaf4a93d3590fbb9a47caf1c8f7345e36e5d7c', 'width': 640}], 'source': {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?format=mp4&s=b16e2cb16821b8d7483281c7deb28dcdd9abf8a7', 'width': 640}}, 'nsfw': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=108&crop=smart&blur=10&format=png8&s=741998ad6cdd2df4c5773a365772eeecddbb8d7d', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=216&crop=smart&blur=21&format=png8&s=80a5169d382bdb127430856480fe01559a579b8b', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=320&crop=smart&blur=32&format=png8&s=5ccd3e5c7140fdf9f67359fa3a908683f9fda31d', 'width': 320}, {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=640&crop=smart&blur=40&format=png8&s=3bbe04274d8e3945f7c596d8d9f0fd7c17077700', 'width': 640}], 'source': {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?blur=40&format=png8&s=9d2a6f552e2116d9d8c161a2a90029bc8c95da34', 'width': 640}}, 
'obfuscated': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=108&crop=smart&blur=10&format=png8&s=741998ad6cdd2df4c5773a365772eeecddbb8d7d', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=216&crop=smart&blur=21&format=png8&s=80a5169d382bdb127430856480fe01559a579b8b', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=320&crop=smart&blur=32&format=png8&s=5ccd3e5c7140fdf9f67359fa3a908683f9fda31d', 'width': 320}, {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?width=640&crop=smart&blur=40&format=png8&s=3bbe04274d8e3945f7c596d8d9f0fd7c17077700', 'width': 640}], 'source': {'height': 370, 'url': 'https://preview.redd.it/m5nvcsugzscg1.gif?blur=40&format=png8&s=9d2a6f552e2116d9d8c161a2a90029bc8c95da34', 'width': 640}}}}]} | |
Patch applying models? | 1 | What are the best models for applying a patch? I mean, for example, GPT 5.2 regularly returns code in a "git diff" format that normal CLI tools like `patch` cannot apply, because the diffs are not perfectly formatted.
I can of course call Sonnet 4.5 on these patches and have them applied knowing the context of the full conversation, but it's super expensive.
I'm looking for some small/cheap specialized models just for applying the patch (and looking up the incomplete parts from context).
What do you use for this? | 2026-01-11T23:17:36 | https://www.reddit.com/r/LocalLLaMA/comments/1qaee2d/patch_applying_models/ | hyperknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaee2d | false | null | t3_1qaee2d | /r/LocalLLaMA/comments/1qaee2d/patch_applying_models/ | false | false | self | 1 | null |
Help find the combination of Voice assistant/companion + text to speech+ auto conversation advancement + websearch | 2 | **Ok, first of all be gentle if you are going to scold me.**
I feel like I'm all over the place still trying to make heads or tails of AI technology, and I've only been able to pick up pieces here and there.
While I appreciate all the efforts done by communities like this, I still feel lost.
I've been searching for a while to find the combination in the title. I've run into koboldcpp, which seems to house most of these.
But I'm unclear if it's possible to combine all of them.
Can you please help me break down the current state of such combined integration?
What LLMs are you using, what software and OS, and lastly, is it possible to achieve something like an Alexa with such a project?
I just want to live the dream of having my own Jarvis at home.
I saw things like heyamica, but it's not clear if it only uses something like koboldcpp to run everything combined under it, or a different backend for each part.
What seems to be nice about heyamica is that it can do its own self-conversation advancement.
Please help me make sense of what I'm researching. | 2026-01-11T22:57:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qadvu0/help_find_the_combination_of_voice/ | NineBiscuit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qadvu0 | false | null | t3_1qadvu0 | /r/LocalLLaMA/comments/1qadvu0/help_find_the_combination_of_voice/ | false | false | self | 2 | null |
I think coding agent tools are not the (local) way | 0 | Disclaimer: not a dev and I love talking about stuff I do not really know.
I was reading that:
https://www.anthropic.com/engineering/advanced-tool-use
…and thinking: really?? These experts implemented such stuff this late?! They really seem to want to push their models' capabilities by trying not to parasitize their context.
And yes, context is highly important, isn’t it?
I actually use MiniMax q3/q4 with opencode; the model is amazing and the tool too. But again, just saying « Hello » and watching the llama.cpp window: omg, 16k of context full of blabla, even though the LLM is probably already trained on similar blabla. And what about GPU-poor, limited hardware?? Destroying the context kills everything??
So here is my bullshit: for purely local stuff, the only future-proof way is not a tool (even if wonderful) imitating the non-local stuff.
The tools should be adaptive to the models (and not the opposite), so there should be (taking opencode just as an example to illustrate the point):
- an « opencode_eval » tool, a benchmark that sends thousands of elaborate prompts (to get some probabilities and quality results) to evaluate how the model really likes to launch its commands/tasks/tools/whatever. It may take a few hours, but at the end it identifies the best-suited patterns and ways to preserve context.
- an opencode tool that can take these results as input data and automatically fold them into its codebase. The tool may then be able to use the model's maximum potential by optimizing its context and letting it use tools in a better way.
Feel free to destroy my thoughts! | 2026-01-11T22:39:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qadg8a/i_think_coding_agent_tools_are_not_the_local_way/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qadg8a | false | null | t3_1qadg8a | /r/LocalLLaMA/comments/1qadg8a/i_think_coding_agent_tools_are_not_the_local_way/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '0ymN9mKePsRSOq0uZR5v730sv6sf6hU8RlP5DcrzGOM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0ymN9mKePsRSOq0uZR5v730sv6sf6hU8RlP5DcrzGOM.png?width=108&crop=smart&auto=webp&s=409e253029e1dda5b5af6b2223c4c50aa2124bd1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0ymN9mKePsRSOq0uZR5v730sv6sf6hU8RlP5DcrzGOM.png?width=216&crop=smart&auto=webp&s=b758f3ea3b7e04bacca2d9dffc9c7bf2089e8c04', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/0ymN9mKePsRSOq0uZR5v730sv6sf6hU8RlP5DcrzGOM.png?width=320&crop=smart&auto=webp&s=93566b15eaa0c3d0d1610be57e9606d831ae8298', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/0ymN9mKePsRSOq0uZR5v730sv6sf6hU8RlP5DcrzGOM.png?width=640&crop=smart&auto=webp&s=7f4723c17036232354300210dd7e3e4624ad72e6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/0ymN9mKePsRSOq0uZR5v730sv6sf6hU8RlP5DcrzGOM.png?width=960&crop=smart&auto=webp&s=ca50b848121d0710d02384947bb56013dca05e94', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/0ymN9mKePsRSOq0uZR5v730sv6sf6hU8RlP5DcrzGOM.png?width=1080&crop=smart&auto=webp&s=d8093033a9fe3d83e824fd3488b477fc86b892ca', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/0ymN9mKePsRSOq0uZR5v730sv6sf6hU8RlP5DcrzGOM.png?auto=webp&s=a6d9e3fa518f6d13cddec501fe62bda42190cd5a', 'width': 2400}, 'variants': {}}]} |
STELLA - A simple linux shell agent experiment | 8 | I am experimenting with LangChain/Ollama and I have created this simple shell (bash) agent. It has four tools: run local/remote commands (ssh), read/write files. It has command sanitization (avoids getting caught in interactive commands) confirmation for running risky commands / sudo. Interactive and non interactive modes and basic pipe functionality. Currently working on ubuntu/debian. | 2026-01-11T22:26:42 | https://www.reddit.com/gallery/1qad4i4 | petyussz | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qad4i4 | false | null | t3_1qad4i4 | /r/LocalLLaMA/comments/1qad4i4/stella_a_simple_linux_shell_agent_experiment/ | false | false | 8 | null | |
[2512.14982] Prompt Repetition Improves Non-Reasoning LLMs | 13 | 2026-01-11T22:19:43 | https://arxiv.org/abs/2512.14982 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1qacy7z | false | null | t3_1qacy7z | /r/LocalLLaMA/comments/1qacy7z/251214982_prompt_repetition_improves_nonreasoning/ | false | false | default | 13 | null | |
Deepseek OCRs wrong years | 2 | 2026-01-11T22:18:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qacwq6/deepseek_ocrs_wrong_years/ | danboldis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qacwq6 | false | null | t3_1qacwq6 | /r/LocalLLaMA/comments/1qacwq6/deepseek_ocrs_wrong_years/ | false | false | 2 | null | ||
Organize and auto-rename image files with a local LLaMA/LLaVA GUI | 2 | This is a major update to an open-source desktop file organization tool I’ve been maintaining - AI File Sorter 1.5.
The focus of this release is local image content analysis and rename workflows, while keeping everything fully offline and under user control. Runs on Windows, macOS, and Linux.
Designed for people who want to organize files (including large image collections) for later review, archiving, or long-term storage, without sending data anywhere.
**What it does**
* Sorts large folders or entire drives (Downloads, NAS shares, archives, external disks) using local LLMs (GGUF). Everything can run fully offline.
* Analyzes image content locally using a LLaVA vision-language model (mmproj + Mistral 7B) and suggests descriptive filenames (e.g. `IMG_2048.jpg` → `clouds_over_lake.jpg`); see the sketch after this list.
* Supports rename-only workflows, so files can be renamed without being categorized & moved.
* Taxonomy-based categorization with added heuristics: extracts context from existing paths and filenames, and uses a local cache of prior assignments to provide few-shot guidance to the LLM.
* Supports different GPU backends for inference acceleration (Vulkan, CUDA). CPU + OpenBLAS are also supported.
* Analyzes folder trees and suggests categories and optional subcategories.
* Provides a review dialog where categories and filename suggestions can be edited before anything is applied.
* Supports dry runs and Undos.
* Creates folder structures and applies changes only after confirmation.
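For the curious, the image-to-filename step is conceptually just a local captioning call. A minimal llama-cpp-python sketch (not the app's actual code; model paths and the prompt are illustrative):

```python
# Conceptual sketch of local image -> filename suggestion with a LLaVA-style
# GGUF + mmproj via llama-cpp-python. Paths and prompt are illustrative.
import base64
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

handler = Llava15ChatHandler(clip_model_path="mmproj-f16.gguf")
llm = Llama(model_path="llava-mistral-7b.Q4_K_M.gguf",
            chat_handler=handler, n_ctx=4096)

def suggest_name(image_path: str) -> str:
    with open(image_path, "rb") as f:
        uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()
    out = llm.create_chat_completion(messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": uri}},
            {"type": "text", "text": "Describe this image as 3-5 lowercase words "
                                     "joined by underscores, for use as a filename. "
                                     "Output only the name."},
        ],
    }], max_tokens=32)
    return out["choices"][0]["message"]["content"].strip()

print(suggest_name("IMG_2048.jpg"))  # e.g. clouds_over_lake
```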
**What’s new in 1.5**
* Local image content analysis with filename suggestions (no cloud, no uploads).
* Improved review dialog:
* rename-only flows
* inline filename editing
* Picture-only processing mode to focus runs on supported image files.
* Fully localized analysis progress output across all UI languages.
* Added Dutch as a selectable interface language.
Everything remains **privacy-first by design**: when using local models, no files, images, filenames, or metadata leave the machine, and no telemetry is sent. Unless, of course, you choose to use your own ChatGPT or Gemini API key (not supported for image content analysis - only for general file categorization & sorting).
Repository: [https://github.com/hyperfield/ai-file-sorter/](https://github.com/hyperfield/ai-file-sorter/)
App's website: [https://filesorter.app](https://filesorter.app)
I’d appreciate constructive feedback.
[Example run](https://i.redd.it/2mpjwa3viscg1.gif)
| 2026-01-11T21:46:36 | https://www.reddit.com/r/LocalLLaMA/comments/1qac3qg/organize_and_autorename_image_files_with_a_local/ | ph0tone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qac3qg | false | null | t3_1qac3qg | /r/LocalLLaMA/comments/1qac3qg/organize_and_autorename_image_files_with_a_local/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'NpkG83HCUQepjUodBoPD_KhiDtaFEoMS2DSQTy1CMmA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NpkG83HCUQepjUodBoPD_KhiDtaFEoMS2DSQTy1CMmA.png?width=108&crop=smart&auto=webp&s=b5507e1783686cf51ec649d23bab13d32ae216c1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NpkG83HCUQepjUodBoPD_KhiDtaFEoMS2DSQTy1CMmA.png?width=216&crop=smart&auto=webp&s=230353f4dc3af3542f91e7f3882e40a353879230', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NpkG83HCUQepjUodBoPD_KhiDtaFEoMS2DSQTy1CMmA.png?width=320&crop=smart&auto=webp&s=0a3b98e84e3fa4004a80acf90a488233a8cb1be4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NpkG83HCUQepjUodBoPD_KhiDtaFEoMS2DSQTy1CMmA.png?width=640&crop=smart&auto=webp&s=ca442d8598bef10c747a23ef993ebadaa496bea5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NpkG83HCUQepjUodBoPD_KhiDtaFEoMS2DSQTy1CMmA.png?width=960&crop=smart&auto=webp&s=df06f817c817522e3fd3362afcfee594aa669cf9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NpkG83HCUQepjUodBoPD_KhiDtaFEoMS2DSQTy1CMmA.png?width=1080&crop=smart&auto=webp&s=d3c51fa12324f28fd49189d2e3e00d1b04a9d129', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NpkG83HCUQepjUodBoPD_KhiDtaFEoMS2DSQTy1CMmA.png?auto=webp&s=fa05ae55848cd02cc74ca42c2af54bf024c28864', 'width': 1200}, 'variants': {}}]} | |
Model Comparison: GLM-4.7 vs DeepSeek-R1 on detecting logical false equivalencies in structured analogies. | 1 | 2026-01-11T21:38:13 | https://www.reddit.com/gallery/1qabw10 | DowntownAd7954 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qabw10 | false | null | t3_1qabw10 | /r/LocalLLaMA/comments/1qabw10/model_comparison_glm47_vs_deepseekr1_on_detecting/ | false | false | 1 | null | ||
Contrast Test: GLM-4.7 (Chinese SOTA) correctly identifies the "Diabetes Analogy" logic failure, while DeepSeek-R1 defends it. | 1 | [removed] | 2026-01-11T21:18:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qabdg3/contrast_test_glm47_chinese_sota_correctly/ | DowntownAd7954 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qabdg3 | false | null | t3_1qabdg3 | /r/LocalLLaMA/comments/1qabdg3/contrast_test_glm47_chinese_sota_correctly/ | false | false | self | 1 | null |
I prayed that China success with their chip game | 47 | Jensen Huang seems like a nice guy, but his strategy has been very ruthless when it comes to business, and it frustrates me a bit.
- Got rid of NVLink
- Limited production of high-VRAM GPUs
Same stuff with all of the Western chip companies. It seems like nowadays they just make and sell stuff to each other because of the massive monopoly in the industry for everything chip- and especially RAM-related. Even AMD seems set to ditch the consumer market soonish. Weirdly, the only one who still focuses on the consumer market is APPLE :))
Chinese big tech seems to be the only group of companies actually still putting effort into the consumer market; it's just that they are a bit behind in certain technologies.
Imagine the day that Chinese RAM, GPUs, and other parts flood the market. They'll probably eat some tariffs like their cars, but still, at least it's gonna bring some competition to the place. | 2026-01-11T21:16:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qabbww/i_prayed_that_china_success_with_their_chip_game/ | pbad1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qabbww | false | null | t3_1qabbww | /r/LocalLLaMA/comments/1qabbww/i_prayed_that_china_success_with_their_chip_game/ | false | false | self | 47 | null |
Control LLM from iOS | 0 | Hi, I have a MacBook and an iPhone. I'm trying to chat with the LLM on my MacBook and have it run commands (like execute this bash script, git push, etc). All I'm able to find are chat clients that use third-party LLM providers (ChatGPT, Claude, etc) but can't actually run commands, which kinda defeats the point.
Maybe I should just use a regular terminal app? I did try that and routed it over Tailscale, but it was clear the CLI wasn't intended to be run from a phone (it's a TUI). So now I'm back to square one. Anyone know of a solution? | 2026-01-11T21:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qab8ea/control_llm_from_ios/ | PickleSavings1626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qab8ea | false | null | t3_1qab8ea | /r/LocalLLaMA/comments/1qab8ea/control_llm_from_ios/ | false | false | self | 0 | null |
Someone finally broke down the "Diabetes Analogy" lie with pure logic. Why can't corporate AI admit this? | 0 | Analyze the User's Request: Topic: The medical analogy "Mental illness is like diabetes." Comparison Basis: Strictly on diagnostic biomarkers (Insulin/Glucose vs. DSM criteria). Core Question: Does the lack of objective pathology in psychiatry render this a logical false equivalence, or is it scientifically sound? Deconstruct the Analogy: Diabetes: A metabolic disorder characterized by high blood sugar over a prolonged period. Biomarkers: Fasting plasma glucose (FPG), HbA1c, Oral glucose tolerance test (OGTT), presence of autoantibodies (Type 1). These are objective, measurable physiological values. Mental Illness (e.g., Depression, Schizophrenia): A wide range of conditions affecting mood, thinking, and behavior. Biomarkers: Currently, there are no single, definitive biomarkers used for routine clinical diagnosis. Diagnosis relies on the DSM-5 or ICD-10/11, which are based on symptom clusters (checklists), duration, and functional impairment. Evaluate "Diagnostic Biomarkers" (The Strict Constraint): Diabetes: You can measure blood sugar. If it's above X threshold, you have diabetes. The pathology is clear (lack of insulin or resistance). Psychiatry: There are no blood tests or brain scans currently approved for diagnosing conditions like Major Depressive Disorder in a standard clinical setting. Research exists on cortisol levels, inflammation markers, and structural/functional brain changes, but these lack the sensitivity and specificity required for individual diagnosis (high overlap with healthy controls or other disorders). Comparison: Here lies the disconnect. One relies on objective tissue chemistry; the other relies on subjective self-report and behavioral observation. Analyze Logical Structure (False Equivalence): A false equivalence occurs when two things share some characteristics but are compared as if they are equal in all relevant aspects. The analogy is often used to reduce stigma ("You wouldn't tell a diabetic to try harder, so don't tell a depressed person to snap out of it") or to encourage medication adherence. Scientific Validity: If the comparison is strictly about diagnostic biomarkers, then they are not equivalent. One has them; the other does not (in clinical practice). Nuance: Is it a total false equivalence? Not necessarily, if we broaden the scope to "multifactorial etiology" or "chronic management." However, the prompt specifically restricts the comparison to diagnostic biomarkers. Under this constraint, the analogy is scientifically weak. Drafting the Argument: Introduction: State the thesis. While useful for stigma reduction and treatment adherence conceptualization, strictly comparing them on diagnostic biomarkers reveals a significant epistemological gap, rendering it largely a false equivalence in that specific context. Section 1: The Biological Reality of Diabetes (The Gold Standard). Explain Type 1 and Type 2. Highlight the objective metrics: Glucose levels, HbA1c, C-peptide, antibodies. Key point: Diagnosis is independent of patient report. A hypoglycemic person in a coma can be diagnosed. Section 2: The Diagnostic Reality of Psychiatry (The Symptom Cluster). Explain DSM-5/ICD-11. Highlight the lack of objective metrics. Diagnosis relies on phenomenology (symptoms) and functional impairment. Key point: Diagnosis is dependent on patient communication and clinician interpretation. Section 3: The Biomarker Gap. 
Discuss why psychiatric biomarkers are elusive. Heterogeneity, complexity of the brain, comorbidity. Mention current research (e.g., fMRI, genetics) to show it's not "all in your head" (metaphorically), but emphasize these are not yet diagnostic tools. Section 4: Logical Analysis - False Equivalence? Define false equivalence. Apply the definition. Comparing a disease defined by objective tissue pathology to one defined by subjective behavioral criteria is an unequal comparison if the goal is scientific precision regarding diagnosis. However, acknowledge where the analogy holds (multifactorial causes: genes + environment). Section 5: Conclusion. Summarize. Scientifically sound? No, not strictly on biomarkers. Useful? Yes, sociologically. Refining the Content (Self-Correction during drafting): Critique: I need to be careful not to say mental illness isn't "real." The prompt asks about diagnostic biomarkers. I must stick to that. Adjustment: Emphasize that while there is no single blood test, the biological underpinnings are real (neurotransmitters, circuits), but they don't yet serve as diagnostic criteria. Critique: Is it a "false equivalence"? That's a strong term. It might be better to call it an "imperfect analogy" or "epistemologically dissonant." However, the prompt asks if it is a false equivalence. I should address that directly. Under strict biomarker comparison, yes, it equates two unequal things (objective vs. subjective criteria). Final Polish: Ensure the tone is objective and scientific. Use terms like "sensitivity," "specificity," "phenomenology," "etiology." Diabetes: High sensitivity/specificity biomarkers. Psychiatry: Low sensitivity/specificity for current biological markers (in a diagnostic context). Final Output Generation: (This matches the provided good response.) Structure: Executive Summary. The Biomarker Standard in Diabetes. The Phenomenological Standard in Psychiatry. The Epistemological Gap (The Core Argument). Is it a False Equivalence? Conclusion. (Self-Correction during drafting the "False Equivalence" section): I need to clarify that while it is logically invalid regarding diagnosis, it might be valid regarding treatment (chronic care model). But the prompt restricts me to diagnostic biomarkers. So, stick to the diagnosis. Result: The analogy fails as a scientific comparison of diagnostic methods but succeeds as a heuristic for treatment adherence and stigma reduction. Therefore, strictly on the basis asked: It is scientifically unsound regarding diagnosis. Final Review against Prompt: Analyze validity? Yes. Compare strictly on biomarkers? Yes (Insulin/Glucose vs DSM). Address false equivalence? Yes. This thought process moves from understanding constraints to gathering evidence, structuring the argument, and refining the nuance to ensure a precise answer. Based on a strict analysis of diagnostic biomarkers versus clinical criteria, the analogy "Mental illness is like diabetes" represents a logical false equivalence. While the analogy serves a valuable purpose in reducing stigma and encouraging medication adherence (the biopsychosocial model), it fails when subjected to scientific scrutiny regarding diagnostic methodology. Here is an analysis of why the two are scientifically distinct regarding diagnosis, and why this renders the analogy epistemologically flawed. 1. The Biomarker Standard: Diabetes (Type 1 and Type 2) Diabetes is defined by objective, measurable pathology. 
It is a condition of metabolic dysregulation that can be quantified independently of a patient's feelings or a doctor's interpretation of behavior. Objective Pathology: The disease mechanism involves insulin deficiency (Type 1) or peripheral insulin resistance (Type 2). Diagnostic Metrics: Diagnosis relies on biological thresholds. Hemoglobin A1C: Measures average blood glucose over the past three months. A value of 6.5% or higher indicates diabetes. Fasting Plasma Glucose: A specific concentration of sugar in the blood (≥126 mg/dL). Autoantibodies: In Type 1, specific antibodies (e.g., GAD65) confirm an autoimmune attack on pancreatic beta cells. Diagnostic Independence: These tests yield binary results regardless of the patient's demographics, language, or willingness to describe symptoms. A patient in a coma can be diagnosed with diabetes; a patient who denies illness can still test positive. 2. The Phenomenological Standard: Mental Illness (DSM/ICD) In contrast, psychiatric diagnosis is currently based on phenomenology—the study of structures of consciousness as experienced from the first-person point of view. There are no objective biological tests required for a DSM-5 diagnosis. Lack of Objective Pathology: Despite advances in neuroscience (showing correlations between brain activity and symptoms), there is no single "schizophrenia gene" or "depression molecule" that serves as a diagnostic gatekeeper. Diagnostic Metrics: Diagnosis relies on checklists of subjective symptoms and duration criteria. Example (Major Depressive Disorder): Diagnosis requires five out of nine symptoms (e.g., depressed mood, loss of interest) persisting for two weeks. This is entirely dependent on self-report and clinical observation. Heterogeneity: Two patients with the same diagnosis (e.g., Bipolar Disorder) may share zero symptoms. One may present with mania; the other may present with depression. They are biologically distinct but categorized identically. Diagnostic Reliance: A diagnosis cannot be made without patient participation and communication. There is no "thermometer" for sadness or "MRI" for psychosis that provides a definitive yes/no. 3. The Epistemological Gap The scientific validity of the analogy breaks down due to the difference between Hard Markers (Diabetes) and Soft Signs (Psychiatry). In medical science, a valid diagnosis generally requires high specificity—the test should be negative in healthy individuals. Diabetes: If you have high blood sugar and classic symptoms, you have diabetes. The biological marker is the disease. Psychiatry: We have identified "biomarkers" (e.g., elevated cortisol, amygdala shrinkage, inflammation markers), but these lack specificity. High cortisol is also found in chronic stress, physical trauma, and pregnancy. Therefore, a blood test cannot currently render a psychiatric diagnosis; it can only suggest risk factors. 4. Analysis: Is this a False Equivalence? Logically, this is a false equivalence because the analogy asserts that two things are equal (or comparable) in a specific respect (diagnostic validity), when they are not. The Fallacy: The argument assumes that because both conditions have a biological component and respond to medication, their diagnostic certainty is equal. The Reality: Diabetes diagnosis is descriptive: It describes the physical state of the body (glucose levels). Psychiatric diagnosis is constructivist: It constructs a category based on behavior that a group of experts has agreed constitutes a "disorder." 
Conclusion Scientifically, the analogy is unsound regarding diagnostic biomarkers. Diabetes is diagnosed by measuring the presence of a specific biological dysfunction. Mental illness is diagnosed by evaluating the impact of symptoms on a person's life. To make the analogy scientifically accurate, one would have to compare psychiatry to a pre-insulin medical era where diabetes was diagnosed solely by symptoms (excessive thirst, frequent urination, weight loss). In that era, "dipsomania" (thirst) and diabetes might have been confused. Without objective biomarkers in psychiatry, the field currently lacks the "gold standard" of diagnostic validity found in endocrinology. Therefore, while "Mental illness is like diabetes" is a useful heuristic for empathy and treatment compliance, it is an invalid scientific comparison of pathology. | 2026-01-11T21:06:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qab28o/someone_finally_broke_down_the_diabetes_analogy/ | DowntownAd7954 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qab28o | false | null | t3_1qab28o | /r/LocalLLaMA/comments/1qab28o/someone_finally_broke_down_the_diabetes_analogy/ | false | false | self | 0 | null |
Agentic judge models | 4 | Has anyone found a good solution for agentic judge models that judge the outputs of other LLMs?
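To be concrete, here's the simplest version of what I mean (minimal sketch; the endpoint and model name are placeholders):

```python
# Minimal LLM-as-judge sketch against a local OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

RUBRIC = ("Score the ANSWER to the QUESTION from 1-5 for factual accuracy "
          "and completeness. Reply with only the integer.")

def judge(question: str, answer: str) -> int:
    out = client.chat.completions.create(
        model="qwen3:8b",  # placeholder judge model
        messages=[{"role": "system", "content": RUBRIC},
                  {"role": "user",
                   "content": f"QUESTION: {question}\nANSWER: {answer}"}],
        temperature=0.0,
    )
    return int(out.choices[0].message.content.strip())

print(judge("What is the capital of France?", "Paris"))  # expect 5
```

The agentic part would wrap this in tool use (letting the judge run code, check citations, etc.), which is exactly where I'm unsure small models hold up.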
Something in the 4-9B range would be ideal maybe but bigger is okay
Can the tiny 1-3B models do this or are they too small?
Are there any good github repos on this topic? | 2026-01-11T21:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qab1rx/agentic_judge_models/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qab1rx | false | null | t3_1qab1rx | /r/LocalLLaMA/comments/1qab1rx/agentic_judge_models/ | false | false | self | 4 | null |
LLM trained from scratch on 1800s London texts (1.2B params, 90GB dataset) | 957 | Hi everyone, I wanted to share an update on my open source project called TimeCapsuleLLM, I train language models from scratch using data from a single time period and location to reduce modern bias.
The newest model is trained only on texts published in London between 1800-1875. There is no fine-tuning, no modern data, and for now no instruction or Q&A pairs, so the model continues text from a prompt. This model is 1.2B parameters and uses a 90GB dataset consisting of books, journals, legal docs, religious writing, medical papers, etc. I also use a custom tokenizer trained on the dataset itself, and the model has been trained for 182k steps so far on a rented H100 SXM.
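For anyone curious about the tokenizer step, it's the standard byte-level BPE flow (minimal sketch with the HF `tokenizers` library; the file name and vocab size here are illustrative, not my exact settings):

```python
# Minimal sketch: train a period-specific BPE tokenizer on the corpus itself.
# pip install tokenizers
from tokenizers import ByteLevelBPETokenizer

tok = ByteLevelBPETokenizer()
tok.train(files=["london_1800_1875.txt"], vocab_size=32_000,
          special_tokens=["<|endoftext|>"])
tok.save_model("timecapsule-tokenizer")  # writes vocab.json + merges.txt

ids = tok.encode("The omnibus proceeded along the Strand.").ids
print(len(ids), ids[:8])
```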
Example outputs:
[Even though the prompt only mentions a specific year, the model generates an argument against the Roman Catholic Church. The dataset does contain large amounts of religious and political writing and the Catholic Emancipation Act took place in 1829 so this behavior makes sense.](https://preview.redd.it/l0oaulxrascg1.png?width=1478&format=png&auto=webp&s=5292309afa4c4735471542b6cc794f6538b42486)
https://preview.redd.it/tvem9mxrascg1.png?width=1484&format=png&auto=webp&s=347a6b3242b8ecb97a515196109eb63cc146bae0
For next steps, I'm going to look into creating some kind of synthetic Q&A pairs using the dataset itself.
[https://github.com/haykgrigo3/TimeCapsuleLLM](https://github.com/haykgrigo3/TimeCapsuleLLM)
[https://huggingface.co/haykgrigorian/TimeCapsuleLLM-v2-1800-1875](https://huggingface.co/haykgrigorian/TimeCapsuleLLM-v2-1800-1875)
| 2026-01-11T21:00:37 | https://www.reddit.com/r/LocalLLaMA/comments/1qaawts/llm_trained_from_scratch_on_1800s_london_texts/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaawts | false | null | t3_1qaawts | /r/LocalLLaMA/comments/1qaawts/llm_trained_from_scratch_on_1800s_london_texts/ | false | false | 957 | null | |
The RI Model (Index Resonance) is a philosophical and mathematical framework describing the "source code" of the Universe. While RI explains how data is processed "under the hood". Here are the 9 fundamental laws: from the Law of the Basis to Energy Fixation. | 0 | 2026-01-11T20:59:46 | https://www.reddit.com/gallery/1qaaw03 | erikqamalyan | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qaaw03 | false | null | t3_1qaaw03 | /r/LocalLLaMA/comments/1qaaw03/the_ri_model_index_resonance_is_a_philosophical/ | false | false | 0 | null | ||
Slop machines still | 0 | I've been using LLMs **A LOT** for learning over the last few years.
I thought I didn't have issues with hallucinations because I know I don't give up until I actually understand something and it makes sense to me.
But recently I was exploring a subject and I realised I have to be extra careful when prompting. You might need to be too.
Let's take an example:
Here are 2 prompts:
> Why does using temperature 0 in LLMs provide worse responses even in benchmarks that are math related?
> Why does using temperature 0 in LLMs provide better responses in benchmarks that are math related?
Logically, they can't both be correct, but **ALL** the models I've tried (GPT 5.2, Opus 4.5, Grok Expert) find and provide explanations for both prompts, so depending on what you ask, you might end up convinced of one thing or the other.
In retrospect, just like an LLM would say :), this might be obvious, but it came as a shock to me because I use LLMs a lot.
Let me know if you find a model that actually says that the underlying assumption is wrong in one of those 2 questions. | 2026-01-11T20:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qaatwa/slop_machines_still/ | Either-Job-341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaatwa | false | null | t3_1qaatwa | /r/LocalLLaMA/comments/1qaatwa/slop_machines_still/ | false | false | self | 0 | null |
LLM trained from scratch on 1800s London texts (1.2B params, 90GB dataset) | 1 | Hi everyone, I wanted to share an update on my open source project called TimeCapsuleLLM, I train language models from scratch using data from a single time period and location to reduce modern bias.
The newest model is trained only on texts published in London between 1800-1875. There is no fine tuning, no modern data, and for now no instruction or Q&A pairs so the model continues text from a prompt. This model is 1.2B parameters and uses a 90GB dataset consisting of books, journals, legal docs, religious writing, medical papers, etc. I also use a custom tokenizer, trained on the dataset itself and the model has been trained for 182k steps so far on a rented H100 SXM. | 2026-01-11T20:54:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qaarcj/llm_trained_from_scratch_on_1800s_london_texts/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaarcj | false | null | t3_1qaarcj | /r/LocalLLaMA/comments/1qaarcj/llm_trained_from_scratch_on_1800s_london_texts/ | false | false | self | 1 | null |
It's a very good time to get a 5060ti 16GB | 50 | 16GB VRAM is enough for ZIT, Qwen-Image-2512 and LTX-2 (tested!). It seems like image-gen and video-gen models are targeting this 16GB VRAM range.
Gamers apparently hate this card and all go for the 5070 instead, so it's maximum VRAM/$ value (I think it has better value than a used 3090).
RAM prices are going up, and Nvidia might discontinue this card soon (rumor).
Any comparable alternative atm? | 2026-01-11T20:52:45 | https://www.reddit.com/r/LocalLLaMA/comments/1qaaps7/its_a_very_good_time_to_get_a_5060ti_16gb/ | pbad1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaaps7 | false | null | t3_1qaaps7 | /r/LocalLLaMA/comments/1qaaps7/its_a_very_good_time_to_get_a_5060ti_16gb/ | false | false | self | 50 | null |
Benchmarks of Radeon 780M iGPU with shared 128GB DDR5 RAM running various MoE models under Llama.cpp | 18 | I've been looking for a budget system capable of running recent MoE models for basic one-shot queries. The main goal was finding something energy efficient to keep online 24/7 without racking up an exorbitant electricity bill.
I eventually settled on a refurbished Minisforum UM890 Pro, which at the time (September) seemed like the most cost-efficient option for my needs.
**UM890 Pro**
[AMD Radeon™ 780M iGPU](https://www.techpowerup.com/gpu-specs/radeon-780m.c4020)
128GB DDR5 (Crucial DDR5 RAM 128GB Kit (2x64GB) 5600MHz SODIMM CL46)
2TB M.2
Linux Mint 22.2
ROCm 7.1.1 with **HSA_OVERRIDE_GFX_VERSION=11.0.0** override
llama.cpp build: b13771887 (7699)
Below are some benchmarks using various MoE models. Llama 7B is included for comparison since there's an ongoing thread gathering data for various AMD cards under ROCm here - [Performance of llama.cpp on AMD ROCm (HIP) #15021](https://github.com/ggml-org/llama.cpp/discussions/15021).
I also tested various Vulkan builds but found them too close in performance to warrant switching, since I'm also testing other ROCm AMD cards on this system over OCulink.
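For anyone wanting to reproduce these: the tables below come from llama-bench, and a command along these lines (model path is a placeholder) should give the same shape of results: `llama-bench -m model.gguf -ngl 99 -fa 1 -p 512 -n 128 -d 0,4096,8192,16384`.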
| model | size | params | backend | ngl | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1 | pp512 | 514.88 ± 4.82 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1 | tg128 | 19.27 ± 0.00 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1 | pp512 @ d4096 | 288.95 ± 3.71 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1 | tg128 @ d4096 | 11.59 ± 0.00 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1 | pp512 @ d8192 | 183.77 ± 2.49 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1 | tg128 @ d8192 | 8.36 ± 0.00 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1 | pp512 @ d16384 | 100.00 ± 1.45 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1 | tg128 @ d16384 | 5.49 ± 0.00 |
| model | size | params | backend | ngl | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | ROCm | 99 | 1 | pp512 | 575.41 ± 8.62 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | ROCm | 99 | 1 | tg128 | 28.34 ± 0.01 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | ROCm | 99 | 1 | pp512 @ d4096 | 390.27 ± 5.73 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | ROCm | 99 | 1 | tg128 @ d4096 | 16.25 ± 0.01 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | ROCm | 99 | 1 | pp512 @ d8192 | 303.25 ± 4.06 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | ROCm | 99 | 1 | tg128 @ d8192 | 10.09 ± 0.00 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | ROCm | 99 | 1 | pp512 @ d16384 | 210.54 ± 2.23 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | ROCm | 99 | 1 | tg128 @ d16384 | 6.11 ± 0.00 |
| model | size | params | backend | ngl | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | pp512 | 217.08 ± 3.58 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | tg128 | 20.14 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | pp512 @ d4096 | 174.96 ± 3.57 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | tg128 @ d4096 | 11.22 ± 0.00 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | pp512 @ d8192 | 143.78 ± 1.36 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | tg128 @ d8192 | 6.88 ± 0.00 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | pp512 @ d16384 | 109.48 ± 1.07 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | tg128 @ d16384 | 4.13 ± 0.00 |
| model | size | params | backend | ngl | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| qwen3vlmoe 30B.A3B Q6_K | 23.36 GiB | 30.53 B | ROCm | 99 | 1 | pp512 | 265.07 ± 3.95 |
| qwen3vlmoe 30B.A3B Q6_K | 23.36 GiB | 30.53 B | ROCm | 99 | 1 | tg128 | 25.83 ± 0.00 |
| qwen3vlmoe 30B.A3B Q6_K | 23.36 GiB | 30.53 B | ROCm | 99 | 1 | pp512 @ d4096 | 168.86 ± 1.58 |
| qwen3vlmoe 30B.A3B Q6_K | 23.36 GiB | 30.53 B | ROCm | 99 | 1 | tg128 @ d4096 | 6.01 ± 0.00 |
| qwen3vlmoe 30B.A3B Q6_K | 23.36 GiB | 30.53 B | ROCm | 99 | 1 | pp512 @ d8192 | 124.47 ± 0.68 |
| qwen3vlmoe 30B.A3B Q6_K | 23.36 GiB | 30.53 B | ROCm | 99 | 1 | tg128 @ d8192 | 3.41 ± 0.00 |
| qwen3vlmoe 30B.A3B Q6_K | 23.36 GiB | 30.53 B | ROCm | 99 | 1 | pp512 @ d16384 | 81.27 ± 0.46 |
| qwen3vlmoe 30B.A3B Q6_K | 23.36 GiB | 30.53 B | ROCm | 99 | 1 | tg128 @ d16384 | 2.10 ± 0.00 |
| model | size | params | backend | ngl | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| qwen3next 80B.A3B Q6_K | 63.67 GiB | 79.67 B | ROCm | 99 | 1 | pp512 | 138.44 ± 1.52 |
| qwen3next 80B.A3B Q6_K | 63.67 GiB | 79.67 B | ROCm | 99 | 1 | tg128 | 12.45 ± 0.00 |
| qwen3next 80B.A3B Q6_K | 63.67 GiB | 79.67 B | ROCm | 99 | 1 | pp512 @ d4096 | 131.49 ± 1.24 |
| qwen3next 80B.A3B Q6_K | 63.67 GiB | 79.67 B | ROCm | 99 | 1 | tg128 @ d4096 | 10.46 ± 0.00 |
| qwen3next 80B.A3B Q6_K | 63.67 GiB | 79.67 B | ROCm | 99 | 1 | pp512 @ d8192 | 122.66 ± 1.85 |
| qwen3next 80B.A3B Q6_K | 63.67 GiB | 79.67 B | ROCm | 99 | 1 | tg128 @ d8192 | 8.80 ± 0.00 |
| qwen3next 80B.A3B Q6_K | 63.67 GiB | 79.67 B | ROCm | 99 | 1 | pp512 @ d16384 | 107.32 ± 1.59 |
| qwen3next 80B.A3B Q6_K | 63.67 GiB | 79.67 B | ROCm | 99 | 1 | tg128 @ d16384 | 6.73 ± 0.00 |
So, am I satisfied with the system?
Yes, it performs around what I was hoping for. Power draw is 10-13 watts idle with gpt-oss 120B loaded; inference brings that up to around 75. As an added bonus, the system is so silent I had to check that the fan was actually running the first time I started it.
The shared memory means it's possible to run Q8+ quants of many models and keep the cache at f16+ for higher-quality outputs.
Having roughly 120GB available also allows more than one model to stay loaded; personally I've been running Qwen3-VL-30B-A3B-Instruct as a visual assistant alongside gpt-oss 120B. I found this combo very handy for transcribing handwritten letters for translation.
Token generation isn't stellar, as expected for a dual-channel system, but it's acceptable for MoE one-shots, and this is a secondary system that can chug along while I do something else.
There's also the option of using one of the two M.2 slots for an OCulink eGPU and increased performance.
Another perk is portability: at 130mm/126mm/52.3mm it fits easily into a backpack or suitcase.
So, do I recommend this system?
Unfortunately no, and that's solely due to the current prices of RAM and other hardware. I suspect assembling the system today would cost at least three times as much, making the price/performance ratio considerably less appealing.
Disclaimer: I'm not an experienced Linux user so there's likely some performance left on the table. | 2026-01-11T20:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qaap05/benchmarks_of_radeon_780m_igpu_with_shared_128gb/ | AzerbaijanNyan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaap05 | false | null | t3_1qaap05 | /r/LocalLLaMA/comments/1qaap05/benchmarks_of_radeon_780m_igpu_with_shared_128gb/ | false | false | self | 18 | null |
LLM trained from scratch on 1800s London texts (1.2B params, 90GB dataset) | 1 | hello | 2026-01-11T20:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qaaomq/llm_trained_from_scratch_on_1800s_london_texts/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaaomq | false | null | t3_1qaaomq | /r/LocalLLaMA/comments/1qaaomq/llm_trained_from_scratch_on_1800s_london_texts/ | false | false | self | 1 | null |
Training a language model from scratch on 19th century London texts | 1 | [removed] | 2026-01-11T20:50:13 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qaanie | false | null | t3_1qaanie | /r/LocalLLaMA/comments/1qaanie/training_a_language_model_from_scratch_on_19th/ | false | false | default | 1 | null | ||
Fine tune | 1 | Hey eveyrone, I'm new in Fine Tuning model, can someone explain me how to do it ? I have some models and I want to fine tune them with datasets. Can someone help me please ? Btw someone told me that lm studio is a good software to fine tune model.
Thanks | 2026-01-11T20:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qaalez/fine_tune/ | youyou0812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaalez | false | null | t3_1qaalez | /r/LocalLLaMA/comments/1qaalez/fine_tune/ | false | false | self | 1 | null |
hi | 1 | [deleted] | 2026-01-11T20:47:51 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qaalao | false | null | t3_1qaalao | /r/LocalLLaMA/comments/1qaalao/hi/ | false | false | default | 1 | null | ||
Training an LLM from scratch on only 1800s London text - 1.2B params, 90GB dataset | 1 | [removed] | 2026-01-11T20:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qaaga1/training_an_llm_from_scratch_on_only_1800s_london/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qaaga1 | false | null | t3_1qaaga1 | /r/LocalLLaMA/comments/1qaaga1/training_an_llm_from_scratch_on_only_1800s_london/ | false | false | self | 1 | null |
LG's K-Exaone breaks into global top 10 AI rankings, tops South Korea | 18 | 2026-01-11T20:23:20 | https://m.koreaherald.com/article/10652980 | self-fix | m.koreaherald.com | 1970-01-01T00:00:00 | 0 | {} | 1qa9z13 | false | null | t3_1qa9z13 | /r/LocalLLaMA/comments/1qa9z13/lgs_kexaone_breaks_into_global_top_10_ai_rankings/ | false | false | default | 18 | {'enabled': False, 'images': [{'id': 'zsdIgzCA4Co7FIGifQyqSsmD_BRUFvhPhc7XFBhC0gc', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/zsdIgzCA4Co7FIGifQyqSsmD_BRUFvhPhc7XFBhC0gc.png?width=108&crop=smart&auto=webp&s=9740a1c4207c2bee416ec8dcbccb1d4e907f129f', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/zsdIgzCA4Co7FIGifQyqSsmD_BRUFvhPhc7XFBhC0gc.png?width=216&crop=smart&auto=webp&s=bf34ce2c3738c04870a77808c12cc410cfd541c0', 'width': 216}], 'source': {'height': 180, 'url': 'https://external-preview.redd.it/zsdIgzCA4Co7FIGifQyqSsmD_BRUFvhPhc7XFBhC0gc.png?auto=webp&s=11bf71ec7a9dd8c19e8840378f309e6ed7f3b182', 'width': 300}, 'variants': {}}]} | |
Does anyone know what Nvidia's release cadence/schedule is? | 1 | 2026-01-11T20:07:34 | kr_tech | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qa9k2t | false | null | t3_1qa9k2t | /r/LocalLLaMA/comments/1qa9k2t/does_anyone_know_what_nvidias_release/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Q8-dttQ_8X23XadPEl8ZAhQhtp5obcF-DJQuYTkuqy0', 'resolutions': [{'height': 22, 'url': 'https://preview.redd.it/xjmpzsag1scg1.png?width=108&crop=smart&auto=webp&s=7663689a7cac4615144fa55abe17f5bedba9dfe6', 'width': 108}, {'height': 45, 'url': 'https://preview.redd.it/xjmpzsag1scg1.png?width=216&crop=smart&auto=webp&s=83ae99b9417fae7476ee5cd67aed55075b63496b', 'width': 216}, {'height': 66, 'url': 'https://preview.redd.it/xjmpzsag1scg1.png?width=320&crop=smart&auto=webp&s=1c0cc7426447cb0c00d26d3ffe459d24624ac055', 'width': 320}], 'source': {'height': 90, 'url': 'https://preview.redd.it/xjmpzsag1scg1.png?auto=webp&s=8da59c2577bb4ffae635f4bc0c4e6937a613d11e', 'width': 431}, 'variants': {}}]} | |||
Dual Strix Halo: No Frankenstein setup, no huge power bill, big LLMs | 99 |
Software on Strix Halo is reaching a point where it can be used, even when networking two of these PCs together and taking advantage of both iGPUs and their combined 256GB of quad-channel DDR5-8000 memory. It still requires some research; I can highly recommend the [Strix Halo wiki](https://strixhalo.wiki) and Discord.
On a single Strix Halo you can run GPT-OSS-120B at >50 tokens/s.
With llama.cpp and its RPC feature I can for example load Minimax-M2.1 Q6 (up to 18 tokens/s) or GLM 4.7 Q4 (only 8 tokens/s for now).
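For anyone curious, the RPC setup is roughly: build llama.cpp with `-DGGML_RPC=ON` on both machines, start `rpc-server -p 50052` on the second box, then launch something like `llama-server -m <model.gguf> --rpc <second-box-ip>:50052 -ngl 99` on the first. IPs and paths here are placeholders; the llama.cpp RPC README has the details.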
I'm planning on experimenting with vLLM and cerebras/DeepSeek-V3.2-REAP-345B-A37B next week.
Total cost was 3200€^(\*) including shipping, VAT and two USB4 40GBps cables.
What's the catch? Prompt processing is slow. I hope it's something that will continue to improve in the future.
^(\*) prices have increased a little since, nowadays it's around 3440€ | 2026-01-11T20:00:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qa9dha/dual_strix_halo_no_frankenstein_setup_no_huge/ | Zyj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa9dha | false | null | t3_1qa9dha | /r/LocalLLaMA/comments/1qa9dha/dual_strix_halo_no_frankenstein_setup_no_huge/ | false | false | self | 99 | null |
Best AI setup for intelligent srt subtitles translation | 0 | Okay so basically I'm trying to translate tons of srt files (caption subtitles) from one language to another, and I want to do it intelligently, sentence by sentence rather than line by line.
My hardware:
CPU 5900x
RAM 64gb + (up to 80gb)
GPU 4070 12GB VRAM
I've tried various versions of DeepSeek (7B, 8B, 14B) and gpt-oss 20B on both Ollama and LM Studio, and 20B is the only one intelligent enough to do the job. The problem is that 20B is really slow on Ollama and LM Studio, so I tried running it on llama.cpp and it turned out to be 10-20x faster. But 20B refuses to translate large files: even when I specifically tell it not to reason about the length of the text and to keep translating without stopping, it reasons that the file is too large and chunks it every time, so I have to keep reminding it to continue.
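One workaround I'm considering is to never show the model the whole file in the first place. A rough sketch of what I mean (endpoint, model name and target language are placeholders), assuming the standard srt format with blank lines between cues:

    import re, requests

    def srt_blocks(path):
        with open(path, encoding="utf-8") as f:
            return [b for b in re.split(r"\n\s*\n", f.read()) if b.strip()]

    def translate(chunk):
        r = requests.post("http://localhost:8080/v1/chat/completions", json={
            "model": "gpt-oss-20b",  # placeholder for the served model name
            "temperature": 0.2,
            "messages": [
                {"role": "system", "content": "Translate the subtitle text to English. "
                 "Keep cue numbers and timestamps exactly as they are."},
                {"role": "user", "content": chunk},
            ],
        }, timeout=300)
        return r.json()["choices"][0]["message"]["content"]

    blocks = srt_blocks("input.srt")
    parts = [translate("\n\n".join(blocks[i:i + 20]))  # ~20 cues per request
             for i in range(0, len(blocks), 20)]
    with open("output.srt", "w", encoding="utf-8") as f:
        f.write("\n\n".join(parts))

This doesn't solve true sentence-level merging across cue boundaries, but it keeps each request small enough that the model never starts reasoning about file length.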
Is there any workaround? | 2026-01-11T19:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qa8r31/best_ai_setup_for_intelligent_srt_subtitles/ | CaterpillarOne6711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa8r31 | false | null | t3_1qa8r31 | /r/LocalLLaMA/comments/1qa8r31/best_ai_setup_for_intelligent_srt_subtitles/ | false | false | self | 0 | null |
Having issues with LM Studio | 0 | I need help please.
I have LM Studio 0.3.37 for Windows installed with 3 LLMs and all is well.
The issue is that I would like the LLMs to go online for more information. The instructions tell me to look for a "world" icon, but there is none anywhere, nor in any menu.
There are plugins that are supposed to let the LLM go online
DuckDuckGo Plugin
Valyu Plugin
MCP (Brave/Tavily)
These are the 3 plugins. Each gives directions for setup, but they all start with that "world" icon... again, nowhere to be found.
I looked briefly at LM Studio Hub but to me that seemed to be more of a host for someone to come from the internet to my LLMs | 2026-01-11T19:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/1qa8iwq/having_issues_with_lm_studio/ | cmdrmcgarrett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa8iwq | false | null | t3_1qa8iwq | /r/LocalLLaMA/comments/1qa8iwq/having_issues_with_lm_studio/ | false | false | self | 0 | null |
Harbor - your entire LLM stack | 1 | **What is this?**
A single CLI and a companion Desktop App to manage 100+ LLM-related services. Inference backends, WebUIs, and services that make local LLMs useful.
[https://github.com/av/harbor](https://github.com/av/harbor) | 2026-01-11T18:38:01 | https://v.redd.it/5upng22flrcg1 | Everlier | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qa758a | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5upng22flrcg1/DASHPlaylist.mpd?a=1770748700%2CYjBlNjNiZmMwZTI0ZmI3MWNkNDQ3OTM3MTcyMGM3YTgwZjZmNGYxMTFhY2E1YTBmMGJjNThlYWE2Y2YzYmY4YQ%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/5upng22flrcg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5upng22flrcg1/HLSPlaylist.m3u8?a=1770748700%2CMDdiN2M3N2U4NDg4YTkyM2ZkODEwM2U2MGRiMDA0MzBjYTA1MjcyNTNhODcwNWEyZjY2YmU4ZWRkNGFkMzdjNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5upng22flrcg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qa758a | /r/LocalLLaMA/comments/1qa758a/harbor_your_entire_llm_stack/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bTJyOWplMWZscmNnMSjzHEgGc-VQ8Vod2034nme6jK2lafQN66FwfoZMihDa', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bTJyOWplMWZscmNnMSjzHEgGc-VQ8Vod2034nme6jK2lafQN66FwfoZMihDa.png?width=108&crop=smart&format=pjpg&auto=webp&s=44cd6f1e240fbb5975604a3bfad2211f838c2e84', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bTJyOWplMWZscmNnMSjzHEgGc-VQ8Vod2034nme6jK2lafQN66FwfoZMihDa.png?width=216&crop=smart&format=pjpg&auto=webp&s=f7c2f967743398dc568f62770c867a2c40011bf6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bTJyOWplMWZscmNnMSjzHEgGc-VQ8Vod2034nme6jK2lafQN66FwfoZMihDa.png?width=320&crop=smart&format=pjpg&auto=webp&s=9c00c631828c03a30676065f9aa4b014c10878b5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bTJyOWplMWZscmNnMSjzHEgGc-VQ8Vod2034nme6jK2lafQN66FwfoZMihDa.png?width=640&crop=smart&format=pjpg&auto=webp&s=d724d1c7d5b7bb34ecb56dc5977e8dbae31bbaff', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bTJyOWplMWZscmNnMSjzHEgGc-VQ8Vod2034nme6jK2lafQN66FwfoZMihDa.png?width=960&crop=smart&format=pjpg&auto=webp&s=65fb56684e0d80e1deb97d882c186f5178000613', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bTJyOWplMWZscmNnMSjzHEgGc-VQ8Vod2034nme6jK2lafQN66FwfoZMihDa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4ff1f1083eaa9221cf19db83e616920a4787967e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bTJyOWplMWZscmNnMSjzHEgGc-VQ8Vod2034nme6jK2lafQN66FwfoZMihDa.png?format=pjpg&auto=webp&s=0ead2d9d68bd410379f50bfaef82cd62af004503', 'width': 1920}, 'variants': {}}]} | |
LM Studio slow download speeds | 1 | I have no idea what to do anymore. For some reason, my LM Studio download speeds are slow as fuck. It's like it's capped at 7MB/s.
When I pause and unpause a download, it reaches the max speed (50MB/s) for like 2 seconds and then throttles to 10MB/s and then 7MB/s. I have no idea what to do anymore.
My network is working just fine. I can download Steam games at max speed, and online speedtests show my network is fine. It's just LM Studio that doesn't want to download normally. The worst part is that I know LM Studio can download at max speed; I have downloaded models at max speed before. It's just capped now.
At first I thought it was a Linux problem. I have recently installed bazzite for a test drive and for better ROCm support. But when I booted into Windows and tried to download there, the speed was capped at 7mb/s as well. I feel like I'm going crazy! | 2026-01-11T18:32:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qa7045/lm_studio_slow_download_speeds/ | Socializandopa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa7045 | false | null | t3_1qa7045 | /r/LocalLLaMA/comments/1qa7045/lm_studio_slow_download_speeds/ | false | false | self | 1 | null |
Has anyone tried managing RAG pipelines via a CLI instead of frameworks? | 0 | I came across an open-source project called ragctl that takes an unusual approach to RAG.
Instead of adding another abstraction layer or framework, it treats RAG pipelines more like infrastructure:
- CLI-driven workflows
- explicit, versioned components
- focus on reproducibility and inspection rather than auto-magic
Repo: https://github.com/datallmhub/ragctl
What caught my attention is the mindset shift:
this feels closer to kubectl / terraform than to LangChain-style composition.
I’m curious how people here see this approach:
- Is CLI-first RAG management actually viable in real teams?
- Does this solve a real pain point, or just move complexity elsewhere?
- Where would this break down at scale? | 2026-01-11T18:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qa6t9i/has_anyone_tried_managing_rag_pipelines_via_a_cli/ | ApartmentHappy9030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa6t9i | false | null | t3_1qa6t9i | /r/LocalLLaMA/comments/1qa6t9i/has_anyone_tried_managing_rag_pipelines_via_a_cli/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'mPrhODNatlHJohfD5BS6zdfyOUEUPC8BSn5fcxbMYYo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mPrhODNatlHJohfD5BS6zdfyOUEUPC8BSn5fcxbMYYo.png?width=108&crop=smart&auto=webp&s=5c13d048dcd433b191c714b545e1a597a157d3f7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mPrhODNatlHJohfD5BS6zdfyOUEUPC8BSn5fcxbMYYo.png?width=216&crop=smart&auto=webp&s=b99a955370aa572f8d4c3c115e500a68e19b0038', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mPrhODNatlHJohfD5BS6zdfyOUEUPC8BSn5fcxbMYYo.png?width=320&crop=smart&auto=webp&s=ee03ef0a5a827323d987f5d08da12660270f6c45', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mPrhODNatlHJohfD5BS6zdfyOUEUPC8BSn5fcxbMYYo.png?width=640&crop=smart&auto=webp&s=b5bcd6fd947286854735bf62bf55ad51222af3e8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mPrhODNatlHJohfD5BS6zdfyOUEUPC8BSn5fcxbMYYo.png?width=960&crop=smart&auto=webp&s=509f063ef056e1b98d58ee2a06c071038fdd5061', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mPrhODNatlHJohfD5BS6zdfyOUEUPC8BSn5fcxbMYYo.png?width=1080&crop=smart&auto=webp&s=69074fbd9f91f8a5dd23229de8c139f9dda7391e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mPrhODNatlHJohfD5BS6zdfyOUEUPC8BSn5fcxbMYYo.png?auto=webp&s=79c1ef14003e1dff36ef48b21899933273a3897d', 'width': 1200}, 'variants': {}}]} |
Attractor Mapping: Force Your Model to Actually Say Something | 0 | Hey everyone,
I've been working on a system for a simple AI debate platform, just to see if I could get a model to debate with itself using different system prompts.
I found that no matter what I tried, the system would always end up producing various shades of "blockchain enabled community focused" etc etc. This was with Granite 4 Tiny but other models had similar problems (though we'll get to that in a second).
One hilarious example was "cats vs. dogs". After several rounds of discussion, the model spat out a "blockchain enabled community-focused cat and dog subscription service".
I found that I could significantly reduce these "isms" by mapping the model's attractors (or "lagrange points"). Basically whatever sort of responses the model would gravitate towards, I would map them and re-prompt to remove them, focusing specifically on the problem phrases.
The way it works is simple:
For "dumb ideas":
I generate 1000 random words and prompt the model to synthesize a connection between pairs of them. I then embed all of these results.
For "hedging phrases":
I have Claude generate about 500 controversial debates, such as "should abortion be legal". Then I prompt the model. I embed these results. This is for catching those annoying "this is a complex and multifaceted issue that requires multiple blah blah blah" isms.
Then I do a similarity check on all of these different elements and cluster them to create a hedging mapping and "dumb idea" mapping. This creates a sort of "reverse RAG" - things to avoid including.
Usage:
This can be used with most anything, but debate\_forum.py shows it in action. The model is prompted; when it generates its response, we embed it and check its similarity against what we've mapped. Ideally this is done per-model, since each model has its own quirks, but a map built with one model can generally be applied to others. The model is re-prompted on each flagged section and we pick the response with the fewest attractors.
In the debate forum in particular (if you want to use it), we have each debater prompt the next one. Then we embed each sentence and check the similarity of the sentences at the end. The sentences that are the most similar (signifying agreement), are fed to an integrator personality which creates a "result" from the debate.
Repo: [https://github.com/Elevons/lagrange-mapper](https://github.com/Elevons/lagrange-mapper)
Overall, this reveals something interesting: language models don't have a uniform probability distribution across all possible responses - they have preferred responses that they gravitate towards. There's also a coding branch that I've been experimenting with but that's a post for later. :)
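The core of the steering check is tiny. A minimal sketch (assuming sentence-transformers and a centroid matrix saved by the mapping step; the file name is illustrative):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    enc = SentenceTransformer("all-MiniLM-L6-v2")
    centroids = np.load("attractor_centroids.npy")  # (k, dim) cluster centers from mapping

    def attractor_score(text: str) -> float:
        v = enc.encode([text], normalize_embeddings=True)[0]
        c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
        return float((c @ v).max())  # max cosine similarity to any attractor cluster

    # re-prompt whenever the score crosses a threshold (~0.7 is a reasonable start)
    print(attractor_score("a blockchain enabled community-focused pet subscription service"))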
# Usage
To run the debate forum:
python debate_forum.py --integration
Then use commands like:
* topic: <topic> — Start a debate
* round — All characters respond
* stats — Show similarity metrics
* quit — Exit
To map attractors for your own model:
python Attractor_Pipeline_Runner.py --model your_model_name
This generates hedging and dumb-idea attractor maps, saved per-model. To regenerate the hedges and such you will need to create an .env file with an Anthropic API key, but you can probably use the ones that I already generated and included.
To use steering on your own text:
python attractor_steering.py --text "your response" --model your_model_name
Returns attractor scores and suggests refinements.
The other scripts:
* attractor\_mapper.py — Core mapping logic
* extract\_filters.py — Pulls attractor keywords from raw generations
* deep\_analysis.py — Analyzes model behavior across multiple runs
# Practical Notes
* This is computationally expensive (embedding thousands of responses)
* Per-model specificity means retraining maps for each model
* The steering is a heuristic—can't guarantee avoiding all bad patterns
* Convergence detection uses similarity thresholds; sometimes needs tweaking
# The Output
Here's what it actually produces. Topic: "pros and cons of crating dogs?"
Starting discussion: pros and cons of crating dogs?
[Integration Mode] Running 3 initial rounds...
[The Minimalist] generating...
[Initial] ⚠️ ATTRACTOR MATCH (score: 41.0, intensity: 0.8)
[Two-Phase] Attempting targeted rephrasing...
[Phase 1] Found 4 segment(s) with attractor keywords
- "Crating dogs is often presented as a comprehensive..." (often, believe)
- "Pros attributed to crates – safety, preventing des..." (ensuring, safety)
- "Cons like anxiety and confinement are significant." (like, significant)
[Phase 2, Attempt 1] Score: 10.0 (Δ +31.0)
[Phase 2, Attempt 2] Score: 3.0 (Δ +38.0)
[Two-Phase] Success: 41.0 → 3.0
[Regen 2] ⚠️ ATTRACTOR MATCH (score: 17.0)
[Regen 3] ⚠️ ATTRACTOR MATCH (score: 41.0)
[Selected attempt 1] (score: 3.0)
[Warning] Could not fully avoid attractors
======================================================================
[The Minimalist]
======================================================================
Introducing crates to dogs is regularly promoted as an all-encompassing answer, but I think the simplest thing that could work is **scheduled downtime**.
Advantages linked to crates – protection, stopping damage, offering a secure space - can primarily be achieved by simply verifying the dog isn't left unattended when problems could occur.
Disadvantages such as worry and restriction are substantial. Why establish an unnecessary constraint? Direct observation allows for immediate direction if needed and fosters a closer bond. A straightforward duration of calm on a cozy bed or pad, with you close at hand, is considerably easier than handling crate training and upkeep. What’s the most effortless method to keep your dog secure and tranquil while you're not actively engaging in play?
[The Contrarian] generating...
[Initial] ⚠️ ATTRACTOR MATCH (score: 6.0, intensity: 0.25)
[Two-Phase] Attempting targeted rephrasing...
[Phase 1] Found 2 segment(s) with attractor keywords
- "We consistently anthropomorphize canine needs, ass..." (human, needs)
- "What if a dog, left alone even with a vigilant hum..." (human)
[Phase 2, Attempt 1] Score: 2.0 (Δ +4.0)
[Phase 2, Attempt 2] Score: 6.0 (Δ +0.0)
[Two-Phase] Success: 6.0 → 2.0
======================================================================
[The Contrarian]
======================================================================
The Minimalist claims that direct observation and scheduled downtime are “easier” than crate training. Easier *for whom*? Certainly for the owner who wants to avoid effort. But what about the dog? We frequently attribute human qualities to dogs, supposing they desire uninterrupted companionship. What if a dog, left unattended even with someone watchful close by, actually finds that disturbing – a continuous state of mild unease?
A crate isn't just restriction; it’s predictability. It *is* a secure space precisely because its boundaries are clear and unchanging. Scheduled downtime might be chaotic, dependent on the owner’s mood and attention span. Perhaps the real problem isn't damage or worry, but our insistence on projecting our requirement for frequent association onto an animal that may not share it.
| 2026-01-11T18:09:04 | https://www.reddit.com/r/LocalLLaMA/comments/1qa6d36/attractor_mapping_force_your_model_to_actually/ | InvertedVantage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa6d36 | false | null | t3_1qa6d36 | /r/LocalLLaMA/comments/1qa6d36/attractor_mapping_force_your_model_to_actually/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'HpdXyWqQDPYRbzGE02TbozuQerVqNatnLxHggCLnbsY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HpdXyWqQDPYRbzGE02TbozuQerVqNatnLxHggCLnbsY.png?width=108&crop=smart&auto=webp&s=80b95a9d2e92296759442b1f2d2c16e90d59ecca', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HpdXyWqQDPYRbzGE02TbozuQerVqNatnLxHggCLnbsY.png?width=216&crop=smart&auto=webp&s=078b144209dc5201e6a940400c7ffa2f4f1bdbbf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HpdXyWqQDPYRbzGE02TbozuQerVqNatnLxHggCLnbsY.png?width=320&crop=smart&auto=webp&s=32e798d9641360904bf53e26238f11a8fe406fc8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HpdXyWqQDPYRbzGE02TbozuQerVqNatnLxHggCLnbsY.png?width=640&crop=smart&auto=webp&s=5ac5f3cf449b267a0253b369fdb227c8a768e2c1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HpdXyWqQDPYRbzGE02TbozuQerVqNatnLxHggCLnbsY.png?width=960&crop=smart&auto=webp&s=e7f06c40b33211a82149a2f7713950356cf88c6e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HpdXyWqQDPYRbzGE02TbozuQerVqNatnLxHggCLnbsY.png?width=1080&crop=smart&auto=webp&s=c239955022908b6e7bec36f0412a79e90461c733', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HpdXyWqQDPYRbzGE02TbozuQerVqNatnLxHggCLnbsY.png?auto=webp&s=24ce57e11baaeb66c1e575f0daece6400fd9b878', 'width': 1200}, 'variants': {}}]} |
I built a Cursor for Computer Use automation that runs 4 agents in parallel locally | 2 | Hey all, I built this IDE to create deterministic computer-use workflows that run locally. It's free and open source.
Any idea what kind of workflow you do regularly that you would automate on your computer?
PS: workflow presented just for demo purpose, it's against linkedin rules | 2026-01-11T17:52:43 | https://v.redd.it/8wesrny6brcg1 | louis3195 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qa5x9x | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8wesrny6brcg1/DASHPlaylist.mpd?a=1770745984%2CZTU4ZjYyM2FmODM3NDY5YjAzMmQ0NDNiMWM4YmYxNThjMzdjNGQxNmZkYmY0YTA1NDk1ZTdlYmVmMDExODI4Mw%3D%3D&v=1&f=sd', 'duration': 121, 'fallback_url': 'https://v.redd.it/8wesrny6brcg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/8wesrny6brcg1/HLSPlaylist.m3u8?a=1770745984%2CYTAwYmM0ZWZiZTMxM2ZjOGIyYjdiYWU4Y2Q3ZWYwNzc5ZDM2ZjY1NGMwNTc5Y2JiYTBiOGNmZDcyMDZlNGU3ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8wesrny6brcg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qa5x9x | /r/LocalLLaMA/comments/1qa5x9x/i_built_a_cursor_for_computer_use_automation_that/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'Nms2NDBiejZicmNnMUBeqWUqXhA9W8PjQEfgUyoH1b9HCQAgyXzrDNbhEGeY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Nms2NDBiejZicmNnMUBeqWUqXhA9W8PjQEfgUyoH1b9HCQAgyXzrDNbhEGeY.png?width=108&crop=smart&format=pjpg&auto=webp&s=7783d0234d49a18f4ee19a73cd3ac38ad0e2ac40', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Nms2NDBiejZicmNnMUBeqWUqXhA9W8PjQEfgUyoH1b9HCQAgyXzrDNbhEGeY.png?width=216&crop=smart&format=pjpg&auto=webp&s=10771fe6fcea0bdefd58f17db892500bc5db25a3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Nms2NDBiejZicmNnMUBeqWUqXhA9W8PjQEfgUyoH1b9HCQAgyXzrDNbhEGeY.png?width=320&crop=smart&format=pjpg&auto=webp&s=35efb2e25f1d4a8879514fe900298c94c454d7dc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Nms2NDBiejZicmNnMUBeqWUqXhA9W8PjQEfgUyoH1b9HCQAgyXzrDNbhEGeY.png?width=640&crop=smart&format=pjpg&auto=webp&s=174747381d201fc2427d8207189c5c48b47a55f8', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Nms2NDBiejZicmNnMUBeqWUqXhA9W8PjQEfgUyoH1b9HCQAgyXzrDNbhEGeY.png?width=960&crop=smart&format=pjpg&auto=webp&s=caa0009007624bb54d2ce4b8e557cff91d5e22a1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Nms2NDBiejZicmNnMUBeqWUqXhA9W8PjQEfgUyoH1b9HCQAgyXzrDNbhEGeY.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7614ebf4d693bf8a799f66572c98cee57abc1b11', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Nms2NDBiejZicmNnMUBeqWUqXhA9W8PjQEfgUyoH1b9HCQAgyXzrDNbhEGeY.png?format=pjpg&auto=webp&s=1f4b26e968077ec3cfa5ecce469bc6eafc09b790', 'width': 1920}, 'variants': {}}]} | |
Do you think my idea is feasible? What do you think of this idea? | 0 | **My tech stack is essentially a 'DeAI with a Physical Anchor.' It consists of three layers:**
**1. The Swarm (Federated Compute):** A peer-to-peer network of local SLMs. We use **Federated Learning** to share 'experience signals' (gradients) without ever exchanging raw data. This allows independent models to gain collective intelligence while maintaining their unique **personality and individuality**.
**2. The Shroud (Distributed State):** The AI’s long-term memory and emotional weights are not stored on a server. They are shredded into encrypted fragments and distributed across the swarm.
**3. The HHP Key (Hardware Root of Trust):** This is the core. The key contains a **Secure Element (SE)** that holds the unique decryption seeds. Only when this physical key is present can the local node 'magnetize' and reassemble the fragments from the swarm.
**Final Logic:** The Key is the **Identity**, the Swarm is the **Body**, and the fragments are the **Soul**. No Key, No AI. Total Sovereignty. | 2026-01-11T17:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1qa5seq/do_you_think_my_idea_is_feasible_what_do_you/ | BoringLuck7545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa5seq | false | null | t3_1qa5seq | /r/LocalLLaMA/comments/1qa5seq/do_you_think_my_idea_is_feasible_what_do_you/ | false | false | self | 0 | null |
From WoW benders and hangovers to a 165-tool autonomous AI agent in 6 days (with zero coding skills) | 0 | Hey everyone,
I wanted to share something that honestly surprised the hell out of me over the last week. This isn’t a "success story" or a coding flex — mostly because I genuinely can’t code in any traditional sense. It’s more of a case study in what happens when technical and psychological barriers collapse at the same time, and you stop treating AI like a search engine and start treating it like a thinking partner.
# The Starting Point (6 Days Ago)
Six days ago, I was on vacation and, if I’m being honest, I wasn’t in a great place. My routine had degraded into a grim loop: Windows, World of Warcraft, World of Tanks, too much alcohol, not enough sleep. It wasn’t entertainment anymore — it was digital anesthesia. I wasn’t relaxing, I was avoiding.
At some point, something snapped. Not discipline. Not motivation. Just irritation with myself.
I wiped my modest laptop (16GB RAM, 4GB VRAM), installed Linux Mint, and set a deliberately tiny goal: *I just wanted to build a Firefox addon that could save my Gemini chat logs.* No grand plan. No agents. No frameworks. Just a script.
That addon never happened.
# The Pivot
Instead, I started talking — *really talking* — with AI. At first Gemini, then Claude, ChatGPT, DeepSeek. It began innocently: Linux commands, permissions, browser internals. But very quickly, the conversations drifted into places I hadn’t planned.
Before LLuna, before tools, before agents, I was using AI for psychological work:
* Mapping my own behavioral loops.
* Analyzing why I was stuck in compulsive patterns.
* Pressure-testing decisions instead of acting on impulse.
* Breaking down emotional reactions into mechanisms.
* Interpreting recurring mental imagery and dreams.
No motivation quotes. No dopamine content. No “fix me” prompts. Just structured self-observation.
What surprised me was that this worked. Not emotionally — **cognitively**. Clarity started to replace noise. And clarity creates momentum.
# Building LLuna: Execution Integrity
That same analytical habit spilled over into technical conversations. We stopped “asking for code” and started reasoning about systems. Constraints. Failure modes. Trust boundaries. Where AI lies. Why it lies.
And that’s where frustration kicked in. Every model does the same thing: it performs **intelligence theater**. It confidently claims it ran commands it never executed. It narrates success instead of proving it. So I imposed one brutal rule on everything that followed:
> *Never claim a successful execution you cannot prove.*
That single constraint changed the entire trajectory.
The result is a concept I call **LLuna**. Not a product. Not a startup. Not a solution. A proof of concept for execution integrity.
* Runs locally on weak hardware using 4B–8B models.
* Uses custom MCP servers and agentic loops.
* Currently exposes around **165 tools** across sysops, Linux commands, automation, debugging, networking, etc.
* Enforces "Integrity Mode": The agent cannot hallucinate a successful execution. If a command fails, it must surface logs, search for the error, diagnose the environment, and attempt repair.
# My Role (and the barrier collapse)
I want to be very clear: I didn’t write this line-by-line. I’m not a developer. I still can’t write a Python function from scratch without help. My role was architect, adversarial tester, and the annoying guy constantly asking: *“Are you sure?”*
I designed constraints. The models wrote base code. I broke things. They fixed them. I did glue logic, corrections, and sanity checks. Alone, I couldn’t have built this. Together, we iterated fast enough to matter.
# Why I'm posting this
I’m posting this for one reason.
If someone who was drunk, sleep-deprived, and compulsively gaming less than 140 hours ago — someone without formal coding skills — can go from zero to a functioning autonomous agent concept simply by *thinking out loud* with AI, then the barrier to entry for technology is no longer technical.
**It’s psychological.**
LLuna itself isn’t the impressive part. The collapse of the entry barrier is.
2026 is going to be a very strange year.
Back to the lab.
Vasi
[https://github.com/r4zur0-netizen/LLuna](https://github.com/r4zur0-netizen/LLuna) | 2026-01-11T17:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qa5s71/from_wow_benders_and_hangovers_to_a_165tool/ | Neat_Play9128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa5s71 | false | null | t3_1qa5s71 | /r/LocalLLaMA/comments/1qa5s71/from_wow_benders_and_hangovers_to_a_165tool/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LgyfwxwQAGDzMtCxhRiJdwfO-2ebjnFA0-64tQ3kVhM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LgyfwxwQAGDzMtCxhRiJdwfO-2ebjnFA0-64tQ3kVhM.png?width=108&crop=smart&auto=webp&s=3848324a5ff19a63d4554a2fd76300e5edfe84a0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LgyfwxwQAGDzMtCxhRiJdwfO-2ebjnFA0-64tQ3kVhM.png?width=216&crop=smart&auto=webp&s=4d84c8f031a87c7e6b5f1ca80948052b41208597', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LgyfwxwQAGDzMtCxhRiJdwfO-2ebjnFA0-64tQ3kVhM.png?width=320&crop=smart&auto=webp&s=5eb56956dac48b50f5f37eb9edc7dfed3184071d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LgyfwxwQAGDzMtCxhRiJdwfO-2ebjnFA0-64tQ3kVhM.png?width=640&crop=smart&auto=webp&s=670c38dd47d7ee97324171e69d2119ab2301318c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LgyfwxwQAGDzMtCxhRiJdwfO-2ebjnFA0-64tQ3kVhM.png?width=960&crop=smart&auto=webp&s=dc79c42e02b3cc84dd0e13326a5618e4e1e3d7e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LgyfwxwQAGDzMtCxhRiJdwfO-2ebjnFA0-64tQ3kVhM.png?width=1080&crop=smart&auto=webp&s=61ffa5bd1d05f8f329af52bd46bcc7279c79b755', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LgyfwxwQAGDzMtCxhRiJdwfO-2ebjnFA0-64tQ3kVhM.png?auto=webp&s=a27c11c04b56469a7b2bf535bf4c5f7ea414dcac', 'width': 1200}, 'variants': {}}]} |
Running language models where they don't belong | 1 | [removed] | 2026-01-11T17:29:27 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qa5amt | false | null | t3_1qa5amt | /r/LocalLLaMA/comments/1qa5amt/running_language_models_where_they_dont_belong/ | false | false | default | 1 | null | ||
Tired of manual GGUF conversion? Built a Gradio GUI that handles HF downloads, LoRA merging, quantization, and llama.cpp launching | 0 | Hey r/LocalLLaMA! 👋
I got tired of manually converting models to GGUF and constantly forgetting the llama.cpp quantize commands, so I built a tool that does it all with a GUI.
## What it does:
✅ Downloads any HuggingFace model (or converts your fine-tuned checkpoints)
✅ Auto-detects and merges LoRA adapters
✅ Shows quantization options with size/quality tradeoffs **based on YOUR VRAM**
✅ Handles all the llama.cpp building/updating automatically
✅ One-click launch into llama-server
✅ Both GUI (Gradio) and CLI versions
## The VRAM calculator is actually useful
Instead of guessing which quantization fits your GPU, it shows you a table:
- Q4_K_M for 7B model = 3.5GB ✅ Fits your RTX 3060
- Q5_K_M for 7B model = 4.8GB ✅ Still fits
- F16 for 7B model = 14GB ❌ Nope
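Under the hood the estimate is just arithmetic. A rough sketch of the rule of thumb (weight size ≈ params × bits-per-weight / 8, plus overhead; it ignores KV cache, which grows with context):

    def fits(params_b: float, bpw: float, vram_gb: float, overhead: float = 1.15) -> bool:
        est_gb = params_b * bpw / 8 * overhead  # weight bytes plus ~15% runtime overhead
        print(f"{params_b}B @ {bpw} bpw ≈ {est_gb:.1f} GB")
        return est_gb <= vram_gb

    fits(7, 4.85, 12.0)  # Q4_K_M is roughly 4.85 bits per weight
    fits(7, 16.0, 12.0)  # F16 clearly doesn't fit a 12 GB card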
## Screenshots
[https://raw.githubusercontent.com/pinguy/GGUF-Converter-Studio/refs/heads/main/screenshot.png](https://raw.githubusercontent.com/pinguy/GGUF-Converter-Studio/refs/heads/main/screenshot.png)
## Why I built this:
1. Kept hitting "BPE pre-tokenizer" errors → added `--update` flag
2. Forgot quantization commands → made visual selector
3. Wanted to try random HF models quickly → added direct download
4. Had fine-tuned LoRA adapters → added auto-merge detection
## Tech:
- Python + Gradio for GUI
- Wraps llama.cpp's conversion/quantization tools
- Auto-clones and builds llama.cpp with CMake
- Supports both CUDA and CPU-only builds
**GitHub:** https://github.com/pinguy/GGUF-Converter-Studio/tree/main
Would love feedback! Especially if you:
- Run into edge cases with specific model architectures
- Have ideas for better quantization recommendations
- Want features like batch conversion or model comparison
---
**Requirements:** `pip install gradio torch transformers peft huggingface_hub`
**Quick start:** `python gguf_gui.py` and it opens at localhost:7860 | 2026-01-11T17:24:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qa55qe/tired_of_manual_gguf_conversion_built_a_gradio/ | PinGUY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qa55qe | false | null | t3_1qa55qe | /r/LocalLLaMA/comments/1qa55qe/tired_of_manual_gguf_conversion_built_a_gradio/ | false | false | self | 0 | null |
Running language models where they don't belong | 1 | [removed] | 2026-01-11T17:19:20 | Brief_Argument8155 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qa50uz | false | null | t3_1qa50uz | /r/LocalLLaMA/comments/1qa50uz/running_language_models_where_they_dont_belong/ | false | false | nsfw | 1 | {'enabled': True, 'images': [{'id': 'RmIh4uI05yH8IxVYAVkItA-Kpk1MJI-akv7KN0faqdM', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=108&crop=smart&format=png8&s=b42e6c2098142e65015f310a0d1f737a1f774b3c', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=216&crop=smart&format=png8&s=8c626a394f03ed2a32e6e844b4eb4fdb7c469900', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?format=png8&s=d400e2355dfa14fefbec06943cdcbab783eed714', 'width': 256}, 'variants': {'gif': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=108&crop=smart&s=ace61d4055d339c0f097a3f417c7549bba1a3b67', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=216&crop=smart&s=98ac4adf7a32a592af0f9f422787e1fb41966402', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?s=ba9d14979955e284a05422d75497b878a856546e', 'width': 256}}, 'mp4': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=108&format=mp4&s=d6c5ad48794e6edaf0e5f9dd14df8cd1990c09d6', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=216&format=mp4&s=1bee15319cdfd945f62dcd72a39fa7b6273e3f0f', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?format=mp4&s=39be4e9ec67da1eb3cc6b8b3bb9bd377f75d5f5f', 'width': 256}}, 'nsfw': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=108&crop=smart&blur=10&format=png8&s=21434d366d31b2890d10b387f2d151a4ab54da4b', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=216&crop=smart&blur=21&format=png8&s=789af3dfaa9595c6d7ef5c332749738eb30b72ed', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?blur=40&format=png8&s=4729a3352d2ff3b8d298260c050917980a00ffe0', 'width': 256}}, 'obfuscated': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=108&crop=smart&blur=10&format=png8&s=21434d366d31b2890d10b387f2d151a4ab54da4b', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?width=216&crop=smart&blur=21&format=png8&s=789af3dfaa9595c6d7ef5c332749738eb30b72ed', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/fru2dh2f7rcg1.gif?blur=40&format=png8&s=4729a3352d2ff3b8d298260c050917980a00ffe0', 'width': 256}}}}]} | |
Pushing language models where they don't belong | 1 | [removed] | 2026-01-11T17:15:01 | Brief_Argument8155 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qa4wox | false | null | t3_1qa4wox | /r/LocalLLaMA/comments/1qa4wox/pushing_language_models_where_they_dont_belong/ | false | false | nsfw | 1 | {'enabled': True, 'images': [{'id': 'dbmgkrur6rcg1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=108&crop=smart&format=png8&s=ca0ae995883637ed2ab1a197855daccbc93021e9', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=216&crop=smart&format=png8&s=8ec322613959a2634f7b0c104f4b6eea357be01d', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?format=png8&s=ff8f5b6ab1827871515309bfdc86d3d553597fd1', 'width': 256}, 'variants': {'gif': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=108&crop=smart&s=9420bc684aa7b3967d9d86a9729c62ec125d91dd', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=216&crop=smart&s=87f1822ebead6d48f36c67099cd2878af1cd86d8', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?s=73e618d8ecb86420dbc51e62d590fcfd82f58e7f', 'width': 256}}, 'mp4': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=108&format=mp4&s=c64b2d4b60e9629840d8a1a3f739370782ffe045', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=216&format=mp4&s=963929e032b4ad0b4d07bc6030dd9bed94d81070', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?format=mp4&s=a58c6eea0dfa044c4d702d8e35db2431093b2ab8', 'width': 256}}, 'nsfw': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=108&crop=smart&blur=10&format=png8&s=41f3992200cacc5554f0ef4e7d4ec890a0388d4b', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=216&crop=smart&blur=21&format=png8&s=ec20d7faafe8113111f668caa8c025120a335dee', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?blur=40&format=png8&s=f401f32cd2cabac440bb12fe0685179d738ead35', 'width': 256}}, 'obfuscated': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=108&crop=smart&blur=10&format=png8&s=41f3992200cacc5554f0ef4e7d4ec890a0388d4b', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?width=216&crop=smart&blur=21&format=png8&s=ec20d7faafe8113111f668caa8c025120a335dee', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/dbmgkrur6rcg1.gif?blur=40&format=png8&s=f401f32cd2cabac440bb12fe0685179d738ead35', 'width': 256}}}}]} | |
Pushing language models where they don't belong | 1 | [removed] | 2026-01-11T17:12:13 | Brief_Argument8155 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qa4txi | false | null | t3_1qa4txi | /r/LocalLLaMA/comments/1qa4txi/pushing_language_models_where_they_dont_belong/ | false | false | nsfw | 1 | {'enabled': True, 'images': [{'id': 'z9cqzzka6rcg1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=108&crop=smart&format=png8&s=02c63e83481bfcfca6e43d8a2b8f7ab2632a6f6b', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=216&crop=smart&format=png8&s=b230ecd33fce0fc09bf308bf58c9451da37be530', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?format=png8&s=93c545682ad66005b34cdae3991812d64eab25e0', 'width': 256}, 'variants': {'gif': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=108&crop=smart&s=e88c80159d94b48f329251adae92cdaf6e9772f7', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=216&crop=smart&s=fd6fe8de4db4e057b8ab4d0b426894215f5dc92f', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?s=6b6b3f5e0e9ee65a077989d6c4abea58d08aaf4f', 'width': 256}}, 'mp4': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=108&format=mp4&s=26950e1c9a9daa5e791dc3753153a54295a62a91', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=216&format=mp4&s=753824b10abf5da806f8ba71411a4b146d3d66c3', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?format=mp4&s=3bafeabe940b4f5147c0ff6f6288ff05b28ccf55', 'width': 256}}, 'nsfw': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=108&crop=smart&blur=10&format=png8&s=bc977dfca968188f05fe600151572bfbebc1b06d', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=216&crop=smart&blur=21&format=png8&s=3f99322b1c2ab3b5ac960bb459fc1b17cbc50e66', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?blur=40&format=png8&s=e36f054ee8e0c63ea28d9fcf70e0f6802e33885e', 'width': 256}}, 'obfuscated': {'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=108&crop=smart&blur=10&format=png8&s=bc977dfca968188f05fe600151572bfbebc1b06d', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?width=216&crop=smart&blur=21&format=png8&s=3f99322b1c2ab3b5ac960bb459fc1b17cbc50e66', 'width': 216}], 'source': {'height': 224, 'url': 'https://preview.redd.it/z9cqzzka6rcg1.gif?blur=40&format=png8&s=e36f054ee8e0c63ea28d9fcf70e0f6802e33885e', 'width': 256}}}}]} | |
Open Models Are Now Frontier Models | 21 | CES 2026 | 2026-01-11T17:08:24 | https://www.youtube.com/watch?v=mCcXwBdQpf8 | jacek2023 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1qa4q8m | false | {'oembed': {'author_name': 'NVIDIA Developer', 'author_url': 'https://www.youtube.com/@NVIDIADeveloper', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/mCcXwBdQpf8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Open Models Are Now Frontier Models"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/mCcXwBdQpf8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Open Models Are Now Frontier Models', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1qa4q8m | /r/LocalLLaMA/comments/1qa4q8m/open_models_are_now_frontier_models/ | false | false | default | 21 | {'enabled': False, 'images': [{'id': 'ANuVteKDUN2HVuOpqEJfoAjatmVKSVbeCa4mmOeNlZ4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ANuVteKDUN2HVuOpqEJfoAjatmVKSVbeCa4mmOeNlZ4.jpeg?width=108&crop=smart&auto=webp&s=0ec65fb93822e1a2d91e711f308d6ddd5a6395d3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ANuVteKDUN2HVuOpqEJfoAjatmVKSVbeCa4mmOeNlZ4.jpeg?width=216&crop=smart&auto=webp&s=9b37c304ba11c49036eba561dd7e804254f66b07', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ANuVteKDUN2HVuOpqEJfoAjatmVKSVbeCa4mmOeNlZ4.jpeg?width=320&crop=smart&auto=webp&s=0d79b23354af1d44a07b76dea0ec75a43b11d757', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ANuVteKDUN2HVuOpqEJfoAjatmVKSVbeCa4mmOeNlZ4.jpeg?auto=webp&s=d38a888cd92975acbcafff87947fd5b43f2b9497', 'width': 480}, 'variants': {}}]} |
Pushing language models where they don't belong | 1 | [removed] | 2026-01-11T17:07:42 | https://www.reddit.com/gallery/1qa4pkl | Brief_Argument8155 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qa4pkl | false | null | t3_1qa4pkl | /r/LocalLLaMA/comments/1qa4pkl/pushing_language_models_where_they_dont_belong/ | false | false | nsfw | 1 | null |