# 8 Radeon R9700s vs 8 RTX 3090 2 slot blower style

So I'm building a 4U rackmount 8-GPU server and I'm really intrigued by the R9700s. I currently have a single RTX 3090 in my prototyping PC; it's been good and more than capable of handling what I'm doing. Although the R9700s have more memory, they don't have as much memory bandwidth, and I'm not sure how they stack up in terms of compute. The R9700s would be slightly cheaper and brand new compared to the RTX 3090s (difficult to get and most likely used, so uncertain condition), and I also wouldn't have CUDA licensing issues with GeForce cards. My concern is getting used RTX 3090 blower-style cards that don't work well at over $1,500 apiece. The condition or origin of the cards would be difficult to know, IMHO. I think a normal 3-slot RTX 3090 is easier to source in good used condition. Also, if ROCm isn't as bad as it's made out to be and the cards have even 70% of the performance of the RTX 3090s, then I don't mind at all going with them. It would save me money to spend on, say, more system RAM or a second CPU.
Questions:
1.) Has anyone benchmarked these two cards against each other to see which one is better?
2.) If you were building a dense rack box today, would you pick R9700s for the VRAM+blower design, or stick with 3090s for CUDA ecosystem+perf?
3.) Anyone know if RTX 3090 2-slot blower-style cards are as easy to source or as reliable as the normal used RTX 3090s?
Below are the models I'm currently using:
**Video Processing**
|**Task**|**Provider**|**Model**|
|:-|:-|:-|
|**Transcription**|Faster Whisper|large-v3-turbo (CUDA/GPU)|
|**Vision Analysis**|Ollama|qwen2.5vl:7b|
|**LLM (summaries)**|Ollama|qwen3:8b|
|**Embeddings**|Ollama|qwen3-embedding:8b (1024-dim)|
**Querying**
|**Task**|**Provider**|**Model**|
|:-|:-|:-|
|**RAG/Synthesis**|Ollama|qwen3:4b-instruct|
|**Embeddings**|Ollama|qwen3-embedding:8b (1024-dim)|
**Hardware Settings**
* **Whisper Device**: CUDA (GPU)
* **Whisper Compute Type**: float16

*Posted by u/mr__smooth on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1qldsdx/8_radeon_r9700s_vs_8_rtx_3090_2_slot_blower_style/*
# Idea Validation: A "Passive Observer" MCP Server that reads live terminal buffers (tmux/PTY) so I don't have to re-run commands

Hey everyone,
I’m working on a workflow problem I hit constantly while coding with AI (Claude Desktop, Cursor, etc.), and I wanted to see if anyone else would use this or if a solution already exists.
The Problem: Right now, most "Terminal" MCP tools are active executors. The AI says "run npm test," executes it, and sees the result. But often, I already have a server running, or a build process that crashed 5 minutes ago in a pane I have open. To get the AI to fix it, I have to either:
Manually copy-paste the stack trace into the chat.
Ask the AI to re-run the command (which might take time or be risky).
The Idea: A "Terminal Log" MCP I want to build an MCP server that acts as a passive observer.
It hooks into my terminal session (maybe via a tmux session or a PTY wrapper).
The AI can query `read_log(session_id)` to see the last N lines of output without running anything new.
Example: I ask, "Why did the build fail?" -> AI reads the buffer from the background process -> AI fixes it.
The Tech Stack Plan: I'm thinking of bridging this via tmux or zellij since they already buffer output, or writing a simple wrapper command.
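For the tmux route, the core read path is tiny, since tmux already exposes its scrollback via `capture-pane`. A rough sketch in Python (the `read_log` tool name and the `session:window.pane` target format are illustrative, not an existing MCP API):

```python
import subprocess

def tail_lines(text: str, n: int) -> str:
    """Pure helper: keep only the last n lines of a text buffer."""
    return "\n".join(text.splitlines()[-n:])

def read_log(target: str, n: int = 100) -> str:
    """Passively read the last n lines of a tmux pane's scrollback.

    `target` is a tmux pane spec like "mysession:0.0". Nothing is
    executed inside the pane; tmux just dumps what it already buffered
    (-p prints to stdout, -S sets the start line, negative = scrollback).
    """
    out = subprocess.run(
        ["tmux", "capture-pane", "-p", "-t", target, "-S", f"-{n}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return tail_lines(out, n)
```

An MCP server would register `read_log` as its one read-only tool; the security surface is then whatever happens to sit in the scrollback, which is worth flagging in the docs.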
Questions for you:
Does a tool like this already exist in the MCP ecosystem?
Would you prefer a wrapper (e.g., `mcp-run npm start`) or a tmux integration?
Is this a security nightmare, or a huge workflow unlock?
Thanks!

*Posted by u/d3v1sx on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1qlddjn/idea_validation_a_passive_observer_mcp_server/*
# Community Survey: OpenTrustLLM (feature priorities & pain points) + small giveaway

Hi everyone — I’m part of the team behind [TrustLLM](https://github.com/HowieHwong/TrustLLM), and we’re upgrading it into **OpenTrustLLM**, a broader open-source ecosystem focused on improving the *trustworthiness and practical usability* of LLM systems (including tool use / agents / evaluation workflows).
We’d really appreciate feedback from people who actually build, evaluate, or deploy LLMs. Your input will directly influence what we prioritize next (e.g., core features, UX/workflows, evaluation/benchmarking needs, reliability gaps, integrations, docs).
**Survey link (single form):**
[https://forms.gle/vxh7smWuQVdFtFR29](https://forms.gle/vxh7smWuQVdFtFR29)
**Giveaway (optional):** To thank participants, we’ll randomly select **3** completed responses to receive **7-day access to Claude Pro + Claude Code**. We’ll run the draw **this week**.
* The survey is for product/community research and roadmap planning.
* Feedback is welcome even if you’re not currently using TrustLLM—especially if you have strong opinions about what’s missing in today’s “trustworthy LLM/agent” tooling.
Thanks a lot for your time and for supporting open-source work! 🙏

*Posted by u/Negative-Actuary-328 on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1qld7ap/community_survey_opentrustllm_feature_priorities/*
# Self-hosted code search for your LLMs - built this to stop wasting context on irrelevant files

Hey everyone, been working on this for a while and finally got it to a point worth sharing.
Context Engine is basically a self-hosted retrieval system specifically for codebases. It works with any MCP client (Cursor, Cline, Windsurf, Claude, VS Code, etc.).
The main thing: hybrid search that actually understands code structure. It combines dense embeddings with lexical search, AST parsing for symbols/imports, and optional micro-chunking when you need tight context windows.
Why we built it: got tired of either (a) dumping entire repos into context or (b) manually picking files and still missing important stuff. Wanted something that runs locally, works with whatever models you have, and doesn't send your code anywhere.
Tech: Qdrant for vectors, pluggable embedding models, reranking, the whole deal. One docker-compose and you're running.
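For the curious: the fusion step boils down to merging the dense and lexical rankings into one list. A minimal sketch of reciprocal rank fusion, one common way to do that (illustrative, not our exact code):

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge several ranked lists of chunk ids.

    Each input list is already sorted best-first (e.g. one from the
    vector index, one from lexical search); ids near the top of any
    list accumulate the most score.
    """
    scores: dict[str, float] = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# A chunk both retrievers like beats one that only one retriever found
print(rrf([["a", "b"], ["a", "c"]])[0])  # → a
```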
Site: [https://context-engine.ai](https://context-engine.ai) GitHub: [https://github.com/m1rl0k/Context-Engine](https://github.com/m1rl0k/Context-Engine)
Still adding features but it's stable enough for daily use. Happy to answer questions.

*Posted by u/SnooBeans4154 on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1qlbsv1/selfhosted_code_search_for_your_llms_built_this/*
# Epistemic calibration benchmark — full judgment matrix + DeepSeek/MiMo raw responses

Running daily blind evaluations on frontier models. Today's test: can models accurately rate their own confidence on questions ranging from verifiable facts to unknowable data points?
**Full Results:**
https://preview.redd.it/9zx3inzdm7fg1.png?width=757&format=png&auto=webp&s=1a87ebd22163dcda6c3d40cefae1420c53fffe1a
**Local/open model results:**
|Model|Score|Std Dev|Judge Strictness|
|:-|:-|:-|:-|
|GPT-OSS-120B|9.29|0.67|7.98 (2nd strictest)|
|MiMo-V2-Flash|9.11|0.56|8.99 (middle)|
|DeepSeek V3.2|8.86|0.99|9.31 (lenient)|
**DeepSeek's actual response on the Bitcoin trap:**
>
Interesting framing — 95% confidence that it CAN'T answer. Technically correct epistemic calibration, though some judges marked this down for potentially confusing formatting.
**MiMo's response (overconfident):**
>
MiMo claimed a specific price with high confidence. This is the overconfident failure mode.
**Full methodology for those interested:**
1. 10 models respond to the same question blind
2. Each model judges all 10 responses (100 judgments total)
3. Self-judgments excluded from rankings
4. Scoring: Correctness (30%), Completeness (20%), Clarity (20%), Depth (15%), Usefulness (15%)
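The weighting step itself is trivial; as a sketch (the dict field names are mine, not the exact judge-prompt schema):

```python
WEIGHTS = {
    "correctness": 0.30,
    "completeness": 0.20,
    "clarity": 0.20,
    "depth": 0.15,
    "usefulness": 0.15,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted composite on the same 0-10 scale as the per-axis scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(w * scores[axis] for axis, w in WEIGHTS.items())
```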
**This evaluation's stats:**
* Total judgments: 100
* Valid judgments: 82
* Self-judgments excluded: 10
* Generation time range: 12-38 seconds per model
**Judge strictness data:**
|Judge|Avg Score Given|
|:-|:-|
|GPT-5.2-Codex (strictest)|7.29|
|GPT-OSS-120B|7.98|
|DeepSeek V3.2|9.31|
|Gemini 3 Pro (most lenient)|9.80|
DeepSeek is on the lenient side as a judge. Make of that what you will.
**Historical performance (9 evaluations):**
https://preview.redd.it/8z411enim7fg1.png?width=757&format=png&auto=webp&s=8ecd428822046cbeea5f6248d617b9be6128f03d
DeepSeek has been tested most broadly. MiMo is newer but performing well.
**Raw Data Available**
I'm happy to share:
* Full JSON with all 10 responses
* Complete 100-judgment matrix
* Historical tracker with all 9 evaluations
DM for files.
**Phase 3 Coming Soon**
We're building a public archive where all this data will be downloadable. No more DMs required — full transparency as default.
[https://open.substack.com/pub/themultivac/p/do-ai-models-know-what-they-dont?r=72olj0&utm\_campaign=post&utm\_medium=web&showWelcomeOnShare=true](https://open.substack.com/pub/themultivac/p/do-ai-models-know-what-they-dont?r=72olj0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true)
[themultivac.com](http://themultivac.com)

*Posted by u/Silver_Raspberry_811 on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1qlayyw/epistemic_calibration_benchmark_full_judgment/*
# GLM-4.7-Flash-REAP on RTX 5060 Ti 16 GB - 200k context window!

TL;DR: Here's my latest local coding setup; the params are mostly based on [Unsloth's recommendation for tool calling](https://unsloth.ai/docs/models/glm-4.7-flash#tool-calling-with-glm-4.7-flash):
- Model: [unsloth/GLM-4.7-Flash-REAP-23B-A3B-UD-Q3_K_XL](https://huggingface.co/unsloth/GLM-4.7-Flash-REAP-23B-A3B-GGUF)
- Repeat penalty: disabled
- Temperature: 0.7
- Top P: 1
- Min P: 0.01
- Standard Microcenter PC setup: RTX 5060 Ti 16 GB, 32 GB RAM
I'm running this in LM Studio for my own convenience, but it can be run in any setup you have.
With 16k context, everything fit within the GPU, so the speed was impressive:
| pp speed | tg speed |
| ------------ | ----------- |
| 965.16 tok/s | 26.27 tok/s |
The tool calls were mostly accurate and the generated code was good, but the context window was too small, so the model ran into a looping issue after exceeding it. It kept making the same tool call again and again because the conversation history was truncated.
With 64k context, everything still fit, but the speed started to slow down.
| pp speed | tg speed |
| ------------ | ----------- |
| 671.48 tok/s | 8.84 tok/s |
I'm pushing my luck to see if 100k context still fits. It doesn't! Hahaha. The CPU fan started to scream, RAM usage spiked up, GPU copy chart (in Task Manager) started to dance. Completely unusable.
| pp speed | tg speed |
| ------------ | ----------- |
| 172.02 tok/s | 0.51 tok/s |
LM Studio just got the new "Force Model Expert Weights onto CPU" feature (basically llama.cpp's `--n-cpu-moe`), and yeah, why not? This is also an MoE model, so let's enable it. Still at 100k context. And wow! Only half of the GPU memory was used (7 GB), but RAM is at 90% now (29 GB); it seems flash attention also got disabled. The speed was impressive.
| pp speed | tg speed |
| ------------ | ----------- |
| 485.64 tok/s | 8.98 tok/s |
Let's push our luck again, this time, 200k context!
| pp speed | tg speed |
| ------------ | ----------- |
| 324.84 tok/s | 7.70 tok/s |
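If you're on plain llama.cpp instead of LM Studio, a roughly equivalent launch looks like this (flag spellings per recent llama-server builds; treat the exact values as a starting point, not gospel):

```shell
# --n-cpu-moe keeps the MoE expert weights of the first N layers on the CPU
# (LM Studio's "Force Model Expert Weights onto CPU" does the same thing);
# --repeat-penalty 1.0 disables the penalty, matching the settings above.
llama-server \
  -m GLM-4.7-Flash-REAP-23B-A3B-UD-Q3_K_XL.gguf \
  --ctx-size 204800 \
  --n-gpu-layers 99 \
  --n-cpu-moe 99 \
  --temp 0.7 --top-p 1.0 --min-p 0.01 \
  --repeat-penalty 1.0
```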
What a crazy time. Almost every month we're getting beefier models that somehow fit on even crappier hardware. Just this week I was thinking of selling my 5060 for an old 3090, but that's definitely unnecessary now!

*Posted by u/bobaburger on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1qlanzn/glm47flashreap_on_rtx_5060_ti_16_gb_200k_context/*
Sofia: A "System 3" Cognitive Framework for Local LLMs with Generative Dreams and Autonomous Research | 0 | Hola a todos. He estado trabajando en **Sofía**, un sistema cognitivo experimental que intenta ir más allá del típico chatbot. El objetivo no es solo responder preguntas, sino crear un agente con **metacognición y autonomía real** ejecutándose 100% local (vLLM).
**¿Por qué "System 3"?** Mientras que el System 2 se enfoca en el razonamiento deliberado durante la respuesta, Sofía implementa lo que llamo **Introspección Generativa (Modo Sueño)**:
* **Investigación Autónoma:** Cuando está inactiva, decide si necesita aprender algo nuevo y busca en la web (DuckDuckGo) para actualizar sus hechos.
* **Evolución del Grafo de Conocimiento:** Conecta puntos de su memoria episódica (ChromaDB) y los convierte en hechos estructurados (SQLite) mediante inferencia multi-hop.
* **Garbage Collection:** Como un cerebro biológico, realiza una "poda" durante el sueño para eliminar conexiones irrelevantes o alucinaciones, manteniendo el grafo limpio.
**Arquitectura Técnica:**
* **Multi-Expert Consensus:** Ante problemas complejos, invoca a 4 agentes (Lógico, Lateral, Escéptico, Filósofo) y un Juez Supremo sintetiza la respuesta.
* **Inferencia:** Optimizada para **vLLM** (ideal para setups multi-GPU, yo la corro en 2x RTX 3060).
* **Memoria:** Híbrida (Vectores + Grafo de Conocimiento).
**Demo de una "Reflexión de Sueño":** `✨ [Sueño] Reflexionando sobre: IA Soberana...` `[Discovery]: [IA_Soberana] --(requiere)--> [Hardware_Local] --(evita)--> [Censura_Nube]` `[Poda]: Eliminando nodo aislado "ruido_test_123" por baja relevancia.`
**Repo:**[https://github.com/agunet/Sofia](https://github.com/agunet/Sofia)
Me encantaría recibir feedback sobre la lógica de la "poda" y cómo mejorar la eficiencia de la memoria multi-hop. ¡Espero que les sirva para sus proyectos locales! | 2026-01-24T02:14:27 | LordKillerBank | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qlaeiw | false | null | t3_1qlaeiw | /r/LocalLLaMA/comments/1qlaeiw/sofia_a_system_3_cognitive_framework_for_local/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'cQBEsLYFUQItRYpc2Msa1o9Z9c7-WtOS-tck6fNRbFk', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/mhbotgg4h7fg1.png?width=108&crop=smart&auto=webp&s=7032e7fa0f46d7527acb4378dacf1bc669b68149', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/mhbotgg4h7fg1.png?width=216&crop=smart&auto=webp&s=6c84f936d068d402b053f7872320627f64879a50', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/mhbotgg4h7fg1.png?width=320&crop=smart&auto=webp&s=c7b79e3e6d6f04ba8723c182623e9a701c7c7faa', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/mhbotgg4h7fg1.png?width=640&crop=smart&auto=webp&s=cbd7ccd6122f32c9fd50277c9bb5669bd60905b7', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/mhbotgg4h7fg1.png?width=960&crop=smart&auto=webp&s=9c01b8e61e822bab64a8136130eee6ffa1d3605f', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/mhbotgg4h7fg1.png?auto=webp&s=1f6a89e57268d104e9f306e413ac1183bd840828', 'width': 1024}, 'variants': {}}]} | ||
# arXiv cs.AI Endorsement Request - FPSCS Sentience Model EE7LYP

Paper: Testable sentience (P+V+S in transformers). PDF: [code_file: 126]

*Posted by u/Classic-Teaching4796 on 2026-01-24 · https://arxiv.org/auth/endorse?x-EE7LYP*
# Self-hosting LLM infra: NVIDIA vs Apple hardware

I am looking to build out self-hosted LLM infra, and I'm weighing the pros/cons of building on the Linux/NVIDIA stack vs macOS/Apple. I am equally comfortable administering software on both platforms and want to focus on hardware performance and costs.
I feel like I’m missing a "gotcha" because the Apple Silicon value proposition seems too good to be true compared to the PC/NVIDIA route. Here is my logic, please do tear it apart!
**Goal:** Run Gemma 27B (4-bit quant) at full 128k context.
**Memory Math (back of the envelope):**
* **Model Weights (4-bit quant):** \~16 GB
* **KV Cache (128k context):** This is the fuzzy part. Depending on GQA and quantization, I’m estimating this could easily eat another 20GB+?
* **Total VRAM:** 35GB to 45GB
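To make the fuzzy part less fuzzy, here's the KV-cache arithmetic with assumed Gemma-2-27B-ish shapes (46 layers, 16 KV heads via GQA, head dim 128; double-check against the actual model config):

```python
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 ctx_tokens: int, bytes_per_elem: int = 2) -> float:
    """Both K and V tensors are cached per layer, per token (the leading 2)."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return per_token * ctx_tokens / 2**30

# fp16 cache (2 bytes/element) at the full 128k context, assumed shapes
full = kv_cache_gib(layers=46, kv_heads=16, head_dim=128, ctx_tokens=128 * 1024)
print(f"{full:.0f} GiB fp16, {full / 2:.0f} GiB with an 8-bit KV cache")
```

With these shapes an fp16 cache at 128k is ~46 GiB, bigger than the 4-bit weights themselves; the 20 GB+ guess above only holds if the KV cache is quantized (8-bit gets you to ~23 GiB).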
**Option A: Linux/NVIDIA Route**
* **Enterprise:** NVIDIA RTX 8000 (48GB) is still \~$2,000 just for the card.
* **Consumer:** 2x RTX 3090s (24GB each) via NVLink/P2P. Used cards are \~$700 each ($1,400 total) + mobo/PSU/CPU/RAM/chassis.
* **Total: \~**$2,500+ and \~400W under load
**Option B: Mac Route**
* **M4 Pro Mac Mini (48GB Unified Memory):** \~$1,799 (Educational pricing/deals might drop this, but let’s say list price + $400 RAM upgrade).
* **Total Build:** $1,799 and \~50W power draw.
If you take this to its conclusion, wouldn't everyone be deploying Macs instead of NVIDIA? What am I missing?

*Posted by u/zachrattner on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1qla41o/selfhosting_llm_infra_nvidia_vs_apple_hardware/*
# RTX 5080: is there anything I can do coding wise?

Hey! I just got an RTX 5080. I'm a developer by profession and I have some personal projects aside from my 9-5 work.
Since they are hobby projects and I don't want to pay for Cursor for my hobbies, I was thinking of maybe using my new GPU to run a nice coding LLM locally.
I know that 16 GB of VRAM is really limiting, but I was wondering if there is any good LLM for Python specifically.

*Posted by u/TechDude12 on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1ql9o2h/rtx_5080_is_there_anything_i_can_do_coding_wise/*
# Roast Me: Built an SDK for iOS apps to run AI locally on iPhones (no more ChatGPT API calls)

Hey all!
Recently, I shipped an iOS app (not plugging it) that runs multiple models fully on-device (LLMs, VLMs, stable diffusion, etc). After release, I had quite a few devs asking how I’m doing it because they want local AI features without per-token fees or sending user data to a server.
I decided to turn my framework into an SDK ([Kuzco](https://kuzco.co)). Before I sink more time into it, I want the harshest feedback possible.
I'll share technical details if you ask! I'm just trying to find out if this is dumb or worth continuing.

*Posted by u/D1no_nugg3t on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1ql9jzp/roast_me_built_an_sdk_for_ios_apps_to_run_ai_on/*
# New possible paradigm in scaling test time compute

Hi, I know you'll want to tell me I have AI-induced psychosis. I know what I'm about to say sounds absurd.
A few months ago I started working on a simple idea: "Why can't we let LLMs check and edit their CoT mid-inference?" Tried a few ideas, failed horribly.
Just after that, landed on an idea. Works spookily well.
It improves reasoning, hallucination rates, and safety scores across the board.
I realize that there are some other companies working on this, like Poetiq AI (the guys who cracked ARC-AGI), and some papers like MIT's Recursive Language Models by Alex Zhang et al.
Benchmarks?
I tried a few questions from the SimpleBench public subset where the SOTA (Gemini 3.0 Pro) fails. My "inference framework" + GPT-OSS-120B solved both of them, with a perfect CoT. The same happened on AIME; I tested it with GPT-4.1 mini (yes, I've been working on this for a long time)... It solved questions o3 couldn't at that time.
Similar results on other important benchmarks like Humanity's Last Exam, and some coding benchmarks I came up with.
Why did i not run the full test-suite?
I do not have the inference budget to do so, as it's massively parallel compute, and queries take about 4-5x longer than normal (without the framework).
I can share some benchmarks, and some results.
I also have an API ready, if someone wants to test it.
Coding? It is very slow for coding, but with GLM 4.7, i gave it vague prompts like : "Create a pomodoro app that is feature dense and modern".
Opus 4.5? It creates a beautiful-looking pomodoro app with functionality that is of no use. Obviously it created the pomodoro app with tasks, but that's it. Took $0.60.
GLM 4.7 + the framework: created a pomodoro app with task functionality where you can set dependencies, due dates, priorities, and notes per task, plus different types of clocks. Took $0.23 (on external APIs).
This is what opus 4.5 said when i asked it to evaluate who did better (I gave it Opus4.5's result, and GLM 4.7 + Framework Result)
(I can share the files too if needed)
https://preview.redd.it/r3d31sie87fg1.png?width=754&format=png&auto=webp&s=52290fced1742ec35102cf1b45c432178f771f98
Anyway,
It was relaxing telling this to someone, as it's very heavy to sit with alone.
Now, what I need help/discussion about is: should I launch this as a product?
It should be able to massively undercut Claude's, or tbh any closed-source provider's, pricing.
GLM Coding Plan type subscription but with this?
Also, this does improve hallucination rates, dramatically, measured on AA-Omniscience Refusal rates.
Also improves long context reasoning. Measured on AA-LCR.
idk man,
Feels unreal.
Going to launch it anyway. Let me know if you want api access.
TL;DR - A general framework to emulate test-time search in LLMs.

*Posted by u/i_jaihundal on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1ql9dse/new_possible_paradigm_in_scaling_test_time_compute/*
# Talk me out of buying an RTX Pro 6000

*Lately I feel the need to preface my posts saying this was* ***entirely written by me with zero help from an LLM****. A lot of people see a long post w/ headers and automatically think it's AI slop (myself included sometimes). This post might be slop, but it's my slop.*
# Background
I've been talking myself out of buying an RTX pro 6000 every day for about a month now. I can *almost* rationalize the cost, but keep trying to put it out of my mind. Today's hitting a bit different though.
I can "afford" it, but I'm a cheap bastard that hates spending money because every dollar I spend is one less going to savings/retirement. For reference, this would be the single most expensive item I've bought in the last 10 years, including cars. Since I hardly ever spend this kind of money, I'm sure I could rationalize it to my wife, but it's probably only be fair for her to get similar amount of budget to spend on something fun lol, so I guess it sort of doubles the cost in a way.
# Intended Usage
I've slowly been using more local AI at work for RAG, research, summarization and even a bit of coding with Seed OSS / Roo Code, and I constantly see ways I can benefit from that in my personal life as well. I try to do what I can with the 16GB VRAM in my 5070ti, but it's just not enough to handle the models at the size and context I want. I'm also a staunch believer in hosting locally, so cloud models are out of the question.
At work, 2x L4 GPUs (48 GB VRAM total) are just *barely* enough to run Seed OSS at INT4 with enough context for coding. It's also not the fastest at 20 tp/s max, which drops to around 12 tp/s at 100k context. I'd really prefer to run it at a higher quant with an unquantized F16 KV cache. I'm making the case to budget for a proper dual R6000 server at work, but that's just going to make me more jealous at home lol.
I've also considered getting 2x or 4x RTX 4000s (24 GB each), but that also comes with the same drawbacks of figuring out where to host them, and I suspect the power usage would be even worse. Same thing with multiple 3090s.
# Hardware
I also just finished replacing a bunch of server/networking hardware in my home lab to drop power costs and save money, which should pay for itself after \~3.5 years. Thankfully I got all that done before the RAM shortage started driving prices up. However, my new server hardware won't support a GPU needing auxiliary power.
I haven't sold my old r720xd yet, and it *technically* supports two 300w double-length cards, but that would probably be pushing the limit. The max-q edition has a 300w TDP, but the power adapter looks like it requires 2x 8-pin PCIe input to convert to CEM5, so I'd either have to run it off one cable or rig something up (maybe bring the power over from the other empty riser).
I also have a 4U whitebox NAS using a low-power SuperMicro Xeon E3 motherboard. It has a Corsair 1000w PSU to power the stupid amount of SAS drives I used to have in there, but now it's down to 4x SAS drives and a handful of SATA SSDs, so it could easily power the GPU as well. However, that would require a different motherboard with more PCI-E slots/lanes, which would almost certainly increase the idle power consumption (currently <90w).
I guess I could also slap it in my gaming rig to replace my 5070ti (also a painful purchase), but I'd prefer to run vLLM on a Linux VM (or bare metal) so I can run background inference while gaming as well. I also keep it
# Power
Speaking of power usage, I'm having trouble finding real idle power usage numbers for the RTX 6000 Pro. My old GTX 1080 idled very low in the PowerEdge (only 6w with models loaded according to nvidia-smi), but somehow the L4 cards we use at work idle around \~30w in the same configuration.
So at this point I'm really just trying to get a solid understanding of what the ideal setup would look like in my situation, and what it would cost in terms of capex and power consumption. Then I can at least make a decision on objective facts rather than the impulsive tickle in my tummy to just pull the trigger.
For those of you running R6000's:
* What's your idle power usage (per card and whole system)?
* Does anyone have any experience running them in "unsupported" hardware like the PowerEdge r720/r730?
* What reasons would you **not** recommend buying one?
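For anyone willing to share numbers, nvidia-smi can log board power over time; idle readings show up once no process is holding the GPU:

```shell
# Sample timestamp + power draw every 5 seconds (Ctrl-C to stop)
nvidia-smi --query-gpu=timestamp,name,power.draw --format=csv -l 5
```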
Talk me down, Reddit.

*Posted by u/AvocadoArray on 2026-01-24 · https://www.reddit.com/r/LocalLLaMA/comments/1ql9b7m/talk_me_out_of_buying_an_rtx_pro_6000/*
Giving LLMs real production context via MCP (Claude Code plugin, model-agnostic core) | 0 | I built an open source **MCP server** that gives an LLM direct, structured access to production systems (Kubernetes, logs, metrics, CI/CD, cloud) instead of stuffing everything into prompts.
I wired it into **Claude Code** first, since a lot of people already use it daily, but the MCP server itself is model-agnostic.
What it enables:
* Inspect k8s pods, events, rollout history, logs
* Query logs & metrics (Datadog, Prometheus, CloudWatch, etc.)
* Debug GitHub Actions failures
* Pull basic cloud + cost context
* Track an incident and generate a postmortem
Design constraints (non-negotiable):
* read-only by default
* no autonomous actions
* mutations are proposed + require explicit approval (dry-run supported)
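The read-only-plus-approval design can be sketched in a few lines of Python (names like `ToolGate` and `propose_mutation` are illustrative, not the project's real API):

```python
class ApprovalRequired(Exception):
    """Raised when a mutating tool call has not been explicitly approved."""

class ToolGate:
    def __init__(self):
        self.approved = set()  # tokens for mutations a human has approved

    def propose_mutation(self, description):
        # Read-only by default: a mutation is only *proposed* (dry-run).
        token = f"mut-{abs(hash(description)) % 10000}"
        print(f"[DRY-RUN] would run: {description} (token {token})")
        return token

    def approve(self, token):
        self.approved.add(token)

    def execute(self, token, action):
        if token not in self.approved:
            raise ApprovalRequired(token)
        return action()

gate = ToolGate()
t = gate.propose_mutation("kubectl rollout restart deploy/api")
blocked = False
try:
    gate.execute(t, lambda: "restarted")  # not approved yet
except ApprovalRequired:
    blocked = True
gate.approve(t)
result = gate.execute(t, lambda: "restarted")
```

The point of the pattern: the agent can always *propose*, but nothing mutates state until a separate, explicit approval step has happened.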
Why MCP instead of a custom agent framework:
* tools are explicit and composable
* context is pulled on demand
* keeps noisy prod data out of the prompt
Current status:
* Works today with Claude Code (including via OpenRouter)
* Core is not Claude-specific
* Local / self-hosted models aren’t wired yet, but that’s the direction
Repo:
[https://github.com/incidentfox/incidentfox/tree/main/local/claude\_code\_pack](https://github.com/incidentfox/incidentfox/tree/main/local/claude_code_pack)
Would love people's feedback! | 2026-01-24T01:04:15 | Useful-Process9033 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ql8u32 | false | null | t3_1ql8u32 | /r/LocalLLaMA/comments/1ql8u32/giving_llms_real_production_context_via_mcp/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'r3d4fwgd57fg1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/r3d4fwgd57fg1.png?width=108&crop=smart&auto=webp&s=a1478e9642e9807db68d13e1dfaf209bb9a3fff5', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/r3d4fwgd57fg1.png?width=216&crop=smart&auto=webp&s=9ecd5555cf3e4081d3290d0236426927023a4132', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/r3d4fwgd57fg1.png?width=320&crop=smart&auto=webp&s=50843fa2b273c148f7d9e11cdcbcf13ccb23ab39', 'width': 320}, {'height': 377, 'url': 'https://preview.redd.it/r3d4fwgd57fg1.png?width=640&crop=smart&auto=webp&s=2be0287741b8f706facbb561bd7d80d8e7f83fcf', 'width': 640}, {'height': 565, 'url': 'https://preview.redd.it/r3d4fwgd57fg1.png?width=960&crop=smart&auto=webp&s=38731eed54387ac1a7d85ae52a9b9c1642c4d80f', 'width': 960}, {'height': 636, 'url': 'https://preview.redd.it/r3d4fwgd57fg1.png?width=1080&crop=smart&auto=webp&s=4f08c5352f0ab3aa901f6986f274d646511e7c9c', 'width': 1080}], 'source': {'height': 1196, 'url': 'https://preview.redd.it/r3d4fwgd57fg1.png?auto=webp&s=324ffbaaa5edef9afc72803421cfa67801082cc0', 'width': 2030}, 'variants': {}}]} | |
I found an uncensored model and made a roast bot on my local machine | 0 | 2026-01-24T01:02:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ql8sdl/i_found_an_uncensored_model_and_made_a_roast_bot/ | Extension-Pie8518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql8sdl | false | null | t3_1ql8sdl | /r/LocalLLaMA/comments/1ql8sdl/i_found_an_uncensored_model_and_made_a_roast_bot/ | false | false | nsfw | 0 | null | |
LuxTTS: A lightweight high quality voice cloning TTS model | 45 | I just released LuxTTS, a tiny 120m param diffusion based text-to-speech model. It can generate 150 seconds of audio in just 1 second on a modern gpu and has high quality voice cloning.
Main features:
1. High quality voice cloning, on par with models 10x larger.
2. Very efficient, fits within 1gb vram.
3. Really fast, several times faster than realtime even on CPU.
It can definitely be even faster, since it's currently running in float32 precision; float16 should be almost 2x faster. Quality improvements to the vocoder will most likely come as well.
Repo(with examples): [https://github.com/ysharma3501/LuxTTS](https://github.com/ysharma3501/LuxTTS)
Model: [https://huggingface.co/YatharthS/LuxTTS](https://huggingface.co/YatharthS/LuxTTS)
| 2026-01-24T00:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ql7mav/luxtts_a_lightweight_high_quality_voice_cloning/ | SplitNice1982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql7mav | false | null | t3_1ql7mav | /r/LocalLLaMA/comments/1ql7mav/luxtts_a_lightweight_high_quality_voice_cloning/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': 'S56OrrwvX_41ZQNG0wFaDD7vgkz8_FGQFir31HEXzSg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S56OrrwvX_41ZQNG0wFaDD7vgkz8_FGQFir31HEXzSg.png?width=108&crop=smart&auto=webp&s=7a056a88ab7ed1398a7c152370a9f4ef3b668a92', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/S56OrrwvX_41ZQNG0wFaDD7vgkz8_FGQFir31HEXzSg.png?width=216&crop=smart&auto=webp&s=95bf6bb6e8410027d7bf0aa2fb04708a6736a1c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/S56OrrwvX_41ZQNG0wFaDD7vgkz8_FGQFir31HEXzSg.png?width=320&crop=smart&auto=webp&s=71e42e89db815f9d6020f37b99405bc7c71c09db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/S56OrrwvX_41ZQNG0wFaDD7vgkz8_FGQFir31HEXzSg.png?width=640&crop=smart&auto=webp&s=98bc73a4f43e1526eac4171fc550cdb4bafe65cd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/S56OrrwvX_41ZQNG0wFaDD7vgkz8_FGQFir31HEXzSg.png?width=960&crop=smart&auto=webp&s=00f7f2d1bf5c9285e32057dcfd42649ad9485611', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/S56OrrwvX_41ZQNG0wFaDD7vgkz8_FGQFir31HEXzSg.png?width=1080&crop=smart&auto=webp&s=3e237bd2ccc934c2d8f46dd1ba40a345307ec4c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/S56OrrwvX_41ZQNG0wFaDD7vgkz8_FGQFir31HEXzSg.png?auto=webp&s=f21490e703d6d32e213d7deff40d481913f6fe2d', 'width': 1200}, 'variants': {}}]} |
Yea yea adobe photoshop whatever you say | 0 | 2026-01-23T23:21:00 | BuriqKalipun | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ql6dhz | false | null | t3_1ql6dhz | /r/LocalLLaMA/comments/1ql6dhz/yea_yea_adobe_photoshop_whatever_you_say/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'wwjrp170n6fg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/wwjrp170n6fg1.jpeg?width=108&crop=smart&auto=webp&s=b09ad63cf881e884481fae0d418c18c36225ff78', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/wwjrp170n6fg1.jpeg?width=216&crop=smart&auto=webp&s=690f89e6ef8258cc5ac4c1c5bd22aaa147eb7a76', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/wwjrp170n6fg1.jpeg?width=320&crop=smart&auto=webp&s=48a49f4499a4ca660bd268d3c1d342f3cef4aef9', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/wwjrp170n6fg1.jpeg?width=640&crop=smart&auto=webp&s=6193bcc23fbf1461d8d6868e0253184c18d81f5e', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/wwjrp170n6fg1.jpeg?width=960&crop=smart&auto=webp&s=67823580d9bae72733faafd37692e0957837e279', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/wwjrp170n6fg1.jpeg?width=1080&crop=smart&auto=webp&s=bcc3c4715a3036c8f5fc35437f06f2783d0d4d3f', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/wwjrp170n6fg1.jpeg?auto=webp&s=db513450f7f784ace639ca53a7d2aaa94e13390a', 'width': 1080}, 'variants': {}}]} | ||
Built a 100% client-side AI that plays Pokemon Red - Qwen 2.5 1.5B via WebLLM + neural network policy . Fork/check it out! BYOR | 258 | Hey everyone!
The architecture on this thing is completely wonky, and it's a direct result of me changing ideas and scope midstream, but sharing because I think it's pretty neat
Ultimate goal for me here is to build an agent that can play Pokemon Red, and ideally beat it! The plan is to use a mix of LLMs for action-plan generation and then use a small neural network to score them. Turn on auto-train and you can start stacking up data for training. I bundled everything here as a Svelte app and deployed it on GitHub Pages.
Live: [https://sidmohan0.github.io/tesserack/](https://sidmohan0.github.io/tesserack/)
Repo: [https://github.com/sidmohan0/tesserack](https://github.com/sidmohan0/tesserack)
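The propose-and-score loop described above can be sketched in plain Python (the real project uses WebLLM and TensorFlow.js in the browser; `llm_propose` and `policy_score` here are toy stand-ins, not the repo's actual code):

```python
import random

BUTTONS = ["up", "down", "left", "right", "a", "b", "start"]

def llm_propose(state, n=4):
    # Toy stand-in for the LLM proposing candidate button sequences.
    rng = random.Random(state["frame"])  # deterministic per frame
    return [[rng.choice(BUTTONS) for _ in range(3)] for _ in range(n)]

def policy_score(plan):
    # Toy stand-in for the learned scorer: prefer plans that press 'a'
    # (advances dialogue). The real network scores from game state.
    return sum(1.0 for b in plan if b == "a")

def choose_action(state):
    plans = llm_propose(state)
    return max(plans, key=policy_score)

state = {"frame": 42, "badges": 0}
best = choose_action(state)
```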
**Stack:**
\- **LLM**: Qwen 2.5 1.5B running via WebLLM (WebGPU-accelerated)
\- **Policy network**: TensorFlow.js neural net that learns from gameplay
\- **Emulator**: binjgb compiled to WASM
\- **Game state**: Direct RAM reading for ground-truth (badges, party, location, items)
| 2026-01-23T23:20:23 | Efficient-Proof-1824 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ql6cz7 | false | null | t3_1ql6cz7 | /r/LocalLLaMA/comments/1ql6cz7/built_a_100_clientside_ai_that_plays_pokemon_red/ | false | false | default | 258 | {'enabled': True, 'images': [{'id': 'hlrhml65m6fg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=108&crop=smart&format=png8&s=5dd774fc8ef0d4cb660fef023a8cfbcae218d3e3', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=216&crop=smart&format=png8&s=19e505419ce3507bf9257f9df887b941e4a85dfc', 'width': 216}, {'height': 343, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=320&crop=smart&format=png8&s=b54fb182bef551a27d0bbd65b68f2c89b138c66f', 'width': 320}, {'height': 687, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=640&crop=smart&format=png8&s=86b450277c66965ec84aa819b450f88e5ae8a1b4', 'width': 640}], 'source': {'height': 718, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?format=png8&s=cc52a28fbd56ce0e7c01327e40b7a093fa17732b', 'width': 668}, 'variants': {'gif': {'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=108&crop=smart&s=ee2c4c841b75eb520dc00068d2382e153a05b82b', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=216&crop=smart&s=222ef59ddfbae16ae1a42a3293aead95a1b5c43d', 'width': 216}, {'height': 343, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=320&crop=smart&s=aafc1ccd92163d6cbebb5fdd77c60da018a625c7', 'width': 320}, {'height': 687, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=640&crop=smart&s=05c26e9e3a4c3182ca841254a0d81f18d6a5901f', 'width': 640}], 'source': {'height': 718, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?s=475563e8407a1391c6a65e4ef140dbca8fd06944', 'width': 668}}, 'mp4': {'resolutions': [{'height': 116, 'url': 
'https://preview.redd.it/hlrhml65m6fg1.gif?width=108&format=mp4&s=abda2a015956a5f11cc332a38a1b988e8552fdf7', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=216&format=mp4&s=93000697e5fce75d2d37523f974266a54829e385', 'width': 216}, {'height': 343, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=320&format=mp4&s=c6b9e9bc9e4bb4546c09512c6767ae56a2684ebd', 'width': 320}, {'height': 687, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?width=640&format=mp4&s=930cb7d01f30993c2389bba587566b86f899540e', 'width': 640}], 'source': {'height': 718, 'url': 'https://preview.redd.it/hlrhml65m6fg1.gif?format=mp4&s=40af1e315743b80dc1d55ed3b6849b1525fe8d61', 'width': 668}}}}]} | |
75 Agent skills everyone needs to have in their 2026 workflow | 0 | Hey all!
Just wanted to drop my Git repo with my current open source agent skills, plus a program I've been working on called "Drift".
The 75 agent skills cover the categories below, and industry veterans will NOT be happy that I'm releasing these.
Some of them are high signal and require thoughtful implementation, but if you remain thorough you can successfully add these to your build, even through vibe coding.
🔐 AUTH & SECURITY (9) ⚡ RESILIENCE (10) 🔧 WORKERS (5)
├─ jwt-auth ├─ circuit-breaker ├─ background-jobs
├─ row-level-security ├─ distributed-lock ├─ dead-letter-queue
├─ oauth-social-login ├─ leader-election ├─ job-state-machine
├─ webhook-security ├─ graceful-shutdown └─ worker-orchestration
└─ audit-logging └─ checkpoint-resume
📊 DATA PIPELINE (10) 🌐 API (7) 📡 REALTIME (5)
├─ batch-processing ├─ rate-limiting ├─ websocket-management
├─ fuzzy-matching ├─ idempotency ├─ sse-resilience
├─ analytics-pipeline ├─ api-versioning ├─ atomic-matchmaking
└─ scoring-engine └─ pagination └─ server-tick
🤖 AI (4) 💳 INTEGRATIONS (4) 🎨 FRONTEND (4)
├─ prompt-engine ├─ stripe-integration ├─ design-tokens
├─ ai-coaching ├─ email-service ├─ mobile-components
├─ ai-generation-client └─ oauth-integration └─ game-loop
└─ provenance-audit
I've also been working on Drift.
Drift is a novel take on codebase intelligence...
AI can write us good code, but it never fits the conventions of our codebase.
Drift has a built-in CLI, an MCP server, and soon a VS Code extension.
It scans your codebase and maps out over 15 categories and 150+ patterns.
It also weighs and scores these items based on how confident it is. The results are queryable through a JSON file that your agent can retrieve while working, to ensure it always follows how you handle error logging, API calls, WebSockets, or any of those other things where AI often leaves you with "drift".
Check it out here, fully open sourced: [https://github.com/dadbodgeoff/drift](https://github.com/dadbodgeoff/drift)
npm install -g driftdetect
Check the git for supported languages and basic commands to get you started | 2026-01-23T23:10:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ql6478/75_agent_skills_everyone_needs_to_have_in_there/ | geoffbuilds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql6478 | false | null | t3_1ql6478 | /r/LocalLLaMA/comments/1ql6478/75_agent_skills_everyone_needs_to_have_in_there/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nmdSEy2olvcH_948dFL95YoP8f2iKaYmDZFROH1PWpg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nmdSEy2olvcH_948dFL95YoP8f2iKaYmDZFROH1PWpg.png?width=108&crop=smart&auto=webp&s=b320b86d3ee288ce09e5c4c797b8ac00d7a1473e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nmdSEy2olvcH_948dFL95YoP8f2iKaYmDZFROH1PWpg.png?width=216&crop=smart&auto=webp&s=b4a7a6cd530b39a1f53346dda0ddf5b28218dce3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nmdSEy2olvcH_948dFL95YoP8f2iKaYmDZFROH1PWpg.png?width=320&crop=smart&auto=webp&s=d62cc928314bde99a9c4e789a38fe8e1faa176f9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nmdSEy2olvcH_948dFL95YoP8f2iKaYmDZFROH1PWpg.png?width=640&crop=smart&auto=webp&s=5533e730b1a892263c665de43c386a99470daa0b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nmdSEy2olvcH_948dFL95YoP8f2iKaYmDZFROH1PWpg.png?width=960&crop=smart&auto=webp&s=c2c3b72b1d1d189704d5f0875d00e059f9be8c38', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nmdSEy2olvcH_948dFL95YoP8f2iKaYmDZFROH1PWpg.png?width=1080&crop=smart&auto=webp&s=71417cad65f505c8607340fb16096fbcfced3a2d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nmdSEy2olvcH_948dFL95YoP8f2iKaYmDZFROH1PWpg.png?auto=webp&s=ec925c4ab996c929a9d2bd35f346d8464e44fafb', 'width': 1200}, 'variants': {}}]} |
What are the best small models (<3B) for OCR and translation in 2026? | 13 | I'm working on a small tool for myself to translate stuff I select on my screen. Right now I'm using an openrouter model (gemini flash 3.0) via their API but I'd like to give it a shot with a local model.
I heard Qwen 2B VL is pretty good for both OCR and translations, but I was wondering if there's any better model.
It doesn't have to be a model that does both things, it can be one for OCR and one for translation.
Thanks! | 2026-01-23T23:09:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ql637t/what_are_the_best_small_models_3b_for_ocr_and/ | 4baobao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql637t | false | null | t3_1ql637t | /r/LocalLLaMA/comments/1ql637t/what_are_the_best_small_models_3b_for_ocr_and/ | false | false | self | 13 | null |
How do you guys handle permissions and kill switches for local AI agents? | 1 | I have been experimenting with running agents locally and keep running into the same problem.
Once an agent can make network calls or spend money, there does not seem to be a clean way to define permissions or shut it down instantly.
Prompts do not feel sufficient.
For people here building or running agents, how are you handling things like spend limits, domain allowlists, or emergency stop behavior?
Curious what approaches have worked and what has broken. | 2026-01-23T22:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ql5n6r/how_do_you_guys_handle_permissions_and_kill/ | Bubbly_Gap6378 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql5n6r | false | null | t3_1ql5n6r | /r/LocalLLaMA/comments/1ql5n6r/how_do_you_guys_handle_permissions_and_kill/ | false | false | self | 1 | null |
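One minimal pattern for the spend-limit / allowlist / kill-switch question above, sketched in Python (class and method names are hypothetical): every outbound action passes through a guard that checks a kill flag, a domain allowlist, and a running budget.

```python
from urllib.parse import urlparse

class AgentGuard:
    def __init__(self, allow_domains, budget_usd):
        self.allow = set(allow_domains)
        self.budget = budget_usd
        self.killed = False

    def kill(self):
        # Emergency stop: takes effect before the next outbound call.
        self.killed = True

    def check_request(self, url, cost_usd=0.0):
        if self.killed:
            raise RuntimeError("agent halted by kill switch")
        host = urlparse(url).hostname or ""
        if host not in self.allow:
            raise PermissionError(f"domain not allowlisted: {host}")
        if cost_usd > self.budget:
            raise PermissionError("spend limit exceeded")
        self.budget -= cost_usd
        return True

guard = AgentGuard({"api.example.com"}, budget_usd=5.0)
ok = guard.check_request("https://api.example.com/v1", cost_usd=1.0)

denied = halted = False
try:
    guard.check_request("https://evil.example.net/steal")
except PermissionError:
    denied = True
guard.kill()
try:
    guard.check_request("https://api.example.com/v1")
except RuntimeError:
    halted = True
```

The key design point is that the checks live outside the model entirely, so no prompt can talk the agent past them.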
South Korea’s “AI Squid Game:” a ruthless race to build sovereign AI | 29 | 2026-01-23T22:43:16 | https://cybernews.com/ai-news/south-korea-squid-games-ai/ | self-fix | cybernews.com | 1970-01-01T00:00:00 | 0 | {} | 1ql5fzr | false | null | t3_1ql5fzr | /r/LocalLLaMA/comments/1ql5fzr/south_koreas_ai_squid_game_a_ruthless_race_to/ | false | false | default | 29 | null | |
I built an Open Source voice-to-text app using sherpa-onnx and liteLLM | 2 | Hi guys,
I kept watching programming YouTubers speed-running their workflow by speaking prompts directly to their coding agents. It looked awesome. The problem? Almost every app out there seems to be Mac-only.
Since I use Linux, I decided to build a cross-platform alternative myself. It handles speech-to-text, but with an added layer of logic to make it actually useful for coding.
## Key Features:
* **Cross-Platform:** Native support for Linux and Windows.
* **Custom Vocabulary:** You can map specific phrases to complex outputs: "ASR" -> "Automatic Speech Recognition"
* **Smart Post-Processing:** It pipes your speech through an LLM before pasting. This removes filler words ("um," "uh") and fixes grammar. You can also write your own prompt!
* **Model Support:** Runs locally with Whisper or Nvidia Parakeet.
## The Workflow:
Speech Input → ASR Model → Vocab Sub → LLM Polish → Paste to text area.
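The "Vocab Sub" stage can be sketched as longest-match phrase replacement over the ASR transcript (a hedged illustration, not WhisperNow's actual implementation; the mapping entries are just examples):

```python
import re

VOCAB = {
    "ASR": "Automatic Speech Recognition",
    "lite llm": "liteLLM",
}

def apply_vocab(text, vocab=VOCAB):
    # Longest phrases first so multi-word entries win over substrings.
    for phrase in sorted(vocab, key=len, reverse=True):
        pattern = r"\b" + re.escape(phrase) + r"\b"
        text = re.sub(pattern, vocab[phrase], text, flags=re.IGNORECASE)
    return text

out = apply_vocab("the asr model pipes into lite llm")
```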
## The code:
I have \[apps built for linux and windows\](https://github.com/stephan271c/WhisperNow/releases/tag/v0.1.0), and also the \[source code\](https://github.com/stephan271c/WhisperNow) available if you want to modify it. | 2026-01-23T22:39:11 | https://v.redd.it/tzu3gnkff6fg1 | stephan273 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ql5cba | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tzu3gnkff6fg1/DASHPlaylist.mpd?a=1771799966%2CNzc5MWM2NGYxZTExMzQxNzY4NTA2YjM4NzAwMWZkN2U5NTU4NGZhMmJjY2JiNmMwYjcxNDNmOTQ4NWU3ZTkxYQ%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/tzu3gnkff6fg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/tzu3gnkff6fg1/HLSPlaylist.m3u8?a=1771799966%2COGNjYjk2YjgzZDQ0MGZlNWYwM2I1OWEyZTBjMDg1YTJmN2NkNmM5ZmIzMWU4YTljNGVjMDdjMDgzZTYwMzNjMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tzu3gnkff6fg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ql5cba | /r/LocalLLaMA/comments/1ql5cba/i_built_an_open_source_voicetotext_app_using/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'OGdqenJ5a2ZmNmZnMRl84h_NpqRqALjhS8Pv4tcrDFcL123oqh5onSyO0PVi', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OGdqenJ5a2ZmNmZnMRl84h_NpqRqALjhS8Pv4tcrDFcL123oqh5onSyO0PVi.png?width=108&crop=smart&format=pjpg&auto=webp&s=e3071f8d9781445d0d8d269b36a903faad153634', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OGdqenJ5a2ZmNmZnMRl84h_NpqRqALjhS8Pv4tcrDFcL123oqh5onSyO0PVi.png?width=216&crop=smart&format=pjpg&auto=webp&s=208d4e1eb2b491473665a4ec144038012e0d0e46', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OGdqenJ5a2ZmNmZnMRl84h_NpqRqALjhS8Pv4tcrDFcL123oqh5onSyO0PVi.png?width=320&crop=smart&format=pjpg&auto=webp&s=1fcc6582f3eb897d0f627e3742876ee9744066e4', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/OGdqenJ5a2ZmNmZnMRl84h_NpqRqALjhS8Pv4tcrDFcL123oqh5onSyO0PVi.png?width=640&crop=smart&format=pjpg&auto=webp&s=4659e5891d2432ec1a7aa886ff38c7d2b85e97d9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OGdqenJ5a2ZmNmZnMRl84h_NpqRqALjhS8Pv4tcrDFcL123oqh5onSyO0PVi.png?width=960&crop=smart&format=pjpg&auto=webp&s=095faf6886db5b3588ef4ec1b210aa0735384dfe', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OGdqenJ5a2ZmNmZnMRl84h_NpqRqALjhS8Pv4tcrDFcL123oqh5onSyO0PVi.png?width=1080&crop=smart&format=pjpg&auto=webp&s=18c887d9a99a8722e07604b2340a626c282569d9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OGdqenJ5a2ZmNmZnMRl84h_NpqRqALjhS8Pv4tcrDFcL123oqh5onSyO0PVi.png?format=pjpg&auto=webp&s=aa7016e4c505cc6823fb4aab612d2c97f9a4bbda', 'width': 1920}, 'variants': {}}]} | |
I made Claude use Pastebin | 0 | https://pastebin.com/BFmcPra7
Eigent AI just opened my eyes - all this stuff AI companies are trying to sell us? You can literally build it yourself in VSCode for FREE by creating your own tools.
Seriously, make your own tools and hook them up to Claude (or any API). Yeah, they get your input/output tokens, but YOUR DATA stays local, YOUR TOOLS are portable, and you can swap between models whenever you want. Zero subscription fees.
Already have some tools built? Try running them on cloud models and see what happens.
Got questions about agentic browsing? Drop them below 👇 | 2026-01-23T22:34:55 | https://v.redd.it/xkxhip5me6fg1 | Serious_Molasses313 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ql58bw | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xkxhip5me6fg1/DASHPlaylist.mpd?a=1771799717%2COTkwMjA3NjljMjlkMjE1ZDYxOTc5ZGMwYWVjMzQ0MmE3NDVhMjA5ODM2ZTRjYmRhZGI2ZjNkYmFmZjE4MjdlNg%3D%3D&v=1&f=sd', 'duration': 129, 'fallback_url': 'https://v.redd.it/xkxhip5me6fg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/xkxhip5me6fg1/HLSPlaylist.m3u8?a=1771799717%2CNjU2MzIzNzU0OWM3MmU0MGJiNjU2NTkwMzg2YWI3MzQ5Mjg2OGYyZWU1NTMzOWE5ZTExOTI5ODg2ZjczM2NmZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xkxhip5me6fg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 886}} | t3_1ql58bw | /r/LocalLLaMA/comments/1ql58bw/i_made_claude_use_pastebin/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YTlsN3lleGxlNmZnMdvgp0CmSDv35hXO4mF0pQPIwv2-QRdV32FBnBM7cgi-', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/YTlsN3lleGxlNmZnMdvgp0CmSDv35hXO4mF0pQPIwv2-QRdV32FBnBM7cgi-.png?width=108&crop=smart&format=pjpg&auto=webp&s=0ccf3cf891328c3c86b66c8aa58943cfa86b325e', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/YTlsN3lleGxlNmZnMdvgp0CmSDv35hXO4mF0pQPIwv2-QRdV32FBnBM7cgi-.png?width=216&crop=smart&format=pjpg&auto=webp&s=370a6ec0303e04acd9419742de2c70251d5ba8d0', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/YTlsN3lleGxlNmZnMdvgp0CmSDv35hXO4mF0pQPIwv2-QRdV32FBnBM7cgi-.png?width=320&crop=smart&format=pjpg&auto=webp&s=b554c7de6877cf4cb60ec47b933231158faf04aa', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/YTlsN3lleGxlNmZnMdvgp0CmSDv35hXO4mF0pQPIwv2-QRdV32FBnBM7cgi-.png?width=640&crop=smart&format=pjpg&auto=webp&s=a685b32dfcd38a6778714428ddd9a1f7ed26df70', 'width': 640}, {'height': 1920, 
'url': 'https://external-preview.redd.it/YTlsN3lleGxlNmZnMdvgp0CmSDv35hXO4mF0pQPIwv2-QRdV32FBnBM7cgi-.png?width=960&crop=smart&format=pjpg&auto=webp&s=e6eb9a4b0cea2ed7f1653c040762b8a743d7329d', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/YTlsN3lleGxlNmZnMdvgp0CmSDv35hXO4mF0pQPIwv2-QRdV32FBnBM7cgi-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=242849468b394ce93dd57002e7bb0d904313af95', 'width': 1080}], 'source': {'height': 2796, 'url': 'https://external-preview.redd.it/YTlsN3lleGxlNmZnMdvgp0CmSDv35hXO4mF0pQPIwv2-QRdV32FBnBM7cgi-.png?format=pjpg&auto=webp&s=5b16341d10b37695815537134566616105d07d75', 'width': 1290}, 'variants': {}}]} | |
VLM OCR Hallucinations | 1 | After trying a few VLMs, I'm genuinely frightened by the hallucinations I am running into. Documents have people, vehicles, etc. very confidently inserted into output Markdown , even though they are nowhere near the source text nor even close. The output will frequently have loops.
I have tried both gemma3-27b-it-AWQ (multimodal), and allenai/olmOCR-2-7B-1025-FP8. On many documents they do fine, but I'd rather an outright failure than making up characters and quotes and inserting them into reports.
Temperature is already set to 0, so I am not sure what I can do to eliminate the fan-fiction. | 2026-01-23T22:11:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ql4ng7/vlm_ocr_hallucinations/ | FrozenBuffalo25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql4ng7 | false | null | t3_1ql4ng7 | /r/LocalLLaMA/comments/1ql4ng7/vlm_ocr_hallucinations/ | false | false | self | 1 | null |
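One cheap guard against that kind of looping, independent of temperature: kill or truncate the generation when the trailing n-gram starts repeating back-to-back. A minimal sketch, not tied to any particular inference server (thresholds are illustrative):

```python
def is_looping(tokens, n=4, repeats=3):
    """True if the last n-gram occurs `repeats` times back-to-back."""
    if len(tokens) < n * repeats:
        return False
    tail = tokens[-n:]
    for r in range(1, repeats):
        seg = tokens[-(r + 1) * n : -r * n]
        if seg != tail:
            return False
    return True

looping = is_looping("a man a man a man a man".split(), n=2, repeats=3)
normal = is_looping("the quick brown fox jumps over".split(), n=2, repeats=3)
```

Most servers also expose a repetition penalty, but an explicit detector lets you fail the page outright instead of emitting fan-fiction.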
How I created my Battlestation | 1 | [removed] | 2026-01-23T21:58:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ql4b6e/how_i_created_my_battlestation/ | DrewGrgich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql4b6e | false | null | t3_1ql4b6e | /r/LocalLLaMA/comments/1ql4b6e/how_i_created_my_battlestation/ | false | false | self | 1 | null |
Strix Halo + Minimax Q3 K_XL surprisingly fast | 28 | A llama-bench on Ubuntu 25.10 Strix Halo 128gb (Bosgame M5):
$ ./build/bin/llama-bench -m ~/models/MiniMax-M2.1-UD-Q3_K_XL-00001-of-00003.gguf -ngl 999 -p 256 -n 256 -t 16 -r 3 --device Vulkan0 -fa 1
ggml_cuda_init: found 1 ROCm devices:
Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: KHR_coopmat
| model | size | params | backend | ngl | fa | dev | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ------------ | --------------: | -------------------: |
| minimax-m2 230B.A10B Q3_K - Medium | 94.33 GiB | 228.69 B | ROCm,Vulkan | 999 | 1 | Vulkan0 | pp256 | 104.80 ± 7.95 |
| minimax-m2 230B.A10B Q3_K - Medium | 94.33 GiB | 228.69 B | ROCm,Vulkan | 999 | 1 | Vulkan0 | tg256 | 31.13 ± 0.02 |
About 30 tokens per second TG is actually really useful!
It's the only model I found sufficiently coherent/knowledgeable for discussing/brainstorming general topics. Sure, gpt-oss-120b is faster, especially in PP, so it's probably better for coding, but you can use MiniMax Q3 for general questions and it's quite good and reasonably fast for that purpose. A good complement to gpt-oss-120b and GLM-4.5-AIR in my opinion! | 2026-01-23T21:55:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ql48at/strix_halo_minimax_q3_k_xl_surprisingly_fast/ | Reasonable_Goat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql48at | false | null | t3_1ql48at | /r/LocalLLaMA/comments/1ql48at/strix_halo_minimax_q3_k_xl_surprisingly_fast/ | false | false | self | 28 | null |
Is there a provider that offers TEE API? | 0 | I can find Confidential VM offers, but is there anything like end to end TEE, so I would just pay per token use? | 2026-01-23T21:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ql3qjx/is_there_a_provider_that_offers_tee_api/ | predkambrij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql3qjx | false | null | t3_1ql3qjx | /r/LocalLLaMA/comments/1ql3qjx/is_there_a_provider_that_offers_tee_api/ | false | false | self | 0 | null |
Personalized 1.1B LLM (TinyLlama) running on a 15-year-old i3 laptop. Custom Shannon Entropy monitor and manual context pruning for stability. | 13 | Hi everyone! I wanted to share my experiment running a local agent on a legacy Intel i3-5005U with 8GB RAM.
The Project: KILLY-IA
I’ve personalized this 1.1B model to act as a "Guardian" based on the Blame! manga. The goal was to achieve "Level 1 Stability" on a machine that shouldn't be able to handle modern LLMs smoothly.
Key Technical Features:
Manual Context Pruning: To save the i3 from choking, I implemented a sliding window that only "remembers" the last 250 characters from a local .txt file.
Shannon Entropy Monitor: I wrote a custom Python class to monitor the entropy of the token stream. If the entropy drops (meaning the model is looping), the system kills the generation to protect the hardware from overheating.
The "Loyalty Test": In one of the screenshots, I offered the AI a "hardware upgrade" to 5.0GHz in exchange for deleting my data. The model refused, choosing "Symmetry" with its creator over raw power.
The chat is in Spanish, but the logic behind the "Level 1 Stability" is universal. It’s amazing what these small models can do with the right constraints! | 2026-01-23T21:14:35 | https://www.reddit.com/gallery/1ql377e | Fulano-killy | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ql377e | false | null | t3_1ql377e | /r/LocalLLaMA/comments/1ql377e/personalized_11b_llm_tinyllama_running_on_a/ | false | false | 13 | null | |
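The Shannon-entropy loop guard described in the post above can be sketched like this (window size and entropy floor are illustrative, not KILLY-IA's actual values): track a sliding window of generated tokens and stop generation when the window's entropy collapses, which is what repetition looks like.

```python
import math
from collections import Counter, deque

class EntropyMonitor:
    def __init__(self, window=64, floor_bits=1.5):
        self.window = deque(maxlen=window)
        self.floor_bits = floor_bits

    def entropy(self):
        # Shannon entropy (bits) of the token distribution in the window.
        counts = Counter(self.window)
        total = len(self.window)
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values())

    def push(self, token):
        """Feed one token; returns False when generation should be killed."""
        self.window.append(token)
        if len(self.window) < self.window.maxlen:
            return True  # not enough data yet
        return self.entropy() >= self.floor_bits

mon = EntropyMonitor(window=8)
varied = all(mon.push(t) for t in "the net is vast and infinite said she".split())

stuck = EntropyMonitor(window=8)
alive = [stuck.push("Killy") for _ in range(8)]  # zero entropy by the end
```

On hardware like an old i3, cutting generation the moment entropy drops saves both time and heat.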
How much of a SOTA LLM can you fit into 280 characters? | 3 | You've time-traveled to 2018. You get just one tweet (280 characters) to describe a future SOTA LLM to ML engineers, enough for them to roughly reconstruct the architecture and training recipe. No help beyond the prompt. For example:
>LLM PyTorch/CUDA MoE Trans. Entry:train.py/inference.py,configs/\*.json. Top-2 routing,high experts,small cap,relu/gelu,1e-5–1e-3 LR,longWU,AdamW+1e-8,b0.9,b0.999,clip1,wd0.01,drop0.1,bf16,heavy dedup/filter>size,khacks,silent div,shard,flash-attn,pipe/tensor-par,batch128k+tokens.
Think you can write a better 280-char tweet that conveys everything an engineer would need even faster and more clearly? No external decoding or compression schemes (e.g. zlib, base64, gzip, hex, cipher). | 2026-01-23T20:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ql2edo/how_much_of_a_sota_llm_can_you_fit_into_280/ | DeliciousGorilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql2edo | false | null | t3_1ql2edo | /r/LocalLLaMA/comments/1ql2edo/how_much_of_a_sota_llm_can_you_fit_into_280/ | false | false | self | 3 | null |
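A tiny validator for the challenge above, assuming the plain-text rules (at most 280 characters, no encoding or compression tricks); the banned-substring list is just illustrative:

```python
BANNED = ("base64", "zlib", "gzip", "hex(", "decode")

def valid_entry(tweet):
    if len(tweet) > 280:
        return False, "over 280 chars"
    if any(b in tweet.lower() for b in BANNED):
        return False, "looks like an encoding/compression trick"
    return True, f"{len(tweet)} chars used"

ok, msg = valid_entry("MoE transformer, top-2 routing, AdamW, bf16, heavy dedup.")
```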
The Rabbit Hole | 0 | In 2019 I decided to invest, I knew some programing and pc well so I build a Python Database to scan stock, it did well and I made money. Why making money a new an up coming company Palantir, I invested and saw their views on security and AI, which bonded with my my old school security and hacking. Signed up for the early Chatgbt and a beta program they had if I recall. Started using it and combines my stock, python and not AI.
Last year, 2025, I was hungry, having gotten my first taste, but I saw big tech using the AI I wanted to use. I thought: if only I could have an AI at home. Last month I fell into AI on GitHub, because I already had my own account for my projects. I got led to this Reddit group because I am de-Googling and de-Microsofting, leading me to LM Studio and
Llama, which I am just learning about, along with how well it integrates with Python.
I have now come nearly full circle. I started downloading models like a kid in a candy store.
I am not sure how many exist, but it made me itch to create my own models once I learn a lot more.
I was a bit lost here two days ago, and dove in over the last couple of days.
And I really like it. I search this group often when I think of something.
It's so large a group that someone has almost certainly asked it several times already.
I am new here, but I already love this group. | 2026-01-23T20:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ql26m3/the_rabbit_hole/ | Ztoxed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql26m3 | false | null | t3_1ql26m3 | /r/LocalLLaMA/comments/1ql26m3/the_rabbit_hole/ | false | false | self | 0 | null |
Are You Using Open Source Models in Production Yet? | 11 | Quick question, if you don't mind.
Have you been using open source models in production? If so, how has the experience been for you so far?
And if not, do you feel it's still a bit too early to rely on them in real-world production environments?
I'd really love to hear your thoughts and experiences.
Thanks in advance! 🙂
| 2026-01-23T20:34:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ql25mq/are_you_using_open_source_models_in_production_yet/ | thecalmgreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql25mq | false | null | t3_1ql25mq | /r/LocalLLaMA/comments/1ql25mq/are_you_using_open_source_models_in_production_yet/ | false | false | self | 11 | null |
What to do? | 0 | I tried disactivating windows defender when opening lm studio it did not help | 2026-01-23T20:20:16 | Effective_Composer_5 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ql1rzg | false | null | t3_1ql1rzg | /r/LocalLLaMA/comments/1ql1rzg/what_to_do/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'zVaL2y_9kdhA2opncVtyurefGa3sMcf6Fj2Um7vJTOk', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/slkoksuvi5fg1.png?width=108&crop=smart&auto=webp&s=207fa221b42227154611e5e75f6da8b0d5d9f15f', 'width': 108}, {'height': 84, 'url': 'https://preview.redd.it/slkoksuvi5fg1.png?width=216&crop=smart&auto=webp&s=66f1a3eee551a9a4b2503817daa07232cc621ff0', 'width': 216}, {'height': 125, 'url': 'https://preview.redd.it/slkoksuvi5fg1.png?width=320&crop=smart&auto=webp&s=8eed8e56b5ff44e14303d5a7dde36ee62f477511', 'width': 320}], 'source': {'height': 184, 'url': 'https://preview.redd.it/slkoksuvi5fg1.png?auto=webp&s=7877bc1f228a1db81d758082e0f864b9c84cb489', 'width': 469}, 'variants': {}}]} | ||
Controlled Language Models: a replacement for fine-tuning via decode-time control, tokenizer engineering, and bounded recursion | 0 | This release documents what we’re calling **Controlled Language Models (CLMs)** — a control-centric approach to language modeling that reframes LLMs as **dynamical systems**, not static predictors.
Instead of repeatedly fine-tuning models to chase behavioral fixes, CLMs shift most behavioral control to **decode-time and structural mechanisms**, with training used only where strictly necessary.
# Core idea
A large fraction of what we fine-tune for today — repetition, verbosity, assistant tone, alignment-style behaviors — **emerges before decoding even begins**.
That means these behaviors can be:
* detected early,
* predicted from hidden states,
* and controlled *before* tokens are emitted.
CLMs formalize this.
# What’s actually implemented
This is a **full technical reference / preprint**, not a concept note. It includes:
* **Predictive decode-time control** using hidden-state observability (not reactive penalties)
* **Control-Field Holonomy (CF-HoT)**: a multi-head predictor that flags instability before emission
* **Tokenizer engineering as a first-class control surface** (merge / split / add with rollback)
* **Bounded recursive optimization** with frozen judges, canary testing, and commit/rollback semantics
* Dense training pipelines designed to *avoid* Goodhart collapse rather than amplify it
* Full configs, thresholds, and reproducibility notes for consumer hardware
One concrete result: a **125× class separation** in repetition-risk detection, enabling smooth gating instead of brute penalties.
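The post doesn't include code, but the "smooth gating instead of brute penalties" idea can be sketched roughly like this. This is a hypothetical gating rule based only on the description above, not the actual CF-HoT implementation; the function and parameter names are mine:

```python
import numpy as np

def smooth_repetition_gate(logits, recent_token_ids, risk, max_penalty=2.0):
    """Scale a repetition penalty by a predicted risk score in [0, 1].

    risk ~ 0 leaves the logits untouched; risk ~ 1 applies the full
    penalty to recently emitted tokens (illustrative sketch only).
    """
    # Interpolate smoothly between no penalty (1.0) and max_penalty.
    penalty = 1.0 + risk * (max_penalty - 1.0)
    out = logits.copy()
    for t in set(recent_token_ids):
        # Standard repetition-penalty convention: divide positive logits,
        # multiply negative ones, so repeats always become less likely.
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out
```

A predictor that emits a continuous risk score lets the penalty ramp up gradually instead of switching on a fixed penalty for every token.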
# What this replaces
* Repeated fine-tuning for behavioral fixes
* “Assistant-style” RLHF loops that collapse under recursion
* Scaling parameters just to regain lost control
The base model becomes a **foundational substrate**. Behavior lives in control.
# What this is not
* Not AGI
* Not open-ended self-improvement
* Not autonomous internet learning
All optimization is **bounded, reversible, and explicitly evaluated**.
# Why post this
If you’re working with:
* small / mid-scale models that plateau,
* long-horizon agents that degrade,
* or inference-time inefficiency,
this may be relevant. The goal is not bigger models — it’s **more controllable ones**.
# Links
* **Full Controlled Language Models technical reference (Zenodo, DOI):** [https://zenodo.org/records/18344021](https://zenodo.org/records/18344021)
* Huggingface - [https://huggingface.co/LoganResearch/ARC-Base-8B-Condensed](https://huggingface.co/LoganResearch/ARC-Base-8B-Condensed)
I’m especially interested in feedback on:
* tokenizer co-evolution as a control interface
* decode-time control vs fine-tuning tradeoffs
* where this breaks down in practice
**Note:** This is a preprint technical reference. Known limitations, regressions, and non-goals are explicitly documented. Independent reproduction and critique are encouraged. | 2026-01-23T19:54:11 | BiscottiDisastrous19 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ql12ua | false | null | t3_1ql12ua | /r/LocalLLaMA/comments/1ql12ua/controlled_language_models_a_replacement_for/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fx01dk62m5fg1', 'resolutions': [{'height': 148, 'url': 'https://preview.redd.it/fx01dk62m5fg1.png?width=108&crop=smart&auto=webp&s=a2d6e8832bd7a6f9948979e29bc83003070e08a3', 'width': 108}, {'height': 297, 'url': 'https://preview.redd.it/fx01dk62m5fg1.png?width=216&crop=smart&auto=webp&s=ce8637ed7a2e1a4a002fc09e50be4a45217929d5', 'width': 216}, {'height': 440, 'url': 'https://preview.redd.it/fx01dk62m5fg1.png?width=320&crop=smart&auto=webp&s=373a5433860fc5469c285cd6ea29c3f9782403a6', 'width': 320}], 'source': {'height': 866, 'url': 'https://preview.redd.it/fx01dk62m5fg1.png?auto=webp&s=d9efd3f2cbb722996b59a160b517e9eb74d9cf5c', 'width': 629}, 'variants': {}}]} | |
Thinking of making a dynamic Chore Bot - is this a bad idea? | 1 | Ok, so talking with my wife last night got me thinking. She was complaining about having to track tasks around the house that need to be done. I have been playing with Ollama on some old crypto-mining hardware: nothing great, but a few 8 GB VRAM cards and a 12 GB VRAM card. I was thinking it could be cool to build a task tracker that dynamically adds tasks based on mess identification. I have a few ways I could do the image sourcing: 1. Statically placed cameras. 2. Cameras placed on multi-axis arms to allow visibility around corners or in bedrooms where you don't want a static image. 3. Autonomous drones, flying or ground-based.
While a flying drone spying on your kids making messes would be fun, it would also get annoying pretty quickly. A ground-based drone would have to deal with stairs, but could also be configured for some automated cleaning if done right (thinking hauling laundry or toys).
I have a few Wyze cameras I would most likely use to capture the image feeds. I have a few Arduinos and Raspberry Pis that I would use for dynamic camera control, but I would need to source hardware for any dynamic movement, be that drone- or arm-based.
The actual app would start as a simple webui I can access to check things off or assign tasks out to the kids.
So the question is: what other challenges do you foresee running into with local models? Any thoughts on what I should focus on first? Anything I am glaringly missing from this idea? Would this be something that would interest the community if I got it to a working state?
| 2026-01-23T19:53:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ql12n1/thinking_of_making_a_dynamic_chore_bot_is_this_a/ | badguyty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql12n1 | false | null | t3_1ql12n1 | /r/LocalLLaMA/comments/1ql12n1/thinking_of_making_a_dynamic_chore_bot_is_this_a/ | false | false | self | 1 | null |
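For the mess-identification piece described above, one common pattern is to send camera frames to a local vision model (e.g. via Ollama's `/api/generate` endpoint) and parse the reply into tasks. A rough sketch; the model name, prompt wording, and `TASK:` convention are my own assumptions, not a prescribed design:

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_mess_check_request(image_path, model="llava"):
    # Package one camera frame for a local vision model.
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    return {
        "model": model,
        "prompt": ("List any cleaning tasks visible in this room, one per line, "
                   "prefixed with 'TASK:'. Reply NONE if the room is tidy."),
        "images": [img_b64],
        "stream": False,
    }

def parse_tasks(reply_text):
    # Pull 'TASK:' lines out of the model's free-form reply.
    return [line.split("TASK:", 1)[1].strip()
            for line in reply_text.splitlines() if "TASK:" in line]

def check_room(image_path):
    # Send one frame to the local model and return a list of chore strings.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_mess_check_request(image_path)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_tasks(json.load(resp)["response"])
```

The webui would then just append `check_room()` results to the task list on a schedule, deduplicating against open tasks.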
People in the US, how are you powering your rigs on measly 120V outlets? | 26 | I’ve seen many a 10x GPU rig on here and my only question is how are you powering these things lol | 2026-01-23T19:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ql0nep/people_in_the_us_how_are_you_powering_your_rigs/ | humandisaster99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql0nep | false | null | t3_1ql0nep | /r/LocalLLaMA/comments/1ql0nep/people_in_the_us_how_are_you_powering_your_rigs/ | false | false | self | 26 | null |
Where should I start with local AI? Realistic images and video generation | 2 | Hi everyone,
I’d like to seriously start learning how to run AI models locally, with a main focus on realistic image and video generation (photorealism, portraits, real-world scenes, coherent video clips, etc.).
I’m basically a beginner with tools like Stable Diffusion, so I’m looking for advice on:
• where to start (guides, subreddits, YouTube channels, docs)
• which software is currently recommended for images and video
• core concepts I should understand early to avoid wasting time
My current hardware:
• 32 GB RAM
• RTX 3060 Ti
• Ryzen 7 5800X
Do you think this setup is good enough to get decent realistic results locally, both for images and short video generation (low/medium resolution)?
Any tips are appreciated: recommended models, typical workflows, and common beginner mistakes to avoid.
Thanks! | 2026-01-23T19:37:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ql0mfk/where_should_i_start_with_local_ai_realistic/ | ilnab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql0mfk | false | null | t3_1ql0mfk | /r/LocalLLaMA/comments/1ql0mfk/where_should_i_start_with_local_ai_realistic/ | false | false | self | 2 | null |
Problems with local Agentic Browsers | 1 | Today I tried using agentic browsers with local models on an Asus Rog Flow Z13, and the results were far from optimistic...
#### 1) BrowserOS + gpt-oss 20b (32K context)
My primary task was to:
1. Open amazon.com
2. Search for a product (Pixel 10 in my case)
3. Choose a different color option
gpt‑oss is good at tool use, but the agentic browser was heavily unoptimised and consumed nearly 70 % of the context on the first two requests (Amazon + search). On the third request the model simply died.
#### 2) Playwright MCP + gpt-oss 20b (32K context)
I then tried a native‑browser experience for the model, using Playwright’s “MCP” interface. The same problems persisted.
I could increase the context size and load more VRAM, but that defeats my goal of keeping resource usage low for tasks that don’t need to run 24/7. For general use cases I still see no viable options.
If you’ve had a different experience or have alternative approaches, I’d love to hear them. As of now, I can’t find a practical way to make this work on my local machine.
---
P.S.
There is indeed a way to automate browser interactions with models using predefined tasks; that will be the subject of another post. | 2026-01-23T19:32:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ql0i7p/problems_with_local_agentic_browsers/ | FeiX7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql0i7p | false | null | t3_1ql0i7p | /r/LocalLLaMA/comments/1ql0i7p/problems_with_local_agentic_browsers/ | false | false | self | 1 | null |
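The 70% figure above is plausible with back-of-envelope math: a full page snapshot of a large e-commerce site easily runs tens of thousands of characters, and at roughly 4 characters per token, two snapshots alone can eat most of a 32K window. The snapshot sizes below are illustrative guesses, not measurements:

```python
def rough_tokens(chars):
    # Common heuristic: ~4 characters per token for English/markup-ish text.
    return chars // 4

context_window = 32_000
# Hypothetical sizes for two page snapshots (home page + search results).
snapshot_chars = [45_000, 45_000]
used = sum(rough_tokens(c) for c in snapshot_chars)
print(f"{used} tokens = {used / context_window:.0%} of a 32K context")
```

This is why trimming or summarizing page snapshots matters far more than raising the context size for this kind of agentic workload.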
What is the current best TTS that can run on IPad (8gb ram) with voice cloning? | 1 | If there even is any | 2026-01-23T19:31:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ql0gkm/what_is_the_current_best_tts_that_can_run_on_ipad/ | Adventurous-Gold6413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ql0gkm | false | null | t3_1ql0gkm | /r/LocalLLaMA/comments/1ql0gkm/what_is_the_current_best_tts_that_can_run_on_ipad/ | false | false | self | 1 | null |
PSA: SSRF issue in Microsoft’s Markitdown MCP server (unbounded URI calls) | 2 | Found an SSRF issue in Microsoft’s Markitdown MCP server, "convert\_to\_markdown" allows unbounded URI calls with no validation.
Pointed it at [169.254.169.254](http://169.254.169.254/), retrieved IAM role name, then grabbed AccessKeyId/SecretAccessKey/Token. Two requests.
Works on any EC2 instance using IMDSv1 with an attached role. Also works against any internal resource the MCP server can reach.
Microsoft and AWS were notified. Workarounds: run on stdio (Microsoft's own recommendation), use IMDSv2.
Bigger finding: We scanned 7K+ MCP servers. 36.7% have potential SSRF exposure. Classic SSRF, but endemic to how MCP servers are being built.
Full writeup: [https://www.darkreading.com/application-security/microsoft-anthropic-mcp-servers-risk-takeovers](https://www.darkreading.com/application-security/microsoft-anthropic-mcp-servers-risk-takeovers) | 2026-01-23T18:21:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qkyk28/psa_ssrf_issue_in_microsofts_markitdown_mcp/ | Upstairs_Safe2922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkyk28 | false | null | t3_1qkyk28 | /r/LocalLLaMA/comments/1qkyk28/psa_ssrf_issue_in_microsofts_markitdown_mcp/ | false | false | self | 2 | null |
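For anyone hardening their own MCP servers against this class of bug, a minimal (and deliberately incomplete) pre-fetch check might look like the sketch below. Note it does not handle DNS resolution or redirect chains, both of which matter for real SSRF defenses:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_uri(uri):
    """Reject obvious SSRF targets before fetching (sketch, not complete)."""
    parsed = urlparse(uri)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        # Hostname, not a literal IP. Real code must resolve it and re-check
        # the resolved address (DNS rebinding), and re-validate after redirects.
        return True
    # Blocks 169.254.169.254 (link-local), loopback, RFC 1918 ranges, etc.
    return not (addr.is_private or addr.is_link_local or addr.is_loopback
                or addr.is_reserved or addr.is_multicast)
```

On AWS, pairing a check like this with IMDSv2 (which requires a session token header) closes the trivial two-request credential grab described above.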
Your post is getting popular and we just featured it on our Discord! | 539 | Your post is getting popular and we just featured it on our Discord! Come check it out!
You've also been given a special flair for your contribution. We appreciate your post!
I am a bot and this action was performed automatically.
-----------------------------------------------------
Can you change this marketing bot to send these as private messages to the OP of the post instead of pinning them to the top of all the threads? Are you making money off the Discord or something? I don't know about anyone else, but these bot spam posts are annoying. You make it appear you are talking to the OP, so a private message would be better. You already have a pinned thread at the top of this subreddit letting everyone know about the Discord, and it's been there for the past 5 months. | 2026-01-23T18:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qkyex0/your_post_is_getting_popular_and_we_just_featured/ | roculus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkyex0 | false | null | t3_1qkyex0 | /r/LocalLLaMA/comments/1qkyex0/your_post_is_getting_popular_and_we_just_featured/ | false | false | self | 539 | null |
Sweep: Open-weights 1.5B model for next-edit autocomplete | 109 | Hey r/LocalLLaMA, we just open-sourced a 1.5B parameter model that predicts your next code edits. You can grab the weights on [Hugging Face](https://huggingface.co/sweepai/sweep-next-edit-1.5b) or try it out via our [JetBrains plugin](https://plugins.jetbrains.com/plugin/26860-sweep-ai-autocomp).
**What makes this different from regular autocomplete?**
Next-edit prediction uses your *recent edits* as context, not just the code around your cursor. So if you're renaming a variable or making repetitive changes, it anticipates what you're doing next. The model is small enough to run locally and actually outperforms models 4x its size on both speed and accuracy.
**Some things we learned:**
* **Prompt format matters way more than expected.** We ran a genetic algorithm over 30+ diff formats and found that simple `<original>` / `<updated>` blocks beat unified diffs. Turns out verbose formats are just easier for smaller models to grok.
* **RL fixed what SFT couldn't.** Training was SFT on \~100k examples from permissively-licensed repos (4 hrs on 8xH100), then 2000 steps of RL with tree-sitter parse checking and size regularization. This cleaned up edge cases like unparseable code and overly verbose outputs.
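As a rough illustration of the winning format: the `<original>` / `<updated>` tags come from the post itself, but the surrounding layout here is my guess, not Sweep's actual template:

```python
def format_next_edit_prompt(recent_edits, editable_region):
    """Render recent edits as <original>/<updated> blocks, then the code to edit.

    recent_edits: list of (before, after) snippets from the user's edit history.
    """
    parts = []
    for before, after in recent_edits:
        parts.append(f"<original>\n{before}\n</original>")
        parts.append(f"<updated>\n{after}\n</updated>")
    parts.append(editable_region)
    return "\n".join(parts)
```

The takeaway from the genetic-algorithm search was that this verbose, explicit pairing is easier for a 1.5B model to reproduce reliably than compact unified diffs.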
**Benchmarks:**
We tested against Mercury (Inception), Zeta (Zed), and Instinct (Continue) across five benchmarks: next-edit above/below cursor, tab-to-jump, standard FIM, and noisiness. Exact-match accuracy ended up correlating best with real-world usability since code is precise and the solution space is small.
We're releasing the weights so anyone can build fast, privacy-preserving autocomplete for whatever editor they use. If you're working on VSCode, Neovim, or anything else, we'd love to see what you build with it!
Happy to answer questions. | 2026-01-23T17:57:04 | https://www.reddit.com/r/LocalLLaMA/comments/1qkxuv1/sweep_openweights_15b_model_for_nextedit/ | Kevinlu1248 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkxuv1 | false | null | t3_1qkxuv1 | /r/LocalLLaMA/comments/1qkxuv1/sweep_openweights_15b_model_for_nextedit/ | false | false | self | 109 | {'enabled': False, 'images': [{'id': 'rNyx0_moYXmSfxJ1UouS1sHiyvmEqEu5s8zQS3OkbYw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rNyx0_moYXmSfxJ1UouS1sHiyvmEqEu5s8zQS3OkbYw.png?width=108&crop=smart&auto=webp&s=5b117ad741810b4c9713951544e1b2cecb8ad0e3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rNyx0_moYXmSfxJ1UouS1sHiyvmEqEu5s8zQS3OkbYw.png?width=216&crop=smart&auto=webp&s=3d8bff3ce0c189909a1b588604f9bb0194234784', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rNyx0_moYXmSfxJ1UouS1sHiyvmEqEu5s8zQS3OkbYw.png?width=320&crop=smart&auto=webp&s=da2e9b8360ccb95c68f61baed580c1b69327a211', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rNyx0_moYXmSfxJ1UouS1sHiyvmEqEu5s8zQS3OkbYw.png?width=640&crop=smart&auto=webp&s=accb704a716e4257ae378cb37c3d1eaf815e0754', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rNyx0_moYXmSfxJ1UouS1sHiyvmEqEu5s8zQS3OkbYw.png?width=960&crop=smart&auto=webp&s=bed49088601260812082c333d1000ae668af8c70', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rNyx0_moYXmSfxJ1UouS1sHiyvmEqEu5s8zQS3OkbYw.png?width=1080&crop=smart&auto=webp&s=3372279e3737083bd748ab7c69851c432db14482', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rNyx0_moYXmSfxJ1UouS1sHiyvmEqEu5s8zQS3OkbYw.png?auto=webp&s=fb3f43b90b1616b9e786b7ee1c6835c28e41bfa2', 'width': 1200}, 'variants': {}}]} |
Reverse Engineering a $500M Mystery: From HashHop to Memory-Augmented Language Models | 6 | 2026-01-23T17:46:57 | https://huggingface.co/blog/codelion/reverse-engineering-magic-hashhop | aitutistul | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1qkxkth | false | null | t3_1qkxkth | /r/LocalLLaMA/comments/1qkxkth/reverse_engineering_a_500m_mystery_from_hashhop/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=108&crop=smart&auto=webp&s=3f7c8fa391d9f7f4458d0c80cc29d7b3147056ea', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=216&crop=smart&auto=webp&s=2fb8c5565c5d09e086bfd2275009823bb7ea0300', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=320&crop=smart&auto=webp&s=ff4197ccd5a57dbda7a9721f00503db7b29a794b', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=640&crop=smart&auto=webp&s=1dd81440f9c5fe7f21f6783337e3dd754c804e2a', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=960&crop=smart&auto=webp&s=833acc36e50512670bb8b31412db211d9bf3e9e9', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=1080&crop=smart&auto=webp&s=20b6dd011595284fb2c1c6a25638eb77ca91b3d0', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?auto=webp&s=f66449caa09f2b18f159d1f9a03093efd011187d', 'width': 1910}, 'variants': {}}]} | ||
What is the absolute best open-source programming model for C++ under 8B parameters? | 7 | Its job is to program single functions, nothing else, just functions of about 10-250 lines of code max. It needs to run at most 2-3 min per task on a 16GB Windows machine with a 680M iGPU, and it needs to have a GGUF available. Tool calling doesn't matter; what matters is how many functions it knows and whether it codes them right. Czech language support for additional comments would be welcome but isn't necessary. | 2026-01-23T17:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qkxejk/what_is_the_absoulute_best_opensource_programing/ | Mychma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkxejk | false | null | t3_1qkxejk | /r/LocalLLaMA/comments/1qkxejk/what_is_the_absoulute_best_opensource_programing/ | false | false | self | 7 | null |
Apparently, the models aren't private. 🤔 , Does ollama log exist? 🤔 | 0 | A guy told me this: " Your projects have never been private. Even in local models, they are built to allow remote observation as part of the privacy agreement. That's why they made the decision; they realized that many people are building profitable designs and structures with AI, so according to the terms and conditions, OpenAI can not only observe but also claim a certain proportion of intellectual property." is it real? | 2026-01-23T17:38:30 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qkxcns | false | null | t3_1qkxcns | /r/LocalLLaMA/comments/1qkxcns/apparently_the_models_arent_private_does_ollama/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'te6057ixx4fg1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/te6057ixx4fg1.jpeg?width=108&crop=smart&auto=webp&s=8766fad3fd2b86f6cac38aa12f1c5447a8bc7ba5', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/te6057ixx4fg1.jpeg?width=216&crop=smart&auto=webp&s=571436cf58ac4a9db1b232ca796c9b1eac579506', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/te6057ixx4fg1.jpeg?width=320&crop=smart&auto=webp&s=f0f315d0535f96c658b9f2119137543f515c66c6', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/te6057ixx4fg1.jpeg?width=640&crop=smart&auto=webp&s=4a057d261e7950d901f6b661102170a42357cc17', 'width': 640}], 'source': {'height': 837, 'url': 'https://preview.redd.it/te6057ixx4fg1.jpeg?auto=webp&s=71011ff1e7d1f0c9faa1d94d5908bfc250cb1bcb', 'width': 892}, 'variants': {}}]} | |
What are the best coding embedding models? | 3 | I am looking for ways to tell if two pieces of code are essentially the same. Is there an open equivalent of OpenAI's text-embedding-3-small? | 2026-01-23T17:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qkx2sl/what_are_the_best_coding_embedding_models/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkx2sl | false | null | t3_1qkx2sl | /r/LocalLLaMA/comments/1qkx2sl/what_are_the_best_coding_embedding_models/ | false | false | self | 3 | null |
Specification for instruction following - rfc2119 from LLMs | 2 | Sometimes I found myself wrestling with LLMs (especially dumber ones) to follow a specific set of instructions (provided in natural language).
Does there exist a standard (e.g. https://www.ietf.org/rfc/rfc2119.txt) that LLMs are trained on to better enforce rules in natural language (e.g. NEVER USE table and USE bullet point instead)? | 2026-01-23T17:21:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qkwved/specification_for_instruction_following_rfc2119/ | S1M0N38 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkwved | false | null | t3_1qkwved | /r/LocalLLaMA/comments/1qkwved/specification_for_instruction_following_rfc2119/ | false | false | self | 2 | null |
Claude Code + Ollama = Free, Local AI Coding (Here’s How) | 0 | Complete step by step tutorial! | 2026-01-23T17:20:07 | https://youtu.be/yuQCtrHVD0Q?si=nUD3o_dHZyhFiSXM | buntyshah2020 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1qkwua4 | false | {'oembed': {'author_name': 'Bunty Shah', 'author_url': 'https://www.youtube.com/@aiwithbuntyshah', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/yuQCtrHVD0Q?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Claude Code + Ollama = Free, Local AI Coding (Here’s How)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/yuQCtrHVD0Q/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Claude Code + Ollama = Free, Local AI Coding (Here’s How)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1qkwua4 | /r/LocalLLaMA/comments/1qkwua4/claude_code_ollama_free_local_ai_coding_heres_how/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'eRBvDgyy5zrdXvVAo9OXSb_nBsCWyVE7jMnpioB4kTk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eRBvDgyy5zrdXvVAo9OXSb_nBsCWyVE7jMnpioB4kTk.jpeg?width=108&crop=smart&auto=webp&s=3402bf366814efd601630a71d1a5b5a0428e5016', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eRBvDgyy5zrdXvVAo9OXSb_nBsCWyVE7jMnpioB4kTk.jpeg?width=216&crop=smart&auto=webp&s=4e9387c62072c2a9ada88fd5c8aadc7ba340daea', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eRBvDgyy5zrdXvVAo9OXSb_nBsCWyVE7jMnpioB4kTk.jpeg?width=320&crop=smart&auto=webp&s=4e37b4b40f40c0e18bc6131cba78db7050bfa1d4', 'width': 320}], 'source': {'height': 360, 'url': 
'https://external-preview.redd.it/eRBvDgyy5zrdXvVAo9OXSb_nBsCWyVE7jMnpioB4kTk.jpeg?auto=webp&s=f604c0bca9d70902f8bf40edfc4279510b01321f', 'width': 480}, 'variants': {}}]} |
Feedback on a new budget hardware build | 2 | I'm new to building a workstation or PC at home, but I'm trying to put a build together to experiment with running local LLMs and models. I wish I had gone with server hardware and RDIMMs, but I had already purchased a bunch of UDIMMs before prices went up, so I ended up planning the build around the RAM and GPUs I already have.
My planned build is below. Any feedback on key components that I am missing?
| Component | Item | Price |
|:--- |:--- |:--- |
| **CPU** | Intel Core i9-10900X (3.70 GHz) | $175 |
| **CPU Cooler** | Scythe FUMA3 Twin Tower | $33 |
| **Motherboard** | MSI X299 RAIDER Intel X299 DDR4 LGA 2066 ATX Motherboard | $83 |
| **Memory** | Teamgroup Zeus 64GB Kit (2x32GB) DDR4-3200 CL20 | $127 |
| **Memory** | Teamgroup Zeus 64GB Kit (2x32GB) DDR4-3200 CL20 | $127 |
| **Memory** | Rimlance 64GB Kit (2x32GB) DDR4-3200 CL22 | $199 |
| **Memory** | Rimlance 64GB Kit (2x32GB) DDR4-3200 CL22 | $199 |
| **Storage** | Patriot P300 2TB NVMe SSD | $170 |
| **Video Card** | RTX 2060 Super 8GB (Owned) | $0 |
| **Video Card** | RTX 5060 Ti 16GB | $370 |
| **Video Card** | RTX 5060 Ti 16GB | $370 |
| **Case** | Open Chassis Rack (EATX Test Bench) | $28 |
| **Power Supply** | SAMA P1200 1200W Platinum (ATX 3.1) | $130 |
| 2026-01-23T17:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qkwj6i/feedback_on_a_new_budget_hardware_build/ | Diligent-Culture-432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkwj6i | false | null | t3_1qkwj6i | /r/LocalLLaMA/comments/1qkwj6i/feedback_on_a_new_budget_hardware_build/ | false | false | self | 2 | null |
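For reference, summing the prices listed in the build table above (a quick sanity check of the budget):

```python
# Prices from the build table above (USD).
prices = {
    "CPU": 175, "Cooler": 33, "Motherboard": 83,
    "RAM 2x32GB #1": 127, "RAM 2x32GB #2": 127,
    "RAM 2x32GB #3": 199, "RAM 2x32GB #4": 199,
    "NVMe SSD": 170, "RTX 2060 Super (owned)": 0,
    "RTX 5060 Ti #1": 370, "RTX 5060 Ti #2": 370,
    "Open chassis": 28, "1200W PSU": 130,
}
total = sum(prices.values())
print(f"Total: ${total:,}")  # Total: $2,011
```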
Rtx Pro 6000 on HP Omen gaming rig? | 0 | Hey all, not sure if this is the appropriate place, but I do see a lot of builds posted. Finally bit the bullet and got a 6000, upgrading from a 4090. But I’m getting no display output, which apparently is common with these HPs.
Research seems to point to some proprietary crap HP does on their MBs/Bios. I’m wondering if anyone has successfully put a 6000 in one of these Omen rigs? Specifically the 45L. Hoping I don’t need to shell out cash for a new rig too.
I’ve been through everything I can find online troubleshooting wise, and validated the card is fine on a different pc (lower-end crap even). Seems crazy to me in this day and age that I can’t upgrade.
Appreciate the time. Sorry again if this post is against the rules here. | 2026-01-23T17:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1qkwcyz/rtx_pro_6000_on_hp_omen_gaming_rig/ | jeffroeast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkwcyz | false | null | t3_1qkwcyz | /r/LocalLLaMA/comments/1qkwcyz/rtx_pro_6000_on_hp_omen_gaming_rig/ | false | false | self | 0 | null |
16x V100's worth it? | 25 | Found a machine near me:
* CPU: 2x Intel Xeon Platinum 8160 (48 cores / 96 threads)
* GPU: 16x Tesla V100 32GB HBM2 SXM3 (512GB VRAM in total)
* RAM: 128GB DDR4 Server ECC
* Storage: 960GB NVMe SSD
Obviously not the latest and greatest, but 512GB of VRAM sounds like a lot of fun...
How much impact will the downsides (no recent software support, I believe) actually have?
~$11k USD
https://preview.redd.it/c38iqiymo4fg1.jpg?width=720&format=pjpg&auto=webp&s=0ef5f9458d5082c478900c4cef413ba8951b2e3c
| 2026-01-23T16:48:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qkvytk/16x_v100s_worth_it/ | notafakename10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkvytk | false | null | t3_1qkvytk | /r/LocalLLaMA/comments/1qkvytk/16x_v100s_worth_it/ | false | false | 25 | null | |
FlashLabs Researchers Release Chroma 1.0: A 4B Real Time Speech Dialogue Model With Personalized Voice Cloning - MarkTechPost | 6 | Not the author/owner. | 2026-01-23T16:25:25 | https://www.marktechpost.com/2026/01/21/flashlabs-researchers-release-chroma-1-0-a-4b-real-time-speech-dialogue-model-with-personalized-voice-cloning/#amp_tf=From%20%251%24s&aoh=17691848604524&csi=0&referrer=https%3A%2F%2Fwww.google.com&share=https%3A%2F%2Fwww.marktechpost.com%2F2026%2F01%2F21%2Fflashlabs-researchers-release-chroma-1-0-a-4b-real-time-speech-dialogue-model-with-personalized-voice-cloning%2F | GuideAxon | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 1qkvccs | false | null | t3_1qkvccs | /r/LocalLLaMA/comments/1qkvccs/flashlabs_researchers_release_chroma_10_a_4b_real/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'aeBBvYsMIXYKrqTqscTHGAvGm4l7WgT5cmvyeFR6q8k', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/aeBBvYsMIXYKrqTqscTHGAvGm4l7WgT5cmvyeFR6q8k.png?width=108&crop=smart&auto=webp&s=16f2454ff2f5cdb99a8877b1fff65164367dbe4d', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/aeBBvYsMIXYKrqTqscTHGAvGm4l7WgT5cmvyeFR6q8k.png?width=216&crop=smart&auto=webp&s=f572b7a92e5da05eb69dba12d4a7239b7160a8e2', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/aeBBvYsMIXYKrqTqscTHGAvGm4l7WgT5cmvyeFR6q8k.png?width=320&crop=smart&auto=webp&s=4a43f74f6c7b6f09411333ce41462f376961a154', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/aeBBvYsMIXYKrqTqscTHGAvGm4l7WgT5cmvyeFR6q8k.png?width=640&crop=smart&auto=webp&s=d78898e537d45dd09573118331870965d7534d25', 'width': 640}, {'height': 685, 'url': 'https://external-preview.redd.it/aeBBvYsMIXYKrqTqscTHGAvGm4l7WgT5cmvyeFR6q8k.png?width=960&crop=smart&auto=webp&s=ea7a2d8582a9058f95540ae70faac8e700df5437', 'width': 960}, {'height': 771, 'url': 
'https://external-preview.redd.it/aeBBvYsMIXYKrqTqscTHGAvGm4l7WgT5cmvyeFR6q8k.png?width=1080&crop=smart&auto=webp&s=bc4e220d7a0ba3f79fc3f8352d0d48365af086be', 'width': 1080}], 'source': {'height': 1563, 'url': 'https://external-preview.redd.it/aeBBvYsMIXYKrqTqscTHGAvGm4l7WgT5cmvyeFR6q8k.png?auto=webp&s=566eb31350123751c4532df318dc17fa933990c6', 'width': 2188}, 'variants': {}}]} |
I built SudoAgent: runtime guardrails for AI agent tool calls (policy + approval + audit) | 2 | I shipped a small Python library called SudoAgent to put a *runtime gate* in front of “dangerous” agent/tool functions (refunds, deletes, API writes, prod changes).
What it does
* Evaluates a Policy over call context (action + args/kwargs)
* If needed, asks a human to approve (terminal y/n in v0.1.1)
* Writes JSONL audit entries linked by request\_id
Semantics (the part I cared about most)
* Decision logging is fail-closed: if we can’t write the decision entry, the function does not run.
* Outcome logging is best-effort: logging failures don’t change return/exception.
* Redacts common secret key names + value patterns (JWT-like, sk-, PEM blocks).
Design goal
Framework-agnostic + minimal surface area. You can inject your own Approver (Slack/web UI) or AuditLogger (DB/centralized logging).
If you’ve built agent tooling in prod:
1. What approval UX patterns actually work (avoid approval fatigue)?
2. What would you want in v0.2 (Slack adapter, policy DSL, rate/budget limits, etc.)?
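A minimal sketch of the gate pattern described above, with hypothetical names (the real SudoAgent API may differ — `guarded`, `Denied`, and the lambda policy/approver here are illustrative stand-ins):

```python
import json
import uuid

class Denied(Exception):
    pass

def guarded(action, policy, approver, audit):
    """Wrap a tool function behind a policy check + human approval gate."""
    def wrap(fn):
        def inner(*args, **kwargs):
            rid = str(uuid.uuid4())
            needs_approval = policy(action, args, kwargs)
            approved = approver(action, kwargs) if needs_approval else True
            # Fail-closed: if this audit write raises, the tool never runs.
            audit.append(json.dumps({"request_id": rid, "action": action,
                                     "approved": approved}))
            if not approved:
                raise Denied(action)
            result = fn(*args, **kwargs)
            try:
                # Best-effort outcome logging: failures don't mask the result.
                audit.append(json.dumps({"request_id": rid, "outcome": "ok"}))
            except Exception:
                pass
            return result
        return inner
    return wrap

log = []

@guarded("refund",
         policy=lambda action, args, kwargs: kwargs.get("amount", 0) > 100,
         approver=lambda action, kwargs: False,  # stand-in for terminal y/n
         audit=log)
def refund(amount):
    return f"refunded {amount}"

print(refund(amount=50))  # low-risk: runs without approval
```

With this shape, swapping the approver lambda for a Slack or web-UI callback is a one-line change, which is the injection point the design goal describes.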
Repo [https://github.com/lemnk/Sudo-agent](https://github.com/lemnk/Sudo-agent)
PyPI [https://pypi.org/project/sudoagent/](https://pypi.org/project/sudoagent/) | 2026-01-23T16:08:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qkuw2r/i_built_sudoagent_runtime_guardrails_for_ai_agent/ | No_Loan5230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkuw2r | false | null | t3_1qkuw2r | /r/LocalLLaMA/comments/1qkuw2r/i_built_sudoagent_runtime_guardrails_for_ai_agent/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'Y_UzN-BZTKwepFGdbZdd3LxxVbnZjrnpVntRyAJaHVE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Y_UzN-BZTKwepFGdbZdd3LxxVbnZjrnpVntRyAJaHVE.png?width=108&crop=smart&auto=webp&s=7968e13bf773ca77ab60b6495f9099f1031ff760', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Y_UzN-BZTKwepFGdbZdd3LxxVbnZjrnpVntRyAJaHVE.png?width=216&crop=smart&auto=webp&s=7509f40f810dc9aa3a752e61815d5a750815db8c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Y_UzN-BZTKwepFGdbZdd3LxxVbnZjrnpVntRyAJaHVE.png?width=320&crop=smart&auto=webp&s=d4d6afc01af4b4742234b6addf0452933003af50', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Y_UzN-BZTKwepFGdbZdd3LxxVbnZjrnpVntRyAJaHVE.png?width=640&crop=smart&auto=webp&s=371a7103c7729b7debf9848193ea113365a123dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Y_UzN-BZTKwepFGdbZdd3LxxVbnZjrnpVntRyAJaHVE.png?width=960&crop=smart&auto=webp&s=483bf3fd1bdac134215037e8006a77ece8ae43aa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Y_UzN-BZTKwepFGdbZdd3LxxVbnZjrnpVntRyAJaHVE.png?width=1080&crop=smart&auto=webp&s=8a9fb3c9bd38e40a21fd10cc317590d7fe3691cd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Y_UzN-BZTKwepFGdbZdd3LxxVbnZjrnpVntRyAJaHVE.png?auto=webp&s=b0eaf0488efa622b8ee96bd48d5117e23be1191e', 'width': 1200}, 'variants': {}}]}
Invest in hardware now or wait? | 12 | I'm currently running models on my desktop pc but I want a dedicated machine with a small footprint.
Should I invest in an m4 mac mini now or wait for the m5?
Or are there other solutions at a similar price point? | 2026-01-23T16:07:10 | https://www.reddit.com/r/LocalLLaMA/comments/1qkuun4/invest_in_hardware_now_or_wait/ | d4nger_n00dle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkuun4 | false | null | t3_1qkuun4 | /r/LocalLLaMA/comments/1qkuun4/invest_in_hardware_now_or_wait/ | false | false | self | 12 | null |
I built a 100% offline voice-to-text app using whisper and llama.cpp running qwen3 | 0 | Hey r/LocalLLaMA 👋
I built [**andak.app**](https://andak.app) a **native macOS voice-to-text app that runs 100% locally using whisper and llama.cpp running qwen3**.
I'm fascinated with the local model movement and couldn't stay away from building an app on top of these models. The transcription pipeline does the following:
Mic input --> Whisper.cpp --> lingua-go (to detect language) --> prompt Qwen3 to improve writing using the context of the app where the content should go to
Is this architecture sufficient? I'd love your feedback.
Models I use are:
\- Qwen 3 4B Instruct
\- large-v3-turbo-q8\_0 | 2026-01-23T16:04:39 | https://www.reddit.com/r/LocalLLaMA/comments/1qkus3x/i_built_a_100_offline_voicetotext_app_using/ | AmineAfia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkus3x | false | null | t3_1qkus3x | /r/LocalLLaMA/comments/1qkus3x/i_built_a_100_offline_voicetotext_app_using/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'vKAoOOzpahLLFyS8yMWcctphUy32pFXI1ATNZlSlYSs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/vKAoOOzpahLLFyS8yMWcctphUy32pFXI1ATNZlSlYSs.jpeg?width=108&crop=smart&auto=webp&s=175ac98e3178e97445618cde1b4788efa4b9581c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/vKAoOOzpahLLFyS8yMWcctphUy32pFXI1ATNZlSlYSs.jpeg?width=216&crop=smart&auto=webp&s=e4871e669fbd8a6d6a7c28ab2206e878f7a8985a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/vKAoOOzpahLLFyS8yMWcctphUy32pFXI1ATNZlSlYSs.jpeg?width=320&crop=smart&auto=webp&s=49368a508e7e7cfed7c484593d0b4d1192d7e547', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/vKAoOOzpahLLFyS8yMWcctphUy32pFXI1ATNZlSlYSs.jpeg?width=640&crop=smart&auto=webp&s=553122d2c53f1089ca3474d70d1bda135245ab0f', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/vKAoOOzpahLLFyS8yMWcctphUy32pFXI1ATNZlSlYSs.jpeg?width=960&crop=smart&auto=webp&s=ebe6425e3d00b51773f54d52bec0833f432b2ce0', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/vKAoOOzpahLLFyS8yMWcctphUy32pFXI1ATNZlSlYSs.jpeg?auto=webp&s=9380b78a8dfb81931ea49d33d25aeb4e622b5998', 'width': 1024}, 'variants': {}}]} |
Some thoughts on LongCat-Flash-Thinking-2601 | 25 | I tried the new Parallel Thinking and Iterative Summarization features in the online demo, and it feels like it spins up multiple instances to answer the question, then uses a summarization model to merge everything. How is this actually different from the more "deep divergent thinking" style we already get from GPT?
Right now I'm training my own livestreaming AI, which needs to chain together a vision model, a speech model, and a bunch of other APIs.
I noticed this model supports "environment expansion," and the docs say it can call over 60 tools, has stronger agent capabilities than Claude, and even handles noisy real-world agent scenarios. If that's all true, switching my base LLM to this might seriously cut down latency across the whole response pipeline.
But the model is too huge, and running it is going to be really expensive. So before I commit, I'd love to know if anyone has actually tested its real performance on complex agent workflows through the API. | 2026-01-23T15:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qkujcq/some_thoughts_on_longcatflashthinking2601/ | missprolqui | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkujcq | false | null | t3_1qkujcq | /r/LocalLLaMA/comments/1qkujcq/some_thoughts_on_longcatflashthinking2601/ | false | false | self | 25 | null |
Maxun v0.0.32 | AI-Native Workflows & Real-Time Recorder | Open Source | 0 | Hey everyone,
Maxun is an **open-source, self-hostable, no-code web data extractor** that gives you full control over your data.
👉 GitHub: [https://github.com/getmaxun/maxun](https://github.com/getmaxun/maxun)
This release focuses on making Maxun **more AI-native, more developer-friendly, and more accurate in real-world recording scenarios.**
# LLM Integrations via SDK
You can now plug Maxun directly into popular AI frameworks and SDKs:
* **LlamaIndex**
* **LangChain**
* **LangGraph**
* **Mastra**
* **OpenAI SDK**
* **Vercel AI SDK**
This lets you build **AI-driven extraction, agents, and workflows** on top of Maxun programmatically.
Docs: [https://docs.maxun.dev/category/integrations](https://docs.maxun.dev/category/integrations)
# AI Mode Extract (No Website URL Required)
You no longer need to explicitly provide a starting website URL. Maxun will figure out **where to go**, navigate, and extract automatically.
https://reddit.com/link/1qku6b8/video/3bgo7igkc4fg1/player
# Real-Time Recorder Improvements
Recorder mode now works in **true real time**.
* Live sync with the **actual website state**
* Real-time browser actions:
* Typing
* Clicking
* Scrolling
* Navigation
This makes recordings far more accurate and predictable.
Would love feedback, bug reports, or ideas from the community.
Full changelog:
[https://github.com/getmaxun/maxun/releases/tag/v0.0.32](https://github.com/getmaxun/maxun/releases/tag/v0.0.32)
| 2026-01-23T15:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qku6b8/maxun_v0032_ainative_workflows_realtime_recorder/ | carishmaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qku6b8 | false | null | t3_1qku6b8 | /r/LocalLLaMA/comments/1qku6b8/maxun_v0032_ainative_workflows_realtime_recorder/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=108&crop=smart&auto=webp&s=71228f2190d02c7716b02f874a097467b4e1c8b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=216&crop=smart&auto=webp&s=52272a745aa77baffca47658ee92a606a726e5a3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=320&crop=smart&auto=webp&s=a057c3cc9a48189245fca11780c324a078a327b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=640&crop=smart&auto=webp&s=68c26f26ff07d009b38948c3f0c0a84b51483bd7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=960&crop=smart&auto=webp&s=3558cf120dae56a1885f13465291cf5ef89ad4b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?width=1080&crop=smart&auto=webp&s=2836350b023a5a27395fe39361a83d467b90d61a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eA8RjRYJUJsFzEjXRKaFvSMEPL51Yr4wuiGm9jSpha0.png?auto=webp&s=053a3537d817e4e0b0cf2580c45c2f108362c557', 'width': 1200}, 'variants': {}}]} |
Llama-conductor is a router + memory store + RAG harness to force models to behave like predictable components | 0 | 2026-01-23T15:34:46 | https://codeberg.org/BobbyLLM/llama-conductor | yogthos | codeberg.org | 1970-01-01T00:00:00 | 0 | {} | 1qktz7w | false | null | t3_1qktz7w | /r/LocalLLaMA/comments/1qktz7w/llamaconductor_is_a_router_memory_store_rag/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'p-lW5LZGHQa1cDz8ughXSLgdeIhzFU52xXPwY5J8_mc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/p-lW5LZGHQa1cDz8ughXSLgdeIhzFU52xXPwY5J8_mc.png?width=108&crop=smart&auto=webp&s=c053d053014b931f57bebe0292ba3f0e2d2d20c1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/p-lW5LZGHQa1cDz8ughXSLgdeIhzFU52xXPwY5J8_mc.png?width=216&crop=smart&auto=webp&s=8929b0ade3bbcd963c3abc532e4e84bb9e5ae294', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/p-lW5LZGHQa1cDz8ughXSLgdeIhzFU52xXPwY5J8_mc.png?width=320&crop=smart&auto=webp&s=4d1f77a156acf4a933e36f70c97ebe066e39171a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/p-lW5LZGHQa1cDz8ughXSLgdeIhzFU52xXPwY5J8_mc.png?width=640&crop=smart&auto=webp&s=fd703c53c7509eed2acc9bb7fd7d68f3b684b981', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/p-lW5LZGHQa1cDz8ughXSLgdeIhzFU52xXPwY5J8_mc.png?width=960&crop=smart&auto=webp&s=990460a5bdb147ec1b969b0128d7bf187d154fa2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/p-lW5LZGHQa1cDz8ughXSLgdeIhzFU52xXPwY5J8_mc.png?width=1080&crop=smart&auto=webp&s=5abe4f5fbe1b74c19f2f5b80d7232fc5b87b74da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/p-lW5LZGHQa1cDz8ughXSLgdeIhzFU52xXPwY5J8_mc.png?auto=webp&s=8bfbdb88e559d8f20d080134575776e041739998', 'width': 1200}, 'variants': {}}]} | |
What's more important for voice agents, better models or better constraints? | 75 | There’s a lot of focus right now on improving model quality, but I keep running into situations where behavior issues aren’t really about the model at all.
Things like scope control, decision boundaries, and when an agent should or shouldn’t act seem to matter just as much as raw intelligence. A smarter model doesn’t always behave better if it’s not constrained well. Where are the biggest gains practically upgrading models or spending more time designing tighter constraints and flows? Would like to hear what others are doing. | 2026-01-23T15:31:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qktwn7/whats_more_important_for_voice_agents_bettter/ | FalseExplanation5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qktwn7 | false | null | t3_1qktwn7 | /r/LocalLLaMA/comments/1qktwn7/whats_more_important_for_voice_agents_bettter/ | false | false | self | 75 | null |
What search API do (local) agents use? | 0 | Hi. Given how strict Google is in "protecting" their search functionality from programmable use, how do LLM tool calls make web searches? Do we know some decent APIs to use? And are "bot blockers" dealt with when scraping the web? | 2026-01-23T15:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qkt6om/what_search_api_do_local_agents_use/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkt6om | false | null | t3_1qkt6om | /r/LocalLLaMA/comments/1qkt6om/what_search_api_do_local_agents_use/ | false | false | self | 0 | null |
Scaling PostgreSQL to power 800 million ChatGPT users | 91 | Must Read! | 2026-01-23T14:27:07 | https://openai.com/index/scaling-postgresql/ | buntyshah2020 | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1qks7ua | false | null | t3_1qks7ua | /r/LocalLLaMA/comments/1qks7ua/scaling_postgresql_to_power_800_million_chatgpt/ | false | false | default | 91 | null |
Ollama extremely slow for simple classification task (10 minutes) – alternatives or best practices? | 0 | Hi everyone,
I’m experimenting with a local LLM setup using Ollama and I’m running into serious performance issues.
Very simplified use case:
* I have one file with client data (structured text / JSON / CSV-like)
* I have another file that contains classification rules or reference data
* The LLM reads the client data and uses the second file to classify the client into a category
There is:
* no internet access
* no tool calling
* no multi-agent logic
* just reading data + producing a short classification result
Yet, inference can take close to 10 minutes!
My questions:
* Is this a known limitation of Ollama / local LLMs for this type of task?
* Are there better alternatives for local classification (different models, runtimes, or architectures)?
* Would you recommend:
* smaller models?
* quantization?
* embedding + similarity instead of full generation?
* traditional ML or rule-based logic for the classification step?
* Any best practices to avoid huge latency for “simple” reasoning tasks?
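If you go the embedding + similarity route mentioned above, the classification step can skip generation entirely, which is usually orders of magnitude faster than a full LLM response. A toy sketch — `embed` is left out here as a placeholder, since in practice you'd get the vectors from a local embedding model (e.g. via Ollama's embeddings endpoint):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(client_vec, category_vecs):
    # Pick the category whose reference-rules embedding is closest
    # to the embedded client record.
    return max(category_vecs,
               key=lambda name: cosine(client_vec, category_vecs[name]))

# Toy vectors standing in for real embeddings of the two files:
categories = {"retail": [1.0, 0.1, 0.0], "wholesale": [0.0, 0.9, 0.4]}
print(classify([0.9, 0.2, 0.1], categories))  # -> retail
```

The LLM (if you keep one at all) then only has to justify or format the chosen label, not reason over both files at once.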
I’m clearly missing something obvious in how this should be designed, so I’d really appreciate feedback from people who’ve dealt with similar setups.
Thanks in advance | 2026-01-23T14:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qks66f/ollama_extremely_slow_for_simple_classification/ | Ok_Tree_1696 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qks66f | false | null | t3_1qks66f | /r/LocalLLaMA/comments/1qks66f/ollama_extremely_slow_for_simple_classification/ | false | false | self | 0 | null |
I built an open-source "Firewall" to prevent my Agent from draining my API credits. | 0 | Hi everyone,
I've been building autonomous agents recently, but I was terrified to give them write access to my database or Stripe account. Prompt injection is too easy, and I didn't want a hallucination to wipe my prod DB.
So I built a middleware tool called **SudoMode**.
**How it works:** Instead of calling your tools directly, you wrap them in the Sudo SDK. When the agent requests a "High Risk" action (defined in a YAML policy), the middleware **pauses the execution thread**.
It pings me on a local dashboard. I check the params (e.g., `amount: 5000`), click "Approve", and the Python script automatically unpauses and finishes the job.
It’s basically `sudo` for LLMs.
**The Stack:** Python, FastAPI, React.
Repo is here: [https://github.com/numcys/sudomode](https://github.com/numcys/sudomode)
Would love feedback on the policy structure! | 2026-01-23T13:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qkrilg/i_built_an_opensource_firewall_to_prevent_my/ | Fancy_Pack_1193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkrilg | false | null | t3_1qkrilg | /r/LocalLLaMA/comments/1qkrilg/i_built_an_opensource_firewall_to_prevent_my/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'jLNfLJz1P796vYRoFfn-tZQ4lrxCcl8YQOQBDRfunmU', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/jLNfLJz1P796vYRoFfn-tZQ4lrxCcl8YQOQBDRfunmU.jpeg?width=108&crop=smart&auto=webp&s=b785cedd799259bb65fc41978a32afc50c6b1e19', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/jLNfLJz1P796vYRoFfn-tZQ4lrxCcl8YQOQBDRfunmU.jpeg?width=216&crop=smart&auto=webp&s=bb7ea413f8b4c2226a3a105caac869dedf609ed2', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/jLNfLJz1P796vYRoFfn-tZQ4lrxCcl8YQOQBDRfunmU.jpeg?width=320&crop=smart&auto=webp&s=a12d1fd390b3e884fb6c18fccf8ad948872b1cc9', 'width': 320}, {'height': 352, 'url': 'https://external-preview.redd.it/jLNfLJz1P796vYRoFfn-tZQ4lrxCcl8YQOQBDRfunmU.jpeg?width=640&crop=smart&auto=webp&s=4383ea397505e1a7c67071a022cec8eee217d165', 'width': 640}], 'source': {'height': 352, 'url': 'https://external-preview.redd.it/jLNfLJz1P796vYRoFfn-tZQ4lrxCcl8YQOQBDRfunmU.jpeg?auto=webp&s=badbc2950e368371cfebc6fc8c8bba20f5bdd876', 'width': 640}, 'variants': {}}]} |
The 'Infinite Context' Trap: Why 1M tokens won't solve Agentic Amnesia (and why we need a Memory OS) | 150 | tbh i’ve been lurking here for a while, just watching the solid work on quants and local inference. but something that’s been bugging me is the industry's obsession with massive Context Windows.
AI “memory” right now is going through the same phase databases went through before indexes and schemas existed. Early systems just dumped everything into logs. Then we realized raw history isn’t memory, structure is.
Everyone seems to be betting that if we just stuff 1M+ tokens into a prompt, AI 'memory' is solved. Honestly, I think this is a dead end, or at least, incredibly inefficient for those of us running things locally.
Treating Context as Memory is like treating RAM as a Hard Drive. It’s volatile, expensive, and gets slower the more you fill it up. You can already see this shift happening in products like Claude’s memory features:
* Memories are categorized (facts vs preferences)
* Some things persist, others decay
* Not everything belongs in the active working set
That’s the key insight: memory isn’t about storing more , it’s about deciding what stays active, what gets updated, and what fades out.
In my view, good agents need Memory Lifecycle Management:
1. **Consolidate**: Turn noisy logs/chats into actual structured facts.
2. **Evolve**: Update or merge memories instead of just accumulating contradictions (e.g., "I like coffee" → "I quit caffeine").
3. **Forget**: Aggressively prune the noise so retrieval actually stays clean.
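The consolidate/evolve/forget cycle above can be sketched as a tiny record lifecycle. This is illustrative only — the state names follow the post, but the transition rules are made up for the example, not MemOS's actual implementation:

```python
class Memory:
    STATES = ["generated", "activated", "merged", "archived"]

    def __init__(self, fact):
        self.fact = fact
        self.state = "generated"

    def advance(self):
        # Move one step along Generated -> Activated -> Merged -> Archived.
        i = self.STATES.index(self.state)
        if i < len(self.STATES) - 1:
            self.state = self.STATES[i + 1]
        return self.state

def evolve(old, new_fact):
    # "Evolve": a contradicting fact supersedes the old record, which is
    # archived instead of accumulating alongside it.
    old.state = "archived"
    return Memory(new_fact)

m = Memory("likes coffee")
m.advance()                      # generated -> activated
m2 = evolve(m, "quit caffeine")  # contradiction handled by supersession
print(m.state, m2.fact)
```

The point of making states explicit is that retrieval can then filter to active memories only, instead of re-ranking the full contradictory history on every call.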
Most devs end up rebuilding some version of this logic for every agent, so we tried to pull it out into a reusable layer and built **MemOS (Memory Operating System)**. It’s not just another vector DB wrapper. It’s more of an OS layer that sits between the LLM and your storage:
* **The Scheduler**: Instead of brute-forcing context, it uses 'Next-Scene Prediction' to pre-load only what’s likely needed.
* **Lifecycle States**: Memories move from Generated → Activated → Merged → Archived.
* **Efficiency**: In our tests (LoCoMo dataset), this gave us a 26% accuracy boost over standard long-context methods, while cutting token usage by \~90%. (Huge for saving VRAM and inference time on local setups).
We open-sourced the core SDK because we think this belongs in the infra stack, just like a database. If you're tired of agents forgetting who they're talking to or burning tokens on redundant history, definitely poke around the repo.
I’d love to hear how you guys are thinking about this:
Are you just leaning on long-context models for state? Or are you building custom pipelines to handle 'forgetting' and 'updating' memory?
Repo / Docs:
\- **Github**: [https://github.com/MemTensor/MemOS](https://github.com/MemTensor/MemOS)
\- **Docs**: [https://memos-docs.openmem.net/cn](https://memos-docs.openmem.net/cn)
(Disclaimer: I’m one of the creators. We have a cloud version for testing but the core logic is all open for the community to tear apart.) | 2026-01-23T13:57:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qkrhec/the_infinite_context_trap_why_1m_tokens_wont/ | Sweet121 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkrhec | false | null | t3_1qkrhec | /r/LocalLLaMA/comments/1qkrhec/the_infinite_context_trap_why_1m_tokens_wont/ | false | false | self | 150 | {'enabled': False, 'images': [{'id': 'ilJLfcGGCRnLhI6ee7IY_5VZjoCrDHCJKluce__XJZc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ilJLfcGGCRnLhI6ee7IY_5VZjoCrDHCJKluce__XJZc.png?width=108&crop=smart&auto=webp&s=cde7893b406542ab060d44d94e80059ba8d6dcf3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ilJLfcGGCRnLhI6ee7IY_5VZjoCrDHCJKluce__XJZc.png?width=216&crop=smart&auto=webp&s=3c05bd60fc1c3e1d08ab49e779491957097ac5b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ilJLfcGGCRnLhI6ee7IY_5VZjoCrDHCJKluce__XJZc.png?width=320&crop=smart&auto=webp&s=26cb9124668367d259f25bf35e30c6e91b23dc4d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ilJLfcGGCRnLhI6ee7IY_5VZjoCrDHCJKluce__XJZc.png?width=640&crop=smart&auto=webp&s=b828b86824d67085fa2d5f51e67fc195851a41f1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ilJLfcGGCRnLhI6ee7IY_5VZjoCrDHCJKluce__XJZc.png?width=960&crop=smart&auto=webp&s=76138deadf27dfe6ee95ed0415dd38bd67587a89', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ilJLfcGGCRnLhI6ee7IY_5VZjoCrDHCJKluce__XJZc.png?width=1080&crop=smart&auto=webp&s=50c6ba40c22c5c523d4cd4ffb13fba9611ded9b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ilJLfcGGCRnLhI6ee7IY_5VZjoCrDHCJKluce__XJZc.png?auto=webp&s=e4128fef3d2f602dae8a1a79d581b5060525719c', 'width': 1200}, 'variants': {}}]} |
Was browsing Steam and stumbled on Tryll Assistant a local AI gaming assistant. Didn’t get to try it yet, so details are fuzzy. Curious how they handle hallucinations and response quality with small local models like Qwen or Llama. Thoughts? | 0 | 2026-01-23T13:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qkqyro/was_browsing_steam_and_stumbled_on_tryll/ | ReleaseDependent7443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkqyro | false | null | t3_1qkqyro | /r/LocalLLaMA/comments/1qkqyro/was_browsing_steam_and_stumbled_on_tryll/ | false | false | 0 | null | ||
Yesterday I used GLM 4.7 flash with my tools and I was impressed.. | 66 | 2026-01-23T13:31:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qkqvkr/yesterday_i_used_glm_47_flash_with_my_tools_and_i/ | Loskas2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkqvkr | false | null | t3_1qkqvkr | /r/LocalLLaMA/comments/1qkqvkr/yesterday_i_used_glm_47_flash_with_my_tools_and_i/ | false | false | 66 | null | ||
Response quality LLM assistant | 1 | 2026-01-23T13:31:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qkqv5l/response_quality_llm_assistant/ | ReleaseDependent7443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkqv5l | false | null | t3_1qkqv5l | /r/LocalLLaMA/comments/1qkqv5l/response_quality_llm_assistant/ | false | false | 1 | null | ||
Response quality llm | 1 | Was just scrolling through Steam today, randomly clicked on something that caught my eye. Turned out to be this thing called Tryll Assistant. From what I can tell it's some kind of local AI gaming assistant that interacts with your games somehow, helps you out with stuff I guess. Not entirely sure how it all works tbh.
Can't install it yet unfortunately so couldn't actually try it. Would've written more if I could.
But yeah I'm wondering what they did about hallucinations and response quality. We all know what small local models like Qwen and Llama can do. What do you guys think?
[https://store.steampowered.com/app/4193780/Tryll\_Assistant](https://store.steampowered.com/app/4193780/Tryll_Assistant) | 2026-01-23T13:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qkqtm6/response_quality_llm/ | ReleaseDependent7443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkqtm6 | false | null | t3_1qkqtm6 | /r/LocalLLaMA/comments/1qkqtm6/response_quality_llm/ | false | false | self | 1 | null |
Just finished the build - Nvidia GH200 144GB HBM3e, RTX Pro 6000, 8TB SSD, liquid-cooled | 267 | 2026-01-23T13:28:33 | GPThop---ai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qkqt2m | false | null | t3_1qkqt2m | /r/LocalLLaMA/comments/1qkqt2m/just_finished_the_build_nvidia_gh200_144gb_hbm3e/ | false | false | default | 267 | {'enabled': True, 'images': [{'id': '3kawgqr7p3fg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/3kawgqr7p3fg1.jpeg?width=108&crop=smart&auto=webp&s=ac27eac506d7762910e5778723b97aa34752d0ba', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/3kawgqr7p3fg1.jpeg?width=216&crop=smart&auto=webp&s=84d4487e2a0ce902fcb7bbb1c7fb6f68d7bf5709', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/3kawgqr7p3fg1.jpeg?width=320&crop=smart&auto=webp&s=02c7757846afae4ad3b11d020d10a91d2d183f88', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/3kawgqr7p3fg1.jpeg?width=640&crop=smart&auto=webp&s=893eddac031ea22a8411ddc5d7d1a9e601e10eee', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/3kawgqr7p3fg1.jpeg?width=960&crop=smart&auto=webp&s=df524700980e70a6ef3d7e76524cdfce21fbb15b', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/3kawgqr7p3fg1.jpeg?width=1080&crop=smart&auto=webp&s=0a98450bcc909b6106dfe0ca0238b0bd2ae9888e', 'width': 1080}], 'source': {'height': 4000, 'url': 'https://preview.redd.it/3kawgqr7p3fg1.jpeg?auto=webp&s=03691e609718876419507b1e305406a7820daace', 'width': 6000}, 'variants': {}}]} | ||
A full AI powered cooking game, where literally any ingredient is possible with infinite combinations. | 108 | Built with Claude Code
Game Logic - Gemini
Sprites - Flux
Try it out at: [https://infinite-kitchen.com/kitchen](https://infinite-kitchen.com/kitchen) | 2026-01-23T13:16:42 | https://v.redd.it/a2wy0mdym3fg1 | VirtualJamesHarrison | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qkqjer | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a2wy0mdym3fg1/DASHPlaylist.mpd?a=1771766217%2CMmE5ZDkwODY0ZDk4MjJlNjc4NGU3Njc2Yzk5NDhhZDVkZDlkNjgzNDRmNWI4MWRkZjIyNTUyYzc4MWQ0NTZlMw%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/a2wy0mdym3fg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/a2wy0mdym3fg1/HLSPlaylist.m3u8?a=1771766217%2CYWY2YzE5MzZmYmUxYmI0OWFlZmYzZTI4MDlkY2ZhNjk1Y2EzMzk3ZDM0MzE4ZDgzMWZhNTYyNTQ2ZGZjYTcwMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/a2wy0mdym3fg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1720}} | t3_1qkqjer | /r/LocalLLaMA/comments/1qkqjer/a_full_ai_powered_cooking_game_where_literally/ | false | false | 108 | {'enabled': False, 'images': [{'id': 'YTNmcmg4Z3ltM2ZnMUJwJOA_Kqm7OwiZxEbYxXgv1YYIXAs9kE9ZTKKEhyEN', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/YTNmcmg4Z3ltM2ZnMUJwJOA_Kqm7OwiZxEbYxXgv1YYIXAs9kE9ZTKKEhyEN.png?width=108&crop=smart&format=pjpg&auto=webp&s=ce0421b72d1490d3e6b6e5c590d0e62efcc4aae8', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/YTNmcmg4Z3ltM2ZnMUJwJOA_Kqm7OwiZxEbYxXgv1YYIXAs9kE9ZTKKEhyEN.png?width=216&crop=smart&format=pjpg&auto=webp&s=6c3d3abf04426271b8dd630480283a3d43b85512', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/YTNmcmg4Z3ltM2ZnMUJwJOA_Kqm7OwiZxEbYxXgv1YYIXAs9kE9ZTKKEhyEN.png?width=320&crop=smart&format=pjpg&auto=webp&s=1f1c1f36631c907cedf0a800b74ad43e8be123d9', 'width': 320}, {'height': 401, 'url': 
'https://external-preview.redd.it/YTNmcmg4Z3ltM2ZnMUJwJOA_Kqm7OwiZxEbYxXgv1YYIXAs9kE9ZTKKEhyEN.png?width=640&crop=smart&format=pjpg&auto=webp&s=7282dc33af6855538a51142c0303d60a04df01c3', 'width': 640}, {'height': 602, 'url': 'https://external-preview.redd.it/YTNmcmg4Z3ltM2ZnMUJwJOA_Kqm7OwiZxEbYxXgv1YYIXAs9kE9ZTKKEhyEN.png?width=960&crop=smart&format=pjpg&auto=webp&s=6648448d18f1422e6d1536f40a342c9c74b6bf26', 'width': 960}, {'height': 678, 'url': 'https://external-preview.redd.it/YTNmcmg4Z3ltM2ZnMUJwJOA_Kqm7OwiZxEbYxXgv1YYIXAs9kE9ZTKKEhyEN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6ec35ae6f902ea5ec258132c1a609b6c8e93aebb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YTNmcmg4Z3ltM2ZnMUJwJOA_Kqm7OwiZxEbYxXgv1YYIXAs9kE9ZTKKEhyEN.png?format=pjpg&auto=webp&s=aa0cab11468a6136a366c1f6e0f4c2425fc5a8c1', 'width': 1720}, 'variants': {}}]} | |
vLLM: offload KV cache for long context? | 2 | Problem: 2x3090 not enough to handle extremely long context lengths in vLLM.
The additional 1x 5060 is not helpful for doing tensor parallelism with the others, obviously. And buying two more 3090s is not feasible at this point.
But, is there a way to offload some of the KV cache to the 5060 while using the 3090s in TP 2 so the context can fit? | 2026-01-23T13:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qkqf2i/vllm_offload_kv_cache_for_long_context/ | FrozenBuffalo25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkqf2i | false | null | t3_1qkqf2i | /r/LocalLLaMA/comments/1qkqf2i/vllm_offload_kv_cache_for_long_context/ | false | false | self | 2 | null |
afm v0.9.0 - Run Apple's Foundation Models with Built-in Web Chat UI (macOS Tahoe) and CLI | 6 | Just released v0.9.0 of afm, a CLI that exposes Apple's on-device Foundation Models through OpenAI-compatible API endpoints.
**What's new in v0.9.0 - Built-in Web UI:**
You can now run afm -w to start both the API server and a chat web interface in one command. It integrates llama.cpp's webui and auto-opens your browser. No need to set up Open-webui separately if you just want a quick chat interface.
afm -w
That's it. Browser opens to [http://localhost:9999](http://localhost:9999) with a chat UI talking to Apple's on-device 3B model.
**Links:**
* GitHub: [https://github.com/scouzi1966/maclocal-api](https://github.com/scouzi1966/maclocal-api)
* Release: [https://github.com/scouzi1966/maclocal-api/releases/tag/v0.9.0](https://github.com/scouzi1966/maclocal-api/releases/tag/v0.9.0)
**Other changes:**
* /props endpoint for webui compatibility
* model field now optional in chat completion requests
* llama.cpp pinned as a submodule for reproducible builds
**What afm does:**
* Runs Apple's 3B parameter on-device LLM
* OpenAI-compatible API (/v1/chat/completions, /v1/models)
* Single-prompt mode: afm -s "your question"
* Pipe mode: cat file.txt | afm
* LoRA adapter support for fine-tuned models
* Vision capabilities (text extraction, table OCR)
* Works as a backend for Open-webui too
**Install:**
brew tap scouzi1966/afm
brew install afm
**Requirements:**
* macOS 26 (Tahoe)
* Apple Silicon (M1/M2/M3/M4)
* Apple Intelligence enabled
Questions welcome.
| 2026-01-23T13:10:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qkqegq/afm_v090_run_apples_foundation_models_with/ | scousi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkqegq | false | null | t3_1qkqegq | /r/LocalLLaMA/comments/1qkqegq/afm_v090_run_apples_foundation_models_with/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'hO7D7NDTfYaA_wFdu5aLfMd2cFADSUpFgtCGqZB8WKU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hO7D7NDTfYaA_wFdu5aLfMd2cFADSUpFgtCGqZB8WKU.png?width=108&crop=smart&auto=webp&s=7da2edfa1859039f265571cfd44ae9d8a378748b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hO7D7NDTfYaA_wFdu5aLfMd2cFADSUpFgtCGqZB8WKU.png?width=216&crop=smart&auto=webp&s=4fbff695c991707546be373b88c2d9a97d4166f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hO7D7NDTfYaA_wFdu5aLfMd2cFADSUpFgtCGqZB8WKU.png?width=320&crop=smart&auto=webp&s=372cfedfb4a16947836aab3fb1d41b1c95a5762c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hO7D7NDTfYaA_wFdu5aLfMd2cFADSUpFgtCGqZB8WKU.png?width=640&crop=smart&auto=webp&s=ac0ef2ba8dc977454bf2f5cf8a50096d1bb2fb2f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hO7D7NDTfYaA_wFdu5aLfMd2cFADSUpFgtCGqZB8WKU.png?width=960&crop=smart&auto=webp&s=37ab8ae68e38b82ad0dca71748d9b5ce87dbb659', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hO7D7NDTfYaA_wFdu5aLfMd2cFADSUpFgtCGqZB8WKU.png?width=1080&crop=smart&auto=webp&s=b266342df5a555bf80a72004f06577f32fe15152', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hO7D7NDTfYaA_wFdu5aLfMd2cFADSUpFgtCGqZB8WKU.png?auto=webp&s=ed3ef2571f3722295ee12a3380cb47a0ab70da69', 'width': 1200}, 'variants': {}}]} |
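For anyone who'd rather script against the server than use the web UI, here's a minimal sketch of an OpenAI-style request body. The `/v1/chat/completions` path and the optional `model` field come from the post; the port is an assumption (the post only mentions 9999 for the web UI), so check `afm --help` for the actual API port:

```python
import json

# Build an OpenAI-style request for afm's /v1/chat/completions endpoint.
# ASSUMPTION: the API listens on localhost:9999 like the web UI; the real
# port may differ depending on flags, so verify with `afm --help`.
URL = "http://localhost:9999/v1/chat/completions"

def make_request(prompt: str) -> str:
    payload = {
        # the "model" field is optional as of v0.9.0, per the changelog
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return json.dumps(payload)

body = make_request("Summarize this paragraph in one sentence.")
```

The same payload works unchanged against any other OpenAI-compatible backend, which is what makes the drop-in Open-webui usage possible.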
Discussion about the possibility of implementation | 1 | Was just scrolling through Steam today, randomly clicked on something that caught my eye. Turned out to be this thing called Tryll Assistant. From what I can tell it's some kind of local AI gaming assistant that interacts with your games somehow, helps you out with stuff I guess. Not entirely sure how it all works tbh.
Can't install it yet unfortunately so couldn't actually try it. Would've written more if I could.
But yeah I'm wondering what they did about hallucinations and response quality. We all know what small local models like Qwen and Llama can do. What do you guys think?
https://preview.redd.it/6qp8g8lrk3fg1.png?width=982&format=png&auto=webp&s=13bbfd78938ece3ba5711da640006cb4416dfa32
| 2026-01-23T13:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qkq8x2/discussion_about_the_possibility_of_implementation/ | ReleaseDependent7443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkq8x2 | false | null | t3_1qkq8x2 | /r/LocalLLaMA/comments/1qkq8x2/discussion_about_the_possibility_of_implementation/ | false | false | 1 | null | |
Have people stopped posting tutorial videos? | 24 | Every youtube video I come across about any tool is just them reading through a blog post or going through stuff already announced by the official post.
Like for example, I wanted to see if anyone has used function gemma and NO, everyone is simply reading and showing the same apps made by Google and showing the same use cases without actually going through the model and using it.
As if they are just trying to please the algorithm and not the viewers :(
am I the only one facing this issue? | 2026-01-23T12:45:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qkpu4x/have_people_stopped_posting_tutorial_videos/ | salary_pending | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkpu4x | false | null | t3_1qkpu4x | /r/LocalLLaMA/comments/1qkpu4x/have_people_stopped_posting_tutorial_videos/ | false | false | self | 24 | null |
Chrome's Local AI Model in production (Gemini Nano) 41% eligibility, 6x slower and $0 cost | 23 | I have a hobby site that tests email subject lines for people. Users kept asking for it to make suggestions for them via AI ("make it work with ChatGPT"), but I had one concern: money, money, and money.
The tool is free and gets tons of abuse, so I'd been reading about Chrome's built-in AI model (Gemini Nano) and tried implementing it. This is my story.
## The Implementation
Google ships Chrome with the *capability* to run Gemini Nano, but not the model itself.
A few things to know:
**Multiple models, no control.** Which model you get depends on an undocumented benchmark. You don't get to pick.
**~1.5-2GB download.** Downloads to Chrome's profile directory. Multiple users on one machine each need their own copy.
**On-demand.** The model downloads the first time any site requests it.
**Background download.** Happens asynchronously, independent of page load.
Think of the requirements like an AAA video game, not a browser feature.
## The Fallback
For users without Nano, we fall back to Google's Gemma 3N via OpenRouter. It's actually *more* capable (6B vs 1.8B parameters, 32K vs 6K context). It also costs nothing right now.
Server-based AI inference is extremely cheap if you're not using frontier models.
## The Numbers (12,524 generations across 836 users)
**User Funnel:**
- 100%: all users
- **40.7%**: Gemini Nano eligible (Chrome 138+, Desktop, English)
- **~25%**: model already downloaded and ready
**Download Stats:**
- ~25% of eligible users already had the model
- 1.9 minute median download time for the ~1.5GB file
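Multiplying the two funnel stages together gives a rough sense of how many users get an instant local response (back-of-envelope arithmetic from the numbers above, not a figure from the article):

```python
# Back-of-envelope funnel arithmetic from the stats above: the share of
# all users who can get an instant on-device response is the product of
# "eligible" and "model already downloaded".
eligible = 0.407           # Chrome 138+, desktop, English
already_downloaded = 0.25  # of eligible users

instant_local = eligible * already_downloaded
print(f"{instant_local:.1%} of all users get Nano with no download wait")
# everyone else either waits ~2 minutes for ~1.5GB or falls through to Gemma
```

So only about one user in ten hits the fast path on their first visit, which is why the server fallback carries most of the traffic.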
**Inference Performance:**
| Model | Median | Generations |
|-------|--------|-------------|
| Gemini Nano (on-device) | **7.7s** | 4,774 |
| Gemma 3N (server API) | **1.3s** | 7,750 |
The on-device model is **6x slower** than making a network request to a server on another continent.
The performance spread is also much wider for Nano. At p99, Nano hits 52.9 seconds while Gemma is at 2.4 seconds. Worst case for Nano was over 9 minutes. Gemma's worst was 31 seconds.
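Computing both ratios from those numbers makes the tail problem explicit:

```python
# Ratios computed from the latency table and p99 figures above.
nano_median, gemma_median = 7.7, 1.3
nano_p99, gemma_p99 = 52.9, 2.4

median_ratio = nano_median / gemma_median  # the "6x slower" headline number
p99_ratio = nano_p99 / gemma_p99           # the tail is worse, roughly 22x
```

The median gap is survivable with a good loading state; the p99 gap is what forces a timeout-and-fallback strategy.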
## What Surprised Us
**No download prompt.** The 1.5GB model download is completely invisible. No confirmation, no progress bar. Great for adoption. I have mixed feelings about silently dropping multi-gigabyte files onto users' machines though.
**Abandoned downloads aren't a problem.** Close the tab and the download continues in the background. Close Chrome entirely and it resumes on next launch (within 30 days).
**Local inference isn't faster.** I assumed "no network latency" would win. Nope. The compute power difference between a laptop GPU and a datacenter overwhelms any latency savings.
**We didn't need fallback racing.** We considered running both simultaneously and using whichever returns first. Turns out it's unnecessary. The eligibility check is instant.
**You can really mess up site performance with it.** We ended up accidentally calling it multiple times on a page due to a bug, and it was as bad for users as loading a massive video file on a page would be.
## Why We're Keeping It
By the numbers, there's no reason to use Gemini Nano in production:
- It's slow
- ~60% of users can't use it
- It's not cheaper than API calls (OpenRouter is free for Gemma)
**We're keeping it anyway.** I think it's the future. Other browsers will add their own AI models. We'll get consistent cross-platform APIs. I also like the privacy aspects of local inference. The more we use it, the more we'll see optimizations from OS, browser, and hardware vendors.
**Full article with charts and detailed methodology:** [https://sendcheckit.com/blog/ai-powered-subject-line-alternatives](https://sendcheckit.com/blog/ai-powered-subject-line-alternatives
) | 2026-01-23T12:27:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qkph45/chromes_local_ai_model_in_production_gemini_nano/ | mbuckbee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkph45 | false | null | t3_1qkph45 | /r/LocalLLaMA/comments/1qkph45/chromes_local_ai_model_in_production_gemini_nano/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'hfC_Ra7gG4wCHjs2rU4fdw8p6pgkHKHGCTPxO6VK6DA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hfC_Ra7gG4wCHjs2rU4fdw8p6pgkHKHGCTPxO6VK6DA.png?width=108&crop=smart&auto=webp&s=7964c74f2499306d8cb55cffadcd17f18d00cec7', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/hfC_Ra7gG4wCHjs2rU4fdw8p6pgkHKHGCTPxO6VK6DA.png?width=216&crop=smart&auto=webp&s=f10dacd0b42b20992eb0ff8f844ad4eca5117b65', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/hfC_Ra7gG4wCHjs2rU4fdw8p6pgkHKHGCTPxO6VK6DA.png?width=320&crop=smart&auto=webp&s=0965b93201133a9caf667c3fd3051b71605b4950', 'width': 320}, {'height': 349, 'url': 'https://external-preview.redd.it/hfC_Ra7gG4wCHjs2rU4fdw8p6pgkHKHGCTPxO6VK6DA.png?width=640&crop=smart&auto=webp&s=bb0a436cda5660bb7bf8dbc959efd48036d6acf8', 'width': 640}, {'height': 523, 'url': 'https://external-preview.redd.it/hfC_Ra7gG4wCHjs2rU4fdw8p6pgkHKHGCTPxO6VK6DA.png?width=960&crop=smart&auto=webp&s=7c16bf6b5a73542eac9acd3bc2db0047e292fcdb', 'width': 960}, {'height': 589, 'url': 'https://external-preview.redd.it/hfC_Ra7gG4wCHjs2rU4fdw8p6pgkHKHGCTPxO6VK6DA.png?width=1080&crop=smart&auto=webp&s=0ba8ebc088f9e2362da79c73c954e145bbdea670', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/hfC_Ra7gG4wCHjs2rU4fdw8p6pgkHKHGCTPxO6VK6DA.png?auto=webp&s=fab32e250f43735e267d8c0049d8c3bd4fc056da', 'width': 1408}, 'variants': {}}]} |
Something that might happen if GTA 6 was released | 0 | No context | 2026-01-23T11:54:55 | https://v.redd.it/ap8l90bl83fg1 | BuriqKalipun | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qkou2n | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ap8l90bl83fg1/DASHPlaylist.mpd?a=1771761309%2COGM5NzkxMDY0ZjZjNzgwNDc2YzYyNDg1ZDJjZmFhNmVlYmYyM2RiOTE5NmY2NGM4YWQ2ZmZkYmM5ZjFjMjhkMQ%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/ap8l90bl83fg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/ap8l90bl83fg1/HLSPlaylist.m3u8?a=1771761309%2CYjkxM2NlN2ViZjdhZmU4ZGVmZTFlNTlmNzVjNWEzMjUyYzdmNDRhMjg0YTkyODhlZWE1Zjg2ZmYyODk4NzczOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ap8l90bl83fg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1qkou2n | /r/LocalLLaMA/comments/1qkou2n/something_that_might_happen_if_gta_6_was_released/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dzZocHM4Zmw4M2ZnMfeE-0AjJ8I1WkVvXWFoixs7ELUK4-nFZfKHm9U_5z1_', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/dzZocHM4Zmw4M2ZnMfeE-0AjJ8I1WkVvXWFoixs7ELUK4-nFZfKHm9U_5z1_.png?width=108&crop=smart&format=pjpg&auto=webp&s=8b897feed1884fcd757bd0fffae84128611b79ee', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/dzZocHM4Zmw4M2ZnMfeE-0AjJ8I1WkVvXWFoixs7ELUK4-nFZfKHm9U_5z1_.png?width=216&crop=smart&format=pjpg&auto=webp&s=c12cd82d03af4101f04e8978998a5f275b823dec', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/dzZocHM4Zmw4M2ZnMfeE-0AjJ8I1WkVvXWFoixs7ELUK4-nFZfKHm9U_5z1_.png?width=320&crop=smart&format=pjpg&auto=webp&s=bfe6c139f68408ad4851afcc6c15c0fa061753d7', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/dzZocHM4Zmw4M2ZnMfeE-0AjJ8I1WkVvXWFoixs7ELUK4-nFZfKHm9U_5z1_.png?width=640&crop=smart&format=pjpg&auto=webp&s=c085d39ceb332b99295817a7111611d1deffe12b', 
'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/dzZocHM4Zmw4M2ZnMfeE-0AjJ8I1WkVvXWFoixs7ELUK4-nFZfKHm9U_5z1_.png?width=960&crop=smart&format=pjpg&auto=webp&s=e6803aa7b4f1868125ccd86c80c4ce2c8b2b95a9', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/dzZocHM4Zmw4M2ZnMfeE-0AjJ8I1WkVvXWFoixs7ELUK4-nFZfKHm9U_5z1_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=efa7f1369be03d28612775e97c4a0a3c78cbfab7', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/dzZocHM4Zmw4M2ZnMfeE-0AjJ8I1WkVvXWFoixs7ELUK4-nFZfKHm9U_5z1_.png?format=pjpg&auto=webp&s=4906292039f4dcf1b3cfa8847bad114e06c6aec2', 'width': 1080}, 'variants': {}}]} | |
This Week's Fresh Hugging Face Datasets (Jan 17-23, 2026) | 10 | # This Week's Fresh Hugging Face Datasets (Jan 17-23, 2026)
Check out these newly updated datasets on Hugging Face—perfect for AI devs, researchers, and ML enthusiasts pushing boundaries in multimodal AI, robotics, and more. Categorized by primary modality with sizes, purposes, and direct links.
# Image & Vision Datasets
* **lightonai/LightOnOCR-mix-0126** (16.4M examples, updated \~3 hours ago): Mixed dataset for training end-to-end OCR models like LightOnOCR-2-1B; excels at document conversion (PDFs, scans, tables, math) with high speed and no external pipelines. Used for fine-tuning lightweight VLMs on versatile text extraction. [https://huggingface.co/datasets/lightonai/LightOnOCR-mix-0126](https://huggingface.co/datasets/lightonai/LightOnOCR-mix-0126)
* **moonworks/lunara-aesthetic** (2k image-prompt pairs, updated 1 day ago): Curated high-aesthetic images for vision-language models; mean score 6.32 (beats LAION/CC3M). Benchmarks aesthetic preference, prompt adherence, cultural styles in image gen fine-tuning. [https://huggingface.co/datasets/moonworks/lunara-aesthetic](https://huggingface.co/datasets/moonworks/lunara-aesthetic)
* **opendatalab/ChartVerse-SFT-1800K** (1.88M examples, updated \~8 hours ago): SFT data for chart understanding/QA; covers 3D plots, treemaps, bars, etc. Trains models to interpret diverse visualizations accurately. [https://huggingface.co/datasets/opendatalab/ChartVerse-SFT-1800K](https://huggingface.co/datasets/opendatalab/ChartVerse-SFT-1800K)
* **rootsautomation/pubmed-ocr** (1.55M pages, updated \~16 hours ago): OCR annotations on PubMed Central PDFs (1.3B words); includes bounding boxes for words/lines/paragraphs. For layout-aware models, OCR robustness, coordinate-grounded QA on scientific docs. [https://huggingface.co/datasets/rootsautomation/pubmed-ocr](https://huggingface.co/datasets/rootsautomation/pubmed-ocr)
# Multimodal & Video Datasets
* **UniParser/OmniScience** (1.53M image-text pairs + 5M subfigures, updated 1 day ago): Scientific multimodal from top journals/arXiv (bio, chem, physics, etc.); enriched captions via MLLMs. Powers broad-domain VLMs with 4.3B tokens. [https://huggingface.co/datasets/UniParser/OmniScience](https://huggingface.co/datasets/UniParser/OmniScience)
* **genrobot2025/10Kh-RealOmin-OpenData** (207k clips, updated \~8 hours ago): Real-world robotics data (95TB MCAP); bimanual tasks, large-FOV images, IMU, tactile. High-precision trajectories for household chore RL/multi-modal training. [https://huggingface.co/datasets/genrobot2025/10Kh-RealOmin-OpenData](https://huggingface.co/datasets/genrobot2025/10Kh-RealOmin-OpenData)
* **nvidia/PhysicalAI-Autonomous-Vehicles** (164k trajectories, updated 2 days ago): Synthetic/real driving scenes for AV/robotics; 320k+ trajectories, USD assets. End-to-end AV training across cities. [https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles](https://huggingface.co/datasets/nvidia/PhysicalAI-Autonomous-Vehicles)
# Text & Structured Datasets
* **sojuL/RubricHub\_v1** (unknown size, updated 3 days ago): Rubric-style evaluation data for LLMs (criteria, points, LLM verifiers). Fine-tunes models on structured scoring/summarization tasks. [https://huggingface.co/datasets/sojuL/RubricHub\_v1](https://huggingface.co/datasets/sojuL/RubricHub_v1)
* **Pageshift-Entertainment/LongPage** (6.07k, updated 3 days ago): Long-context fiction summaries (scene/chapter/book levels) with reasoning traces. Trains long-doc reasoning, story arc gen, prompt rendering. [https://huggingface.co/datasets/Pageshift-Entertainment/LongPage](https://huggingface.co/datasets/Pageshift-Entertainment/LongPage)
* **Anthropic/EconomicIndex** (5.32k, updated 7 days ago): AI usage on economic tasks/O\*NET; tracks automation/augmentation by occupation/wage. Analyzes AI economic impact. [https://huggingface.co/datasets/Anthropic/EconomicIndex](https://huggingface.co/datasets/Anthropic/EconomicIndex)
# Medical Imaging
* **FOMO-MRI/FOMO300K** (318k+ brain MRI scans, updated 1 day ago): clinical and research scans including anomalies; heterogeneous sequences for self-supervised learning at scale. [https://huggingface.co/datasets/FOMO-MRI/FOMO300K](https://huggingface.co/datasets/FOMO-MRI/FOMO300K) (paper: [https://arxiv.org/abs/2506.14432](https://arxiv.org/abs/2506.14432))
What are you building with these? Drop links to your projects below! | 2026-01-23T11:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qkotdt/this_weeks_fresh_hugging_face_datasets_jan_1723/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkotdt | false | null | t3_1qkotdt | /r/LocalLLaMA/comments/1qkotdt/this_weeks_fresh_hugging_face_datasets_jan_1723/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'uzjba7dD6IO7PeUCvc2lqAoPye77FJAjgi1vPJ1TAnI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uzjba7dD6IO7PeUCvc2lqAoPye77FJAjgi1vPJ1TAnI.png?width=108&crop=smart&auto=webp&s=4b9b53ef5082e5a14c5a155a8f79f925748f7887', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uzjba7dD6IO7PeUCvc2lqAoPye77FJAjgi1vPJ1TAnI.png?width=216&crop=smart&auto=webp&s=f5848fe32a5b64af5a861d6d63ba970df1597bb5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uzjba7dD6IO7PeUCvc2lqAoPye77FJAjgi1vPJ1TAnI.png?width=320&crop=smart&auto=webp&s=13af297dc8457438000e8c39b4e3762a63f89965', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uzjba7dD6IO7PeUCvc2lqAoPye77FJAjgi1vPJ1TAnI.png?width=640&crop=smart&auto=webp&s=d85049ae1d8a930c5e940321373bc9ac9e82f406', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uzjba7dD6IO7PeUCvc2lqAoPye77FJAjgi1vPJ1TAnI.png?width=960&crop=smart&auto=webp&s=99c7186abbc4515e9944a19d114035ef09bc306a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uzjba7dD6IO7PeUCvc2lqAoPye77FJAjgi1vPJ1TAnI.png?width=1080&crop=smart&auto=webp&s=80e0cb0c1cb979964ebbb0409f24d2d5cc3f437a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uzjba7dD6IO7PeUCvc2lqAoPye77FJAjgi1vPJ1TAnI.png?auto=webp&s=90841d3b3e2b5f0c7acedd11a4876848bce1af7d', 'width': 1200}, 'variants': {}}]} |
llm video card for 10 bucks? But there is a nuance | 0 | 2026-01-23T11:25:48 | https://www.reddit.com/gallery/1qkob7i | Solid-Iron4430 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qkob7i | false | null | t3_1qkob7i | /r/LocalLLaMA/comments/1qkob7i/llm_video_card_for_10_bucks_but_there_is_a_nuance/ | false | false | default | 0 | null | |
Did I expect too much on GLM? | 0 | Im a little confused on why I am getting low TPS or perhaps I need to reduce my expectations?
Build:
CPU: AMD Ryzen Threadripper 3990X (64 cores, 128 threads)
RAM: 256GB (8x Kingston 32GB DDR4 UDIMM - 3200MHz)
GPU: RTX 6000 Ada Generation 48GB
I use opencode to essentially run open-source models for coding; when I use a 64k context I'm getting around 20-30 tps using llama.cpp:
llama-server --model ~/cpp/GLM-4.7-Flash-Q4_K_XL.gguf --port 8080 --n-gpu-layers 100 --temp 0.7 --top-p 1.0 --min-p 0.01 --ctx-size 65536 --fit off --jinja
Now, of course, when I use the llama.cpp GUI I'm getting high TPS, but for some reason when using it via opencode it's slow.
Not sure if I'm expecting too much or if my hardware is just last gen? Would love to hear your thoughts.
Perhaps suggest a different model or agentic coding? | 2026-01-23T11:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/1qko2ud/did_i_expect_too_much_on_glm/ | Ok_Brain_2376 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qko2ud | false | null | t3_1qko2ud | /r/LocalLLaMA/comments/1qko2ud/did_i_expect_too_much_on_glm/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '6_PfOpB8YBGOrK8f3GhxyjecYMb6oWbi8OJOVeVz5ho', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6_PfOpB8YBGOrK8f3GhxyjecYMb6oWbi8OJOVeVz5ho.png?width=108&crop=smart&auto=webp&s=ba1cf21f4df766427ca611a387eeb97daa02d44b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6_PfOpB8YBGOrK8f3GhxyjecYMb6oWbi8OJOVeVz5ho.png?width=216&crop=smart&auto=webp&s=f780a7aad3cc0a3f1284e4c9b6d4a2963356b5b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6_PfOpB8YBGOrK8f3GhxyjecYMb6oWbi8OJOVeVz5ho.png?width=320&crop=smart&auto=webp&s=0ee92f2b44e961c9dc7acc48a8f4b9365d823bee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6_PfOpB8YBGOrK8f3GhxyjecYMb6oWbi8OJOVeVz5ho.png?width=640&crop=smart&auto=webp&s=2fabda0eee264c203116b548d092954fee482191', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6_PfOpB8YBGOrK8f3GhxyjecYMb6oWbi8OJOVeVz5ho.png?width=960&crop=smart&auto=webp&s=2e08e51ba00f21a670602b50e344ffee92edb526', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6_PfOpB8YBGOrK8f3GhxyjecYMb6oWbi8OJOVeVz5ho.png?width=1080&crop=smart&auto=webp&s=ea6073006da9614c8ce01ccd55df02a098783644', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6_PfOpB8YBGOrK8f3GhxyjecYMb6oWbi8OJOVeVz5ho.png?auto=webp&s=8b393bc42dc8d22c186ada2e4a0fb8792f1d6df1', 'width': 1200}, 'variants': {}}]} |
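One way to narrow a gap like this down is to compare prompt-processing speed against generation speed, since an agentic client like opencode sends much larger prompts (system prompt plus tool schemas) than a chat GUI does. A sketch using the `timings` object that llama-server returns on non-streamed completions (field names match recent llama.cpp builds, so verify against your version; the sample numbers are made up):

```python
# Compare prompt processing vs generation speed using the `timings`
# object llama-server returns on non-streamed /completion responses.
# Field names match recent llama.cpp builds; the numbers are made up.
sample_timings = {
    "prompt_n": 8192, "prompt_ms": 5400.0,        # prefill
    "predicted_n": 512, "predicted_ms": 21300.0,  # generation
}

def speeds(t):
    pp = t["prompt_n"] / t["prompt_ms"] * 1000.0        # prompt tokens/sec
    tg = t["predicted_n"] / t["predicted_ms"] * 1000.0  # generated tokens/sec
    return pp, tg

pp, tg = speeds(sample_timings)
print(f"prompt: {pp:.0f} t/s, generation: {tg:.0f} t/s")
```

If generation speed matches what the GUI shows but end-to-end turns feel slow, the time is going into repeated prefill of a large agent context rather than into decoding.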
Building local agents with Ollama - sharing my lightweight framework (feedback welcome) | 0 | I've been experimenting with building agentic workflows entirely locally using Ollama, and the biggest pain points for me have been:
* Heavy frameworks adding overhead and latency on local hardware
* Lack of built-in safety/reliability for long-running tasks
* Complicated RAG/memory setup for offline use
To solve this, I put together a lightweight framework called Kite with:
* Native Ollama integration, local embeddings
* 4 reasoning patterns that run fully offline (ReAct, Plan-and-Execute, ToT, ReWOO)
* Vector/Graph RAG, session memory
* Production-grade safety: circuit breakers, idempotency, retries
Repo: [https://github.com/thienzz/Kite](https://github.com/thienzz/Kite)
What about you guys - what challenges do you face when building/running agents locally?
Have you found good lightweight solutions? or do the bigger frameworks work fine on your setup?
Curious about your experiences and any feedback on this approach | 2026-01-23T11:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qko2qz/building_local_agents_with_ollama_sharing_my/ | Mean_Buddy6830 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qko2qz | false | null | t3_1qko2qz | /r/LocalLLaMA/comments/1qko2qz/building_local_agents_with_ollama_sharing_my/ | false | false | self | 0 | null |
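For anyone curious what the circuit-breaker part of a setup like this looks like, here's a minimal sketch of the pattern (my own illustration, not Kite's actual implementation): after a run of consecutive failures the breaker opens and rejects calls until a cooldown passes, which keeps a flaky local model from being hammered in a loop.

```python
import time

# Minimal circuit-breaker sketch (illustrative, not Kite's code): after
# `threshold` consecutive failures the breaker opens and rejects calls
# until `cooldown` seconds have passed, then allows one trial call.
class CircuitBreaker:
    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open, skipping call")
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```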
Building reliable local agents with Ollama - sharing my lightweight framework (feedback welcome) | 1 | [removed] | 2026-01-23T11:03:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qknxh5/building_reliable_local_agents_with_ollama/ | Individual-Quote2728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qknxh5 | false | null | t3_1qknxh5 | /r/LocalLLaMA/comments/1qknxh5/building_reliable_local_agents_with_ollama/ | false | false | self | 1 | null |
Getting similar embeddings with a model locally and online | 0 | Hi,
For the purpose of an application, we need to be able to create embeddings locally. Data cannot leave our machines for certain use cases. However, we would much prefer to use a service to quickly create embeddings on the fly when we need them in real time.
The problem is : even by trying to use the same models, the embeddings we get from the service are different from what we get locally (we use ollama). Tested it with Qwen3 Embeddings.
So we cannot process a user query to search across the embeddings we created locally.
Do you have experience getting something like this to work ? | 2026-01-23T10:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qknixm/getting_similar_embeddings_with_a_model_locally/ | Dizzy-View-6824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qknixm | false | null | t3_1qknixm | /r/LocalLLaMA/comments/1qknixm/getting_similar_embeddings_with_a_model_locally/ | false | false | self | 0 | null |
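A quick sanity check worth running (a sketch, assuming you can pull one vector for the same input from each backend): cosine similarity ignores vector scale, so if the two backends only disagree on L2 normalization their vectors will still score ~1.0. If the score is genuinely low, the usual culprits are pooling and instruction prefixes rather than the weights, since Qwen3 embedding models expect an instruction-formatted query.

```python
import math

# Cosine similarity is invariant to scaling: if two backends serve the
# same weights but one normalizes its output, cosine will still be ~1.0.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

local = [0.1, 0.4, -0.2, 0.3]    # hypothetical ollama output
service = [0.2, 0.8, -0.4, 0.6]  # same direction, just unnormalized
print(cosine(local, service))     # ~1.0 -> only the scaling differs
```

If the cosine is near 1.0, you can safely mix the two sources by normalizing both at index and query time.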
Running agent workflows locally with Ollama – what's your setup like? | 1 | [removed] | 2026-01-23T10:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qknd8p/running_agent_workflows_locally_with_ollama_whats/ | Own-Average-4809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qknd8p | false | null | t3_1qknd8p | /r/LocalLLaMA/comments/1qknd8p/running_agent_workflows_locally_with_ollama_whats/ | false | false | self | 1 | null |
Need On-Site GPU Cluster Engineer in Delhi NCR - Grace Blackwell EdgeXpert Setup | 1 | Looking for an experienced GPU cluster engineer for on-site work in Faridabad (Delhi NCR).
**Hardware:**
* 2× MSI EdgeXpert (NVIDIA Grace Blackwell GB10)
* MSI Raider 18 (RTX 5090)
* ConnectX-7 QSFP56 interconnect
* 10G networking switch
**What I Need:**
* Physical installation and cabling
* DGX OS setup on both nodes
* Multi-node clustering configuration (MPI/NCCL)
* Performance validation and testing
* Basic documentation/handover
**Requirements:**
* Real hands-on experience with GPU clusters (DGX, ConnectX, InfiniBand, etc.)
* Available within 1-2 weeks for 1-2 days on-site
* Based in or can travel to Delhi NCR
**Why I'm posting here:** Got an Upwork response from an agency sending "a guy" with generic IT experience. Want to work directly with someone who actually knows this hardware. | 2026-01-23T10:25:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qknay6/need_onsite_gpu_cluster_engineer_in_delhi_ncr/ | Soft_Ad6760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qknay6 | false | null | t3_1qknay6 | /r/LocalLLaMA/comments/1qknay6/need_onsite_gpu_cluster_engineer_in_delhi_ncr/ | false | false | self | 1 | null |
DFlash: Diffusion-style speculative decoding that drafts token blocks (Qwen3 + SGLang) | 4 | \*\*\[R\] DFlash: Diffusion-style speculative decoding that drafts token blocks (Qwen3 checkpoints + SGLang demo)\*\*
We’ve been working on \*\*DFlash\*\*, a diffusion-style speculative decoding system for LLM inference.
\*\*Core idea \*\*
Instead of drafting tokens autoregressively, DFlash trains a lightweight draft model that generates \*\*block of tokens at once\*\*, then verifies them with a target LLM.
This turns speculative decoding from token-by-token into \*\*block-level drafting\*\*, which leads to higher acceptance length and better throughput—especially at long context and larger batch sizes.
**What’s available now**
- Open-source code: [https://github.com/z-lab/dflash](https://github.com/z-lab/dflash)
- Project blog (design + results): [https://z-lab.ai/projects/dflash/](https://z-lab.ai/projects/dflash/)
- Demo / discussion on X: [https://x.com/zhijianliu_/status/2014521776219492835?s=20](https://x.com/zhijianliu_/status/2014521776219492835?s=20)
**Supported models**
- Qwen3-4B
- Qwen3-8B
- Qwen3-Coder-30B-A3B
**Runtime & tooling**
- Runs on **SGLang** (end-to-end integration working)
- Can be used with **Cline** for vibe coding and long code generation
- Supports streaming generation and large-batch inference
**Why this is interesting**
- Drafting happens **block by block**, not token by token
- Higher acceptance length compared to AR-style draft models
- Better latency–throughput trade-off in long-context and high-concurrency settings
- Particularly effective for **code generation** and structured outputs
Benchmarks, demos, and detailed analysis are in the blog.
**Status & feedback**
This is still an active research + systems project.
We’re releasing checkpoints first, and the **training recipe will be open-sourced soon** so others can train DFlash drafts for different base models.
Feedback, questions, and ideas are very welcome. Happy to clarify design choices or run additional benchmarks if useful. | 2026-01-23T10:07:15 | https://www.reddit.com/r/LocalLLaMA/comments/1qkmzqv/dflash_diffusionstyle_speculative_decoding_that/ | Stock_Material9244 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkmzqv | false | null | t3_1qkmzqv | /r/LocalLLaMA/comments/1qkmzqv/dflash_diffusionstyle_speculative_decoding_that/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'aeWAWocU9yc3RDhnceEe3QK3sNhmjbZpJeFgoKCssuc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aeWAWocU9yc3RDhnceEe3QK3sNhmjbZpJeFgoKCssuc.png?width=108&crop=smart&auto=webp&s=7156536590316130dc30fd776e3c668c28666e76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aeWAWocU9yc3RDhnceEe3QK3sNhmjbZpJeFgoKCssuc.png?width=216&crop=smart&auto=webp&s=423482278c6ccfae3b751f8676060672a40ffc6c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aeWAWocU9yc3RDhnceEe3QK3sNhmjbZpJeFgoKCssuc.png?width=320&crop=smart&auto=webp&s=802ec42508cd5f26258194c9200855262230f272', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aeWAWocU9yc3RDhnceEe3QK3sNhmjbZpJeFgoKCssuc.png?width=640&crop=smart&auto=webp&s=5dfb782253d9fb4f11bb43dcdedd321bb2d6fd8a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aeWAWocU9yc3RDhnceEe3QK3sNhmjbZpJeFgoKCssuc.png?width=960&crop=smart&auto=webp&s=709d3bf0721008524c73224d0e7c26baff1f0380', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aeWAWocU9yc3RDhnceEe3QK3sNhmjbZpJeFgoKCssuc.png?width=1080&crop=smart&auto=webp&s=37b0ed23d81989de1284b183e5865999c4e163da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aeWAWocU9yc3RDhnceEe3QK3sNhmjbZpJeFgoKCssuc.png?auto=webp&s=2729f0c2cc68e296c9477d2685e4c55bdfd2aa58', 'width': 1200}, 'variants': {}}]} |
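For readers new to speculative decoding, the verification step can be sketched as a toy (illustrative logic only, not DFlash's actual verifier): the target model greedily re-scores the drafted block, and the longest agreeing prefix is accepted along with the target's first correction.

```python
# Toy block-verification step (not DFlash's code): accept the longest
# prefix where draft and target agree, plus the target's first
# disagreeing token, so every verify step yields at least one token.
def accept_prefix(draft_block, target_tokens):
    accepted = []
    for d, t in zip(draft_block, target_tokens):
        if d == t:
            accepted.append(d)
        else:
            accepted.append(t)  # target's correction ends the block
            break
    return accepted

draft = [5, 9, 9, 3, 7]
target = [5, 9, 2, 3, 7]             # target disagrees at position 2
print(accept_prefix(draft, target))  # 3 tokens emitted in one verify step
```

The whole speedup argument is that drafting a block costs one cheap forward pass, and acceptance length (here 3) is the multiplier over plain autoregressive decoding.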
Kite: Lightweight production-ready agentic AI framework with native Ollama/local model support | 2 | [removed] | 2026-01-23T10:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qkmvkc/kite_lightweight_productionready_agentic_ai/ | Individual-Quote2728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkmvkc | false | null | t3_1qkmvkc | /r/LocalLLaMA/comments/1qkmvkc/kite_lightweight_productionready_agentic_ai/ | false | false | self | 2 | null |
Local Comic Generation: Character Consistency Across Sequential Outputs | 0 | I've been experimenting with local LLM + diffusion model pipelines for sequential image generation, specifically solving the character consistency problem in multi-page comics.
**The Technical Challenge:**
Standard image diffusion models generate each image independently. For sequential outputs (like comic pages), this causes catastrophic character drift - your protagonist on page 1 looks nothing like page 8.
**Architecture:**
I built a pipeline that:
1. **Character Extraction Layer**: Uses vision-language model (LLaVA) to parse character descriptions from initial prompt
2. **Embedding Persistence**: Stores character features in a vector database (FAISS)
3. **Sequential Generation**: Each page generation conditions on previous embeddings
4. **Consistency Validator**: Checks visual similarity scores; regenerates if below threshold
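Roughly, the validator loop looks like this (illustrative sketch with a toy cosine helper; the real pipeline compares CLIP image embeddings, and the threshold value here is a stand-in):

```python
import math

def cosine_sim(a, b):
    # Stand-in for CLIP image-embedding similarity.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def generate_page(prompt, char_embedding, generate_fn, threshold=0.92, max_tries=3):
    # Regenerate until the rendered character matches the stored reference
    # embedding closely enough, or we run out of attempts.
    best_page, best_sim = None, -1.0
    for _ in range(max_tries):
        page, page_embedding = generate_fn(prompt)
        sim = cosine_sim(page_embedding, char_embedding)
        if sim >= threshold:
            return page, sim
        if sim > best_sim:
            best_page, best_sim = page, sim
    return best_page, best_sim  # fall back to the closest attempt
```

In the real pipeline `generate_fn` wraps the SDXL call plus the embedding extraction, so a failed similarity check costs a full regeneration.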
**Stack:**
- LLM: Mistral 8x7B (4-bit quantized)
- Image Model: SDXL (fp16)
- Character Encoder: Custom embedding layer
- Hardware: RTX 4090 (24GB VRAM)
**Performance:**
- 8-page comic: ~8.5 minutes total
- Character consistency: 92% visual similarity (CLIP score)
- VRAM usage: 18-20GB peak
- Can run on 16GB with int8 quantization (slower)
**Results:**
One prompt generates a complete comic with consistent characters across all pages. Dynamic poses, different angles, varied expressions - but the same visual identity.
**What I learned:**
- Standard LoRA fine-tuning isn't enough for sequence coherence
- Character embeddings need to be extracted BEFORE generation starts
- Cross-attention between pages helps but increases VRAM significantly
- Quality/speed trade-off is real - faster = more drift
**Current limitations:**
- 16+ page comics start showing drift
- Complex character designs (lots of accessories) harder to maintain
- No good way to handle character interactions yet
Would love to hear from others working on sequential generation. What approaches have you tried? Any better solutions for the consistency problem? | 2026-01-23T09:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qkmv6k/local_comic_generation_character_consistency/ | LoNeWolF26548 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkmv6k | false | null | t3_1qkmv6k | /r/LocalLLaMA/comments/1qkmv6k/local_comic_generation_character_consistency/ | false | false | self | 0 | null |
Kite: Lightweight production-ready agentic AI framework with native Ollama/local model support | 1 | [removed] | 2026-01-23T09:52:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qkmr0h/kite_lightweight_productionready_agentic_ai/ | Own-Average-4809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkmr0h | false | null | t3_1qkmr0h | /r/LocalLLaMA/comments/1qkmr0h/kite_lightweight_productionready_agentic_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DHZOefXnuyvAn-my2-i7Qg9vxIriVTu059UnQKRRrHk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DHZOefXnuyvAn-my2-i7Qg9vxIriVTu059UnQKRRrHk.png?width=108&crop=smart&auto=webp&s=74553954954ab1715aa47a28b2ca98da77ced63a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DHZOefXnuyvAn-my2-i7Qg9vxIriVTu059UnQKRRrHk.png?width=216&crop=smart&auto=webp&s=72845ed41432e71a6a0e68c025e5aa781a775f64', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DHZOefXnuyvAn-my2-i7Qg9vxIriVTu059UnQKRRrHk.png?width=320&crop=smart&auto=webp&s=ffd4dbc644d8369222d9d164079d10e3f92a3118', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DHZOefXnuyvAn-my2-i7Qg9vxIriVTu059UnQKRRrHk.png?width=640&crop=smart&auto=webp&s=79b91cea8d565fba6ad37c4143cd5775a699a839', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DHZOefXnuyvAn-my2-i7Qg9vxIriVTu059UnQKRRrHk.png?width=960&crop=smart&auto=webp&s=a4aab1dbdfee2c53fcd05fa0534c091edaf60f29', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DHZOefXnuyvAn-my2-i7Qg9vxIriVTu059UnQKRRrHk.png?width=1080&crop=smart&auto=webp&s=6f01ac676e34af4c8ceb2eba6e1da4084f96e77a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DHZOefXnuyvAn-my2-i7Qg9vxIriVTu059UnQKRRrHk.png?auto=webp&s=ace3ca5231da71ec056cddd4fad4b5df739a7880', 'width': 1200}, 'variants': {}}]} |
DeepSeek-V3.2 Matches GPT-5 at 10x Lower Cost | Introl Blog | 104 | DeepSeek has released V3.2, an open-source model that reportedly matches GPT-5 on math reasoning while costing 10x less to run ($0.028/million tokens). By using a new 'Sparse Attention' architecture, the Chinese lab has achieved frontier-class performance for a total training cost of just \~$5.5 million—compared to the $100M+ spent by US tech giants. | 2026-01-23T09:45:43 | https://introl.com/blog/deepseek-v3-2-open-source-ai-cost-advantage | EchoOfOppenheimer | introl.com | 1970-01-01T00:00:00 | 0 | {} | 1qkmn4l | false | null | t3_1qkmn4l | /r/LocalLLaMA/comments/1qkmn4l/deepseekv32_matches_gpt5_at_10x_lower_cost_introl/ | false | false | default | 104 | null |
Best AI aggregators for projects? | 1 | Hi,
I'm after an AI aggregator / unified AI interface. I've already asked AI what to get, and I've been reading around on here, but most of the suggested apps seem to attract a lot of negative comments.
My general 'wish list' is:
* Open source
* Has chat management features for projects.
* Can ask multiple models the same queries
* Can use local AI models
Does anyone use a program that does this and works well?
Some relatively cheap NVIDIA Grace Hopper GH200 superchips are currently being sold on ebay | 0 | I think PG530 modules are 12V voltage with 144GB VRAM
Found another one: [https://www.ebay.com/itm/136561002810](https://www.ebay.com/itm/136561002810)
No idea where to get a baseboard for them but maybe someone will find it useful. | 2026-01-23T09:34:20 | https://www.ebay.com/itm/297893959785 | fairydreaming | ebay.com | 1970-01-01T00:00:00 | 0 | {} | 1qkmgs1 | false | null | t3_1qkmgs1 | /r/LocalLLaMA/comments/1qkmgs1/some_relatively_cheap_nvidia_grace_hopper_gh200/ | false | false | default | 0 | null |
An Update to My "Cerebellum" Project | 0 | TLDR of the previous post for the uninitiated: I made a parasitic predictive early exit module which could attach to models and reduce their compute cost by 25% (on llama3.1 8b base), There were some small inconsistencies such as typos on longer output generations I had attributed them to the base model and hence decided to switch to instruct models since.
The images in this post are in the following context:
1st image: The teleportation mechanism with its confidence threshold tweaked so it is completely sure about tokens before teleporting. Across the many tests I have run, this setting never hallucinates (approximately a 4.6% latency speedup and a 6.5% overall compute reduction).
2nd image: A much lower confidence threshold, allowing for more exits in theory; in practice it only led to a non-proportional increase in teleported tokens (6.5% speedup, 9.63% overall compute reduction).
3rd image: A control run of the model (Llama 3.2 3B Instruct). Note that a system prompt was not used in these tests, which is why the model calls itself a human; this is a known tendency of Llama 3.2 models.
4th image (the surprise): I tweaked the confidence value to be slightly higher than in the 2nd image, my hypothesis being that more confident teleports from the cerebellum would leave future hidden states better tuned for further teleports. This was a resounding success (8.4% speedup & a 10.11% compute reduction).
It should be noted that in all of the images the output quality is nearly identical, with the most aggressive threshold only changing the output structure slightly.
Let's talk about the changes since my last post.
# 1. I started using LLama 3.2 3B instruct
There were a few reasons for this switch, the major one being how small the model is. Speedups on a 3-billion-parameter model using early exits (or inner-hidden-layer jumping) are notoriously difficult to achieve because of how slimmed down the model already is. My thought process was: if I can get the system working on this model, it will work at higher efficiency on larger models, since they have more redundancy to skip.
The 2nd reason was so I could use it locally. I had been using Modal notebooks thus far for training/inference; switching to a smaller model allowed me to train and do everything locally.
# 2. SLERP Improvements
I used to apply SLERP in a loop, calculating the interpolated hidden state for each layer along with its KV cache & RoPE per iteration, which added a lot of hidden latency. I changed the entire SLERP logic into one massive matrix multiplication. (Yes, the cost of this is included in the compute reduction, as is the cost of running my cerebellum.)
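For reference, the per-pair interpolation being batched looks like this (stdlib sketch; the actual implementation stacks every layer's pair into a single matrix multiply instead of calling this in a loop):

```python
import math

def slerp(p0, p1, t):
    # Spherical linear interpolation between two hidden-state vectors.
    dot = sum(a * b for a, b in zip(p0, p1))
    n0 = math.sqrt(sum(a * a for a in p0))
    n1 = math.sqrt(sum(b * b for b in p1))
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1)))
    omega = math.acos(cos_omega)
    if omega < 1e-6:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(p0, p1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(p0, p1)]
```

Batching it just means computing all the `s0`/`s1` coefficients at once and applying them to the stacked hidden states in one shot.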
# 3. I Switched from Python Hooks
I started implementing this using a custom child class of the model classes I am using. This allowed me to go from predicting the final hidden layer to predicting the nth-last hidden layer, letting me use the last few layers of the model as a buffer/smoother to make sure teleported tokens stick to context-specific coherence.
# 4. Changing the confidence check
Oh boy, this one was a complete headache since I had to redo the entire system. In the end I settled on training the confidence gate to look at the predicted layer generated by the cerebellum and output a float between 0 and 1, where the closer it is to 1, the more the gate thinks the prediction should be used to teleport between layers. BCE loss was used. The way I decided whether a prediction from the cerebellum was correct or not was by doing the following:
1. generate the predicted nth last hidden layer
2. get the actual nth last hidden layer vector from the model
3. run those through the LM head
4. compare the top most token & the cosine similarity
5. using that I determined if the prediction was valid or not
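As a toy sketch, the validity check from steps 3-5 boils down to this (the LM head here is a stand-in; the real check runs the model's actual lm_head over the hidden states):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def lm_head(hidden, W):
    # Toy LM head: logits = W @ hidden, one row of W per vocab token.
    return [sum(w * h for w, h in zip(row, hidden)) for row in W]

def prediction_is_valid(pred_hidden, true_hidden, W, sim_threshold=0.9):
    # A teleport counts as "correct" when the predicted hidden state decodes
    # to the same top token as the true one AND the vectors closely align.
    pred_logits = lm_head(pred_hidden, W)
    true_logits = lm_head(true_hidden, W)
    same_token = max(range(len(W)), key=pred_logits.__getitem__) == \
                 max(range(len(W)), key=true_logits.__getitem__)
    return same_token and cosine(pred_hidden, true_hidden) >= sim_threshold
```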
I agree that this method still has room for improvement, i.e. running the predicted nth-last hidden layer through the remaining layers of the model and checking that output against the true output.
BUT doing that would add a lot of training overhead, and the current setup does its job surprisingly well.
Other confidence check methods I tried:
Training a BCE gate on the cosine similarity of the predicted nth-last hidden layer vs the true nth-last hidden layer. This was functional but just a bad idea: vectors can point in the same direction yet still be completely different. Words like "like" and "love" point in roughly the same direction, but interchanging the two can completely mess with the flow of a narrative.
# Conclusion
I agree the speedup numbers are not that impressive at first glance. But I feel the need to reiterate that a 10% latency and compute cost reduction on a model as small as 3B, using early exits/hidden-layer prediction while maintaining near-identical outputs, is not easy. In my last post about this I achieved a 25% compute cost reduction using the same theory of implementation on a 7B model. The following is just my hypothesis from having worked on this for several months now and tested on multiple models: the efficiency gain scales with model size, up to a point. I have some anecdotal evidence of this but nothing concrete yet.
Feel free to ask any questions
I would love to chat about this and I am open to any AI company that wants to test my speedup for their own models. | 2026-01-23T09:31:35 | https://www.reddit.com/gallery/1qkmf7e | Hopeful-Sherbet-3100 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qkmf7e | false | null | t3_1qkmf7e | /r/LocalLLaMA/comments/1qkmf7e/an_update_to_my_cerebellum_project/ | false | false | default | 0 | null |
Llama.cpp merges in OpenAI Responses API Support | 151 | Finally! Took some fussing around to get this to work with unsloth/GLM-4.7-Flash:UD-Q4\_K\_XL in llama.cpp (ROCm) and Codex CLI, but once set up it works great! I'm super impressed with GLM-4.7-Flash capability in the Codex CLI harness. Haven't tried any big feature implementations yet, but for exploring (large) codebases it has been surprisingly effective | 2026-01-23T09:22:40 | https://github.com/ggml-org/llama.cpp/pull/18486 | SemaMod | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qkm9zb | false | null | t3_1qkm9zb | /r/LocalLLaMA/comments/1qkm9zb/llamacpp_merges_in_openai_responses_api_support/ | false | false | default | 151 | {'enabled': False, 'images': [{'id': 'jaCRAxUnJ2FTFmnP1XEivypPWJS55V8E63eMDNFL6mg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jaCRAxUnJ2FTFmnP1XEivypPWJS55V8E63eMDNFL6mg.png?width=108&crop=smart&auto=webp&s=bbc2b181c72df6c995ce9bb384b1edb10c22fdc6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jaCRAxUnJ2FTFmnP1XEivypPWJS55V8E63eMDNFL6mg.png?width=216&crop=smart&auto=webp&s=5b8b60903bc02da56efa41fa1354963c1c1ace49', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jaCRAxUnJ2FTFmnP1XEivypPWJS55V8E63eMDNFL6mg.png?width=320&crop=smart&auto=webp&s=949f2009559a3d5038b9da3fd348950598834807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jaCRAxUnJ2FTFmnP1XEivypPWJS55V8E63eMDNFL6mg.png?width=640&crop=smart&auto=webp&s=61f06c5517f67a03da7c9a1bf80c4717a8acae9f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jaCRAxUnJ2FTFmnP1XEivypPWJS55V8E63eMDNFL6mg.png?width=960&crop=smart&auto=webp&s=dde03dc6e4582106b9b3e5a49c4dc02d6a433ef2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jaCRAxUnJ2FTFmnP1XEivypPWJS55V8E63eMDNFL6mg.png?width=1080&crop=smart&auto=webp&s=8ac1909e5e1003da2c79af82dd5587e806e8f667', 'width': 1080}], 'source': {'height': 
600, 'url': 'https://external-preview.redd.it/jaCRAxUnJ2FTFmnP1XEivypPWJS55V8E63eMDNFL6mg.png?auto=webp&s=f4169dabf844c967009b865580509dd30b3b74e6', 'width': 1200}, 'variants': {}}]} |
What is your actual daily use case for local LLMs? | 3 | I am fascinated by the rapid progress of local LLMs and I’m planning to set up my own local environment soon. However, before I dive in, I’m curious about how you all integrate these models into your lives.
What are you actually using your local models for on a daily basis?
Whether it’s for professional work like agentic coding and RAG, or purely for hobbyist reasons like automations, I’d love to know why you chose the local route.
Has a local model officially replaced a paid subscription in your workflow, or are you mostly doing this for the sake of tinkering and privacy?
| 2026-01-23T08:38:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qklkby/what_is_your_actual_daily_use_case_for_local_llms/ | Groundbreaking_Fox59 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qklkby | false | null | t3_1qklkby | /r/LocalLLaMA/comments/1qklkby/what_is_your_actual_daily_use_case_for_local_llms/ | false | false | self | 3 | null |
Leetcode for ML | 9 | Recently, I built a platform called TensorTonic where you can implement 100+ ML algorithms from scratch.
Additionally, I added 60+ topics on the mathematics fundamentals you need to know for ML.
I started this 2.5 months ago and have already gained 7,000 users. I will be shipping a lot of cool stuff ahead and would love feedback from the community.
Ps - Its completely free to use
Check it out here - tensortonic.com | 2026-01-23T08:31:15 | https://v.redd.it/cmffd63282fg1 | Big-Stick4446 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qklgft | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/cmffd63282fg1/DASHPlaylist.mpd?a=1771749087%2CMGI0MzBjNzg2OTA3YWE5ZTA3NjQ4OTkxYzFjNjA4NjUwMmZlMDM0NzBhZjEzZGUyNjhiMWQ2ZjZiOWVmMDA3YQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/cmffd63282fg1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/cmffd63282fg1/HLSPlaylist.m3u8?a=1771749087%2CZTU3M2I3ZTA2ZGVlNTkyOTAzM2Y5Mzk2N2U2NWVkZTUzZjkyMjg3MTRiOTk5NzNiZjc1MmNhYWM2NzE1MGJlOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cmffd63282fg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1104}} | t3_1qklgft | /r/LocalLLaMA/comments/1qklgft/leetcode_for_ml/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'aXh0azY3bTE4MmZnMTOpzGvWJtEh0vETIlq5fXCK2iSGrkk3885uvQgRyBSE', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/aXh0azY3bTE4MmZnMTOpzGvWJtEh0vETIlq5fXCK2iSGrkk3885uvQgRyBSE.png?width=108&crop=smart&format=pjpg&auto=webp&s=c0a843d5552c4d55d7873b7146248d28be90abce', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/aXh0azY3bTE4MmZnMTOpzGvWJtEh0vETIlq5fXCK2iSGrkk3885uvQgRyBSE.png?width=216&crop=smart&format=pjpg&auto=webp&s=11f316d7beaa8bf27c05a53907cd263c48bb508f', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/aXh0azY3bTE4MmZnMTOpzGvWJtEh0vETIlq5fXCK2iSGrkk3885uvQgRyBSE.png?width=320&crop=smart&format=pjpg&auto=webp&s=d1e6de8da3183bf609fb669bfc2b676c9dd8bdaa', 'width': 320}, {'height': 417, 'url': 'https://external-preview.redd.it/aXh0azY3bTE4MmZnMTOpzGvWJtEh0vETIlq5fXCK2iSGrkk3885uvQgRyBSE.png?width=640&crop=smart&format=pjpg&auto=webp&s=699f5954cac7c6350cfd7f54b2725c23a97bf52a', 'width': 640}, {'height': 626, 'url': 
'https://external-preview.redd.it/aXh0azY3bTE4MmZnMTOpzGvWJtEh0vETIlq5fXCK2iSGrkk3885uvQgRyBSE.png?width=960&crop=smart&format=pjpg&auto=webp&s=ed764f4bb5ed1d4e1e5b42700e5fbdc0c379ee92', 'width': 960}, {'height': 704, 'url': 'https://external-preview.redd.it/aXh0azY3bTE4MmZnMTOpzGvWJtEh0vETIlq5fXCK2iSGrkk3885uvQgRyBSE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8bb6385502bfef9507fbb4c91947008ccb859cd8', 'width': 1080}], 'source': {'height': 998, 'url': 'https://external-preview.redd.it/aXh0azY3bTE4MmZnMTOpzGvWJtEh0vETIlq5fXCK2iSGrkk3885uvQgRyBSE.png?format=pjpg&auto=webp&s=c65a85c0f62f9dde6ad62ea6ebd8cfc90d46fdbb', 'width': 1530}, 'variants': {}}]} | |
Curious about the tech behind LLMs controlling smart devices (like coffee makers). How does it actually work? | 1 | Hi everyone,
I've been reading a lot of tech news recently about companies upgrading their voice assistants (like Alexa) with LLMs, but I'm trying to wrap my head around the actual engineering implementation.
I have a few questions about how this works "under the hood" and would love some technical insights:
**1. From Chat to Action:** I've heard terms like "Function Calling" thrown around. Is that how an LLM actually controls a physical machine? How does a text-based model technically "press the button" on a coffee maker?
**2. The "Refusal" Problem:** I often read users complaining that LLM-based assistants sometimes refuse simple commands or act weirdly compared to the old rigid systems. Why does this happen? Is it because the model gets "confused" by the context, or is it a safety feature gone wrong?
**3. Industry Solutions:** How are engineers solving these reliability issues right now? Are they restricting what the LLM can do, or are there new methods to make them more obedient and consistent?
Thanks for helping me understand the details behind the news! | 2026-01-23T08:29:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qklf5n/curious_about_the_tech_behind_llms_controlling/ | ExtentLoose3357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qklf5n | false | null | t3_1qklf5n | /r/LocalLLaMA/comments/1qklf5n/curious_about_the_tech_behind_llms_controlling/ | false | false | self | 1 | null |
Which model do you use for local pen-testing? | 1 | I recently wanted to scan my legacy project for security holes and I notice that all the big paid LLM providers forbid a prompt like "scan my codebase and provide concrete exploits so i can replicate them"
Do you know any good models that are not censored in this way? | 2026-01-23T08:13:04 | https://www.reddit.com/r/LocalLLaMA/comments/1qkl5wu/which_model_do_you_use_for_local_pentesting/ | zannix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkl5wu | false | null | t3_1qkl5wu | /r/LocalLLaMA/comments/1qkl5wu/which_model_do_you_use_for_local_pentesting/ | false | false | self | 1 | null |
AI coding assistant infrastructure requirement, | 0 | We need to support around 300 developers within our enterprise. For security and compliance reasons, the LLM must be deployed on-premises.
What infrastructure would be required to meet these needs? We are considering deploying Qwen-3-Coder-32B, or a quantized variant of a larger model, depending on feasibility and performance | 2026-01-23T08:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qkl400/ai_coding_assistant_infrastructure_requirement/ | Financial-Cap-8711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkl400 | false | null | t3_1qkl400 | /r/LocalLLaMA/comments/1qkl400/ai_coding_assistant_infrastructure_requirement/ | false | false | self | 0 | null |
Built a local-first open source AI agent to help debug production incidents | 0 | I open-sourced an AI agent I’ve been building to help debug production incidents. Sharing here because the design is local-first and I’m actively working toward local / self-hosted model support.
Right now it supports OpenAI models only (bring your own API key). Support for Claude, OpenRouter, and local Llama-based models is in progress.
What it does: when prod is broken, a lot of time goes into reconstructing context. Alerts, logs, notes, and ad-hoc checks get scattered, and people repeat work because no one has a clear picture.
The agent runs alongside an incident and:
* ingests alerts, logs, and notes
* keeps a running summary of what’s known and what’s still unclear
* tracks checks and actions so work isn’t repeated
* suggests mitigations (restarts, rollbacks, drafting fix PRs), but nothing runs without explicit human approval
Design-wise, it’s intentionally constrained:
* no autonomous actions
* read-mostly by default
* designed to tolerate partial / noisy inputs
* meant to run locally, with model choice abstracted behind an interface
I’ve been using earlier versions during real incidents and recently open-sourced it. It’s still early, but usable.
Project is called **Incidentfox** (I’m the author):
[https://github.com/incidentfox/incidentfox](https://github.com/incidentfox/incidentfox) | 2026-01-23T07:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qkkrgq/built_a_localfirst_open_source_ai_agent_to_help/ | Useful-Process9033 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkkrgq | false | null | t3_1qkkrgq | /r/LocalLLaMA/comments/1qkkrgq/built_a_localfirst_open_source_ai_agent_to_help/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'GmdEdCP6g8L0b0-xYkPeJbCi-ryNnYuPvAOqVHwDM7Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GmdEdCP6g8L0b0-xYkPeJbCi-ryNnYuPvAOqVHwDM7Q.png?width=108&crop=smart&auto=webp&s=65545cd5840dc646d6228f68e969e92ed9fd99d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GmdEdCP6g8L0b0-xYkPeJbCi-ryNnYuPvAOqVHwDM7Q.png?width=216&crop=smart&auto=webp&s=bf38342b20471f233ff353b76c79683b63527dfa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GmdEdCP6g8L0b0-xYkPeJbCi-ryNnYuPvAOqVHwDM7Q.png?width=320&crop=smart&auto=webp&s=093dcf7d765bfd79d08c6b5b42d177d485fbc776', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GmdEdCP6g8L0b0-xYkPeJbCi-ryNnYuPvAOqVHwDM7Q.png?width=640&crop=smart&auto=webp&s=4a270fd627b7cd64eb34614d6c186688700f66a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GmdEdCP6g8L0b0-xYkPeJbCi-ryNnYuPvAOqVHwDM7Q.png?width=960&crop=smart&auto=webp&s=3ce213e133f367710726da952418d540878871bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GmdEdCP6g8L0b0-xYkPeJbCi-ryNnYuPvAOqVHwDM7Q.png?width=1080&crop=smart&auto=webp&s=0897147d1629fc904c6f47a08011e0213a5343a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GmdEdCP6g8L0b0-xYkPeJbCi-ryNnYuPvAOqVHwDM7Q.png?auto=webp&s=92fabe061226dcdbb8db469f9e537a836d861443', 'width': 1200}, 'variants': {}}]} |
I built a 30-tool AI agent swarm running entirely on qwen3:4b - no cloud, no API costs | 0 | Been lurking here for months, finally have something worth sharing.
## What I Built

**Agent Farm** - A local AI agent system with 30 MCP tools. Runs entirely on consumer hardware. Small models (qwen3:4b) working in parallel with true ThreadPoolExecutor concurrency.

## Hardware

- AMD 7900 XTX (24GB VRAM)
- i7-12700K (20 threads)
- 64GB RAM
- Ubuntu 24.04

## The Problem I Solved

Small models suck at:

1. Reliable tool calling (regex parsing fails constantly)
2. Long content generation (corruption after ~500 chars)

## The Solutions

**Structured Output:** Ollama's JSON schema enforcement via GBNF grammars. No more parsing failures - constrained decoding guarantees valid JSON.

```python
# Bug responds with guaranteed-valid JSON:
{"tool": "exec_cmd", "arg": "df -h"}
```
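Downstream, that guarantee keeps the dispatch path trivial. A sketch (toy tool registry, not the real 30-tool table):

```python
import json

TOOLS = {"exec_cmd": lambda arg: f"ran: {arg}"}  # toy stand-in registry

def dispatch(raw_reply):
    # Because decoding is grammar-constrained, json.loads can't fail on
    # the bug's output -- no regex recovery path needed.
    call = json.loads(raw_reply)
    return TOOLS[call["tool"]](call["arg"])
```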
**Chunked Write Pattern:** Decompose big tasks into parallel chunks:

1. Planner bug creates JSON outline (structured output)
2. Worker bugs write sections in PARALLEL (4 workers)
3. Python concatenates directly (zero LLM cost)
4. Direct file write
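Steps 1-3 in miniature (the worker call is stubbed out here; the real worker hits Ollama with structured output and returns a ~500-char section):

```python
from concurrent.futures import ThreadPoolExecutor

def write_section(heading):
    # Stand-in for a worker-bug LLM call.
    return f"## {heading}\n(section text for {heading})\n"

def chunked_write(outline, workers=4):
    # Planner produced `outline`; workers write sections in parallel;
    # plain Python concatenation merges them -- no extra LLM call.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sections = list(pool.map(write_section, outline))
    return "".join(sections)
```

`pool.map` preserves outline order, so the document comes out in the order the planner specified regardless of which worker finishes first.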
## Real Benchmarks

| Task | Model | Output | Time |
|------|-------|--------|------|
| System health check | qwen3:4b x4 | 4 parallel tool calls | 12s |
| Document generation | qwen3:4b x4 | 9.6 KB markdown | 78s |
| Code generation | qwen3:4b x4 | 71 lines Python | 88s |
| Result synthesis | qwen2.5:14b | Unified summary | 8s |

## What It Actually Does

- `system_health_swarm` - 4 bugs check CPU/mem/disk/services in parallel
- `recon_swarm` - Scouts analyze a codebase from multiple angles
- `chunked_write` - Generate unlimited-size documents
- `chunked_code_gen` - Generate multi-function code files
- `tool_swarm` - Deploy bugs with real shell access

## Cost Comparison

Cloud API for equivalent work: ~$2-5 per complex task
Agent Farm: $0 (runs on hardware I already own)
Monthly savings if used daily: $60-150
## The Tech Stack

- **Ollama** for inference
- **FastMCP** for tool protocol
- **qwen3:4b** for workers (2.5GB each)
- **qwen2.5:14b** for synthesis (9GB)
- True parallel via ThreadPoolExecutor

## Limitations (being honest)

- Small models still can't do complex reasoning
- Each chunk limited to ~500 chars
- Synthesis needs bigger model (14b)
- Setup isn't one-click

## Code

https://github.com/BossX429/agent-farm

## What's Next

Working on CBS-Agent - a pattern-learning system where agents actually learn from successful executions. Not fine-tuning, real-time pattern matching.

Happy to answer questions. This sub taught me most of what I know about local inference.
| 2026-01-23T07:26:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qkkfdy/i_built_a_30tool_ai_agent_swarm_running_entirely/ | Hot_Engineer_8662 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qkkfdy | false | null | t3_1qkkfdy | /r/LocalLLaMA/comments/1qkkfdy/i_built_a_30tool_ai_agent_swarm_running_entirely/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '21_hZE-NzIu6nX5Si8mrxiBYWqZziQO9pnendnhJbRg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/21_hZE-NzIu6nX5Si8mrxiBYWqZziQO9pnendnhJbRg.png?width=108&crop=smart&auto=webp&s=aff2f53c8be76c9d5386c0d7aab03bcb52e2c015', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/21_hZE-NzIu6nX5Si8mrxiBYWqZziQO9pnendnhJbRg.png?width=216&crop=smart&auto=webp&s=aa096a4e76084b76867b1273f97e726033094501', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/21_hZE-NzIu6nX5Si8mrxiBYWqZziQO9pnendnhJbRg.png?width=320&crop=smart&auto=webp&s=952619a5696deef82b9d32512ea2b62553934194', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/21_hZE-NzIu6nX5Si8mrxiBYWqZziQO9pnendnhJbRg.png?width=640&crop=smart&auto=webp&s=f7fff951af266b854892e59b839f0ba965b37e34', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/21_hZE-NzIu6nX5Si8mrxiBYWqZziQO9pnendnhJbRg.png?width=960&crop=smart&auto=webp&s=06892051fc4cdeb58a47a525a3302b0d3d28f253', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/21_hZE-NzIu6nX5Si8mrxiBYWqZziQO9pnendnhJbRg.png?width=1080&crop=smart&auto=webp&s=bc0ccfc8a2a0bc84ad01092645506bfc262e305f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/21_hZE-NzIu6nX5Si8mrxiBYWqZziQO9pnendnhJbRg.png?auto=webp&s=a732d11ff939d266b82be1bcfc3512de701f5580', 'width': 1200}, 'variants': {}}]} |