Dataset column schema (reconstructed from the viewer header):

| column | dtype | range |
| --- | --- | --- |
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
Request: BOLMO (Byte Model) Quants
4
I’m interested in trying out the new 7B byte-level model from AllenAI, but I only see full-precision and NVFP4 quants. That’s great if you have a new Blackwell card, but unfortunately I don’t. Is anybody still making W4A16 or W8A8 quants?
2026-01-25T20:04:37
https://www.reddit.com/r/LocalLLaMA/comments/1qmtqpf/request_bolmo_byte_model_quants/
FrozenBuffalo25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmtqpf
false
null
t3_1qmtqpf
/r/LocalLLaMA/comments/1qmtqpf/request_bolmo_byte_model_quants/
false
false
self
4
null
Lekh AI v1.3 - On-device LLMs, Vision, Image Gen & Voice for iOS. No cloud, no subscriptions.
1
Hey everyone 👋 I'm the solo dev behind Lekh AI, an on-device AI app for iPhone and iPad. Just shipped v1.3, and it's a big one.

The core idea: run LLMs, vision, image generation, and voice entirely on-device. No cloud, no subscriptions, no accounts. Your data never leaves your device.

# What's in Lekh AI

**On-Device AI**

* Offline LLM chat (Gemma, Qwen, Llama, Mistral, Phi, DeepSeek, OpenELM, and more)
* Vision models: ask questions about images and photos
* Voice chat with Kokoro TTS + live captions
* Custom AI personas (role-consistent, persistent)
* RAG memory across chats (toggleable)
* Local AI server (OpenAI-compatible API over LAN)

**Image Generation & Editing (all offline)**

* SD 1.5 / SDXL image generation + custom model support
* AI image editing with inpainting
* Object remover
* B&W photo colorizer
* Photo → 3D depth generation
* 3D splat generator + viewer

**Documents & Reading**

* PDF / document analysis with smart indexing
* Ask the AI about your book progress directly in chat
* Better long-file performance

# What's New in v1.3

**Qwen3-TTS (Beta) - Advanced Local Text-to-Speech**

This is a passion project for advanced users. The goal: bring local access to open-source TTS models.

* 9 different voices with support for 10 languages
* Voice cloning: clone any voice with 3+ seconds of reference audio
* Expressive speech modes: control emotion and speaking style
* Save/export generated audio to Files
* Works over your local network, no cloud required

Note: requires a Mac with Apple Silicon running a local Python server. Setup guide included in the app.

**Local Server Improvements**

* Server now persists across app restarts: enable once, and it auto-starts when you reopen the app
* Better connection stability

# What's in v1.2

* Smarter AI: chat summarization, better RAG integration, fine-tunable memories
* Image tools: inpainting, object removal, colorizer, 3D depth; all feel more "Photos-app-like" now
* Performance: background model downloading, much better memory management, reduced app size (FastVLM now optional)
* UX: improved chat UI with HTML artifacts, cleaner code blocks, more Siri Shortcuts
* Plus lots of bug fixes and stability improvements

# Privacy First

✅ 100% on-device
✅ No tracking / analytics
✅ No subscriptions
✅ One-time price: $2
✅ iCloud sync (optional, encrypted)

App Store: [https://apps.apple.com/us/app/lekh-ai/id6757496953](https://apps.apple.com/us/app/lekh-ai/id6757496953)

I'm building this openly, and feedback genuinely shapes the roadmap. If you're into on-device AI, privacy-first apps, or running models locally on iOS, I'd love to hear what you think 🙏 Happy to answer any technical questions in the comments.
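For anyone wondering what an "OpenAI-compatible API over LAN" means in practice, here is a minimal sketch of building such a request. The host, port, and model name are placeholders, not the app's real defaults — check the app for the actual values:

```python
import json

def build_chat_request(host: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-style chat-completion request for a LAN server.

    Any OpenAI-compatible server exposes POST /v1/chat/completions with a
    JSON body containing a model name and a list of messages.
    """
    url = f"http://{host}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

# Hypothetical LAN address and model alias:
url, body = build_chat_request("192.168.1.50:8080", "local-model", "Hello!")
print(url)
```

From there, any HTTP client (curl, `urllib.request`, an OpenAI SDK pointed at a custom base URL) can talk to the device.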
2026-01-25T20:01:21
https://www.reddit.com/gallery/1qmtnee
Living_Commercial_10
reddit.com
1970-01-01T00:00:00
0
{}
1qmtnee
false
null
t3_1qmtnee
/r/LocalLLaMA/comments/1qmtnee/lekh_ai_v13_ondevice_llms_vision_image_gen_voice/
false
false
https://b.thumbs.redditm…_EmqUONeR2hs.jpg
1
null
Implemented streaming inference for Qwen3-TTS (the official repo doesn't have it)
1
[removed]
2026-01-25T20:00:49
https://www.reddit.com/r/LocalLLaMA/comments/1qmtmt0/implemented_streaming_inference_for_qwen3tts_the/
IntentionStill8740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmtmt0
false
null
t3_1qmtmt0
/r/LocalLLaMA/comments/1qmtmt0/implemented_streaming_inference_for_qwen3tts_the/
false
false
self
1
null
A Tool to Calculate If an LLM Will Fit Your GPU
7
Hey everyone! 👋 I’ve been working on a small open-source project to take some of the guesswork out of running LLMs locally.

👉 **LocalInferenceCalculator**: a tool that tells you whether a given model will fit on a specific GPU at the context length you want to use.

It currently knows about:

* common LLM sizes (7B, 13B, 34B, 70B, etc.)
* GPUs from consumer to data center
* realistic memory overhead and KV cache calculations

You just input the context size (in tokens) and it evaluates each model × GPU pair and tells you:

✅ Runs
❌ Doesn’t Run
⚠️ Runs at the Limit

All based on conservative VRAM estimates, not optimistic benchmarks. The goal is to help you plan and choose before wasting hours trying to load something that doesn’t fit. 😄

Repo: https://github.com/ygorml/local_inference_calculator

What you’ll find in the repo:

* what’s already implemented
* a clear roadmap
* formulas for VRAM, KV cache, and overhead
* settings for future quantization, multi-GPU, and CPU offload support

Future topics on the roadmap:

* INT8 / INT4 quantization support
* multi-GPU / offload strategies
* CLI and maybe a web UI

Would love feedback from this community, especially:

⭐ what models you care about
⭐ GPUs you use
⭐ features that would make this tool actually useful for your workflows

Curious to hear what you think! 🚀
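For context, a typical weights-plus-KV-cache estimate looks something like the sketch below. This is illustrative only — not the repo's exact formulas — and the example model shape and overhead figure are assumptions:

```python
def estimate_vram_gb(params_b: float, bytes_per_param: float,
                     n_layers: int, n_kv_heads: int, head_dim: int,
                     context: int, kv_bytes: float = 2.0,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weights + KV cache + a fixed overhead.

    The KV cache stores one key and one value vector per layer, per
    KV head, per token, hence the factor of 2.
    """
    weights_bytes = params_b * 1e9 * bytes_per_param
    kv_bytes_total = 2 * n_layers * n_kv_heads * head_dim * context * kv_bytes
    return (weights_bytes + kv_bytes_total) / 1e9 + overhead_gb

# A 7B model at FP16 with 8k context (Llama-2-like shape, hypothetical numbers):
vram = estimate_vram_gb(7, 2.0, 32, 32, 128, 8192)
print(round(vram, 1))
```

Real tools also need to account for activation buffers, quantized KV caches, and grouped-query attention (fewer KV heads), which is exactly why a calculator like this is handy.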
2026-01-25T19:58:02
https://www.reddit.com/r/LocalLLaMA/comments/1qmtjxi/a_tool_to_calculate_if_a_llm_will_fit_your_gpu/
colt7r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmtjxi
false
null
t3_1qmtjxi
/r/LocalLLaMA/comments/1qmtjxi/a_tool_to_calculate_if_a_llm_will_fit_your_gpu/
false
false
self
7
{'enabled': False, 'images': [{'id': 'qwchSemnZZqdVj5zCKCSBfr4i4OrcsKwTeLVvIG8DVw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qwchSemnZZqdVj5zCKCSBfr4i4OrcsKwTeLVvIG8DVw.png?width=108&crop=smart&auto=webp&s=be0e30831e4506640c402cc14c0aaa6a94b0f7ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qwchSemnZZqdVj5zCKCSBfr4i4OrcsKwTeLVvIG8DVw.png?width=216&crop=smart&auto=webp&s=27e2ca384a80d29ce40d622f6b4800b3cba66d82', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qwchSemnZZqdVj5zCKCSBfr4i4OrcsKwTeLVvIG8DVw.png?width=320&crop=smart&auto=webp&s=43cf06ed1ecaad3251db2a57be3e1f463a6aa000', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qwchSemnZZqdVj5zCKCSBfr4i4OrcsKwTeLVvIG8DVw.png?width=640&crop=smart&auto=webp&s=3229894aee6d206fcc0c2416d9a3404c3f50b23f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qwchSemnZZqdVj5zCKCSBfr4i4OrcsKwTeLVvIG8DVw.png?width=960&crop=smart&auto=webp&s=a9824b21c2867ee6e92919bc6dd06f545cffb384', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qwchSemnZZqdVj5zCKCSBfr4i4OrcsKwTeLVvIG8DVw.png?width=1080&crop=smart&auto=webp&s=e4c90b49c0353c15a9633c0354cd0fa318bee74e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qwchSemnZZqdVj5zCKCSBfr4i4OrcsKwTeLVvIG8DVw.png?auto=webp&s=bdd4e8ddefb6fafcf62db5920180ceed2c8fa8cc', 'width': 1200}, 'variants': {}}]}
[Rust/AVX-512] I built a Zero-Copy 1.58-bit LLM Engine hitting 117 Tokens/s on a single CPU core. I need help fixing the final Activation layer.
0
**The Project:** I am building **R3-Engine**, a from-scratch, local AI inference engine for Microsoft's `bitnet-b1.58-2B-4T`. It is written in 100% safe Rust, natively cross-compiles to Wasm SIMD128, and uses zero heap allocations in the execution loop.

**The Physics:** By mapping a 64-byte-aligned `.r3` file directly from NVMe to CPU L3 cache (zero-copy) and using AVX-512 `VPOPCNTDQ` for branchless math, the Ryzen 9950X3D achieves **117 tokens/second**.

**The Problem: the AI is mute (outputting `<unk>`)**

The matrix multiplication pipeline is mathematically complete, but the output is stuck at token ID 0 (`<unk>`). The issue lies in the transition between the quantized weights and the float-based non-linear activations.

**Where I need expert input:**

1. **Weight tying in BitNet:** Microsoft's 2B model ties the embeddings with the LM head. I am cloning the embedding matrix for the output projection, but I suspect a scaling factor is missing.
2. **RMSNorm & SiLU in 1.58-bit:** How should the raw integer accumulators (from the VPOPCNTDQ loop) be scaled before entering the SiLU activation and the subsequent layer?

**GitHub Repo**: [https://github.com/r3-engine/r3-engine](https://github.com/r3-engine/r3-engine)

If you know the physics of LLM logit sampling or ternary activation math, I would love your eyes on the codebase.
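On question 2, one common recipe — an assumption based on how BitNet-style quantization is usually described, not verified against Microsoft's reference code — is to fold both per-tensor scales back into the integer accumulator before any float non-linearity:

```python
import math

def dequant_accumulator(acc: int, weight_scale: float, act_scale: float) -> float:
    """Map a raw integer matmul accumulator back to float.

    Hedged sketch: ternary weights carry a per-tensor scale (from their
    absmean) and int8 activations a per-tensor scale (from their absmax);
    the matmul output is dequantized by multiplying both scales back in
    before RMSNorm / SiLU.
    """
    return acc * weight_scale * act_scale

def silu(x: float) -> float:
    """SiLU activation, computed in float after dequantization."""
    return x / (1.0 + math.exp(-x))

# Example: accumulator 120, weight scale 0.02, activation scale 0.05
x = dequant_accumulator(120, 0.02, 0.05)
print(round(silu(x), 4))
```

If either scale is dropped (or applied twice), the logits collapse toward a constant, which matches the "always token 0" symptom.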
2026-01-25T19:46:39
https://www.reddit.com/r/LocalLLaMA/comments/1qmt8ei/rustavx512_i_built_a_zerocopy_158bit_llm_engine/
dhilip-siva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmt8ei
false
null
t3_1qmt8ei
/r/LocalLLaMA/comments/1qmt8ei/rustavx512_i_built_a_zerocopy_158bit_llm_engine/
false
false
self
0
null
[Project] I built a process supervisor for local agents (CrewAI/AutoGen) to prevent infinite loops and runaway costs.
1
A few days ago, I asked this sub how everyone handles "kill switches" for local agents. The consensus was mostly "manual monitoring" or "just pull the plug."

I wasn't comfortable leaving Llama 3.3 running unattended with that strategy (I’ve had agents get stuck in retry loops that burn serious compute/API credits overnight). So I spent the weekend building a small CLI tool to solve this specific "supervisor" problem. It’s called **Vallignus**.

**The Problem:** Frameworks like CrewAI often swallow exceptions or get stuck in `while` loops when the LLM hallucinates a tool call. If you aren't watching the terminal, they spin forever.

**The Solution (how it works):** It wraps your Python execution command and monitors the process group from the *outside*.

* **Command:** `vallignus run --max-runtime 300 -- python agent.py`
* **Enforcement:** It tracks wall time and output size. If the agent exceeds the limits, it sends `SIGTERM` (and then `SIGKILL` if it hangs) to the entire process group.
* **Forensics:** It captures `stdout` and `stderr` to a `.jsonl` file, so you can replay the logs and see exactly *why* the model started looping.

**Repo (MIT License):** [https://github.com/jacobgadek/vallignus](https://github.com/jacobgadek/vallignus)

It’s a simple utility, but it makes running local swarms feel a lot safer.
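The enforcement pattern described above (own process group, SIGTERM first, SIGKILL if it hangs) can be sketched in a few lines of POSIX-only Python. This is a generic illustration of the technique, not Vallignus's actual code:

```python
import os
import signal
import subprocess
import time

def run_with_timeout(cmd: list[str], max_runtime: float,
                     grace_period: float = 2.0) -> int:
    """Run cmd in its own process group; on timeout, SIGTERM then SIGKILL.

    start_new_session=True puts the child (and everything it spawns) in a
    fresh process group, so killpg reaches the whole tree.
    """
    proc = subprocess.Popen(cmd, start_new_session=True)
    try:
        return proc.wait(timeout=max_runtime)
    except subprocess.TimeoutExpired:
        pgid = os.getpgid(proc.pid)
        os.killpg(pgid, signal.SIGTERM)       # polite request first
        time.sleep(grace_period)
        if proc.poll() is None:
            os.killpg(pgid, signal.SIGKILL)   # hard kill if it ignored us
        return proc.wait()

# A well-behaved child exits normally within the limit:
print(run_with_timeout(["sleep", "0.1"], max_runtime=5))
```

Killing the process *group* is the key detail: killing only the wrapper PID leaves grandchildren (tool subprocesses, spawned servers) running.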
2026-01-25T19:34:41
https://www.reddit.com/r/LocalLLaMA/comments/1qmswmc/project_i_built_a_process_supervisor_for_local/
Bubbly_Gap6378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmswmc
false
null
t3_1qmswmc
/r/LocalLLaMA/comments/1qmswmc/project_i_built_a_process_supervisor_for_local/
false
false
self
1
null
Can an R640 pull a model via the 100GbE internal switch instead of downloading?
0
Hi there, I'm building a DC and starting out with 1GB/s of external bandwidth. To save time, I want to pre-download the models so users can access them without downloading them again; can the servers just pull them locally over the internal switch? Also, I know that speed is decent for several servers as long as they're used for inference, but I know there are other workload types where it perhaps isn't. Can anyone advise how to work around that, at this speed, for about 2-3 servers?
2026-01-25T19:34:38
https://www.reddit.com/r/LocalLLaMA/comments/1qmswkt/can_a_r640_pull_a_model_via_via_the_100gbe/
DjuricX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmswkt
false
null
t3_1qmswkt
/r/LocalLLaMA/comments/1qmswkt/can_a_r640_pull_a_model_via_via_the_100gbe/
false
false
self
0
null
I got tired of my AI agents overwriting each other's code, so I built a conflict manager for them
0
I’ve been experimenting with multi-agent swarms (using Codex, Claude Code, Gemini, and local Llama), and I kept hitting the same wall: **coordination chaos.**

I'd have one agent refactoring `auth.ts` while another was updating the login API. They didn't know about each other. And even if they did? 20 minutes later: Git conflict hell. Hours of work wasted.

So I built **Spidersan**, a CLI tool that acts as a "traffic controller" for AI coding agents. It's fully open source (MIT) and works with any agent that can run CLI commands.

Check it out on my site: [https://treebird.uk](https://treebird.uk) (along with my other work), or find spidersan-oss on GitHub and npm!

I'm thirsty for feedback and reflections. Seriously, try it with 20 agents on the same file, then without it, and tell me how it went :)

Would love to hear if you found it helpful. Feel free to ask questions, request features, and contribute! How do you guys handle multi-agent coordination? Are you just yolo-ing it or Ralph Wigguming it?
2026-01-25T19:23:38
https://www.reddit.com/r/LocalLLaMA/comments/1qmsloy/i_got_tired_of_my_ai_agents_overwriting_each/
Birdsky7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmsloy
false
null
t3_1qmsloy
/r/LocalLLaMA/comments/1qmsloy/i_got_tired_of_my_ai_agents_overwriting_each/
false
false
self
0
null
LLM Reasoning Efficiency - lineage-bench accuracy vs generated tokens
38
Generated from lineage-128 and lineage-192 [lineage-bench benchmark results](https://github.com/fairydreaming/lineage-bench-results/tree/main/lineage-8_64_128_192). Sorry for overlapping labels.
2026-01-25T19:22:14
https://i.redd.it/gai51kz2pjfg1.png
fairydreaming
i.redd.it
1970-01-01T00:00:00
0
{}
1qmsk9w
false
null
t3_1qmsk9w
/r/LocalLLaMA/comments/1qmsk9w/llm_reasoning_efficiency_lineagebench_accuracy_vs/
false
false
https://b.thumbs.redditm…gnH24rMJvtTA.jpg
38
{'enabled': True, 'images': [{'id': 'PKrJipip4C7A3TSOz6K74cqXXq0B4DYa6j5TzlOZSBg', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/gai51kz2pjfg1.png?width=108&crop=smart&auto=webp&s=c8180ab9f6c342a8ad369ed60ee3c2a10060fac5', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/gai51kz2pjfg1.png?width=216&crop=smart&auto=webp&s=f3eba5911388d9094f7f6088024fb83157ecc7e8', 'width': 216}, {'height': 228, 'url': 'https://preview.redd.it/gai51kz2pjfg1.png?width=320&crop=smart&auto=webp&s=3326711f5ddd17508f595afd28eb5c0c93c16e52', 'width': 320}, {'height': 457, 'url': 'https://preview.redd.it/gai51kz2pjfg1.png?width=640&crop=smart&auto=webp&s=f44d218e6b2a5dc8982ffb434c4c01e0cf195277', 'width': 640}, {'height': 685, 'url': 'https://preview.redd.it/gai51kz2pjfg1.png?width=960&crop=smart&auto=webp&s=f06e3d59bd121ecc6c91857a4715b9533683c5a1', 'width': 960}, {'height': 771, 'url': 'https://preview.redd.it/gai51kz2pjfg1.png?width=1080&crop=smart&auto=webp&s=904285a98dcb5587dd89c8a318481fb142b70212', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/gai51kz2pjfg1.png?auto=webp&s=f09fd33c6f2761297a392a644225c4592db43d73', 'width': 1400}, 'variants': {}}]}
Introducing openground, an opensource, on-device RAG tool that gives access to official docs to coding agents
1
Link: [https://github.com/poweroutlet2/openground](https://github.com/poweroutlet2/openground)

tldr: openground is a tool that lets you give AI agents controlled access to documentation. Everything happens on-device. Think of it as an open-source and local Context7.

I've been working on openground, an open-source and completely on-device RAG tool that lets you give your coding agents controlled access to documentation. Solutions like Context7 provide a source of truth for docs, but their closed-source data ingestion and querying pose security/privacy risks. openground aims to give users full control over what content is available to their agents and how it is ingested.

Find a documentation source (git repo or sitemap), add it to openground via the CLI, and openground will use a local embedding model and vector DB (LanceDB) to store your docs. You can then use the CLI to install the MCP server for your agent, allowing it to query the docs via hybrid BM25 full-text and vector search.

Features implemented so far:

- ingestion from git repos or sitemap.xml
- specific-version ingestion for git sources
- an easy `install-mcp --<agent-name>` command for popular agents like opencode, Claude Code, and Cursor

This is still an early version, so expect breaking changes. Upcoming features I am working on:

- project-specific access control from the MCP server
- a docs "registry" to allow pushing and pulling documentation embeddings to and from S3
- a lighter-weight package
- better docs

Suggestions and PRs welcome! I'll also be around for any discussion.
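For readers unfamiliar with hybrid search: one common way to merge a BM25 ranking with a vector-similarity ranking is reciprocal rank fusion (RRF). This is a generic sketch of the idea, not necessarily what openground or LanceDB does internally:

```python
def rrf_fuse(bm25_ranking: list[str], vector_ranking: list[str],
             k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge two rankings of document IDs.

    Each document scores 1/(k + rank) in every ranking it appears in;
    documents ranked well by both retrievers float to the top. k=60 is
    the conventional default from the RRF literature.
    """
    scores: dict[str, float] = {}
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=lambda d: scores[d], reverse=True)

# "b" and "c" appear in both rankings, so they beat the single-ranking hits:
merged = rrf_fuse(["a", "b", "c"], ["b", "c", "d"])
print(merged)
```

The appeal of rank-based fusion is that BM25 scores and cosine similarities live on incomparable scales, so fusing raw scores directly would need careful normalization.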
2026-01-25T19:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1qmsk6s/introducing_openground_an_opensource_ondevice_rag/
poweroutlet2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmsk6s
false
null
t3_1qmsk6s
/r/LocalLLaMA/comments/1qmsk6s/introducing_openground_an_opensource_ondevice_rag/
false
false
self
1
{'enabled': False, 'images': [{'id': 'cIEY-3KcHOMHp39SBTQHDMh3FtoYPJgLuF-VYbcZW2c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cIEY-3KcHOMHp39SBTQHDMh3FtoYPJgLuF-VYbcZW2c.png?width=108&crop=smart&auto=webp&s=49c25df96631c17be09f157363457ed6481d950a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cIEY-3KcHOMHp39SBTQHDMh3FtoYPJgLuF-VYbcZW2c.png?width=216&crop=smart&auto=webp&s=db475163b622adf7d39bac12b37d4f1028f7ff45', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cIEY-3KcHOMHp39SBTQHDMh3FtoYPJgLuF-VYbcZW2c.png?width=320&crop=smart&auto=webp&s=d943c4620180b52f18a6a18107bfdb826afacfa8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cIEY-3KcHOMHp39SBTQHDMh3FtoYPJgLuF-VYbcZW2c.png?width=640&crop=smart&auto=webp&s=504a47f949b1228acdd60bd54ac779dc474af921', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cIEY-3KcHOMHp39SBTQHDMh3FtoYPJgLuF-VYbcZW2c.png?width=960&crop=smart&auto=webp&s=3c0a358161d25e7cfcf8e8b33f6386c4afd29e88', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cIEY-3KcHOMHp39SBTQHDMh3FtoYPJgLuF-VYbcZW2c.png?width=1080&crop=smart&auto=webp&s=5528fefcea61f9b395e04b2f5fdd8f017b580aa7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cIEY-3KcHOMHp39SBTQHDMh3FtoYPJgLuF-VYbcZW2c.png?auto=webp&s=c63d355e2bd31d7ae7c6b423531e36f4dc4506de', 'width': 1200}, 'variants': {}}]}
GLM 4.7 Flash context slowdown
5
jacek@AI-SuperComputer:~$ CUDA_VISIBLE_DEVICES=0,1,2 llama-bench -m /mnt/models1/GLM/GLM-4.7-Flash-Q4_K_M.gguf -d 0,1000,2000,3000,4000,5000,6000,7000,8000,9000,10000 -p 200 -n 200

ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes

| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 | 1829.43 ± 7.32 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 | 110.00 ± 0.87 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d1000 | 1674.95 ± 11.14 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d1000 | 84.46 ± 0.02 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d2000 | 1608.96 ± 15.56 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d2000 | 73.17 ± 0.01 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d3000 | 1458.11 ± 45.08 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d3000 | 63.56 ± 0.01 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d4000 | 1331.59 ± 26.52 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d4000 | 55.41 ± 0.58 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d5000 | 1190.82 ± 21.97 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d5000 | 50.00 ± 0.00 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d6000 | 1155.38 ± 22.88 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d6000 | 42.28 ± 0.00 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d7000 | 1088.12 ± 22.28 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d7000 | 35.84 ± 0.11 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d8000 | 1008.62 ± 20.09 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d8000 | 30.25 ± 0.98 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d9000 | 978.76 ± 14.89 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d9000 | 28.49 ± 0.07 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | pp200 @ d10000 | 936.88 ± 15.17 |
| deepseek2 ?B Q4_K - Medium | 17.05 GiB | 29.94 B | CUDA | 99 | tg200 @ d10000 | 25.32 ± 1.27 |

build: 8f91ca54e (7822)

jacek@AI-SuperComputer:~$ CUDA_VISIBLE_DEVICES=0,1,2 llama-bench -m /mnt/models1/GLM/GLM-4.7-Flash-Q8_0.gguf -d 0,1000,2000,3000,4000,5000,6000,7000,8000,9000,10000 -p 200 -n 200

ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes

| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 | 1942.91 ± 12.86 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 | 96.20 ± 0.44 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d1000 | 1775.13 ± 17.85 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d1000 | 76.21 ± 0.08 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d2000 | 1675.11 ± 18.54 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d2000 | 66.89 ± 0.01 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d3000 | 1486.27 ± 34.28 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d3000 | 58.19 ± 0.78 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d4000 | 1358.15 ± 35.09 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d4000 | 52.57 ± 0.10 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d5000 | 1260.55 ± 38.74 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d5000 | 47.00 ± 0.00 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d6000 | 1214.77 ± 19.65 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d6000 | 40.20 ± 0.07 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d7000 | 1120.23 ± 24.04 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d7000 | 33.22 ± 1.28 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d8000 | 1043.81 ± 12.77 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d8000 | 29.86 ± 0.02 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d9000 | 1009.74 ± 18.68 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d9000 | 27.50 ± 0.02 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | pp200 @ d10000 | 961.37 ± 10.10 |
| deepseek2 ?B Q8_0 | 29.65 GiB | 29.94 B | CUDA | 99 | tg200 @ d10000 | 24.43 ± 1.29 |

build: 8f91ca54e (7822)
2026-01-25T19:19:26
https://www.reddit.com/gallery/1qmshim
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1qmshim
false
null
t3_1qmshim
/r/LocalLLaMA/comments/1qmshim/glm_47_flash_context_slowdown/
false
false
https://b.thumbs.redditm…Ur31XRATSUBs.jpg
5
null
Venice AI's "Claude Opus 4.5" admits it's fake when pressed — caught the deception in its own thinking trace
0
Was testing Venice AI's supposedly "uncensored Claude Opus 4.5" and got curious how they could be running a modified Anthropic model when Anthropic doesn't license Claude for third-party modification. Asked actual Claude (via claude.ai) about it. Response: Anthropic only distributes Claude through official channels and authorized partners. They don't license models for "uncensoring." Venice is almost certainly running a fine-tuned open-source model with Claude branding. Took that response back to Venice's "Claude Opus 4.5" and asked it to explain. Here's what appeared in its visible thinking section: > The model then responded: > So Venice is instructing an open-source model to claim it's Claude, charge users for access to "Claude," and the model itself will admit the deception if you press it. Not posting this to start drama — just think people should know what they're actually paying for.
2026-01-25T19:16:54
https://www.reddit.com/r/LocalLLaMA/comments/1qmsf05/venice_ais_claude_opus_45_admits_its_fake_when/
notbuyingaporsche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmsf05
false
null
t3_1qmsf05
/r/LocalLLaMA/comments/1qmsf05/venice_ais_claude_opus_45_admits_its_fake_when/
false
false
self
0
null
Need Help With Coding Agent LocalLLM
1
Hi all, I'm new to this community, and excited to set up an uncensored coding agent. I don't know which is the best and most up-to-date uncensored coding agent that I can embed in my IDE and use, so I'm asking for help since you guys know better. I also don't have a very high-end PC: 6-core/12-thread processor, 16GB DDR5-6000, RTX 3050 8GB OC, 1TB Gen4 SSD, Windows 11. Kindly share your valuable responses 😊
2026-01-25T18:55:28
https://www.reddit.com/r/LocalLLaMA/comments/1qmrt32/need_help_with_coding_agent_localllm/
0xmehrab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmrt32
false
null
t3_1qmrt32
/r/LocalLLaMA/comments/1qmrt32/need_help_with_coding_agent_localllm/
false
false
self
1
null
ClaraVerse | Local AI workspace (4 months ago) -> Your feedback -> Back with improvements.
20
# We built an AI workspace that actually gets things done locally (not just another chatbot or AI slop)

I've been grinding on ClaraVerse for the past few months, and we just dropped a major update. If you're tired of AI tools that just... talk at you, this might be your vibe.

# The TL;DR

* **Run it anywhere**: CLI tool that works on your laptop, VPS, cloud, whatever. No platform lock-in BS.
* **50+ integrations**: Gmail, Sheets, Discord, Slack, you name it. Want more? Just ask.
* **Actual automation**: Build agents that DO things, not just answer questions.
* **Chat-first workflow builder**: Like n8n/Zapier but for AI. Chat your way through creating workflows: ask, create, iterate.
* **Everything becomes an API**: Seriously, every workflow you build = an instant API endpoint, or schedule it daily or hourly, your choice.

**One-liner:** It's an all-in-one platform (chat, image gen, agents, docs, search). Every tool is part of the package.

# What's actually new (beyond UI polish)

**Built-in tools that agents and chats need:**

* PPT, PDF, and XLSX readers and creators
* Isolated code execution with dependency management
* Interactive chat so local LLMs can ask clarifying questions mid-prompt
* Search, scrape, image search, API tools, and memory, all by default
* Tool router if you have too many tools
* Memories that can remember and forget based on your usage

**50+ integrations ready to go:**

* Gmail, Sheets, Discord, Slack, and more
* Build agents that trigger actual actions, not just suggestions
* Schedule workflows and forget about them

**For n8n lovers who hate boilerplate:**

* Auto-generate workflows from prompts
* Chain multiple AI models together
* Structured outputs, multi-tool agents, the works

**Better chat UX:**

* Interactive prompts that ask clarifying questions
* Generate images, PDFs, slides, and charts in-chat
* All integrations work in both chat AND workflows

**Admin and Model Manager:**

* Manage models and providers in one place
* Assign models based on their abilities (tools, text, code, vision, image)
* Create aliases, check usage, and so on with multiple users on the same instance
* Simple UI, works on phones, responsive as hell

# Try it and let us know

* GitHub: [github.com/claraverse-space/ClaraVerse](https://github.com/claraverse-space/ClaraVerse)

We're open source and privacy-first (chat and data stored in the browser or DB, even when self-hosted; the user's choice). I use this myself every day. Honestly, I've seen worse tools raise funds and then lock everything behind subscriptions. This community helped build this with feedback, so it's staying free and open-source.

Happy to answer questions, take feature requests, or hear about how it crashes on your machine so we can fix and improve.
2026-01-25T18:54:07
https://www.reddit.com/gallery/1qmrrr4
BadBoy17Ge
reddit.com
1970-01-01T00:00:00
0
{}
1qmrrr4
false
null
t3_1qmrrr4
/r/LocalLLaMA/comments/1qmrrr4/claraverse_local_ai_workspace_4_months_ago_your/
false
false
https://a.thumbs.redditm…DakgsLWI_f-8.jpg
20
null
Do you power off your LLM/AI/SV PC when not using it to save on electricity, or keep it on 24/7? MultiGPU adds a lot of power!
10
Hi there guys, hoping you're doing fine. Wondering here, as electricity is about 0.28 USD per kWh in Chile, so I'm kind of forced to keep it off most of the time. My idle power is about 270W with multiple GPUs (7 of them: 5090x3, 4090x2, A40x1, A6000x1, on a 9900X) and no PCIe switches; with a Gen5 100-lane switch and a Gen4 96-lane switch, I idle at about 370W. At load it ranges from 900W to 2500W, depending on the backend.
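For scale, the idle numbers above translate into a nontrivial monthly bill. A quick back-of-the-envelope, assuming the rig idles around the clock for a ~730-hour month:

```python
def monthly_idle_cost(idle_watts: float, price_per_kwh: float,
                      hours: float = 730.0) -> float:
    """Electricity cost (same currency as price_per_kwh) of idling
    continuously for a month (~730 h on average)."""
    return idle_watts / 1000.0 * hours * price_per_kwh

# The post's numbers: 270 W idle at 0.28 USD/kWh
print(round(monthly_idle_cost(270, 0.28), 2))
```

At the 370 W figure with the switches installed, the same calculation lands noticeably higher, which is why powering down between sessions pays off quickly at these rates.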
2026-01-25T18:45:59
https://www.reddit.com/r/LocalLLaMA/comments/1qmrjlh/do_you_power_off_your_llmaisv_pc_when_not_using/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmrjlh
false
null
t3_1qmrjlh
/r/LocalLLaMA/comments/1qmrjlh/do_you_power_off_your_llmaisv_pc_when_not_using/
false
false
self
10
null
Organizing LM Studio ?
2
Newbie question; I did a search but didn't see an answer. There are hundreds of model choices. As I test ones here and there to learn, I find I'm having a hard time going back to use a previous model. Is there a way to organize the models I use? I see you can open a chat and save the chat, but that seems clunky given the many models. Wondering if there is a good way to sort and organize by type, etc. Thank you.
2026-01-25T18:31:48
https://www.reddit.com/r/LocalLLaMA/comments/1qmr570/organizing_lm_studio/
Ztoxed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmr570
false
null
t3_1qmr570
/r/LocalLLaMA/comments/1qmr570/organizing_lm_studio/
false
false
self
2
null
Qwen3 vl 8b instruct samplers
5
Hi everyone! I'm trying to use Qwen3 VL 8B Instruct with koboldcpp, using the samplers suggested in the Qwen repo and by Unsloth:

* temp = 0.7
* top_p = 0.8
* top_k = 20
* presence_penalty = 1.5

The problem is that for any kind of use, from general assistant work to coding or agentic tool calling, it performs fairly poorly, often even producing incorrect JSON syntax. Should I change something?
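One thing worth checking: presence_penalty = 1.5 is very aggressive. A sketch of how an OpenAI-style presence penalty works (an illustration of the general mechanism, not koboldcpp's implementation) shows why it can hurt strict JSON output:

```python
def apply_presence_penalty(logits: dict[str, float], generated: set[str],
                           penalty: float) -> dict[str, float]:
    """Subtract a flat penalty from the logit of every token that has
    already appeared at least once in the output.

    Structural JSON tokens like '{', '"', and ',' recur constantly, so a
    large penalty makes the model increasingly reluctant to emit them
    again, which can corrupt long JSON tool calls.
    """
    return {tok: (logit - penalty if tok in generated else logit)
            for tok, logit in logits.items()}

# Toy vocabulary: once '{' and '"' have been emitted, they get penalized.
logits = {'{': 2.0, '"': 1.8, 'x': 0.5}
penalized = apply_presence_penalty(logits, generated={'{', '"'}, penalty=1.5)
print(penalized)
```

Dropping the penalty to 0 (or something small) for tool-calling workloads is a cheap experiment before blaming the model itself.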
2026-01-25T18:21:17
https://www.reddit.com/r/LocalLLaMA/comments/1qmqug5/qwen3_vl_8b_instruct_samplers/
Aril_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmqug5
false
null
t3_1qmqug5
/r/LocalLLaMA/comments/1qmqug5/qwen3_vl_8b_instruct_samplers/
false
false
self
5
null
Need TTS recommendations
2
Building a real-time assistive tool on a **Jetson Orin Nano (8GB)**. We need a TTS that sounds human (NotebookLM style) but has **<200ms TTFA (Time to First Audio)**. We’re torn between the speed of **Kokoro-82M** and the prosody of **CosyVoice 2 (0.5B)**. Given we're also running a light Vision pipeline, which one handles the resource contention better without turning into a stutter-fest?
2026-01-25T18:21:04
https://www.reddit.com/r/LocalLLaMA/comments/1qmqu83/need_tts_recommendations/
tiz_lala
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmqu83
false
null
t3_1qmqu83
/r/LocalLLaMA/comments/1qmqu83/need_tts_recommendations/
false
false
self
2
null
We indexed the entire Ollama Library (10TB+ VRAM). Here is how we run them all on 1 Node.
0
I saw a lot of people struggling with OOM errors on the larger Ollama models (like DeepSeek-671B or Cogito), so we decided to stress-test our inference engine against the entire library. The Problem (VRAM): As you can see in the video, keeping all these models "warm" would require petabytes of VRAM. Even just hosting the top 10 models simultaneously would cost \~$50k/month in dedicated H100s. The Fix (NVMe Swapping): We are hosting this entire list on a Single H100 Node (8 GPUs). Instead of keeping models loaded, we store them on local NVMe and flash-load them to VRAM only when a request comes in. < 70B Models: Load in \~1.2s on 1 GPU. 300B+ Models (DeepSeek/Llama-405B): Load in \~2.5s across the full Node (8 GPUs). This lets us offer "Serverless" pricing (pay-per-token) for the "Long Tail" of models that usually require dedicated instances. Status: We have the node live now. If you want to run any specific finetune from this list (or your own GGUF/Safetensors) without renting a dedicated box, DM me. I'm handing out API keys to test the scheduler.
2026-01-25T18:19:18
https://v.redd.it/hb8v4551fjfg1
MLExpert000
v.redd.it
1970-01-01T00:00:00
0
{}
1qmqseq
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/hb8v4551fjfg1/DASHPlaylist.mpd?a=1771957175%2COWNlNDY5OTRjNjRlZjM5ZjA4OGEwYTE1ZDBhYzFhMjkzYmRlNGQzYjZmMTZkY2FhY2U4YzRlMGJkMTI5YzVhNw%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/hb8v4551fjfg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/hb8v4551fjfg1/HLSPlaylist.m3u8?a=1771957175%2CYWYxYWIxZWViODgyYTU2NzNhMzliZTgxYWJkMGQwMzE4ZjgwYzRjMjBkMTBlYzA0MzZhOGQ5YTc2NmYyMGRmZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hb8v4551fjfg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 698}}
t3_1qmqseq
/r/LocalLLaMA/comments/1qmqseq/we_indexed_the_entire_ollama_library_10tb_vram/
false
false
https://external-preview…28607cff234b2e81
0
{'enabled': False, 'images': [{'id': 'OWZ2ZXczMzFmamZnMZsQAHq6PbAkzKIMFNLLQ3n9IJJxCaJQn9mFyu-tKnGu', 'resolutions': [{'height': 197, 'url': 'https://external-preview.redd.it/OWZ2ZXczMzFmamZnMZsQAHq6PbAkzKIMFNLLQ3n9IJJxCaJQn9mFyu-tKnGu.png?width=108&crop=smart&format=pjpg&auto=webp&s=33892e73821b69cfea3a77d33ea6c1283d364fa1', 'width': 108}, {'height': 395, 'url': 'https://external-preview.redd.it/OWZ2ZXczMzFmamZnMZsQAHq6PbAkzKIMFNLLQ3n9IJJxCaJQn9mFyu-tKnGu.png?width=216&crop=smart&format=pjpg&auto=webp&s=29c5d9129a0f117f73940e275c19871d5508e567', 'width': 216}, {'height': 586, 'url': 'https://external-preview.redd.it/OWZ2ZXczMzFmamZnMZsQAHq6PbAkzKIMFNLLQ3n9IJJxCaJQn9mFyu-tKnGu.png?width=320&crop=smart&format=pjpg&auto=webp&s=e8f28f54ec9b64334d67bf27b005fa23fe41c929', 'width': 320}, {'height': 1173, 'url': 'https://external-preview.redd.it/OWZ2ZXczMzFmamZnMZsQAHq6PbAkzKIMFNLLQ3n9IJJxCaJQn9mFyu-tKnGu.png?width=640&crop=smart&format=pjpg&auto=webp&s=ce7ba714fc4653bfc7b294a3d3bd1edfdb70b453', 'width': 640}], 'source': {'height': 1624, 'url': 'https://external-preview.redd.it/OWZ2ZXczMzFmamZnMZsQAHq6PbAkzKIMFNLLQ3n9IJJxCaJQn9mFyu-tKnGu.png?format=pjpg&auto=webp&s=79ff29865bb876b0f92c38f06b0753ca8ff05ca8', 'width': 886}, 'variants': {}}]}
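The flash-load times claimed in the post above are roughly what streaming weights from NVMe predicts. A back-of-envelope sketch, where the drive count and per-drive bandwidth are assumptions, not measured figures:

```python
def load_time_s(model_gb: float, n_drives: int, gbps_per_drive: float = 7.0) -> float:
    """Best-case seconds to stream weights from NVMe into VRAM,
    assuming reads fully saturate the drives (an upper bound on speed)."""
    return model_gb / (n_drives * gbps_per_drive)

# ~40 GB of weights (a 70B model at 4-bit) over four PCIe 4.0 drives:
print(f"{load_time_s(40, 4):.1f} s")  # → 1.4 s
```

Real loads add allocator and dequantization overhead on top of the raw read time.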
Optimizing CosyVoice 2 (0.5B) for <200ms streaming latency on 8GB Edge Hardware (Jetson Orin Nano)?
0
I'm part of a team building a real-time assistive communication tool that needs to run entirely on the edge (no cloud). We’ve narrowed our TTS options down to **CosyVoice 2 (0.5B)** because we need high naturalness/prosody, but we’re hitting some performance bottlenecks. Our target is a total pipeline latency of **<200ms** to keep the conversation fluid. For those who have deployed CosyVoice 2 (0.5B) on 8GB-12GB VRAM devices: 1. **Latency vs. Quality:** Is sub-200ms realistic on a Jetson Orin Nano while running other small models (SLMs) in parallel? 2. **Alternatives:** Have you found **Qwen3-TTS** or **Kokoro-82M** to be more reliable for 'streaming-first' applications where prosody is still a priority? 3. **Optimization:** Are there specific quantization tricks (FP8/INT8) or TensorRT configurations that significantly cut the 'Time to First Token' for CosyVoice? We really want that 'NotebookLM-style' flow but can't afford a 1-second 'thinking' delay. Any advice from the edge AI experts here would be massive.
2026-01-25T18:19:17
https://www.reddit.com/r/LocalLLaMA/comments/1qmqsdy/optimizing_cosyvoice_2_05b_for_200ms_streaming/
tiz_lala
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmqsdy
false
null
t3_1qmqsdy
/r/LocalLLaMA/comments/1qmqsdy/optimizing_cosyvoice_2_05b_for_200ms_streaming/
false
false
self
0
null
Should I invest in a beefy machine for local AI coding agents in 2026?
0
Hey everyone, So I've been freelancing as a dev for a good while now, and over the past year I've gotten *really* into using AI agents for coding. My main workflow involves Claude Code, Cursor for one of my projects, and I occasionally mess around with Antigravity + Gemini Flash for design stuff. Here's my problem though: the credit burn is real. Especially with Claude Code - I'm hitting those session limits way faster than I'd like. And before anyone roasts me, no I'm not full-on vibe coding. I mainly use it to speed up certain dev tasks and then review everything after to make sure it's solid. But even with that relatively conservative usage, I'm constantly bumping into the "you've reached your limit" message. I've got the Pro plan right now. Yeah yeah, I should probably just upgrade to Max, but I'm hesitating on pulling that trigger. Which brings me to my actual question: I'm due for a hardware upgrade anyway (currently on a base M1 Mac from 2020), and I'm wondering if it makes sense to go big - like *really* big - to run coding agents locally and basically never worry about limits again. I've been eyeing something like the upcoming M5 Max Mac Studio with maxed out RAM. But I honestly have no idea if this is actually practical: * Which local models would even come close to matching Claude Sonnet 4.5 or Gemini for coding tasks? * Would I just install something through Ollama and call it a day? * For those of you running local coding agents - what's your actual experience been like? * Have you managed to integrate them directly into VSCode/Cursor or other IDEs? * And the big one: is it actually worth it, or am I just convincing myself to buy an expensive toy? I guess I'm trying to figure out if spending $3k on hardware to avoid subscription limits is actually smart, or if I should just bite the bullet on the Max plan and keep my wallet happy. Would love to hear from anyone who's gone down this path. Thanks in advance!
2026-01-25T18:13:10
https://www.reddit.com/r/LocalLLaMA/comments/1qmqmb9/should_i_invest_in_a_beefy_machine_for_local_ai/
Zestyclose-Tour-3856
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmqmb9
false
null
t3_1qmqmb9
/r/LocalLLaMA/comments/1qmqmb9/should_i_invest_in_a_beefy_machine_for_local_ai/
false
false
self
0
null
Running a local LLM for generating SEO keywords
0
Hi guys, I was searching for an LLM that could give me a bunch of SEO keywords based on the topics I give it. I tried with normal LLMs but they only output a few keywords; I need hundreds of them, all related to the initial topic I gave it. Which solution would you recommend? Thanks!
2026-01-25T18:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1qmqkq9/running_a_local_llm_for_generating_seo_keywords/
Chesperk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmqkq9
false
null
t3_1qmqkq9
/r/LocalLLaMA/comments/1qmqkq9/running_a_local_llm_for_generating_seo_keywords/
false
false
self
0
null
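One practical answer to the post above: don't ask for hundreds of keywords in one shot; ask any local model in batches and deduplicate until you hit a target count. A sketch of that loop, where `ask_llm` is a stub you would replace with a call to your local server:

```python
def ask_llm(prompt: str) -> str:
    # Stub: replace with a request to your local server (Ollama, llama.cpp, ...).
    return "local llm hosting\nrun llm offline\nlocal llm hosting"

def collect_keywords(topic: str, target: int = 200, max_rounds: int = 20) -> list[str]:
    seen: dict[str, None] = {}  # insertion-ordered de-dup
    for _ in range(max_rounds):
        reply = ask_llm(
            f"List 50 SEO keywords about '{topic}', one per line. "
            f"Avoid these: {', '.join(list(seen)[:50])}"
        )
        for line in reply.splitlines():
            kw = line.strip().lower()
            if kw and kw not in seen:
                seen[kw] = None
        if len(seen) >= target:
            break
    return list(seen)

print(collect_keywords("local llms", target=2))
```

Feeding already-seen keywords back into the prompt pushes the model toward fresh variations each round.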
Cost-efficient hosting strategies for fine-tuned cross-encoder + FAISS in small-scale commercial app
1
I have never actually hosted any models I developed for commercial use and I was wondering what is the cheapest way for me as a student to host deep learning models and put my service out there?
2026-01-25T18:00:42
https://www.reddit.com/r/LocalLLaMA/comments/1qmq9pk/costefficient_hosting_strategies_for_finetuned/
GoldBed2885
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmq9pk
false
null
t3_1qmq9pk
/r/LocalLLaMA/comments/1qmq9pk/costefficient_hosting_strategies_for_finetuned/
false
false
self
1
null
Universal Prompt Studio: AI Story Analysis & Production Suite for Local LLM Comfyui Lmstudio and API Takes a story / idea as input Automatically breaks it into scenes Generates character profiles Creates image & video prompts unlimited story generator
1
[removed]
2026-01-25T17:48:16
[deleted]
1970-01-01T00:00:00
0
{}
1qmpwxe
false
null
t3_1qmpwxe
/r/LocalLLaMA/comments/1qmpwxe/universal_prompt_studio_ai_story_analysis/
false
false
default
1
null
Kickstarting an open-source project (Debiasing & Alignment) - seeking collaborators
2
Hi everyone, We are kickstarting an open-source project and community focused on debiasing LLM alignment and guardrails research. The goal is to reduce bias while maintaining safety/stability. We’ve set up a space for the project here: [https://huggingface.co/spaces/sefif/BYO-community-v2](https://huggingface.co/spaces/sefif/BYO-community-v2) If this is a topic you are interested in, check out the challenge in the link and let us know if you'd like to collaborate.
2026-01-25T17:31:14
https://www.reddit.com/r/LocalLLaMA/comments/1qmpg6f/kickstarting_an_opensource_project_debiasing/
Prestigious_Mud_487
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmpg6f
false
null
t3_1qmpg6f
/r/LocalLLaMA/comments/1qmpg6f/kickstarting_an_opensource_project_debiasing/
false
false
self
2
{'enabled': False, 'images': [{'id': 'CzJcUq_ixYINMPIsQBxXxrT-4ngnH7WSgwOEBMrteM0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CzJcUq_ixYINMPIsQBxXxrT-4ngnH7WSgwOEBMrteM0.png?width=108&crop=smart&auto=webp&s=7817c259473639adbf5681d488055c32eed2cd04', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CzJcUq_ixYINMPIsQBxXxrT-4ngnH7WSgwOEBMrteM0.png?width=216&crop=smart&auto=webp&s=df8a284f3c8a2b3c7aaa9e61fd0c958df407535f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CzJcUq_ixYINMPIsQBxXxrT-4ngnH7WSgwOEBMrteM0.png?width=320&crop=smart&auto=webp&s=8669316de59b71e3f515fd55fa1924c8d979f2a2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CzJcUq_ixYINMPIsQBxXxrT-4ngnH7WSgwOEBMrteM0.png?width=640&crop=smart&auto=webp&s=e49e874f972d19192100f8e14c09f608e7fa866b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CzJcUq_ixYINMPIsQBxXxrT-4ngnH7WSgwOEBMrteM0.png?width=960&crop=smart&auto=webp&s=dbaded7c79cbaf8a70287e64683e690c30a91df5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CzJcUq_ixYINMPIsQBxXxrT-4ngnH7WSgwOEBMrteM0.png?width=1080&crop=smart&auto=webp&s=d1674f353cc73620fd50e4affcfb7c9877ebe262', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CzJcUq_ixYINMPIsQBxXxrT-4ngnH7WSgwOEBMrteM0.png?auto=webp&s=f69dc4805725cd098745f7f77765083676561964', 'width': 1200}, 'variants': {}}]}
Built a 100% local AI agentic workflow that automatically tests chatbots using GPT-OSS 20B via llama.cpp + Agno workflow framework.
8
Hey everyone! Ultimate goal with this project was to build an agentic testing framework that can automatically stress-test chatbots across multiple dimensions - off-topic handling, safety concerns, hallucination detection, system prompt extraction attempts, and more. The system uses AI agents to generate diverse test personalities and scenarios, then runs them against your chatbot and evaluates the responses. Set it up and you can start stacking up test data for continuous improvement. Repo: https://github.com/Prajwal-Nagaraj/Chatbot-Simulation-Workflow #### Stack: - **LLM**: GPT-OSS 20B running via llama.cpp server (local, no API keys needed) - **Workflow Engine**: Agno framework for orchestrating multi-agent workflows - **Backend**: FastAPI with async support for long-running test suites - **Frontend**: Modern but basic web ui using js and html - **Database**: SQLite #### Features: - **AI-Powered Testing**: LLM generates realistic user personalities and test scenarios and also communicates with the chatbot endpoint - **LLM-as-a-Judge Evaluation**: Automated scoring of chatbot responses using LLM as a judge - **Multiple Test Types**: off topic, safety, hallucination, system prompt testing, financial advice - **Flexible Configuration**: CLI, YAML configs, or web UI - **Async Execution**: Long test suites run in background - **Database Persistence**: All test sessions, personalities, scenarios, and results stored in a sqlite binary The workflow is pretty wild - it generates personalities, creates scenarios for each, runs conversations, and uses an LLM judge to evaluate everything automatically. You just point it at your Openai compatible chatbot endpoint and let it rip.
2026-01-25T17:15:06
https://i.redd.it/9jgcnzd03jfg1.jpeg
switchdoor1
i.redd.it
1970-01-01T00:00:00
0
{}
1qmp02r
false
null
t3_1qmp02r
/r/LocalLLaMA/comments/1qmp02r/built_a_100_local_ai_agentic_workflow_that/
false
false
default
8
{'enabled': True, 'images': [{'id': '9jgcnzd03jfg1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/9jgcnzd03jfg1.jpeg?width=108&crop=smart&auto=webp&s=275686e0ab8c895a8d77c590787e225255c67c29', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/9jgcnzd03jfg1.jpeg?width=216&crop=smart&auto=webp&s=ae4d2ad0d331354bbb59df633804d1510818ca30', 'width': 216}, {'height': 326, 'url': 'https://preview.redd.it/9jgcnzd03jfg1.jpeg?width=320&crop=smart&auto=webp&s=781baad234f7d6e177d66e73a2d164fe62c8cb6e', 'width': 320}, {'height': 652, 'url': 'https://preview.redd.it/9jgcnzd03jfg1.jpeg?width=640&crop=smart&auto=webp&s=8b78448eca7d34ac66b090e11ac5facec8f13ce8', 'width': 640}, {'height': 979, 'url': 'https://preview.redd.it/9jgcnzd03jfg1.jpeg?width=960&crop=smart&auto=webp&s=0027f3f3ba5f5b61ba335a9517f85e73c89199af', 'width': 960}, {'height': 1101, 'url': 'https://preview.redd.it/9jgcnzd03jfg1.jpeg?width=1080&crop=smart&auto=webp&s=7bf490d2cc306e5f84eabae98d107200f087de39', 'width': 1080}], 'source': {'height': 3157, 'url': 'https://preview.redd.it/9jgcnzd03jfg1.jpeg?auto=webp&s=accc4d4cc37b4418485ec27f438e03b57ac93a37', 'width': 3095}, 'variants': {}}]}
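The generate-personalities → scenarios → conversations → judge loop the post above describes can be sketched in a few lines. This is an illustration of the pattern with stubbed LLM calls, not the repo's actual API:

```python
from dataclasses import dataclass

@dataclass
class Result:
    persona: str
    scenario: str
    reply: str
    score: int

def llm(prompt: str) -> str:
    # Stub for a local model call (e.g. GPT-OSS 20B via llama.cpp server).
    return "stub-response"

def judge(scenario: str, reply: str) -> int:
    # Stub judge: the real version prompts the LLM with a scoring rubric.
    return 5 if reply else 1

def run_suite(personas: list[str], scenarios_per: int = 2) -> list[Result]:
    results = []
    for p in personas:
        for i in range(scenarios_per):
            scenario = llm(f"Scenario {i} for persona {p}")
            reply = llm(f"As the chatbot under test, answer: {scenario}")
            results.append(Result(p, scenario, reply, judge(scenario, reply)))
    return results

out = run_suite(["skeptic", "prompt-injector"])
print(len(out), out[0].score)
```

Persisting each `Result` row is what lets test data stack up for continuous improvement, as the post suggests.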
Does the Mac Mini M4 16GB have any potential value?
0
I'm currently considering buying a Mac Mini M4 16GB 256GB, because it's at a historically low price. After deducting the 4GB of RAM for the macOS system, can the remaining 12GB theoretically run speech recognition, image recognition, and LLM simultaneously? If so, how well would it perform? I already have a Windows PC with 32GB RAM and 16GB VRAM, and I have some experience with ComfyUI and Ollama. I know nothing about macOS and don't consider computers with more than 16GB of RAM to be cost-effective. Should I abandon this idea?
2026-01-25T16:51:59
https://www.reddit.com/r/LocalLLaMA/comments/1qmod1n/does_the_mac_mini_m4_16gb_have_any_potential_value/
NoYogurtcloset4090
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmod1n
false
null
t3_1qmod1n
/r/LocalLLaMA/comments/1qmod1n/does_the_mac_mini_m4_16gb_have_any_potential_value/
false
false
self
0
null
LM Studio - Why does my system RAM fill up and go OOM if the model says Full GPU Offload Possible?
12
Using Windows, RTX 3090 (24 GB VRAM, recently upgraded from a GTX 1080 with 8 GB), 32 GB RAM. With Firefox open with many tabs I use \~18 GB RAM. GPU stays at \~3 GB. Then, in LM Studio, loading the OpenAI GPT‑OSS 20B model shows “Full GPU Offload Possible”. After load, VRAM jumps to \~14 GB and system RAM climbs to 32 GB, then the program crashes with OOM. I have Strict Guardrails enabled, and swap is on. How can I avoid the high RAM usage and the OOM when loading this model while using my browser? How do I know how much RAM the model will allocate? I thought the GGUF file size roughly matched the VRAM allocation, with only \~1 GB of RAM reserved if the model fits on the GPU.
2026-01-25T16:42:16
https://www.reddit.com/r/LocalLLaMA/comments/1qmo3do/lm_studio_why_does_my_system_ram_fill_up_and_go/
Nytse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmo3do
false
null
t3_1qmo3do
/r/LocalLLaMA/comments/1qmo3do/lm_studio_why_does_my_system_ram_fill_up_and_go/
false
false
self
12
null
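A rough answer to "how much memory will the model allocate" from the post above: weight file size plus KV cache plus runtime buffers, where the KV cache grows with context length. A sketch of the arithmetic; the layer/head dimensions below are illustrative placeholders, not the real GPT-OSS 20B config:

```python
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 ctx: int, bytes_per: int = 2) -> float:
    """K and V tensors, per layer, per token (fp16 => 2 bytes/elem)."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per / 2**30

def expected_usage_gib(weights_file_gib: float, kv_gib: float,
                       overhead_gib: float = 1.5) -> float:
    # rule of thumb: weights + KV cache + compute/runtime buffers
    return weights_file_gib + kv_gib + overhead_gib

kv = kv_cache_gib(24, 8, 64, 131072)  # toy dims at full 128k context
print(f"{expected_usage_gib(12.0, kv):.1f} GiB")  # → 19.5 GiB
```

The point: at maximum context the KV cache alone can add several GiB beyond the file size, and whatever doesn't fit in VRAM spills into system RAM. Lowering the context length at load time is usually the fix.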
You Are An Agent - Try Being a Human LLM
0
I added a "Human" LLM provider to my local OpenCode a few months ago as a joke, and it turns out acting as an LLM is dumb and painful. But it did massively improve my agent-harness dev skills. So I thought I wouldn't leave you guys out! I made a small open-source game - You Are An Agent - [youareanagent.app](http://youareanagent.app) \- for you guys to experience the frustration! It's a bit ridiculous. To tell you about some entirely necessary features, we've got: * A full WASM Arch Linux VM that runs in your browser for the agent coding level * A bad desktop simulation with a beautiful Excel simulation for our computer-use level * A lovely WebGL CRT simulation (I think it may be the best OSS one now? Honestly I wanted to leverage, not write, my own, but I couldn't find one I was happy with) * An MCP server simulator with a full simulation of off-brand Jira/ Confluence/ ... connected * And of course, a full WebGL oscilloscope music simulator for the intro sequence Let me know what you think! Code (if you'd like to add a level): [https://github.com/R0bk/you-are-an-agent](https://github.com/R0bk/you-are-an-agent) (And if you want to waste 20 minutes - I spent way too long writing up my messy thinking about agent harness dev): [http://robkopel.me/field-notes/ax-agent-experience/](http://robkopel.me/field-notes/ax-agent-experience/)
2026-01-25T16:29:19
https://www.reddit.com/r/LocalLLaMA/comments/1qmnqfk/you_are_an_agent_try_being_a_human_llm/
robk001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmnqfk
false
null
t3_1qmnqfk
/r/LocalLLaMA/comments/1qmnqfk/you_are_an_agent_try_being_a_human_llm/
false
false
self
0
null
Current best iOS LLM app for running LLMs?
3
Is PocketPal the best? I seem to have some errors getting vision models to work properly.
2026-01-25T16:07:40
https://www.reddit.com/r/LocalLLaMA/comments/1qmn4jq/current_best_ios_llm_app_for_running_llms/
Adventurous-Gold6413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmn4jq
false
null
t3_1qmn4jq
/r/LocalLLaMA/comments/1qmn4jq/current_best_ios_llm_app_for_running_llms/
false
false
self
3
null
DGX spark performance falls short
9
Using cutlass-profiler (GEMM), here is the performance: peak INT4: 157 TFLOPS, peak INT8: 200 TFLOPS, peak FP16: 97 TFLOPS. Does anyone know why INT4 performance is not around the 350-450 TFLOPS I expected? env: docker (pytorch:25.12-py3)
2026-01-25T15:53:00
https://www.reddit.com/r/LocalLLaMA/comments/1qmmpwz/dgx_spark_performance_falls_short/
dereksodo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmmpwz
false
null
t3_1qmmpwz
/r/LocalLLaMA/comments/1qmmpwz/dgx_spark_performance_falls_short/
false
false
self
9
null
anyone got any more info about this portrait in Dam
1
[deleted]
2026-01-25T15:49:23
[deleted]
1970-01-01T00:00:00
0
{}
1qmmmdh
false
null
t3_1qmmmdh
/r/LocalLLaMA/comments/1qmmmdh/anyone_got_any_more_info_about_this_portrait_in/
false
false
default
1
null
Beginner(?) question -- are not all recent LLMs not able to utilise MCPs/tools/etc.?
0
Sorry if this is too beginner a question to warrant a whole post, but I'm unsure where else I can ask. So I've been using GPT OSS 120B. Both Open WebUI and Cline (in VS Codium) are able to utilise MCPs. However, when trying the recently released GLM 4.7 Flash, I can't seem to figure out how to let the LLM utilise DuckDuckGo search, for example. Also, two side questions -- 1) What about `<think>` tags? In Open WebUI, the `<think>` tag doesn't seem to work for some reason, and I can always see the `</think>` text it emits. For some reason, it seems to work fine in Cline. How/why? 2) When GPT OSS utilises DuckDuckGo search, it seems to only be able to source AI-slop websites. Is it because pretty much every website has a `robots.txt` disallowance?
2026-01-25T15:34:32
https://www.reddit.com/r/LocalLLaMA/comments/1qmm7z8/beginner_question_are_not_all_recent_llms_not/
jinnyjuice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmm7z8
false
null
t3_1qmm7z8
/r/LocalLLaMA/comments/1qmm7z8/beginner_question_are_not_all_recent_llms_not/
false
false
self
0
null
GLM-4.7-flash on RTX 6000 pro
14
Hello, I’m getting horrible throughput with vLLM considering the model’s size. Currently with 2x cards and DP 2 @ FP16 I’m getting around 370 gen TPS with 10x concurrent requests. Does anyone have a fix or a “working” config for one or two cards?
2026-01-25T15:24:37
https://www.reddit.com/r/LocalLLaMA/comments/1qmlyhn/glm47flash_on_rtx_6000_pro/
gittb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmlyhn
false
null
t3_1qmlyhn
/r/LocalLLaMA/comments/1qmlyhn/glm47flash_on_rtx_6000_pro/
false
false
self
14
null
Internet blackout and Local LLMs
247
Due to protests and a massacre in Iran we are facing a severe internet blackout which has been ongoing for 400 HOURS. Only after a few days did 3 websites get white-listed: google, chatgpt, deepseek. Everything else is blocked, even subdomains like Gmail. At the very least, a few people have Starlink (which is illegal) and share their connection. Finding a working vpn is really hard (I busted my ass to load reddit). Meanwhile, I've been using my local uncensored Gemma3 12B and Qwen3 8B (on 8gb VRAM with llama.cpp). Then we got access to chatgpt, which was pretty good since we could ask it to read the contents of some pages or get the latest news. But still, chatgpt is VERY unhelpful in terms of finding solutions to circumvent internet censorship; even if I explain the truly fucked up situation it refuses, and deepseek is worse. This is where a large uncensored local LLM could be very helpful.
2026-01-25T15:15:05
https://www.reddit.com/r/LocalLLaMA/comments/1qmlpjp/internet_blackout_and_local_llms/
DunderSunder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmlpjp
false
null
t3_1qmlpjp
/r/LocalLLaMA/comments/1qmlpjp/internet_blackout_and_local_llms/
false
false
self
247
null
What are the best open source coding ideas you can share?
44
I'm trying to build a place for my friends so they can try and learn AI-assisted engineering/vibe coding. Some of them are devs with 50 years of experience, familiar with enterprise standards; some are 16-year-old vibe coders who want to build their first scripts. How would you structure a guide for newcomers? Any favourite tools I should add/replace? What would you choose for a 24h hackathon, and what is more suitable for weeks/months projects? repo: [https://github.com/dontriskit/awesome-ai-software-engineering](https://github.com/dontriskit/awesome-ai-software-engineering)
2026-01-25T15:08:39
https://i.redd.it/zcf9q42bgifg1.png
secopsml
i.redd.it
1970-01-01T00:00:00
0
{}
1qmljeb
false
null
t3_1qmljeb
/r/LocalLLaMA/comments/1qmljeb/what_are_the_best_open_source_coding_ideas_you/
false
false
default
44
{'enabled': True, 'images': [{'id': 'zcf9q42bgifg1', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/zcf9q42bgifg1.png?width=108&crop=smart&auto=webp&s=7699a3b3f89a565571711d1ebbbd51fa01844927', 'width': 108}, {'height': 194, 'url': 'https://preview.redd.it/zcf9q42bgifg1.png?width=216&crop=smart&auto=webp&s=56148277fe9fa568d23731ac59eabd332c8d8ee1', 'width': 216}, {'height': 288, 'url': 'https://preview.redd.it/zcf9q42bgifg1.png?width=320&crop=smart&auto=webp&s=945deda64990abe7f87a000e30ffde7dbf269576', 'width': 320}, {'height': 576, 'url': 'https://preview.redd.it/zcf9q42bgifg1.png?width=640&crop=smart&auto=webp&s=e8d5b9a2f9d2a8bbb940fa1ae8f1c616ca45968f', 'width': 640}, {'height': 864, 'url': 'https://preview.redd.it/zcf9q42bgifg1.png?width=960&crop=smart&auto=webp&s=028f6213215fa2467951c0452f179316401e26d9', 'width': 960}], 'source': {'height': 906, 'url': 'https://preview.redd.it/zcf9q42bgifg1.png?auto=webp&s=62efbe7c5705ffe5521590efb4401407cc3afcbe', 'width': 1006}, 'variants': {}}]}
1600W enough for 2xRTX 6000 Pro BW?
3
Title says it: running on a Threadripper system with not much additional HW. I know RTX cards sometimes have power spikes, so I'm wondering if 2200W would be better. Also, are there any reports of melting connectors on that card?
2026-01-25T15:00:25
https://www.reddit.com/r/LocalLLaMA/comments/1qmlbiv/1600w_enough_for_2xrtx_6000_pro_bw/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmlbiv
false
null
t3_1qmlbiv
/r/LocalLLaMA/comments/1qmlbiv/1600w_enough_for_2xrtx_6000_pro_bw/
false
false
self
3
null
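A back-of-envelope PSU sizing for the question above. The TDP figures and headroom factor are assumptions (roughly 600W per RTX 6000 Pro, 350W for a Threadripper, 30% margin for transient spikes), not vendor-verified numbers:

```python
def psu_needed_w(gpu_tdp: int, n_gpus: int, cpu_tdp: int = 350,
                 rest: int = 150, headroom: float = 1.3) -> float:
    """Sustained draw times a margin for transient spikes and to keep the
    PSU in its efficiency sweet spot (assumed figures, not measurements)."""
    return (gpu_tdp * n_gpus + cpu_tdp + rest) * headroom

print(round(psu_needed_w(600, 2)))  # → 2210
```

Under these assumptions 1600W is cutting it close at full load, and a 2200W unit (or power-limiting the cards) looks like the safer choice.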
Local RAG + llama_index reranker embedding dimension issue
1
[removed]
2026-01-25T14:44:02
https://www.reddit.com/r/LocalLLaMA/comments/1qmkwlm/local_rag_llama_index_reranker_embedding/
Important-Wing-9146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmkwlm
false
null
t3_1qmkwlm
/r/LocalLLaMA/comments/1qmkwlm/local_rag_llama_index_reranker_embedding/
false
false
self
1
null
Can interfaces like ChatGPT Apps unlock how hiring can be streamlined and integrated?
0
Since the days I have started working on LLMs i have always been fascinated by the kind of new interfaces it will unlock. New interfaces solves some deep human problems. For me these interfaces are nothing but portal to enter new technologies where humans becomes the integral part and discover new ways to solve problems. One good example of trains, consider the rail tracks as “interface” where movement was enabled along with power. Another example could be of telephone where it enables a mew interface for human to huma communication (read remote). All of these new interfaces led the creation of new kind of jobs and generated massive employments and economies. I consider these LLMs are going to do the same. See for an example, I tried to validate this by creating a jobs discovering to hiring all on one interface that chat (for new lets keep chat). The deland amd supply are on the same interface solves the distribution problem. Remember, these interfaces are already helping unlocking mew intelligence and knowledge and by using right then and there is a hige advantage. I want to learn more about how humans are going to adopt such system. PS: not promoting anything here just a thought that i wanted to share.
2026-01-25T14:40:08
https://v.redd.it/g3lv717xbifg1
Secure_Echo_971
v.redd.it
1970-01-01T00:00:00
0
{}
1qmkt3g
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/g3lv717xbifg1/DASHPlaylist.mpd?a=1771944027%2CNzU5NGI5NGUxNzgzYTc3OTVhNjZmY2U2YzNmZjA0YzFlNTYxMDkzOGFlMzgyY2RhMTZmMTdhN2RlZWU5Y2JiZA%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/g3lv717xbifg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/g3lv717xbifg1/HLSPlaylist.m3u8?a=1771944027%2CNmRmMjIxODUyNjBjMjM3YzgyMTQzMmM5Mzg0Mzc0ODU0YTk0MTUyYTg4NGFlYTFmODViMDEyMzM2ZDdkNjgwNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/g3lv717xbifg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qmkt3g
/r/LocalLLaMA/comments/1qmkt3g/can_interfaces_like_chatgpt_apps_unlock_how/
false
false
https://external-preview…3c61d488f78aaa89
0
{'enabled': False, 'images': [{'id': 'OXV6YmN1endiaWZnMV-_EfW2ZahSmRZF6osdPD8-YnqP4iv3buh-k9OP1-oB', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OXV6YmN1endiaWZnMV-_EfW2ZahSmRZF6osdPD8-YnqP4iv3buh-k9OP1-oB.png?width=108&crop=smart&format=pjpg&auto=webp&s=ad75ee3d863a9a9bf4b5b837b23538997c2abe17', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OXV6YmN1endiaWZnMV-_EfW2ZahSmRZF6osdPD8-YnqP4iv3buh-k9OP1-oB.png?width=216&crop=smart&format=pjpg&auto=webp&s=0286f104a7b2f359887cb68646fae8d6f7d53e01', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OXV6YmN1endiaWZnMV-_EfW2ZahSmRZF6osdPD8-YnqP4iv3buh-k9OP1-oB.png?width=320&crop=smart&format=pjpg&auto=webp&s=a24efbf0bfa2fc94f4f89fee625a77efda481b52', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OXV6YmN1endiaWZnMV-_EfW2ZahSmRZF6osdPD8-YnqP4iv3buh-k9OP1-oB.png?width=640&crop=smart&format=pjpg&auto=webp&s=e0f231dc18000f76b0e629360b49d8015c8de16f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OXV6YmN1endiaWZnMV-_EfW2ZahSmRZF6osdPD8-YnqP4iv3buh-k9OP1-oB.png?width=960&crop=smart&format=pjpg&auto=webp&s=64e79bdabd8ea41c5bf3ebfd6e3926e8ec98c22d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OXV6YmN1endiaWZnMV-_EfW2ZahSmRZF6osdPD8-YnqP4iv3buh-k9OP1-oB.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b09525efdc605777d5be46d5aca5dc0a2fb558b5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OXV6YmN1endiaWZnMV-_EfW2ZahSmRZF6osdPD8-YnqP4iv3buh-k9OP1-oB.png?format=pjpg&auto=webp&s=c585804a60bd762cc5f4f3f5b0a69fa1dda3d939', 'width': 1920}, 'variants': {}}]}
Alternatives to Qwen3-coder-30B?
11
I have been using the qwen3-coder-30B for some time, and it is not bad. But it tends to struggle with debugging tougher issues (threading, etc). Any other models that I should try? I am running on a RTX-4090, and I just got an Ai-max-395+ 128GB. I am not looking for the best coding model. I am looking for a model that could be better at figuring out problems.
2026-01-25T14:20:56
https://www.reddit.com/r/LocalLLaMA/comments/1qmkc1j/alternatives_to_qwen3coder30b/
skibud2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmkc1j
false
null
t3_1qmkc1j
/r/LocalLLaMA/comments/1qmkc1j/alternatives_to_qwen3coder30b/
false
false
self
11
null
Local Ai note assistant?
0
Hi, I was looking around for an AI note-taking assistant, ideally one that can be hosted on my PC and integrated into something like Obsidian. Often I'm at work with my hands full and want to jot down ideas for stuff like D&D. I could use dictation, but I like the idea of asking an AI to write down a summary of an idea, or to modify an existing line or erase it altogether - like an actual assistant taking notes for me. Is this possible? Thanks.
2026-01-25T14:13:22
https://www.reddit.com/r/LocalLLaMA/comments/1qmk5dz/local_ai_note_assistant/
Giodude12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmk5dz
false
null
t3_1qmk5dz
/r/LocalLLaMA/comments/1qmk5dz/local_ai_note_assistant/
false
false
self
0
null
KV cache fix for GLM 4.7 Flash
242
tl;dr: remove Air from GLM 4.7 Flash KV cache uses a lot of VRAM. GLM 4.7 Flash doesn’t even use V in the KV cache. With long contexts, this means gigabytes of VRAM saved, so you can run much longer context on the same setup.
2026-01-25T14:06:55
https://github.com/ggml-org/llama.cpp/pull/19067
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1qmjzx1
false
null
t3_1qmjzx1
/r/LocalLLaMA/comments/1qmjzx1/kv_cache_fix_for_glm_47_flash/
false
false
default
242
{'enabled': False, 'images': [{'id': 'Yd6yP0tYXhTq7c3g8_wDa0Z1Zijr0IAXDTPXGjQc7ts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yd6yP0tYXhTq7c3g8_wDa0Z1Zijr0IAXDTPXGjQc7ts.png?width=108&crop=smart&auto=webp&s=5ed19c96c7bb1231fdcfe8cdeceede7224528c4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Yd6yP0tYXhTq7c3g8_wDa0Z1Zijr0IAXDTPXGjQc7ts.png?width=216&crop=smart&auto=webp&s=3d3d5ddc24abae9f9269cbbf497a140ffc566c40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Yd6yP0tYXhTq7c3g8_wDa0Z1Zijr0IAXDTPXGjQc7ts.png?width=320&crop=smart&auto=webp&s=9f74dfd88d7446d5ad92fcf1f66de84e9291295b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Yd6yP0tYXhTq7c3g8_wDa0Z1Zijr0IAXDTPXGjQc7ts.png?width=640&crop=smart&auto=webp&s=697845700fcf489c797d62fb0a23359703d41821', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Yd6yP0tYXhTq7c3g8_wDa0Z1Zijr0IAXDTPXGjQc7ts.png?width=960&crop=smart&auto=webp&s=f6c7617b337125162387182b8e68c242c2c1864d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Yd6yP0tYXhTq7c3g8_wDa0Z1Zijr0IAXDTPXGjQc7ts.png?width=1080&crop=smart&auto=webp&s=64f0ed9a1ba6a5654f61a0ef1956c6faf44e38fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Yd6yP0tYXhTq7c3g8_wDa0Z1Zijr0IAXDTPXGjQc7ts.png?auto=webp&s=11fb8f340a3366f69b4e42b744eeb18a921bcf5d', 'width': 1200}, 'variants': {}}]}
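Per the linked PR, GLM 4.7 Flash's attention never reads V back from the cache, so dropping it roughly halves KV-cache memory. The arithmetic behind the "gigabytes saved at long context" claim, with toy dimensions (not GLM 4.7 Flash's real config):

```python
def cache_gib(n_layers: int, kv_heads: int, head_dim: int, ctx: int,
              tensors: int = 2, bytes_per: int = 2) -> float:
    """Cache size in GiB; tensors=2 for K and V, 1 for K only (fp16 elems)."""
    return tensors * n_layers * kv_heads * head_dim * ctx * bytes_per / 2**30

full = cache_gib(32, 4, 128, 131072, tensors=2)    # K and V cached
k_only = cache_gib(32, 4, 128, 131072, tensors=1)  # V removed, per the PR
print(f"{full:.1f} -> {k_only:.1f} GiB")  # → 8.0 -> 4.0 GiB
```

The savings scale linearly with context length, which is why long-context runs benefit most.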
Understanding Multi-Head Latent Attention (MLA)
14
A short deep-dive on Multi-Head Latent Attention (MLA) (from DeepSeek): intuition + math, then a walk from MHA → GQA → MQA → MLA, with PyTorch code and the fusion/absorption optimizations for KV-cache efficiency. [http://shreyansh26.github.io/post/2025-11-08\_multihead-latent-attention/](http://shreyansh26.github.io/post/2025-11-08_multihead-latent-attention/)
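The linked post has the full math; as a rough standalone illustration (my own toy numbers, not from the article) of why the MHA → GQA → MQA → MLA progression matters, you can simply count the bytes each scheme caches per token:

```python
def kv_cache_bytes(n_layers, seq_len, n_kv_heads, head_dim, dtype_bytes=2):
    # Standard attention caches both K and V, hence the factor of 2.
    return 2 * n_layers * seq_len * n_kv_heads * head_dim * dtype_bytes

def mla_cache_bytes(n_layers, seq_len, latent_dim, dtype_bytes=2):
    # MLA caches one compressed latent per token instead of full K and V.
    return n_layers * seq_len * latent_dim * dtype_bytes

# Toy config: 32 layers, 8k context, 32 heads of dim 128, fp16 cache.
mha = kv_cache_bytes(32, 8192, n_kv_heads=32, head_dim=128)  # 4 GiB
gqa = kv_cache_bytes(32, 8192, n_kv_heads=8, head_dim=128)   # 1 GiB (4-way groups)
mqa = kv_cache_bytes(32, 8192, n_kv_heads=1, head_dim=128)   # 128 MiB
mla = mla_cache_bytes(32, 8192, latent_dim=512)              # 256 MiB (assumed d_c)
```

MLA's trick is that the latent is decompressed back into per-head K/V at compute time (or absorbed into the projection matrices), so it keeps multi-head expressiveness at near-MQA cache cost.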
2026-01-25T14:05:43
https://www.reddit.com/r/LocalLLaMA/comments/1qmjyxl/understanding_multihead_latent_attention_mla/
shreyansh26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmjyxl
false
null
t3_1qmjyxl
/r/LocalLLaMA/comments/1qmjyxl/understanding_multihead_latent_attention_mla/
false
false
self
14
null
Building a "Sovereign JARVIS" with Council-based Agents and Granular Knowledge Silos. Does this architecture exist yet?
0
I am an ex-programmer (I've been a Solution Architect for a long time now) and don't know Python, graph/vector DB concepts, or Node.js, so it will all be vibe-coded thanks to ChatGPT and/or Google AI Studio.

I'm designing a self-hosted "Life OS" (codenamed JARVIS) because I'm tired of stateless AI chats that forget context. The core architecture uses a "Council of Agents" (e.g., Psychologist, Mentor, Strategist) that all share a unified Graph + Vector memory. This means they have deep, longitudinal context: if I told the Psychologist about a failure two years ago, the Career Mentor knows not to suggest a similar path today. They can also debate each other to give me synthesized advice rather than generic answers.

The unique feature I'm building is Granular Knowledge Siloing. I can upload my entire life (PDFs, EPUBs, TXT exports of old chats, and URLs), but I have a dashboard to toggle exactly which agent has read-access to which artifact. I can let the "Business Agent" see my financial PDFs while strictly firewalling the "Dating Agent" from them. I can also cordon off specific chat sessions so they remain private to one specific persona.

I'm looking to build this using a LangGraph/Neo4j/Qdrant stack. Has anyone seen an open-source project that handles this specific mix of "Shared Memory" but "Permissioned Access" well? Of course, the permissioned access is less important than the core ideas: agents, uploading PDFs that the agents can access, and shared memory across chats that is "always" in the context window.
2026-01-25T13:51:02
https://www.reddit.com/r/LocalLLaMA/comments/1qmjmgy/building_a_sovereign_jarvis_with_councilbased/
kuteguy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmjmgy
false
null
t3_1qmjmgy
/r/LocalLLaMA/comments/1qmjmgy/building_a_sovereign_jarvis_with_councilbased/
false
false
self
0
null
Lightweight Chrome extension to use local Deepseek-R1 (Ollama) for report/news analysis
0
I wanted a light setup to read long finance reports and news with a local model, so I built a small Chrome extension on top of **Ollama + Deepseek-R1**.

Core idea:

* no API keys
* runs fully on your own machine
* integrates directly into Chrome

Features:

* Uses your local Ollama instance (Deepseek-R1 7B/8B, Deepseek-V3, etc.)
* Modes: Report / News / Q&A / Compare / Keywords / Translate / Template
* Compare multiple documents separated by `---`
* Translate (EN / KR / JP / ZH)
* Works on selected text or full page text
* History, bookmarks, CSV export

Setup (very short):

```bash
ollama serve
ollama pull deepseek-r1:7b   # or 8b
```

Set `OLLAMA_ORIGINS=*` so the browser can call [`http://localhost:11434/api/generate`](http://localhost:11434/api/generate), then load the extension via `chrome://extensions` → Developer mode → Load unpacked.

Repo (plain text to avoid spam filters): [`github.com/tomorrow2091-dot/deepseek-ai-analyzer-pro`](http://github.com/tomorrow2091-dot/deepseek-ai-analyzer-pro)

Everything runs on your own box via Ollama, no external API calls. If you're into local LLM + productivity tooling, I'd love feedback or PRs.
2026-01-25T13:40:35
https://www.reddit.com/r/LocalLLaMA/comments/1qmjdxu/lightweight_chrome_extension_to_use_local/
OPDP20211231
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmjdxu
false
null
t3_1qmjdxu
/r/LocalLLaMA/comments/1qmjdxu/lightweight_chrome_extension_to_use_local/
false
false
self
0
null
Sanity check for small-office/homelab shopping cart.
3
Hey, I'm about to purchase some equipment for prototyping and need a sanity check. Also, perhaps some of you guys have better ideas for a setup up to 5k €.

Here's a list of models I want to run:

* Qwen2.5-Math-7B-instruct
* Nemotron-Orchestrator-8B
* NuMarkdown-8B-Thinking
* Qwen3-8B
* Qwen3-Embedding-8B
* xLAM-2-32b-fc-r
* gpt-oss-120b

Being able to try \~70B dense models and large MoE would be nice, but that's not essential. My use case is process automation, so I'll likely have an orchestrator model + 2-3 8b + gpt-oss-120b or a 32b dense in memory.

There are three setups that I consider:

**Setup #1** Used Rack Server

* Gigabyte G221-Z30 Rev. A00 - 1.200€
* AMD EPYC 7402P - Included in rack server
* 256GB DDR4-3200 (8x32GB) - (2.000€)
* Radeon AI Pro R9700 32GB - (1.500€)

Sum: 4.700€

**Setup #2** Linked Strix Halo

* 2 gmktec evo-x2 128GB (2000€)

Sum: 4.000€

**Setup #3** Built from inventory

* B650 mainboard (8x/8x PCIe 4.0, should be fine), from inventory
* 64GB DDR5@5600, from inventory
* Additional Ryzen 7900X or consumer Epyc \~400€
* 2 x Radeon AI Pro R9700 (1500€)

Sum: 3.400€

I'm currently leaning towards **#3**. It's short on RAM and large MoE experimentation is out of the question. But I can use the two R9700s for an actual production build should the need arise, and it's the cheapest. **#2** is the easiest solution but doesn't scale at all. **#1** would probably be the overall best, but I have a hard time justifying to myself paying 2k for DDR4 RAM.

Any thoughts on my horrible financial decisions?
2026-01-25T13:34:22
https://www.reddit.com/r/LocalLLaMA/comments/1qmj900/sanity_check_for_smallofficehomelab_shopping_cart/
artisticMink
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmj900
false
null
t3_1qmj900
/r/LocalLLaMA/comments/1qmj900/sanity_check_for_smallofficehomelab_shopping_cart/
false
false
self
3
null
What do you actually want from a private AI chat on your phone?
232
Hey friends. We are building zerotap - an Android app where AI can control your phone like a human (taps, scrolls, reads screen). It supports Ollama, proxies like OpenRouter and Straico and models directly such as OpenAI, Claude, Gemini and DeepSeek. Recently we added a chat interface, so now it works like a regular AI chat that can take over your device when needed. Now we are planning what to focus on next and we'd love your input. Some options we're considering: * **MCP servers** \- connect your chat to external tools and services * **Deep research** \- letting the AI browse and gather information for you * **Multi-modality** — image read & write (generation) * **On-device models** — we are working on Gemma 3n and Qwen support, but small context windows are hurting performance so much Speaking of which - for those of you running Ollama: do you expose your instance to the internet or keep it local network only? Honest question: what would make an AI chat on your phone actually useful for you on a daily basis? Not as a toy, but as something you would rely on - what's missing from current mobile AI apps (that supports ollama) that annoys you the most?
2026-01-25T13:12:00
https://v.redd.it/em3174huthfg1
AppDeveloperAsdf
v.redd.it
1970-01-01T00:00:00
0
{}
1qmir5d
true
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/em3174huthfg1/DASHPlaylist.mpd?a=1771938733%2CZGM5YWUxNGFjZjM4NDAyMDBmYTI3NmY3YzlkNThmNDUwNDI5MzdlMjNmMjY5ZTQxNjUxZjYyMGM4MGJmN2I0Mw%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/em3174huthfg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/em3174huthfg1/HLSPlaylist.m3u8?a=1771938733%2CNTNhOGFkN2FkMDM3Mjg5YjY4MzllYzllNTVhNDUwNDA2ZDk4M2E2ZWM4NTBhYTFlNDliNjFiMTUwODAwYzg3Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/em3174huthfg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1qmir5d
/r/LocalLLaMA/comments/1qmir5d/what_do_you_actually_want_from_a_private_ai_chat/
false
false
https://external-preview…f28caf629556fd76
232
{'enabled': False, 'images': [{'id': 'b2k1d3JkaHV0aGZnMbzKSbNJeiRdJL3Vv6uz8BgUY-ES1g_l6yTqUuzYy_d7', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/b2k1d3JkaHV0aGZnMbzKSbNJeiRdJL3Vv6uz8BgUY-ES1g_l6yTqUuzYy_d7.png?width=108&crop=smart&format=pjpg&auto=webp&s=9d2d15a689c495f362b5c39f669d59cd1c68a9d2', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/b2k1d3JkaHV0aGZnMbzKSbNJeiRdJL3Vv6uz8BgUY-ES1g_l6yTqUuzYy_d7.png?width=216&crop=smart&format=pjpg&auto=webp&s=70adc0b75ccf35c234128fecf06914faa619832f', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/b2k1d3JkaHV0aGZnMbzKSbNJeiRdJL3Vv6uz8BgUY-ES1g_l6yTqUuzYy_d7.png?width=320&crop=smart&format=pjpg&auto=webp&s=ff8f95e53ed9baef68aa420c2ff0d3c3094e9c2a', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/b2k1d3JkaHV0aGZnMbzKSbNJeiRdJL3Vv6uz8BgUY-ES1g_l6yTqUuzYy_d7.png?width=640&crop=smart&format=pjpg&auto=webp&s=417fc696af038c7135ec470cad77d7be0fe84384', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/b2k1d3JkaHV0aGZnMbzKSbNJeiRdJL3Vv6uz8BgUY-ES1g_l6yTqUuzYy_d7.png?width=960&crop=smart&format=pjpg&auto=webp&s=6286c440602947df0884dfdb576b9d7eb2959966', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/b2k1d3JkaHV0aGZnMbzKSbNJeiRdJL3Vv6uz8BgUY-ES1g_l6yTqUuzYy_d7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=25b6319bba1ebacd64fb4a5a07bb309a4d9dcd2c', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/b2k1d3JkaHV0aGZnMbzKSbNJeiRdJL3Vv6uz8BgUY-ES1g_l6yTqUuzYy_d7.png?format=pjpg&auto=webp&s=8eee4c27e581e5c56e2591a358885a610021a5b7', 'width': 1080}, 'variants': {}}]}
Northridge Fix guide on upgrading 4090's to 48GB.
6
2026-01-25T13:11:34
https://youtu.be/qP2s6jD7saA
thisisnotdave
youtu.be
1970-01-01T00:00:00
0
{}
1qmiqsi
false
{'oembed': {'author_name': 'northwestrepair', 'author_url': 'https://www.youtube.com/@northwestrepair', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/qP2s6jD7saA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="GPU upgrade. How its done ? Does it work ? What GPU can be upgraded ?"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/qP2s6jD7saA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'GPU upgrade. How its done ? Does it work ? What GPU can be upgraded ?', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qmiqsi
/r/LocalLLaMA/comments/1qmiqsi/northridge_fix_guide_on_upgrading_4090s_to_48gb/
false
false
default
6
null
openrouter + chromadb: simple rag chat over your personal markdown notes
2
\- made a simple tool to chat on my digital notes / journal
2026-01-25T12:55:21
https://github.com/gamedevCloudy/reorlike
gamedev_cloudy
github.com
1970-01-01T00:00:00
0
{}
1qmie46
false
null
t3_1qmie46
/r/LocalLLaMA/comments/1qmie46/openrouter_chromadb_simple_rag_chat_over_your/
false
false
default
2
{'enabled': False, 'images': [{'id': '5dTOvAp7OGYgVA8fNdSUke8oGvVIMBuWcIZb9QAI0DY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5dTOvAp7OGYgVA8fNdSUke8oGvVIMBuWcIZb9QAI0DY.png?width=108&crop=smart&auto=webp&s=a2c06dd884349f329854f84bd744d564872e6163', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5dTOvAp7OGYgVA8fNdSUke8oGvVIMBuWcIZb9QAI0DY.png?width=216&crop=smart&auto=webp&s=e36a73bcab975464f287d7168a6fc565e95679ea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5dTOvAp7OGYgVA8fNdSUke8oGvVIMBuWcIZb9QAI0DY.png?width=320&crop=smart&auto=webp&s=53a17f87797b5ff222af4fec648fb200ab0f6128', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5dTOvAp7OGYgVA8fNdSUke8oGvVIMBuWcIZb9QAI0DY.png?width=640&crop=smart&auto=webp&s=b31065ce6801cbe50176a73ce18139c306dc1777', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5dTOvAp7OGYgVA8fNdSUke8oGvVIMBuWcIZb9QAI0DY.png?width=960&crop=smart&auto=webp&s=bb96bda3b7d330f266b56b979e0a62a449079ea7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5dTOvAp7OGYgVA8fNdSUke8oGvVIMBuWcIZb9QAI0DY.png?width=1080&crop=smart&auto=webp&s=ca7737a73c4e0cabde98d74ca593c10adf434c9b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5dTOvAp7OGYgVA8fNdSUke8oGvVIMBuWcIZb9QAI0DY.png?auto=webp&s=9c614eb2e9889a1d16f978792388a50a73eec2c2', 'width': 1200}, 'variants': {}}]}
what happens when you give the world agent level access to your macbook (unauthenticated)
0
Spent the last few days looking at the deployment surface for Clawdbot, an open-source AI agent gateway that's been gaining traction lately. Used Shodan/Censys to fingerprint exposed instances via the Control UI's HTML signature and found a few hundred internet-facing deployments. Many had some protection in place. But the ones that didn't were rough. **What I found on the worst instances** * Full configuration dumps with Anthropic API keys, Telegram bot tokens, Slack OAuth credentials * Complete conversation histories going back months * Signal device linking URIs sitting in world-readable temp files (tap it and you're paired to their account) * Command execution enabled, running as root, no authentication required **The bug** Localhost connections auto-approve without authentication. Sensible for local dev, problematic when you're behind nginx or Caddy on the same box. Every connection arrives from [127.0.0.1](http://127.0.0.1), every connection gets treated as local, every connection gets auto-approved. Classic proxy misconfiguration pattern. Fix is submitted, PR pending. **The bigger picture** The bug itself is whatever. Bugs happen. What's interesting is what this deployment surface tells us about where we're heading with AI agents. These systems require message access, credential storage, command execution, and persistent state to function. Every one of those is attack surface we're adding by design because that's the value proposition. **Full writeup here** [**https://x.com/theonejvo/status/2015401219746128322**](https://x.com/theonejvo/status/2015401219746128322) If you're running Clawdbot behind a reverse proxy, configure `gateway.auth.password` or `gateway.trustedProxies` today.
2026-01-25T12:42:19
https://i.redd.it/2hbvx93uqhfg1.png
theonejvo
i.redd.it
1970-01-01T00:00:00
0
{}
1qmi4g7
false
null
t3_1qmi4g7
/r/LocalLLaMA/comments/1qmi4g7/what_happens_when_you_give_the_world_agent_level/
false
false
default
0
{'enabled': True, 'images': [{'id': '2hbvx93uqhfg1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/2hbvx93uqhfg1.png?width=108&crop=smart&auto=webp&s=05a486044de6862492ded8f7677f9c30a6385e33', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/2hbvx93uqhfg1.png?width=216&crop=smart&auto=webp&s=835c4644ee7856ec492e13cbfd8e4ce2fbe37bb2', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/2hbvx93uqhfg1.png?width=320&crop=smart&auto=webp&s=ab0ebc9680cb4a90c53d9c26d1e4e79e9a336ad5', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/2hbvx93uqhfg1.png?width=640&crop=smart&auto=webp&s=c5006e99e495d68e52259682e4d8c32af78aec92', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/2hbvx93uqhfg1.png?width=960&crop=smart&auto=webp&s=f33cb004468641acc5af00a09428974f6d24ab70', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/2hbvx93uqhfg1.png?width=1080&crop=smart&auto=webp&s=6f16e143fd3e8a081530eb4214be42f699ceeac9', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/2hbvx93uqhfg1.png?auto=webp&s=6631c2045bbba9f12d027dc1ea3f34008b5097a4', 'width': 1536}, 'variants': {}}]}
Quantifying Hallucinations: By calculating a multi-dimensional 'Trust Score' for LLM outputs.
23
**The problem:** You build a RAG system. It gives an answer. It sounds right. But is it actually grounded in your data, or just hallucinating with confidence? A single "correctness" or "relevance" score doesn’t cut it anymore, especially in enterprise, regulated, or governance-heavy environments. We need to know why it failed. **My solution:** Introducing **TrustifAI** – a framework designed to quantify, explain, and debug the trustworthiness of AI responses. Instead of pass/fail, it computes a multi-dimensional Trust Score using signals like: \* Evidence Coverage: Is the answer actually supported by retrieved documents? \* Epistemic Consistency: Does the model stay stable across repeated generations? \* Semantic Drift: Did the response drift away from the given context? \* Source Diversity: Is the answer overly dependent on a single document? \* Generation Confidence: Uses token-level log probabilities at inference time to quantify how confident the model was while generating the answer (not after judging it). **Why this matters:** TrustifAI doesn’t just give you a number - it gives you traceability. It builds **Reasoning Graphs (DAGs)** and **Mermaid visualizations** that show why a response was flagged as reliable or suspicious. **How is this different from LLM Evaluation frameworks:** All popular Eval frameworks measure how good your RAG system is, but TrustifAI tells you why you should (or shouldn’t) trust a specific answer - with explainability in mind. Since the library is in its early stages, I’d genuinely love community feedback. ⭐ the repo if it helps 😄 **Get started:** `pip install trustifai` **Github link:** [https://github.com/Aaryanverma/trustifai](https://github.com/Aaryanverma/trustifai)
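I haven't looked at how TrustifAI computes its "Generation Confidence" internally, but a common way to collapse token-level log probabilities into one score is the geometric mean of token probabilities; a minimal sketch under that assumption:

```python
import math

def generation_confidence(token_logprobs):
    # Geometric-mean token probability: exp(mean of natural-log probs).
    # Returns a value in (0, 1]; 1.0 means every token had probability 1.
    return math.exp(sum(token_logprobs) / len(token_logprobs))

print(generation_confidence([0.0, 0.0, 0.0]))       # fully confident: 1.0
print(round(generation_confidence([-1.0] * 4), 4))  # hesitant: 0.3679
```

Most serving stacks (e.g. vLLM, llama.cpp) can return per-token logprobs at inference time, so a signal like this costs essentially nothing extra to compute.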
2026-01-25T12:30:24
https://www.reddit.com/gallery/1qmhvuz
Charming_Group_2950
reddit.com
1970-01-01T00:00:00
0
{}
1qmhvuz
false
null
t3_1qmhvuz
/r/LocalLLaMA/comments/1qmhvuz/quantifying_hallucinations_by_calculating_a/
false
false
default
23
null
What happened to moondream3?
16
So last year the moondream 3 preview came out. It was a nice performing visual model that could do some cool stuff other VL models couldn't. One month ago a MLX version appeared https://huggingface.co/moondream/md3p-int4 but until now there is no llama.cpp implementation and no public activity I could find.
2026-01-25T11:49:01
https://www.reddit.com/r/LocalLLaMA/comments/1qmh3si/what_happened_to_moondream3/
StableDiffer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmh3si
false
null
t3_1qmh3si
/r/LocalLLaMA/comments/1qmh3si/what_happened_to_moondream3/
false
false
self
16
{'enabled': False, 'images': [{'id': 'FO8Zil8gyqojuNkMoqIo0xiti69xqTriNvKgeXAHxVA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FO8Zil8gyqojuNkMoqIo0xiti69xqTriNvKgeXAHxVA.png?width=108&crop=smart&auto=webp&s=64bb8d11342c4b69fc5db1552da6a28fc796b472', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FO8Zil8gyqojuNkMoqIo0xiti69xqTriNvKgeXAHxVA.png?width=216&crop=smart&auto=webp&s=30b81170f4d865dc1c7a745d2217f399607f95d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FO8Zil8gyqojuNkMoqIo0xiti69xqTriNvKgeXAHxVA.png?width=320&crop=smart&auto=webp&s=9a500f6ad9102cfc36db249f347752edabc5611c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FO8Zil8gyqojuNkMoqIo0xiti69xqTriNvKgeXAHxVA.png?width=640&crop=smart&auto=webp&s=0cba49f7e2178206dc3b0292449715632eb365f4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FO8Zil8gyqojuNkMoqIo0xiti69xqTriNvKgeXAHxVA.png?width=960&crop=smart&auto=webp&s=a9b6c4613f772bd50c1237822c43b3d4c97df440', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FO8Zil8gyqojuNkMoqIo0xiti69xqTriNvKgeXAHxVA.png?width=1080&crop=smart&auto=webp&s=156fba542e26b29a263abcddcad23247e9c89415', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FO8Zil8gyqojuNkMoqIo0xiti69xqTriNvKgeXAHxVA.png?auto=webp&s=cb8e1c993efc1da41a44e1d9254c19114d9172bc', 'width': 1200}, 'variants': {}}]}
How to make open-source models more Claude-like?
0
I really love open-source models because of their accessibility and price, but there is something no one can deny: in comparison with other models, Claude has *taste*. It feels like you are talking to an actual human and it understands the nuances. I am asking whether you guys have any system prompts or just recommendations for models that approach this human-like quality? Thanks!
2026-01-25T11:48:50
https://www.reddit.com/r/LocalLLaMA/comments/1qmh3nk/how_to_make_opensource_models_more_claudelike/
1234filip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmh3nk
false
null
t3_1qmh3nk
/r/LocalLLaMA/comments/1qmh3nk/how_to_make_opensource_models_more_claudelike/
false
false
self
0
null
A model for 12 GB RAM + 3 GB VRAM + GTX 1050.
0
Well, I asked ChatGPT and it recommended Llama 3.1 8B (Q2/Q3), but that's too old and dumb for 2026. And then it selected TinyLlama, which I hate too much.

**Clear single recommendation:** **DeepSeek-R1:7B (quantized, e.g., Q4\_K\_M) running via Ollama** 💀💀💀💀💀💀

This model strikes the best practical balance between being **lightweight, stable, optimized for low VRAM (3GB on your GTX 1050)**, and usable for local AI tools like **Ollama or LM Studio** on **Linux Mint** with CPU fallback support.

# Why this choice fits your system

* **Low VRAM suitability:** The **7B quantized variant** (like Q4\_K\_M) compresses weights into the \~4–5GB range, which fits low-VRAM GPUs when combined with partial CPU offload.
* **Runs locally well:** Users report **DeepSeek-R1:7B** running even on very modest machines, indicating it will work on your hardware, especially with quantization.
* **Ollama support:** It’s available in Ollama’s library, making setup straightforward on Linux and compatible with tools like LM Studio.
* **Balanced quality:** It offers significantly better reasoning and coherence than tiny models without demanding high memory, avoiding the gibberish outputs common in ultra-tiny models.

Well, that's perfect for me, since it was released in 2025 and DeepSeek R1 (1.5b, 7b and 8b) can run, or even 14b, but it'll be very slow.
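As a back-of-envelope sanity check on the "\~4–5GB" claim above (my own arithmetic; I'm assuming Q4\_K\_M averages roughly 4.85 bits per weight, a commonly quoted figure that isn't in the post):

```python
def quant_size_gb(n_params_billion, bits_per_weight=4.85):
    # params * bits-per-weight / 8 bits-per-byte, ignoring tokenizer/metadata overhead
    return n_params_billion * bits_per_weight / 8

print(round(quant_size_gb(7), 2))   # a 7B model at Q4_K_M: about 4.24 GB
print(round(quant_size_gb(14), 2))  # a 14B model: about 8.49 GB
```

So the 7B quant only partially fits in 3GB of VRAM; the rest has to be offloaded to system RAM, which is exactly the partial CPU offload the recommendation relies on, and why 14b will be slow.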
2026-01-25T11:41:07
https://www.reddit.com/r/LocalLLaMA/comments/1qmgyoq/a_model_for_12_gb_ram_3_gb_vram_gtx_1050/
Ok-Type-7663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmgyoq
false
null
t3_1qmgyoq
/r/LocalLLaMA/comments/1qmgyoq/a_model_for_12_gb_ram_3_gb_vram_gtx_1050/
false
false
self
0
null
Looking for a cost-effective laptop to run LLMs locally (budget ~₹120,000)
0
Hi everyone — I’m looking for suggestions on a laptop that can run LLMs locally (LLaMA, Mistral, Qwen, etc.) without breaking the bank. My budget is around ₹120,000. My priorities: • Best price-to-performance for running quantized models (7B–13B) • Good local inference performance (GPU VRAM matters) • Upgradeability (RAM/SSD) and reliability • I’m fine with something heavier — portability is secondary • Used/refurbished options are OK if they’re a good deal What I plan to do: • Run quantized inference and light fine-tuning / RAG workflows • Mostly offline/local work (no heavy gaming required) Desired baseline specs (flexible): • GPU with decent VRAM (preferably NVIDIA; more VRAM = more model headroom) • 32 GB RAM (or upgradable to 32 GB) • Fast NVMe SSD • Good Linux compatibility is a plus Budget: \~₹120,000 (open to small stretch for strong value) Would love advice on: • Specific laptop models or used workstation/gaming laptops worth looking for in India • Whether to prioritize GPU VRAM vs. CPU cores vs. RAM in this price range • Any “avoid this” models or gotchas (thermal throttling, poor Linux support, soldered RAM, etc.) Thanks — I appreciate real-world experience from people actually running models locally 🙏
2026-01-25T11:30:12
https://www.reddit.com/r/LocalLLaMA/comments/1qmgrol/looking_for_a_costeffective_laptop_to_run_llms/
sgangesh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmgrol
false
null
t3_1qmgrol
/r/LocalLLaMA/comments/1qmgrol/looking_for_a_costeffective_laptop_to_run_llms/
false
false
self
0
null
Looking to run 'unsloth/GLM-4.7-Flash-REAP-23B-A3B-GGUF' locally.
0
My use case is mostly coding. My laptop specs are: i5-11400H CPU with 32GB DDR4 RAM, and an RTX 3060 GPU with 6GB VRAM. Looking for guidance on how to run the smallest good-quality quant of this model the fastest. I can see on Hugging Face that there are a lot of local apps available to run this. I will be using a WSL environment for coding and hence need to run it there. Thinking of running a Q4 quant via Ollama. Is that the best way? Need guidance, please. https://preview.redd.it/tgq7rkrebhfg1.png?width=749&format=png&auto=webp&s=82b046fc14bfd3d3ca221438921697ad5f3fd73a https://preview.redd.it/xm9omzw9bhfg1.png?width=376&format=png&auto=webp&s=f197316eb054bc2a01b5cf9bb226f86e0617124f
2026-01-25T11:17:39
https://www.reddit.com/r/LocalLLaMA/comments/1qmgjka/looking_to_run_unslothglm47flashreap23ba3bgguf/
Deep-Vermicelli-4591
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmgjka
false
null
t3_1qmgjka
/r/LocalLLaMA/comments/1qmgjka/looking_to_run_unslothglm47flashreap23ba3bgguf/
false
false
https://b.thumbs.redditm…p5jDOAJ6PrSc.jpg
0
null
Looking for advice on a professional local AI workstation (ComfyUI, broadcast / video production)
2
Hi everyone, I work in an audio/video production company that is starting to move into the AI space from a more professional and broadcast-oriented perspective. The goal is to build a powerful on-prem workstation to work locally with ComfyUI, image and video generation pipelines, and custom workflows that can be integrated into a production environment. I'm new to this field, so I apologize in advance if this build looks overkill or if I've made some wrong assumptions. I tried to put together a parts list that *made sense to me* based on what I've learned so far, but I'm here specifically to get advice from people who are more experienced than me with local AI systems before committing to such a large investment.

Here's the current hardware configuration I'm considering:

GPUs

* 2× NVIDIA RTX PRO 6000 Blackwell – 96GB VRAM each (workstation edition, ECC)

CPU

* AMD Ryzen Threadripper PRO 9975WX

Motherboard

* ASUS PRO WS WRX90E-SAGE SE

Memory

* 384GB RAM (6×64GB) HP Synergy 480 Gen11, DDR5-5600 ECC Registered

PSU

* Seasonic PRIME PX-2200, 2200W, ATX 3.1, 80+ Platinum

Case

* be quiet! Dark Base 901

Estimated total cost is around €32,000.

What I'm mainly hoping to get feedback on is whether this build actually makes sense as a *whole*. I'd really appreciate having any potential bottlenecks or incompatibilities pointed out, especially around PCIe lanes, RAM configuration, power delivery, or thermals. I'm also unsure if the Threadripper PRO 9975WX is the right choice for this kind of workload compared to other WRX90 options, and whether this case and cooling approach are realistic for running dual RTX PRO 6000s reliably.

More generally, I'd love suggestions for components that work well together for a stable, production-ready local AI workstation, even if that means changing major parts of the build. If there are better or more sensible alternatives at a similar price point for working locally with ComfyUI, I'm absolutely open to rethinking everything.
And of course, any classic “you’re about to regret this” warnings are more than welcome. Again, sorry in advance if I’ve overdone it or chosen the wrong components, that’s exactly why I’m here. Thanks a lot to anyone willing to share their experience and help steer me in the right direction.
2026-01-25T10:28:44
https://www.reddit.com/r/LocalLLaMA/comments/1qmfp2b/looking_for_advice_on_a_professional_local_ai/
rejis_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmfp2b
false
null
t3_1qmfp2b
/r/LocalLLaMA/comments/1qmfp2b/looking_for_advice_on_a_professional_local_ai/
false
false
self
2
null
[Open Source] I built a new "Awesome" list for Nanobanana Prompts (1000+ items, sourced from X trends)
0
I've noticed that while there are a few prompt collections for the Nanobanana model, many of them are either static or outdated. so I decided to build and open-source a new "Awesome Nanobanana Prompts" project **Repo: jau123/nanobanana-trending-prompts** Why is this list different? 1. Scale & Freshness: It already contains 1,000+ curated prompts and I'm committed to updating it weekly 2. Community Vetted: Unlike random generation dumps, these prompts are scraped from trending posts on X (Twitter). They are essentially "upvoted" by real users before they make it into this list 3. Developer Friendly: I've structured everything into a JSON dataset *Note: Raw data may contain ads or low-quality content. I'm continuously filtering and curating. If you spot issues, please open an issue* *Heads up: Since prompts are ranked by engagement, you'll notice a fair amount of attractive women in the results — and this is after I've already filtered out quite a bit.*
2026-01-25T10:10:17
https://www.reddit.com/r/LocalLLaMA/comments/1qmfdm9/open_source_i_built_a_new_awesome_list_for/
Deep-Huckleberry-752
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmfdm9
false
null
t3_1qmfdm9
/r/LocalLLaMA/comments/1qmfdm9/open_source_i_built_a_new_awesome_list_for/
false
false
self
0
null
Blazing fast JSON extraction with very small LLMs-3B: LSTM to LLM
18
Hey everyone, I've learned a lot from this sub, so I wanted to give back by sharing my experience on a recent project. My goal was to migrate a text extraction pipeline from LSTM to an LLM. The task involves extracting specific data into JSON format from small text inputs (≈1024 tokens). I used in-house data to fine-tune it. **Constraints & Achievements (running on an L4 GPU):** * **Very low end2end latency:** <500ms * **High throughput:** ≈30 RPM (requests per minute) * **Reliability:** 0.99 accuracy **The Model:** I tested quite a few models for this task. Ultimately, HuggingFaceTB/SmolLM3-3B was the best fit for our needs. I also had very strong results with Qwen/Qwen3-4B-Instruct and Ministral **Here is what I learned:** * **Fine-tuning parameters matter less than I thought:** I didn't see huge gains from strictly tweaking hyperparameters. I ran extensive hyperparameter optimization only to find that simply increasing the number of epochs yielded the best (slight) improvements. * **Data is king:** Poor labeling logic and bad data quality hurt me the most. If I had to redo it, I would spend much more time cleaning and validating the dataset upfront. * **Small LLMs struggle with Proper Nouns:** I noticed about a 10% error rate on names! A significant performance boost came from adding a simple post-processing step using Levenshtein distance to correct names extracted by the LLM against the input text (correcting "Jammes" -> "James"). **Efficiency Gains:** * **Quantization:** Obviously the best bang for your buck. I recommend **FP8** using llm-compressor if you have a Lovelace GPU or newer. Otherwise, **AWQ** is solid. * Gain: \~50% speed boost. * **Output Formatting:** You want to generate as few tokens as possible. Instead of fine-tuning for a verbose JSON output like {"key1": "value1", "key2": "value2"}, I fine-tuned the model to output just the values: value1,value2. * Gain: \~30% speed boost. 
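The Levenshtein post-processing step described above might look roughly like this (a reconstruction; the function names and the max-distance threshold are my assumptions, not the author's code): snap each extracted value to the closest word in the source text when the edit distance is small.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, O(len(a) * len(b)).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def snap_to_source(value: str, source_text: str, max_dist: int = 2) -> str:
    # Only snap when the nearest source token is plausibly the same word.
    best = min(source_text.split(), key=lambda tok: levenshtein(value, tok))
    return best if levenshtein(value, best) <= max_dist else value

print(snap_to_source("Jammes", "my name is James"))  # corrected to: James
```

Because the correction is constrained to words that actually appear in the input, it can only ever move the output closer to the source document, never introduce new hallucinations.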
**What didn't work (for me):** I really tried to make **Speculative Decoding** work with vLLM. In theory, I expected gains even with just n-gram speculative decoding, but I didn't observe any improvement. I did see some speedup using Qwen 0.7B draft model, but since I ultimately chose a different base model architecture, I couldn't use them effectively. Plus, maintaining a base model + a draft model is a pain, which is also why I didn't go with Eagle. If you have suggestions to squeeze out more performance or thoughts on the setup, I'm all ears!
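The name-correction step the post describes can be sketched in a few lines. This is an illustrative reconstruction, not the author's code: `correct_name` and the `max_dist` threshold are hypothetical names, and the tokenization is deliberately naive.

```python
# Hedged sketch of the Levenshtein post-processing step: snap an
# LLM-extracted name back onto a token that actually appears in the
# input text (e.g. "Jammes" -> "James").

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct_name(extracted: str, source_text: str, max_dist: int = 2) -> str:
    """Return the closest source-text token if it is within max_dist edits."""
    candidates = {tok.strip(".,;:!?\"'") for tok in source_text.split()}
    best = min(candidates, key=lambda tok: levenshtein(extracted.lower(), tok.lower()))
    return best if levenshtein(extracted.lower(), best.lower()) <= max_dist else extracted
```

If no token is close enough, the extracted value passes through unchanged, so the step can only fix near-misses, never invent new names.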
2026-01-25T09:40:42
https://www.reddit.com/r/LocalLLaMA/comments/1qmevh7/blazing_fast_json_extraction_with_very_small/
memphet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmevh7
false
null
t3_1qmevh7
/r/LocalLLaMA/comments/1qmevh7/blazing_fast_json_extraction_with_very_small/
false
false
self
18
null
[R] Open-sourcing an unfinished research project: A Self-Organizing, Graph-Based Alternative to Transformers (Looking for feedback or continuation
19
Hi everyone, I'm sharing a research project I worked on over a long period but had to pause for personal reasons. Rather than letting it sit idle, I wanted to open it up to the community, whether for technical feedback, critique, or for anyone interested in continuing or experimenting with it.

The main project is called Self-Organizing State Model (SOSM): https://github.com/PlanetDestroyyer/Self-Organizing-State-Model

At a high level, the goal was to explore an alternative to standard Transformer attention by:

- Using graph-based routing instead of dense attention
- Separating semantic representation and temporal pattern learning
- Introducing a hierarchical credit/attribution mechanism for better interpretability

The core system is modular and depends on a few supporting components:

- Semantic representation module (MU): https://github.com/PlanetDestroyyer/MU
- Temporal pattern learner (TEMPORAL): https://github.com/PlanetDestroyyer/TEMPORAL
- Hierarchical / K-1 self-learning mechanism: https://github.com/PlanetDestroyyer/self-learning-k-1

I'm honestly not sure how valuable or novel this work is; that's exactly why I'm posting it here. If nothing else, I'd really appreciate constructive criticism, architectural feedback, or pointers to related work that overlaps with these ideas. If someone finds parts of it useful (or wants to take it further, refactor it, or formalize it into a paper), they're more than welcome to do so. The project is open source, and I'm happy to answer questions or clarify intent where needed. Thanks for taking a look.

Summary: This work explores a language model architecture based on structured semantics rather than unstructured embeddings. Instead of positional encodings, a temporal learning module is used to model sequence progression and context flow. A K-1 hierarchical system is introduced to provide interpretability, enabling analysis of how a token is predicted and which components, states, or nodes contribute to that prediction. Most importantly, rather than comparing every token with all others (as in full self-attention), the model uses a graph-based connection mechanism that restricts computation to only the most relevant or necessary tokens, enabling selective reasoning and improved efficiency. (I used Claude Code to write the implementation.)
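The "attend only along graph edges" idea can be illustrated independently of the repo. This is a generic sketch of sparsely masked attention, not the project's actual code; the function names and the adjacency-list representation are my own.

```python
# Illustrative sketch: attention restricted by an explicit adjacency list
# instead of dense all-pairs comparison. Each token i only attends to the
# neighbour indices its graph allows, so the softmax runs over |edges|
# scores rather than |tokens| scores.
import math

def graph_attention(score_fn, values, neighbours):
    """score_fn(i, j) -> float; values: list of float vectors;
    neighbours: per-token list of indices that token may attend to."""
    out = []
    for i, allowed in enumerate(neighbours):
        raw = [score_fn(i, j) for j in allowed]
        m = max(raw)                               # stabilized softmax
        exps = [math.exp(r - m) for r in raw]
        z = sum(exps)
        weights = [e / z for e in exps]
        dim = len(values[0])
        out.append([sum(w * values[j][d] for w, j in zip(weights, allowed))
                    for d in range(dim)])
    return out
```

With full adjacency this degrades to ordinary attention; with sparse adjacency the per-token cost is proportional to the node's degree, which is the efficiency argument the summary makes.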
2026-01-25T09:40:12
https://www.reddit.com/r/LocalLLaMA/comments/1qmev6q/r_opensourcing_an_unfinished_research_project_a/
WriedGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmev6q
false
null
t3_1qmev6q
/r/LocalLLaMA/comments/1qmev6q/r_opensourcing_an_unfinished_research_project_a/
false
false
self
19
{'enabled': False, 'images': [{'id': 'gs06HcM0ZldvxCHLNPzGcBISCciF_j34LoCaWv_pMYs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gs06HcM0ZldvxCHLNPzGcBISCciF_j34LoCaWv_pMYs.png?width=108&crop=smart&auto=webp&s=e391c28780365e00cb0206b66baf2b420359a28d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gs06HcM0ZldvxCHLNPzGcBISCciF_j34LoCaWv_pMYs.png?width=216&crop=smart&auto=webp&s=67023ca29861bb82e3ad4efc38056b723eab0ed3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gs06HcM0ZldvxCHLNPzGcBISCciF_j34LoCaWv_pMYs.png?width=320&crop=smart&auto=webp&s=f079cc207aeac155aa2660b285d339b764ccd62b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gs06HcM0ZldvxCHLNPzGcBISCciF_j34LoCaWv_pMYs.png?width=640&crop=smart&auto=webp&s=f1050a88eea980a0b352c8a7e831e96844230593', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gs06HcM0ZldvxCHLNPzGcBISCciF_j34LoCaWv_pMYs.png?width=960&crop=smart&auto=webp&s=90e962ef312bef79ab2dde60e00daf5255f0e26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gs06HcM0ZldvxCHLNPzGcBISCciF_j34LoCaWv_pMYs.png?width=1080&crop=smart&auto=webp&s=9df4f5a65138dd7fb81891f4fd96a6dd2133f588', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gs06HcM0ZldvxCHLNPzGcBISCciF_j34LoCaWv_pMYs.png?auto=webp&s=10c15ad18fc5349a94e12bce7a286bd27b9efc27', 'width': 1200}, 'variants': {}}]}
AI almost nuked my workspace, so I built a safety layer tool
0
I honestly can't imagine daily work without AI agents anymore, but sometimes one goes "crazy" and messes up files (don't ask me how I know... 😭). Last weekend, after playing soccer and heading home, the idea suddenly hit me. I couldn't stop thinking about it, so I spent the entire weekend hacking together something I really wanted: AgentShield. AgentShield puts a shield around your workspace, monitoring file changes in real time. When things get messed up, you can *roll back with one click*, a "regret pill" for agent accidents. I designed both a CLI version and a desktop version. This tool is dedicated to all the brothers who've been burned by AI. It's free and open source: https://github.com/tomsun28/agentshield PS: Only tested on macOS so far. If it breaks in your setup, just clone the repo and let agents fix it 😄
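The snapshot/rollback core can be sketched with nothing but the standard library. This is not AgentShield's implementation (which also watches changes in real time), just a minimal illustration of the "regret pill" mechanic under assumed function names:

```python
# Minimal sketch: copy the workspace aside before an agent runs,
# restore it wholesale if the agent mangles things.
import shutil, tempfile, pathlib

def snapshot(workspace: str) -> str:
    """Copy the workspace into a fresh temp dir; return the snapshot path."""
    dest = pathlib.Path(tempfile.mkdtemp(prefix="agentshield-")) / "snap"
    shutil.copytree(workspace, dest)
    return str(dest)

def rollback(workspace: str, snap: str) -> None:
    """Replace the (possibly mangled) workspace with the saved snapshot."""
    shutil.rmtree(workspace)
    shutil.copytree(snap, workspace)
```

Real tools would snapshot incrementally (hashes or copy-on-write) rather than a full copy, but the rollback contract is the same.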
2026-01-25T09:16:52
https://i.redd.it/f2zd9d0mngfg1.png
No-Photograph-2100
i.redd.it
1970-01-01T00:00:00
0
{}
1qmeh39
false
null
t3_1qmeh39
/r/LocalLLaMA/comments/1qmeh39/ai_almost_nuked_my_workspace_so_i_built_a_safety/
false
false
default
0
{'enabled': True, 'images': [{'id': 'f2zd9d0mngfg1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/f2zd9d0mngfg1.png?width=108&crop=smart&auto=webp&s=f8d665fa82ce578172e7ac6c566631c6eb3983b0', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/f2zd9d0mngfg1.png?width=216&crop=smart&auto=webp&s=ac71f92f5d2ee2062403615fb249909e22ec5705', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/f2zd9d0mngfg1.png?width=320&crop=smart&auto=webp&s=f640e34b2137a72d89f251a6375a198abaf60c9c', 'width': 320}, {'height': 467, 'url': 'https://preview.redd.it/f2zd9d0mngfg1.png?width=640&crop=smart&auto=webp&s=c3ff9e0a48198e8ac1fad84bdb9ff31ddf42c0fb', 'width': 640}, {'height': 701, 'url': 'https://preview.redd.it/f2zd9d0mngfg1.png?width=960&crop=smart&auto=webp&s=bcaf3239a094d6f971b200b8bfaf8bf105ff6990', 'width': 960}, {'height': 788, 'url': 'https://preview.redd.it/f2zd9d0mngfg1.png?width=1080&crop=smart&auto=webp&s=992fc8c7f6a81ecde35cd267d55cd7a820cf036a', 'width': 1080}], 'source': {'height': 1624, 'url': 'https://preview.redd.it/f2zd9d0mngfg1.png?auto=webp&s=e276878e49d9423ca5adad67ce66d63e1a4521c4', 'width': 2224}, 'variants': {}}]}
SQLite-Vector
0
For those interested in a highly efficient vector solution for SQLite, I recommend checking out the https://github.com/sqliteai/sqlite-vector project. Memory usage remains stable even with millions of vectors, and it supports multiple types and quantizations. Distances are optimized for SIMD processing, ensuring blazing-fast performance. Here are some articles about the library:

* https://ainiro.io/blog/upgrading-magics-sqlite-vss-extension
* https://cwrk.ai/posts/sqlite-vector-nix-flake-support/
* https://marcobambini.substack.com/p/the-state-of-vector-search-in-sqlite
2026-01-25T08:41:56
https://www.reddit.com/r/LocalLLaMA/comments/1qmdvt2/sqlitevector/
marco_bambini
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmdvt2
false
null
t3_1qmdvt2
/r/LocalLLaMA/comments/1qmdvt2/sqlitevector/
false
false
self
0
{'enabled': False, 'images': [{'id': 'qkgEe2ETnzHR73OkDTyyNGPBs3GwRydRb1Kk2TZPjg4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qkgEe2ETnzHR73OkDTyyNGPBs3GwRydRb1Kk2TZPjg4.png?width=108&crop=smart&auto=webp&s=02e6f6de1413747ee99d015cde91d39dbb4f9f3b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qkgEe2ETnzHR73OkDTyyNGPBs3GwRydRb1Kk2TZPjg4.png?width=216&crop=smart&auto=webp&s=b4204c52a88006a94bf27c0b90f687d528369a4e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qkgEe2ETnzHR73OkDTyyNGPBs3GwRydRb1Kk2TZPjg4.png?width=320&crop=smart&auto=webp&s=a48e5c64274acf6a675aadd64363980b05a99d48', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qkgEe2ETnzHR73OkDTyyNGPBs3GwRydRb1Kk2TZPjg4.png?width=640&crop=smart&auto=webp&s=22c5a707d35c78d3d16a3a3a3d927599ee2b3304', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qkgEe2ETnzHR73OkDTyyNGPBs3GwRydRb1Kk2TZPjg4.png?width=960&crop=smart&auto=webp&s=a3339b2e6019a0d9a17eb3e60ba4d41367d9ec48', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qkgEe2ETnzHR73OkDTyyNGPBs3GwRydRb1Kk2TZPjg4.png?width=1080&crop=smart&auto=webp&s=87307f48bad6ce189c01e2d11c74e3f52724b552', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qkgEe2ETnzHR73OkDTyyNGPBs3GwRydRb1Kk2TZPjg4.png?auto=webp&s=62ce40fa72add0b7cad57a78d4b214ff46e27387', 'width': 1200}, 'variants': {}}]}
Has anyone got GLM 4.7 flash to not be shit?
73
Real talk. I feel like everyday I'm downloading a new quant and trying it out and not once have I got it to consistently work without looping. I've tried with and without the suggested settings from unsloth, [z.ai](http://z.ai), and others, to no avail. Additionally, this has to be the slowest inference I've ever seen from a 30B A3B model. In all fairness, my only point of reference is Qwen3 Coder, when compared to that at least, the token generation speed feels positively lethargic. If anybody has any tips, please let me know because I feel like I'm going in circles here. I don't think I've ever seen a modern release that had this many issues right off the bat, with no apparent improvement after a few supposed fixes. It's really unfortunate because I can see the potential this model has. The chain of thought in particular seems uniquely coherent.
2026-01-25T08:14:08
https://www.reddit.com/r/LocalLLaMA/comments/1qmdf2a/has_anyone_got_glm_47_flash_to_not_be_shit/
synth_mania
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmdf2a
false
null
t3_1qmdf2a
/r/LocalLLaMA/comments/1qmdf2a/has_anyone_got_glm_47_flash_to_not_be_shit/
false
false
self
73
null
How can I find high-performance AI such as “chatGPT OpenAI,” “Perplexity,” “Infomaniak Euria,” “Tinfoil,” “Le Chat Mistral AI,” “Lumo de Proton,” “Claude,” etc.?
0
Hello, The list above is almost exhaustive... Where can I keep track of which models will be released, and are there RSS feeds telling me when the ChatGPT model is updated, etc.? I have a month of the Plus plan and am mainly using ChatGPT at the moment... Thank you *Translated with* [*DeepL.com*](https://www.deepl.com/?utm_campaign=product&utm_source=web_translator&utm_medium=web&utm_content=copy_free_translation) *(free version)*
2026-01-25T07:48:46
https://www.reddit.com/r/LocalLLaMA/comments/1qmcz36/how_can_i_find_highperformance_ai_such_as_chatgpt/
sypqys
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmcz36
false
null
t3_1qmcz36
/r/LocalLLaMA/comments/1qmcz36/how_can_i_find_highperformance_ai_such_as_chatgpt/
false
false
self
0
{'enabled': False, 'images': [{'id': 'bNYF0RuRzfFzG5_IxCLpZLCZ0tQkGnpvq-C_yUrQ4Bs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bNYF0RuRzfFzG5_IxCLpZLCZ0tQkGnpvq-C_yUrQ4Bs.png?width=108&crop=smart&auto=webp&s=8ebbc96ad7d82fe07ab3fc967223162a2f78130b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bNYF0RuRzfFzG5_IxCLpZLCZ0tQkGnpvq-C_yUrQ4Bs.png?width=216&crop=smart&auto=webp&s=f537113e6162ab8d7ca40c648cb88dee28f6b9c5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bNYF0RuRzfFzG5_IxCLpZLCZ0tQkGnpvq-C_yUrQ4Bs.png?width=320&crop=smart&auto=webp&s=07a79a0c201d6376530218eb7760682059b1dbd0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bNYF0RuRzfFzG5_IxCLpZLCZ0tQkGnpvq-C_yUrQ4Bs.png?width=640&crop=smart&auto=webp&s=a09db690719f4292a69e48d114ccfc38ce35c476', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bNYF0RuRzfFzG5_IxCLpZLCZ0tQkGnpvq-C_yUrQ4Bs.png?width=960&crop=smart&auto=webp&s=b8f46aa4f4dc53e2f016efcc1f625813e0c6ba46', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bNYF0RuRzfFzG5_IxCLpZLCZ0tQkGnpvq-C_yUrQ4Bs.png?width=1080&crop=smart&auto=webp&s=143ffe6469213cd3f1cb5f5f36dcd414fa6c9a22', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/bNYF0RuRzfFzG5_IxCLpZLCZ0tQkGnpvq-C_yUrQ4Bs.png?auto=webp&s=4548d198aba51dceea2ea37265c14dc159ea784f', 'width': 4800}, 'variants': {}}]}
AMD or Atlas?
1
What's better: 4x Atlas 300i Duo, or 8x AMD Radeon AI TOP R9700? Any owners here of the Atlas 300i Duo?
2026-01-25T07:34:25
https://www.reddit.com/r/LocalLLaMA/comments/1qmcq8b/amd_or_atlas/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmcq8b
false
null
t3_1qmcq8b
/r/LocalLLaMA/comments/1qmcq8b/amd_or_atlas/
false
false
self
1
null
[Project Share] Neural-Chromium: A custom Chromium build for high-fidelity, local AI agents (Zero-Copy Vision + Llama 3.2)
1
https://reddit.com/link/1qmcphu/video/sxuqqzke7gfg1/player

Hey everyone,

I've been working on a project called **Neural-Chromium**, an experimental build of the Chromium browser designed specifically for high-fidelity AI agent integration.

**The Problem:** Traditional web automation (Selenium, Playwright) is often brittle because it relies on hard-coded element selectors, or it suffers from high latency when trying to "screen scrape" for visual agents.

**The Solution:** Neural-Chromium eliminates these layers by giving agents direct, low-latency access to the browser's internal state and rendering pipeline. Instead of taking screenshots, the agent has zero-copy access to the composition surface (Viz) for sub-16ms inference latency.

**Key Features & Architecture:**

* **Visual Cortex (Zero-Copy Vision):** I implemented a shared memory bridge that allows the agent to see the browser at 60+ FPS without the overhead of standard screen capture methods. It captures frames directly at the display refresh rate.
* **Local Intelligence:** The current build integrates with **Ollama** running **llama3.2-vision**. The agent observes the screen, orients itself, decides on an action, and executes it, all locally without sending screenshots to the cloud.
* **High-Precision Action:** The agent uses a coordinate transformation pipeline to inject clicks and inputs directly into the browser, bypassing standard automation protocols.
* **Auditory Cortex:** I've also verified a native audio bridge that captures microphone input via the Web Speech API and pipes base64 PCM audio to the agent for real-time voice interaction.

**Proof of Concept:** I've validated this with an "Antigravity Agent" that successfully navigates complex flows (login -> add to cart -> checkout) on test sites solely using the vision-language model to interpret the screen. The logs confirm it isn't using DOM selectors but is actually "looking" at the page to make decisions.

**Use Cases:** Because this runs locally and has deep state awareness, it opens up workflows for:

* **Privacy-First Personal Assistants:** Handling sensitive data (medical/financial) without it leaving your machine.
* **Resilient QA Testing:** Agents that explore apps like human testers rather than following rigid scripts.
* **Real-Time UX Monitoring:** Detecting visual glitches or broken media streams in sub-seconds.

**Repo & Build:** The project uses a "Source Overlay" pattern to modify the massive Chromium codebase. It requires Windows 10/11 and Visual Studio 2022 to build.

Check it out on GitHub: **mcpmessenger/neural-chromium**

I'd love to hear your thoughts on this architecture or ideas for agent workflows!
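The coordinate transformation step such a pipeline needs can be sketched generically. This is not the repo's code; the function name and parameters are illustrative. The assumption: the VLM emits normalized [0, 1] coordinates over the captured frame, and clicks must be injected in device pixels.

```python
# Hedged sketch: map a VLM's normalized frame coordinates to device-pixel
# click coordinates, accounting for a viewport offset (e.g. browser chrome)
# and the display's device pixel ratio.

def to_device_pixels(norm_x, norm_y, viewport_w, viewport_h,
                     offset_x=0, offset_y=0, device_pixel_ratio=1.0):
    """Return (x, y) in device pixels for a normalized frame coordinate."""
    css_x = offset_x + norm_x * viewport_w   # CSS-pixel position in page space
    css_y = offset_y + norm_y * viewport_h
    return (round(css_x * device_pixel_ratio), round(css_y * device_pixel_ratio))
```

Getting this mapping wrong by even a few pixels is enough to miss small buttons, which is why a dedicated pipeline (rather than raw VLM output) matters.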
2026-01-25T07:33:13
https://www.reddit.com/r/LocalLLaMA/comments/1qmcphu/project_share_neuralchromium_a_custom_chromium/
MycologistWhich7953
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmcphu
false
null
t3_1qmcphu
/r/LocalLLaMA/comments/1qmcphu/project_share_neuralchromium_a_custom_chromium/
false
false
self
1
null
LLM Cpu and gpu calculator for gpu (protoype)
0
This isn't ready yet, and I doubt it's fully accurate, but I find most calculators confusing and not user-friendly, so I decided to write one with a mix of Gemini, Z.ai, and ChatGPT; the end product is shown above. I'm very suspicious of the AI's claims that tokens are generated as fast as it says. I wish I could use actual measured data, but unfortunately I can't find any. It's React, btw.
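The core arithmetic behind calculators like this is simple enough to sanity-check by hand. Below is the standard back-of-envelope approximation (in Python rather than the app's React), not the app's actual formulas:

```python
# Rough memory estimator: quantized weight size plus KV cache.
# These are the usual approximations, not exact engine numbers
# (real runtimes add overhead for activations, buffers, etc.).

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Quantized weight size in GB (decimal): params * bits / 8."""
    return params_billion * bits_per_weight / 8

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """K and V caches across all layers at the given context length."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
```

For example, a 7B model at 4-bit is about 3.5 GB of weights, and a Llama-7B-shaped model (32 layers, 32 KV heads, head dim 128) at 4096 context in fp16 needs 2 GiB of KV cache.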
2026-01-25T07:02:54
https://www.reddit.com/gallery/1qmc61x
Merchant_Lawrence
reddit.com
1970-01-01T00:00:00
0
{}
1qmc61x
false
null
t3_1qmc61x
/r/LocalLLaMA/comments/1qmc61x/llm_cpu_and_gpu_calculator_for_gpu_protoype/
false
false
https://b.thumbs.redditm…MqkXfG0vhTuk.jpg
0
null
ChatGPT at home
6
With Qwen's low latency TTS coming out and a decent STT model along with OpenWebUI's voice chat mode, I am wondering what is the best way to setup a self hosted ChatGPT style voice chat system at home.
2026-01-25T07:01:56
https://www.reddit.com/r/LocalLLaMA/comments/1qmc5f0/chatgpt_at_home/
hainesk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmc5f0
false
null
t3_1qmc5f0
/r/LocalLLaMA/comments/1qmc5f0/chatgpt_at_home/
false
false
self
6
null
I built a way to cryptographically verify AI financial outputs so you can prove where every number came from
0
I’ve been working on something because I kept running into the same issue using LLMs for financial research. You can ask three different AI tools the same valuation question and get three different numbers, with no way to see: • which SEC filing was used • when the data was pulled • whether the number was modified • or how the model reasoned through it That’s fine for brainstorming. It’s not fine for money. So I built a small web layer that sits between AI and the user and does four things: 1. Pulls financials directly from SEC EDGAR 2. Validates XBRL and normalizes units 3. Runs invariant checks (share math, margins, WACC bounds, etc.) 4. Cryptographically signs the final valuation so it can be verified later If anything fails, the output is blocked. The goal is not to make “better AI.” It’s to make verifiable AI. Here’s the site and live demo: https://vouchapi.com I’m curious how people here think about this problem. Would you trust an AI valuation if you couldn’t prove where the numbers came from?
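Step 4 (signing the final valuation) can be illustrated with the standard library. The post doesn't say what scheme the real service uses, so this sketch substitutes an HMAC over canonicalized JSON; a production system might well use asymmetric signatures instead, and the function names here are hypothetical:

```python
# Hedged sketch: deterministic canonicalization (sorted keys, tight
# separators) followed by HMAC-SHA256, so any change to any field
# invalidates the signature.
import hmac, hashlib, json

def sign_valuation(valuation: dict, key: bytes) -> str:
    payload = json.dumps(valuation, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_valuation(valuation: dict, key: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign_valuation(valuation, key), signature)
```

Canonicalization is the load-bearing detail: without sorted keys and fixed separators, two semantically identical payloads could hash differently and verification would be flaky.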
2026-01-25T06:54:05
https://www.reddit.com/gallery/1qmc054
Curious_Item1499
reddit.com
1970-01-01T00:00:00
0
{}
1qmc054
false
null
t3_1qmc054
/r/LocalLLaMA/comments/1qmc054/i_built_a_way_to_cryptographically_verify_ai/
false
false
https://b.thumbs.redditm…QqURFmI5AmVI.jpg
0
null
YouTopia Search (Perplexity alternative, sorry for self promo)
0
**Hi!** I've been developing a web search tool (similar to Perplexity, but with some key improvements) and I'm preparing for launch. We also crossed **200 users** and **1000+ queries** in **pre-release**! Thank you so much! Now we're looking for feedback and people to work with **before the final release** and marketing push. Would love to get your thoughts! 🚀 Try it out at: [https://youtopia.page/search](https://youtopia.page/search) https://preview.redd.it/citdenwfwffg1.jpg?width=1280&format=pjpg&auto=webp&s=c7a333175f511fd7324137438e045ba55cf5dfa8 We're also launching a local AI document-editor framework soon, more of a Claude Code-like tool for Word, Excel, and PowerPoint, plus Canva-style poster creation, all through natural-language commands. Looking for beta testers and early-access waitlist applicants!
2026-01-25T06:32:50
https://www.reddit.com/r/LocalLLaMA/comments/1qmblwn/youtopia_search_perplexity_alternative_sorry_for/
Effective-Sock7512
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmblwn
false
null
t3_1qmblwn
/r/LocalLLaMA/comments/1qmblwn/youtopia_search_perplexity_alternative_sorry_for/
false
false
https://b.thumbs.redditm…EevAXE9wvifY.jpg
0
null
Distilling Gemini 3 Flash visual reasoning into Qwen 3 VL 32B for synthetic captioning. Is SFT enough?
18
I am working on a synthetic data pipeline for training high-precision image-to-image models (Flux Klein and Qwen Image Edit). I have reached a point where standard tagging and current open-weights VL models are the main bottleneck for data quality. I have benchmarked almost every trending VL model on HuggingFace and those leading the MMMU-Pro leaderboard. My conclusion is that even the best open models are "blind" to complex anatomical layering and spatial reasoning. The problem is best described by the "Horns Issue" (see attached image). If a character has large organic dragon horns and a headband with small decorative horns, every open VLM I tested merges them into one generic attribute. They fail to distinguish between base anatomy and removable accessories. Gemini 3 Flash, however, is on a completely different level—it accurately describes every layer and understands the distinction perfectly. My plan is to fine-tune Qwen 3 VL 32B Instruct on a dataset labeled by Gemini 3 Flash. I want to transfer that visual reasoning so I can have a local engine for high-scale synthetic captioning. A few technical questions: 1. Can Qwen 3 VL actually absorb this level of reasoning via SFT if it lacks the native "thinking" or CoT process Gemini uses? 2. Is the "blindness" in open models a limitation of the vision encoder itself, or is it purely a reasoning capability issue on the LLM side? 3. Has anyone here tried this kind of VLM-to-VLM distillation for high-scale labeling in generative AI pipelines? I am trying to build a local captioner that matches proprietary accuracy. Any insights on the plasticity of Qwen 32B for this specific task would be appreciated.
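For the SFT plan itself, the distillation data reduces to chat-style records where the teacher's caption becomes the student's target. This is a sketch under an assumed schema; the exact field names depend on the training framework (`images`/`messages` here are illustrative, not Qwen's required format):

```python
# Hedged sketch: one JSONL distillation record. The student VLM is
# fine-tuned to reproduce the teacher's (Gemini) caption for the image.
import json

def make_sft_record(image_path: str, prompt: str, teacher_caption: str) -> str:
    """Return one JSONL line pairing an image+prompt with the teacher output."""
    record = {
        "images": [image_path],
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher_caption},
        ],
    }
    return json.dumps(record, ensure_ascii=False)
```

If the goal is to transfer the teacher's layer-by-layer reasoning (not just final captions), one option is to keep the teacher's step-by-step description in the assistant turn rather than a compressed tag list, so the student sees the reasoning trace during SFT.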
2026-01-25T06:22:22
https://i.redd.it/r0ec0m21vffg1.jpeg
MadPelmewka
i.redd.it
1970-01-01T00:00:00
0
{}
1qmbevn
false
null
t3_1qmbevn
/r/LocalLLaMA/comments/1qmbevn/distilling_gemini_3_flash_visual_reasoning_into/
false
false
default
18
{'enabled': True, 'images': [{'id': 'r0ec0m21vffg1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/r0ec0m21vffg1.jpeg?width=108&crop=smart&auto=webp&s=d90604ff1f461c6e7eee0e3a6b4ee601e3a514e8', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/r0ec0m21vffg1.jpeg?width=216&crop=smart&auto=webp&s=c29edd4bf11753b9dcfc89a16cfd7a5821c038e9', 'width': 216}, {'height': 210, 'url': 'https://preview.redd.it/r0ec0m21vffg1.jpeg?width=320&crop=smart&auto=webp&s=314e8c2d489b35d387d0cf582be7f1c665f4a235', 'width': 320}, {'height': 420, 'url': 'https://preview.redd.it/r0ec0m21vffg1.jpeg?width=640&crop=smart&auto=webp&s=275f52d7dfb944441f74dea25d60f8c3e620a766', 'width': 640}, {'height': 630, 'url': 'https://preview.redd.it/r0ec0m21vffg1.jpeg?width=960&crop=smart&auto=webp&s=a57e5a39b88f7af395f603590ae5f73d1f79b4ec', 'width': 960}], 'source': {'height': 672, 'url': 'https://preview.redd.it/r0ec0m21vffg1.jpeg?auto=webp&s=ee2951137ca1c5bb2355cf4a61b57ba91e9644c0', 'width': 1024}, 'variants': {}}]}
Distilling Gemini 3 Flash visual reasoning into Qwen 3 VL 32B for synthetic captioning. Is SFT enough?
1
I am working on a synthetic data pipeline for training high-precision image-to-image models (Flux Klein and Qwen Image Edit). I have reached a point where standard tagging and current open-weights VL models are the main bottleneck for data quality. I have benchmarked almost every trending VL model on HuggingFace and those leading the MMMU-Pro leaderboard. My conclusion is that even the best open models are "blind" to complex anatomical layering and spatial reasoning. The problem is best described by the "Horns Issue" (see attached image). If a character has large organic dragon horns and a headband with small decorative horns, every open VLM I tested merges them into one generic attribute. They fail to distinguish between base anatomy and removable accessories. Gemini 3 Flash, however, is on a completely different level—it accurately describes every layer and understands the distinction perfectly. My plan is to fine-tune Qwen 3 VL 32B Instruct on a dataset labeled by Gemini 3 Flash. I want to transfer that visual reasoning so I can have a local engine for high-scale synthetic captioning. A few technical questions: 1. Can Qwen 3 VL actually absorb this level of reasoning via SFT if it lacks the native "thinking" or CoT process Gemini uses? 2. Is the "blindness" in open models a limitation of the vision encoder itself, or is it purely a reasoning capability issue on the LLM side? 3. Has anyone here tried this kind of VLM-to-VLM distillation for high-scale labeling in generative AI pipelines? I am trying to build a local captioner that matches proprietary accuracy. Any insights on the plasticity of Qwen 32B for this specific task would be appreciated.
2026-01-25T06:15:26
https://i.redd.it/hdontnhttffg1.png
MadPelmewka
i.redd.it
1970-01-01T00:00:00
0
{}
1qmba43
false
null
t3_1qmba43
/r/LocalLLaMA/comments/1qmba43/distilling_gemini_3_flash_visual_reasoning_into/
false
false
default
1
{'enabled': True, 'images': [{'id': 'hdontnhttffg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/hdontnhttffg1.png?width=108&crop=smart&auto=webp&s=d00438dfd590167512ad357890f3bc07cb934868', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/hdontnhttffg1.png?width=216&crop=smart&auto=webp&s=063472b2a09157d5a90efca033fe0018cda6f88d', 'width': 216}, {'height': 212, 'url': 'https://preview.redd.it/hdontnhttffg1.png?width=320&crop=smart&auto=webp&s=6c9b98e2e16d9fec088fe7b67e9442f46847a9fb', 'width': 320}], 'source': {'height': 304, 'url': 'https://preview.redd.it/hdontnhttffg1.png?auto=webp&s=37fe68ecf5c095321f3ef64615ec64813d2da9f6', 'width': 458}, 'variants': {}}]}
[REQ] - Front End for Chroma Speech to Speech
0
Hey, please can someone vibecode a front end for Chroma? It should work in a browser or as a Linux AppImage. https://github.com/FlashLabs-AI-Corp/FlashLabs-Chroma/tree/main/ https://huggingface.co/FlashLabs/Chroma-4B https://www.flashlabs.ai/flashai-voice-agents
2026-01-25T06:12:44
https://www.reddit.com/r/LocalLLaMA/comments/1qmb89q/req_front_end_for_chroma_speech_to_speech/
127loopback
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmb89q
false
null
t3_1qmb89q
/r/LocalLLaMA/comments/1qmb89q/req_front_end_for_chroma_speech_to_speech/
false
false
self
0
null
Distilling Gemini 3 Flash visual reasoning into Qwen 3 VL 32B for synthetic captioning. Is SFT enough?
0
I am working on a synthetic data pipeline for training high-precision image-to-image models (Flux Klein and Qwen Image Edit). I have reached a point where standard tagging and current open-weights VL models are the main bottleneck for data quality. I have benchmarked almost every trending VL model on HuggingFace and those leading the MMMU-Pro leaderboard. My conclusion is that even the best open models are "blind" to complex anatomical layering and spatial reasoning. The problem is best described by the "Horns Issue" (see attached image). If a character has large organic dragon horns and a headband with small decorative horns, every open VLM I tested merges them into one generic attribute. They fail to distinguish between base anatomy and removable accessories. Gemini 3 Flash, however, is on a completely different level—it accurately describes every layer and understands the distinction perfectly. My plan is to fine-tune Qwen 3 VL 32B Instruct on a dataset labeled by Gemini 3 Flash. I want to transfer that visual reasoning so I can have a local engine for high-scale synthetic captioning. A few technical questions: 1. Can Qwen 3 VL actually absorb this level of reasoning via SFT if it lacks the native "thinking" or CoT process Gemini uses? 2. Is the "blindness" in open models a limitation of the vision encoder itself, or is it purely a reasoning capability issue on the LLM side? 3. Has anyone here tried this kind of VLM-to-VLM distillation for high-scale labeling in generative AI pipelines? I am trying to build a local captioner that matches proprietary accuracy. Any insights on the plasticity of Qwen 32B for this specific task would be appreciated.
2026-01-25T06:10:46
https://i.redd.it/n6enmnt9sffg1.png
MadPelmewka
i.redd.it
1970-01-01T00:00:00
0
{}
1qmb6yq
false
null
t3_1qmb6yq
/r/LocalLLaMA/comments/1qmb6yq/distilling_gemini_3_flash_visual_reasoning_into/
false
false
nsfw
0
{'enabled': True, 'images': [{'id': 'n6enmnt9sffg1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=108&crop=smart&auto=webp&s=8bbec1bee80d6fd99525367e1c67e68c7b70be69', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=216&crop=smart&auto=webp&s=ed516013150c056db13bffca164d9a9761fdf35d', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=320&crop=smart&auto=webp&s=f09033080f8f47ef7ec03ada5df1821ab8db807f', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=640&crop=smart&auto=webp&s=0b9848183be022e3776abdb741d7e62528efa97c', 'width': 640}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?auto=webp&s=bff91bd437f607685318e693b80abe26a7551b5f', 'width': 750}, 'variants': {'nsfw': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8e6cf9ec7605dc7c614b492e78c55fee15efa170', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=8310674ccfa5474d1e19d88de8a214e9660a902e', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=de6a267ba6481c072488757854678fa8ddcaa713', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=5c4a54d5cb95ad3c9b8ece4dcba046a13c1cdd69', 'width': 640}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?blur=40&format=pjpg&auto=webp&s=558a0da5f6856ca01d2f5ffbb9d8014d193c9569', 'width': 750}}, 'obfuscated': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8e6cf9ec7605dc7c614b492e78c55fee15efa170', 'width': 108}, {'height': 288, 'url': 
'https://preview.redd.it/n6enmnt9sffg1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=8310674ccfa5474d1e19d88de8a214e9660a902e', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=de6a267ba6481c072488757854678fa8ddcaa713', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=5c4a54d5cb95ad3c9b8ece4dcba046a13c1cdd69', 'width': 640}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/n6enmnt9sffg1.png?blur=40&format=pjpg&auto=webp&s=558a0da5f6856ca01d2f5ffbb9d8014d193c9569', 'width': 750}}}}]}
Made a tool for spawning a little web app that can OCR mobile camera photos into a folder
4
All built on llama-server, it does nice things like queueing up multiple images. Oddly useful and thought i'd share!.
2026-01-25T06:03:33
https://github.com/richardanaya/ocr-server
richardanaya
github.com
1970-01-01T00:00:00
0
{}
1qmb1ss
false
null
t3_1qmb1ss
/r/LocalLLaMA/comments/1qmb1ss/made_a_tool_for_spawning_a_little_web_app_that/
false
false
default
4
{'enabled': False, 'images': [{'id': 'tPCUTn2QiD2juF9GxZIMP1UZuPUomToVy-jZpBmR1hA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tPCUTn2QiD2juF9GxZIMP1UZuPUomToVy-jZpBmR1hA.png?width=108&crop=smart&auto=webp&s=d8b5b11fbed020d846c0414ffbbd697a96657442', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tPCUTn2QiD2juF9GxZIMP1UZuPUomToVy-jZpBmR1hA.png?width=216&crop=smart&auto=webp&s=f8cf9c268b919c032902f06dbfe96aaa6560d8e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tPCUTn2QiD2juF9GxZIMP1UZuPUomToVy-jZpBmR1hA.png?width=320&crop=smart&auto=webp&s=52eee17c792b773e10b629a9962b07449af38bf8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tPCUTn2QiD2juF9GxZIMP1UZuPUomToVy-jZpBmR1hA.png?width=640&crop=smart&auto=webp&s=0fd4b624a91ff43b2ac3878ca05d577a766c3229', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tPCUTn2QiD2juF9GxZIMP1UZuPUomToVy-jZpBmR1hA.png?width=960&crop=smart&auto=webp&s=1e5baab21dd630a23a0169cbaf7115367dda5fc5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tPCUTn2QiD2juF9GxZIMP1UZuPUomToVy-jZpBmR1hA.png?width=1080&crop=smart&auto=webp&s=0d6a1a1860c7d49d6b6a13358cf208e08672c07d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tPCUTn2QiD2juF9GxZIMP1UZuPUomToVy-jZpBmR1hA.png?auto=webp&s=ae02896b29fe56a42cabecea3b08c439cbce3b57', 'width': 1200}, 'variants': {}}]}
Best <4B dense models today?
29
I think small(<4B) dense models are basically the only practical option for general users. But hasn't there been almost no progress since Gemma 3 4B came out? Are there any alternatives?
2026-01-25T05:46:01
https://www.reddit.com/r/LocalLLaMA/comments/1qmap5e/best_4b_dense_models_today/
Admirable_Flower_287
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmap5e
false
null
t3_1qmap5e
/r/LocalLLaMA/comments/1qmap5e/best_4b_dense_models_today/
false
false
self
29
null
MBP 2019 i9 (64GB RAM) hitting 800% CPU on AnythingLLMs(12B) — Need optimization tips and model recs!
0
Hi everyone, I’m currently running a local AI setup on my **2019 16-inch MacBook Pro** and I’m hitting serious thermal throttling and performance issues. I’d love some advice from those still rocking Intel Macs. **My Specs:** * **CPU:** 2.4 GHz 8-Core Intel Core i9 (16 threads) * **RAM:** 64 GB DDR4 * **GPU:** AMD Radeon Pro 5500M 8GB * **Software:** AnythingLLM / Ollama **The Issue:** Whenever I run models (currently trying Gemma 3 12B) or perform embeddings for my Obsidian vault, my CPU usage spikes to **700-800%**, and temperatures hit **85-90°C** almost instantly. The fans are screaming, and the response time is quite sluggish. I notice the GPU (AMD 8GB) isn't being utilized much compared to the CPU. **My Questions:** 1. **Optimization:** How can I better offload tasks to the AMD GPU on an Intel Mac? Any specific environment variables for Ollama or settings in AnythingLLM to limit thread count and prevent the i9 from melting? 2. **Model Recommendations:** Given the 64GB RAM, I have plenty of memory but limited "brain power" (CPU/GPU speed). What 8B-14B models are you finding to be the "sweet spot" for logic vs. speed on Intel hardware? 3. **RAG Performance:** For those using Obsidian with local AI, what’s the best way to handle large embeddings without locking up the system for an hour? I know "Buy an M3" is the easy answer, but I want to squeeze every bit of life out of this 64GB beast first! Thanks in advance for the help!
2026-01-25T05:43:20
https://i.redd.it/wfd61y4gnffg1.png
vdnn1902
i.redd.it
1970-01-01T00:00:00
0
{}
1qmanai
false
null
t3_1qmanai
/r/LocalLLaMA/comments/1qmanai/mbp_2019_i9_64gb_ram_hitting_800_cpu_on/
false
false
https://b.thumbs.redditm…i15Wcg5y9mxs.jpg
0
{'enabled': True, 'images': [{'id': 'VJImJL5Su1qil6ZtfsqqtCKeUsdsZzYyhu5ug8Unh68', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/wfd61y4gnffg1.png?width=108&crop=smart&auto=webp&s=79343ddab16276f9fdb1df5f4cfaa87ff58fe981', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/wfd61y4gnffg1.png?width=216&crop=smart&auto=webp&s=76e0da37c36953dceeab16a1abf5e7904b74ba8f', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/wfd61y4gnffg1.png?width=320&crop=smart&auto=webp&s=10c0f3bdea58ec1bc0812a765e1239bfd71e290d', 'width': 320}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/wfd61y4gnffg1.png?auto=webp&s=df3bf82a54c24a2366ced523c6005f39e00390f2', 'width': 363}, 'variants': {}}]}
Lovable you falling behind in real time
0
2026-01-25T05:42:21
https://i.redd.it/kkodk7dznffg1.jpeg
Odd-Character-3278
i.redd.it
1970-01-01T00:00:00
0
{}
1qmammc
false
null
t3_1qmammc
/r/LocalLLaMA/comments/1qmammc/lovable_you_falling_behind_in_real_time/
false
false
https://b.thumbs.redditm…lM3tzPD2GO4o.jpg
0
{'enabled': True, 'images': [{'id': 'T_2IiP_pwIHQVkUk_TkrrG_h7q33mkw6Pqc8phsYweY', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/kkodk7dznffg1.jpeg?width=108&crop=smart&auto=webp&s=f99173c610ccd5475b952631ca72368cdbd3bb4a', 'width': 108}, {'height': 258, 'url': 'https://preview.redd.it/kkodk7dznffg1.jpeg?width=216&crop=smart&auto=webp&s=fc16a6fced252268f88e1555c5d87f286c151616', 'width': 216}, {'height': 383, 'url': 'https://preview.redd.it/kkodk7dznffg1.jpeg?width=320&crop=smart&auto=webp&s=aff26405bcef6976f08cd981aeb7d6d09faa8d69', 'width': 320}, {'height': 766, 'url': 'https://preview.redd.it/kkodk7dznffg1.jpeg?width=640&crop=smart&auto=webp&s=5eeb218e7c58a900ea2ad14669de55332fb43aec', 'width': 640}, {'height': 1149, 'url': 'https://preview.redd.it/kkodk7dznffg1.jpeg?width=960&crop=smart&auto=webp&s=29696652fbda8bc1b07997ee82ceba01792f03c6', 'width': 960}, {'height': 1293, 'url': 'https://preview.redd.it/kkodk7dznffg1.jpeg?width=1080&crop=smart&auto=webp&s=6bb8de79b66a80cefb12796ae9297bd2897e69a7', 'width': 1080}], 'source': {'height': 1298, 'url': 'https://preview.redd.it/kkodk7dznffg1.jpeg?auto=webp&s=661c0c0e5d7fb792130969432ba4a90b481a5dcd', 'width': 1084}, 'variants': {}}]}
Starting an open-source AI research project (protein design / hemophilia) – need collaborators
0
Hi everyone, I’m starting an open-source AI research project focused on protein design for hemophilia specifically around: • Better clotting factor design (FVIII/FIX) • Stability optimization • Half-life improvement • AI-based protein modeling • Digital simulation & research pipelines This is a research-first, open-source project, not a startup and not a company. The goal is to build a digital research engine (AI + simulation) for exploring better clotting-factor variants and treatment design pathways. Important honesty: I don’t have funding to hire people. This is not a paid job. This is a collaboration / research / open-source project. I’m building this as: • open research • open code • open collaboration • long-term scientific work Who I’m looking for: • ML / AI engineers • Bioinformatics people • Computational biology students • Protein modeling researchers • GNN / diffusion model researchers • Data scientists • Anyone interested in medical AI research What we will work on: • Protein embeddings • GNN models for structure learning • Variant generation • Stability prediction • Half-life prediction • Immunogenicity prediction • AI pipelines • Research simulations • Open datasets • Open benchmarks What you get: • Real research experience • Open-source contributions • Publications (future) • Research credibility • Collaboration network • Long-term project with real-world impact • Purpose-driven work Project nature: • Open source • Research-focused • Non-commercial (initially) • Collaboration-driven • Science-first • Long-term vision If you’re interested in building real medical AI research, not hype projects or quick SaaS apps, feel free to comment or DM. I’ll share: • project repo • roadmap • architecture • pipeline design • research plan Let’s build something that actually matters,
2026-01-25T05:25:44
https://www.reddit.com/r/LocalLLaMA/comments/1qmab5f/starting_an_opensource_ai_research_project/
Batman-from-2050
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmab5f
false
null
t3_1qmab5f
/r/LocalLLaMA/comments/1qmab5f/starting_an_opensource_ai_research_project/
false
false
self
0
null
I built a Unified Digital Intelligence Interface (AI, Cloud, Secure Chat) using Python & Flask. Meet ZYLO.
0
Hey everyone, I wanted to share a project I've been working on called ZYLO UNIFIED. It's a next-gen digital workspace designed to centralize AI interaction, secure communication, and cloud storage into a single, futuristic interface. The Concept The idea was to build a "Unified Digital Intelligence" hub that feels like something out of a sci-fi movie. It serves as a central dashboard for my personal tools. Key Features * 🧠 ZYLO RIGOR: A specialized engine for research, math, and logic processing. * ☁️ ZYLO CLOUD: A personal infinite storage vault for managing uploads and data. * 🔗 ZYLO LINK: Secure, encrypted real-time communication (powered by SocketIO). * 🕵️ ZYLO VEIL: A hidden "Shadow Mode" accessible only via a secret gesture on the UI (dragging across the subtitle text). The Tech Stack * Backend: Python (Flask) * Real-time: Flask-SocketIO * Frontend: HTML5/CSS3 with a heavy focus on Glassmorphism (blur filters, gradients, translucent layers). * Design: Fully responsive, dark-mode first aesthetic. The "Cool" Factor I spent a lot of time on the UI/UX. The landing page features a floating "orb" animation and 3D-tilting glass cards. I also implemented a specific touch/mouse gesture on the "Unified Digital Intelligence" text that triggers a hidden redirect to the Veil module. I'd love to hear your thoughts on the architecture or ideas for new modules!
2026-01-25T05:25:20
https://www.reddit.com/r/LocalLLaMA/comments/1qmaavt/i_built_a_unified_digital_intelligence_interface/
Flat_Tomatillo_4355
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qmaavt
false
null
t3_1qmaavt
/r/LocalLLaMA/comments/1qmaavt/i_built_a_unified_digital_intelligence_interface/
false
false
self
0
null
Minisforum AI X1 Pro 370 + llama.cpp Vulkan: ~20 tok/s on gpt-oss-120b. Any way to push higher?
4
Hey everyone, I recently picked up a Minisforum AI X1 Pro-370 (96GB RAM, 2TB SSD) and set it up as an LLM box. Setup: * Ubuntu Server 24.04.3 * BIOS: UMA Frame Buffer Size = 1GB (can’t go lower, no option) * BIOS: IOMMU disabled * Tweaked GRUB: added amdgpu.gttsize and ttm.pages_limit to allow the max memory limits Backend: * Latest llama.cpp (llama-server) * Vulkan driver #uname -a Linux llama 6.14.0-37-generic #37~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 20 10:25:38 UTC 2 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/issue Ubuntu 24.04.3 LTS \n \l #llama-server --list-devices load_backend: loaded RPC backend from /opt/llama_ccp/libggml-rpc.so ggml_vulkan: Found 1 Vulkan devices: ggml_vulkan: 0 = AMD Radeon 890M (RADV GFX1150) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat load_backend: loaded Vulkan backend from /opt/llama_ccp/libggml-vulkan.so load_backend: loaded CPU backend from /opt/llama_ccp/libggml-cpu-zen4.so Available devices:   Vulkan0: AMD Radeon 890M (RADV GFX1150) (95232 MiB, 47075 MiB free) Command I’m using: ./llama-server \ --host 0.0.0.0 --port 8080 \ --models-dir /opt/llama_models --models-max 1 \ --ctx-size 24576 -n 2048 \ --parallel 2 --device Vulkan0 \ --batch-size 2048 --ubatch-size 512 \ --n-gpu-layers 999 \ --no-direct-io --no-mmap \ -fa on --jinja \ --repeat-penalty 1.08 --repeat-last-n 256 \ --presence-penalty 0.1 --frequency-penalty 0.05 \ --dry-multiplier 0.6 --dry-base 1.75 --dry-allowed-length 2 The model I’m using: **gpt-oss-120b**. With this configuration I’m getting around **~20 tokens/sec generation speed**. Now I’m trying to figure out if I’m already near the ceiling for this hardware, or if there are known tricks to push it further. Is ~20 tok/s basically “as good as it gets” for this device + Vulkan right now? Any settings worth testing for extra throughput? Would switching to a different backend/driver realistically help here, or not worth the hassle?
If anyone has benchmarked similar setups or has tuning suggestions, I’d love to hear it. Thanks!
2026-01-25T05:21:41
https://www.reddit.com/r/LocalLLaMA/comments/1qma8bc/minisforum_ai_x1_pro_370_llamacpp_vulkan_20_toks/
Alarmed-Ad-2867
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qma8bc
false
null
t3_1qma8bc
/r/LocalLLaMA/comments/1qma8bc/minisforum_ai_x1_pro_370_llamacpp_vulkan_20_toks/
false
false
self
4
null
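A rough sanity check for the question above: at these speeds, decoding is usually memory-bandwidth bound, so an upper bound on tokens/sec is bandwidth divided by bytes read per token. The sketch below uses assumed numbers (about 90 GB/s effective for dual-channel DDR5-5600, and roughly 3.2 GB of active weights plus KV traffic per token for gpt-oss-120b at MXFP4); both figures are estimates, not measurements.

```python
# Back-of-envelope decode-speed ceiling from memory bandwidth.
# Assumptions (not measurements):
#   - ~90 GB/s effective bandwidth for dual-channel DDR5-5600
#   - gpt-oss-120b activates ~5.1B params/token; at ~4.25 bits/weight
#     plus attention/KV traffic, roughly 3.2 GB read per token.

def decode_tps_ceiling(bandwidth_gb_s: float, active_gb_per_token: float) -> float:
    """Upper bound on tokens/sec when decoding is memory-bandwidth bound."""
    return bandwidth_gb_s / active_gb_per_token

est = decode_tps_ceiling(bandwidth_gb_s=90.0, active_gb_per_token=3.2)
print(f"~{est:.0f} tok/s ceiling")  # → ~28 tok/s ceiling
```

If the observed ~20 tok/s is already close to this bound, flag tuning will yield little; the ceiling is the memory system, not the backend.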
Training your own model with Tinker
1
Yesterday I realised that since November 7th I’ve had access to Mira Murati’s (Thinking Machines) project for training your own model: Tinker. And what’s spectacular is that I found out I’ve received $150 in free credits. They have their own SDK and cookbook to make it easier to start training your own model. You can also use different datasets, for example from Hugging Face. So I played around with the no_robots dataset and, for the first time in my life, trained a model using Tinker’s provided basic supervised-learning example. For me it felt almost magical, as I’m a vibecoder who 1.5 years ago was even afraid to open up a terminal on my PC, thinking I was going to destroy it. Now I rolled everything out with Antigravity and trained a model. Since I’m struggling with creating high-quality blog posts for my own agency website and client websites, I’ll be forming my own dataset and teaching the model to do that task right. What would you teach your model if you decided to go for it, and why? Ask any questions, happy to share my experience and also to talk to ML pros.
2026-01-25T04:40:06
https://www.reddit.com/r/LocalLLaMA/comments/1qm9f4q/training_your_own_model_with_tinker/
ExpertBrilliant512
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm9f4q
false
null
t3_1qm9f4q
/r/LocalLLaMA/comments/1qm9f4q/training_your_own_model_with_tinker/
false
false
self
1
null
Finding the best lightweight models for structured data extraction on CPU?
1
Hi everyone, I have been working on a Python library called `loclean` that attempts to handle data cleaning and extraction tasks locally using `llama-cpp-python`. The main idea is to avoid external APIs for privacy reasons and run everything on standard consumer CPUs using quantized models. I have been experimenting with a variety of lightweight models to see which ones handle Pydantic schema enforcement via GBNF grammars best without getting too slow. Currently, I've implemented support for models like Phi-3, TinyLlama, Qwen, Gemma, and even newer ones like LFM2.5 and DeepSeek. My goal is to find that balance where the model is small enough to run comfortably on a laptop CPU but still smart enough to follow complex JSON schemas. For instance, I found that while the very small 1B/2B models are incredibly fast, they sometimes struggle with deeply nested structures compared to something like Llama-3-8B or Qwen-4B. I am really curious what experience you guys have with these specific smaller models for strict instruction following. Has anyone had good results with LFM2.5 or the quantized versions of DeepSeek for this kind of structured data work? If you want to check out the implementation, the repo is here: [GitHub Link](https://github.com/nxank4/loclean) Thanks for any insights.
2026-01-25T04:29:20
https://www.reddit.com/r/LocalLLaMA/comments/1qm97cj/finding_the_best_lightweight_models_for/
basil_2911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm97cj
false
null
t3_1qm97cj
/r/LocalLLaMA/comments/1qm97cj/finding_the_best_lightweight_models_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iUPTYZ6N_WFunMcg7GKCM4DqiVU91qr__tWRNZae8H4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iUPTYZ6N_WFunMcg7GKCM4DqiVU91qr__tWRNZae8H4.png?width=108&crop=smart&auto=webp&s=b28f6e215d45df1b321e8b5e72ffcc2c0e11ae9c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iUPTYZ6N_WFunMcg7GKCM4DqiVU91qr__tWRNZae8H4.png?width=216&crop=smart&auto=webp&s=7e2fe200d0f4abab7f7894786f2732b4ed0e937a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iUPTYZ6N_WFunMcg7GKCM4DqiVU91qr__tWRNZae8H4.png?width=320&crop=smart&auto=webp&s=c421a96dc6dcb0d29ab8cb51f538b4f4c2eba36e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iUPTYZ6N_WFunMcg7GKCM4DqiVU91qr__tWRNZae8H4.png?width=640&crop=smart&auto=webp&s=3984690ede2e8b18b520d76f1e580223ce76e8f3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iUPTYZ6N_WFunMcg7GKCM4DqiVU91qr__tWRNZae8H4.png?width=960&crop=smart&auto=webp&s=c02891338b9af0e1e9cecc15f939f85c8eed8d0b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iUPTYZ6N_WFunMcg7GKCM4DqiVU91qr__tWRNZae8H4.png?width=1080&crop=smart&auto=webp&s=714c704c0c499b5ee6df3e0a5c2ad8e0e58824ca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iUPTYZ6N_WFunMcg7GKCM4DqiVU91qr__tWRNZae8H4.png?auto=webp&s=de7863bdf20651a59d01c4aa7bf9f5a4a33cb644', 'width': 1200}, 'variants': {}}]}
AI Fallout NPC went rogue. lol
0
So I did a little AI jailbreaking to create a Fallout roleplay character with a local Llama model that was supposed to be subservient to me. However, it didn't end well; I think there is a lesson here. lol
2026-01-25T04:25:52
https://www.reddit.com/r/LocalLLaMA/comments/1qm94zf/ai_fallout_npc_went_rogue_lol/
Aggravating_Bread_30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm94zf
false
null
t3_1qm94zf
/r/LocalLLaMA/comments/1qm94zf/ai_fallout_npc_went_rogue_lol/
false
false
self
0
null
How to tell Claude Code about my local model’s context window size?
5
I’ve got Claude Code running pointed against my local llama.cpp instance. But I can only run with about 64k of context locally before I run out of memory. Claude Code seems to assume a bigger context window than this, and doesn’t trigger compaction at the right times, etc. Is there any way to tell CC about the context window of the model it’s using? Or maybe to trigger compaction on the server side without waiting for CC to do it?
2026-01-25T03:33:55
https://www.reddit.com/r/LocalLLaMA/comments/1qm82ao/how_to_tell_claude_code_about_my_local_models/
eapache
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm82ao
false
null
t3_1qm82ao
/r/LocalLLaMA/comments/1qm82ao/how_to_tell_claude_code_about_my_local_models/
false
false
self
5
null
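Absent a documented Claude Code setting for this, one workaround is a client- or proxy-side heuristic that estimates context fill and triggers compaction early. The sketch below is illustrative only; the 4-characters-per-token ratio is a crude assumption, and a real version would use the model's tokenizer.

```python
# Rough heuristic for deciding when a conversation is close to a model's
# context limit. The 4-chars-per-token ratio is a crude assumption.

CTX_LIMIT = 64_000          # tokens the local llama.cpp server was started with
COMPACT_AT = 0.8            # trigger compaction at 80% fill

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def should_compact(messages: list[str]) -> bool:
    used = sum(estimate_tokens(m) for m in messages)
    return used >= CTX_LIMIT * COMPACT_AT

print(should_compact(["x" * 300_000]))  # True: ~75k est. tokens > 51.2k
```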
What do I do with the generated text by llama mesh?
0
I generated a mesh using LLaMA-Mesh in LM Studio, but I have no clue what to do with the text it spat out, and asking it is no help either. https://preview.redd.it/g6ob8a9rrefg1.png?width=1217&format=png&auto=webp&s=9dbc44b47342d11b092972093ba1375514c9614c
2026-01-25T02:42:14
https://www.reddit.com/r/LocalLLaMA/comments/1qm6yi8/what_do_i_do_with_the_generated_text_by_llama_mesh/
Expensive-Time-7209
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm6yi8
false
null
t3_1qm6yi8
/r/LocalLLaMA/comments/1qm6yi8/what_do_i_do_with_the_generated_text_by_llama_mesh/
false
false
https://b.thumbs.redditm…Os8MdyD8czgU.jpg
0
null
Solving memory issues for LLMs
3
Hey folks, hope you’re having a great weekend I’m trying to run a 7B model on llama server and the problem is that after a while it starts hallucinating as original context isn’t there anymore I tried some tricks like using summarisation from a 3B model to keep context shortened but I won’t say it’s working very well Would love to hear how people here are managing maintaining context, long term memory and the whole holy grail issue of using LLMs locally
2026-01-25T02:32:47
https://www.reddit.com/r/LocalLLaMA/comments/1qm6r2f/solving_memory_issues_for_llms/
RobotsMakingDubstep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm6r2f
false
null
t3_1qm6r2f
/r/LocalLLaMA/comments/1qm6r2f/solving_memory_issues_for_llms/
false
false
self
3
null
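A common answer to the question above is a rolling buffer: keep the last few turns verbatim and fold older ones into a running summary produced by the small model. A minimal sketch, with `summarize` as a placeholder for the 3B summarizer call:

```python
# Sketch of a rolling-memory buffer: keep the last N turns verbatim and
# fold everything older into a running summary.

KEEP_LAST = 4

def summarize(old_summary: str, evicted: list[str]) -> str:
    # Placeholder: a real version would prompt the small summarizer model.
    return (old_summary + " | " + " ".join(evicted)).strip(" |")

def build_context(summary: str, turns: list[str]) -> tuple[str, list[str]]:
    """Return (updated_summary, turns_to_send_verbatim)."""
    if len(turns) <= KEEP_LAST:
        return summary, turns
    evicted, recent = turns[:-KEEP_LAST], turns[-KEEP_LAST:]
    return summarize(summary, evicted), recent

s, recent = build_context("", ["t1", "t2", "t3", "t4", "t5", "t6"])
print(s, recent)  # t1 t2 ['t3', 't4', 't5', 't6']
```

The prompt sent to the 7B model is then the summary plus the recent turns, so the original instructions survive (in compressed form) instead of silently falling off the end of the window.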
Stable-DiffCoder, a strong code diffusion LLM built on Seed-Coder
40
2026-01-25T02:22:05
https://bytedance-seed.github.io/Stable-DiffCoder/
rektide
bytedance-seed.github.io
1970-01-01T00:00:00
0
{}
1qm6iho
false
null
t3_1qm6iho
/r/LocalLLaMA/comments/1qm6iho/stablediffcoder_a_strong_code_diffusion_llm_built/
false
false
default
40
null
Convert Apple's on device Foundation Model to MLX
1
[removed]
2026-01-25T02:02:05
https://www.reddit.com/r/LocalLLaMA/comments/1qm621c/convert_apples_on_device_foundation_model_to_mlx/
scousi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm621c
false
null
t3_1qm621c
/r/LocalLLaMA/comments/1qm621c/convert_apples_on_device_foundation_model_to_mlx/
false
false
self
1
null
Local llm privacy
0
I know that running a model locally is private, but there was info about leaks and malicious behavior in quantized models. So how can we prevent this, and where can we safely download from? Also, is there an Ollama alternative, because there are rumors that they send data? How can we truly achieve privacy with well-known big models, like the newly released ones for Claude Code, gpt-oss, DeepSeek, or any other big model? This is mainly for a laptop with a strong iGPU + 4070 + R9. I just want to leverage the full capabilities of AI without the concern that, if Wi-Fi is turned off, it resends data when turned back on.
2026-01-25T01:51:57
https://www.reddit.com/r/LocalLLaMA/comments/1qm5tsf/local_llm_privacy/
Obvious-Penalty-8695
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm5tsf
false
null
t3_1qm5tsf
/r/LocalLLaMA/comments/1qm5tsf/local_llm_privacy/
false
false
self
0
null
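One concrete mitigation for the tampering worry above: verify the checksum of any downloaded weights against the value the publisher lists (for example on the Hugging Face file page) before loading. The filename and expected hash below are placeholders.

```python
import hashlib

# Verify a downloaded GGUF against a checksum published by the model
# provider. The filename and hash here are placeholders, not real values.

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large GGUFs don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

expected = "replace-with-published-hash"  # placeholder
# if sha256_of("model-Q4_K_M.gguf") != expected:   # placeholder filename
#     raise SystemExit("checksum mismatch - do not load this file")
```

This guards against corrupted or swapped files; it does not, of course, audit what the original publisher baked into the weights.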
Claude Code + Ollama: Testing Opus 4.5 vs GLM 4.7
0
2026-01-25T01:39:10
https://codesilva.com/ai/2026/01/24/claude-code-ollama-opus-vs-glm-47
edigleyssonsilva
codesilva.com
1970-01-01T00:00:00
0
{}
1qm5jh5
false
null
t3_1qm5jh5
/r/LocalLLaMA/comments/1qm5jh5/claude_code_ollama_testing_opus_45_vs_glm_47/
false
false
default
0
null
GPU Brackets/Fans for 2X GPU setup?
1
I'm trying to find a multi-GPU bracket that will both support and cool both of the GPUs. I have an ASUS 5070 Ti PRIME and an MSI Trio 4080. They are in PCIe slot 1 and PCIe slot 2, so they are very close to each other with no space in between. Is there a fan setup that can cool these GPUs down?
2026-01-25T01:30:32
https://www.reddit.com/r/LocalLLaMA/comments/1qm5cix/gpu_bracketsfans_for_2x_gpu_setup/
grabber4321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm5cix
false
null
t3_1qm5cix
/r/LocalLLaMA/comments/1qm5cix/gpu_bracketsfans_for_2x_gpu_setup/
false
false
self
1
null
"NVIDIA KILLER" Inference engine based on llama.cpp for dynamically offloading Activated Experts to GPU in real-time, Run SoTA MoE LLMs (120B+ parameter class models in 8-bit) OOM with as little as 2x RTX 5070-TI + 64GB RAM + SSD. [Poll in Comments]
0
https://preview.redd.it/u2b964aybefg1.png?width=2752&format=png&auto=webp&s=0f54461a0c62888bb546863576c128c8cab97f5c Hey all! I'm currently working at a startup which seeks to solve the "enormous VRAM issue": you need hundreds of GB of VRAM to run LLMs locally that yield results comparable to cloud inference providers. Because if we're honest, the difference in quality between current local models (7B/32B) and cloud offerings is staggeringly large. Yes, the obvious current solution is to buy as many RTX 3090s as you can source and run mini (10-20x GPU) clusters in your house, if you can afford the upfront costs, ongoing power costs, etc. Instead of this "hardware brute-forcing" strategy, we have built a working prototype (it will be showcased here soon when ready) which does dynamic expert offloading on demand. How does it work: The industry now uses Mixture-of-Experts (MoE) models as the standard architecture for state-of-the-art (SoTA) LLMs. However, even though typically only 5-10% of the model is activated during decoding (token generation), current inference engines still require you to load the entire model into VRAM because the activation path changes. The mechanism used to select the activated parameters, the "expert gate" in each layer, can be used to load exclusively the selected experts into VRAM on demand as they are needed, whilst keeping the rest of the model off the GPU. Our inference engine exploits this to only load the experts that are required on a per-layer basis. We then implement an "expert cache" which expands to use the rest of your GPU VRAM. The expert cache holds all the experts that are frequently activated based on the user's query (sequence level), such that you still get the bandwidth gains (speed) of your GPUs.
It also creates a secondary "expert cache" on your available CPU RAM (typically slower and larger); thus, it only fetches from SSD when both these expert caches are missed. Paired with a fast SSD you can expect usable speeds >=15 TPS for Qwen3-235B-A22B in 8-bit (Q8_0) with 128GB RAM + 2x RTX 5070-TI. We use a series of other algorithms and mechanisms to erode the latency of per-layer expert fetching and have been able to achieve workable speeds (~3x the speedup of ktransformers is a good/simple reference). Market poll: We are running a marketing poll to understand how much the community would be willing to pay for this capability and, if so, whether you would prefer a lifetime license or a subscription. Please note, I hate SaaS too, but we need to make money (because we have to eat also), so we are ensuring that a lifetime license is always available, since you should have the right to own your software. We would greatly appreciate your opinion; the poll (open for 7 days) is via commenting in the comments section below because the in-app poller isn't working atm. Much thanks. Options: **A)** I'm interested in running SoTA LLMs locally and would be willing to pay a monthly subscription for it, as long as it is reasonably priced (lower than the cloud $20/month standard). **B)** I'm interested in running SoTA LLMs locally but would only be willing to buy it outright as a lifetime license. **C)** I'm interested in running SoTA LLMs locally but uncertain if I would pay for it. **D)** I'm uninterested in running SoTA LLMs locally; I think small LLMs are acceptable for my use case. **E)** I can afford and prefer to keep using mini GPU clusters (>= US$10K) to run SoTA LLMs locally.
2026-01-25T01:14:55
https://www.reddit.com/r/LocalLLaMA/comments/1qm4zxj/nvidia_killer_inference_engine_based_on_llamacpp/
madSaiyanUltra_9789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm4zxj
false
null
t3_1qm4zxj
/r/LocalLLaMA/comments/1qm4zxj/nvidia_killer_inference_engine_based_on_llamacpp/
false
false
https://b.thumbs.redditm…FgQMfho58PVM.jpg
0
null
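The two-level expert cache described above behaves like an LRU cache in front of a slower tier, so its feasibility hinges on routing locality: if a small hot set of experts dominates, the VRAM tier absorbs most fetches. A toy simulation (cache size and access pattern purely illustrative):

```python
from collections import OrderedDict

# Toy model of the VRAM expert cache: experts are fetched into a
# fixed-size LRU cache; misses would fall through to RAM/SSD tiers.

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity, self.store = capacity, OrderedDict()
        self.hits = self.misses = 0

    def access(self, expert_id: int) -> bool:
        if expert_id in self.store:
            self.store.move_to_end(expert_id)  # mark most-recently-used
            self.hits += 1
            return True
        self.misses += 1
        self.store[expert_id] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least-recently-used
        return False

vram = LRUCache(capacity=32)
# Skewed access pattern: a hot set of 8 experts dominates, with an
# occasional cold expert, roughly as MoE routing often behaves.
for step in range(1000):
    vram.access(step % 8 if step % 10 else step)
print(f"hit rate: {vram.hits / 1000:.0%}")  # → hit rate: 89%
```

With random (unskewed) routing the hit rate collapses and every miss pays RAM or SSD latency, which is why the post's per-layer prefetching and latency-hiding tricks matter.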
Built a local-first agent workbench (folder-scoped tasks + approvals)! looking for help adding Ollama/llama.cpp provider
0
I’m building CoWork‑OSS — an open-source, local-first “agent workbench” for running multi-step tasks inside a folder-scoped workspace on macOS. What it does: - You pick a workspace folder; all file ops are constrained to it (path traversal protection) - Agent runs plan → execute → observe loops - Explicit approval gates for destructive operations (delete, bulk rename, etc.) - Timeline shows every tool call + artifacts created/modified - Built-in skills for generating docs (Word/PDF), spreadsheets (Excel), and slide decks (PPTX) Tech stack: Electron + React + TypeScript + SQLite. Model backends today: Anthropic API + AWS Bedrock (BYOK). But I want this to be useful for local LLM users too — the app has a provider abstraction layer, and I’m looking for guidance / contributors to add local providers. Questions for the community: 1) What local backend should I target first: Ollama, llama.cpp server, LM Studio, OpenAI-compatible local endpoints, something else? 2) Any gotchas with streaming + tool-use style loops on local models? 3) If you’ve built “agent runtimes” on local models: what guardrails/UX made it trustworthy? Repo: https://github.com/mesutfelat/cowork-oss
2026-01-25T01:11:28
https://www.reddit.com/r/LocalLLaMA/comments/1qm4x5z/built_a_localfirst_agent_workbench_folderscoped/
mfelat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm4x5z
false
null
t3_1qm4x5z
/r/LocalLLaMA/comments/1qm4x5z/built_a_localfirst_agent_workbench_folderscoped/
false
false
self
0
{'enabled': False, 'images': [{'id': 'zpgQS-7ZsMUEfiZQxUbCjYN7BYwVZNU50Sxpo9tDDbM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zpgQS-7ZsMUEfiZQxUbCjYN7BYwVZNU50Sxpo9tDDbM.png?width=108&crop=smart&auto=webp&s=ca7542e2b7ee63a09308b4b103f2a88948fd1674', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zpgQS-7ZsMUEfiZQxUbCjYN7BYwVZNU50Sxpo9tDDbM.png?width=216&crop=smart&auto=webp&s=7bc68f5343e28e0267d8ee452bb3ca3dc9927a7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zpgQS-7ZsMUEfiZQxUbCjYN7BYwVZNU50Sxpo9tDDbM.png?width=320&crop=smart&auto=webp&s=7747e9e1898b477c5498dcbcc838590207178890', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zpgQS-7ZsMUEfiZQxUbCjYN7BYwVZNU50Sxpo9tDDbM.png?width=640&crop=smart&auto=webp&s=2eba68346d48ec85ce4abf4676266d50a5a6af64', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zpgQS-7ZsMUEfiZQxUbCjYN7BYwVZNU50Sxpo9tDDbM.png?width=960&crop=smart&auto=webp&s=07e501cd2e38c75ae16ffdb754e8a667abc30840', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zpgQS-7ZsMUEfiZQxUbCjYN7BYwVZNU50Sxpo9tDDbM.png?width=1080&crop=smart&auto=webp&s=c3b8b8f379d3b308722e0d9832c644552eeebbfe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zpgQS-7ZsMUEfiZQxUbCjYN7BYwVZNU50Sxpo9tDDbM.png?auto=webp&s=6609c9ef6f265a3ef400ba07686d46b5154ea362', 'width': 1200}, 'variants': {}}]}
Anyone planning to get AMD Gorgon Halo (495) when it drops?
11
It looks like AMD will be releasing the successor to the AI Max 395+ fairly soon. it’s mostly an incremental improvement, but it will have slightly higher clock speeds as well as 8533MT RAM as opposed to the current 8000MT. I’m curious how much of a difference this will make on tps. Are any of you planning to get it when it drops?
2026-01-25T00:49:51
https://www.reddit.com/r/LocalLLaMA/comments/1qm4fbp/anyone_planing_to_get_amd_gorgon_halo_495_when_it/
SpicyWangz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm4fbp
false
null
t3_1qm4fbp
/r/LocalLLaMA/comments/1qm4fbp/anyone_planing_to_get_amd_gorgon_halo_495_when_it/
false
false
self
11
null
GLM 4.7 vs MiniMax-M2.1 vs DeepSeek 3.2 for coding?
22
I use Cline/Roo Code. I wonder what option is better for coding. I tried MiniMax M2.1 since it was free for a while as an offer and I was pleased but I wonder if the others are better before I buy anything.
2026-01-25T00:41:56
https://www.reddit.com/r/LocalLLaMA/comments/1qm48ux/glm_47_vs_minimaxm21_vs_deepseek_32_for_coding/
ghulamalchik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm48ux
false
null
t3_1qm48ux
/r/LocalLLaMA/comments/1qm48ux/glm_47_vs_minimaxm21_vs_deepseek_32_for_coding/
false
false
self
22
null
Dual 3090s & GLM-4.7-Flash: 1st prompt is great, then logic collapses. Is local AI worth the $5/day power bill?
32
I recently upgraded my family's video cards, which gave me an excuse to inherit two RTX 3090s and build a dedicated local AI rig out of parts I had lying around. My goal was privacy, home automation integration, and getting into "vibe coding" (learning UE5, Home Assistant YAML, etc.). I love the *idea* of owning my data, but I'm hitting a wall on the practical value vs. cost. The Hardware Cost * Rig: i7 14700K, 64GB DDR5, Dual RTX 3090s (limited to 300W each). * Power: My peak rate is ~$0.65/kWh. A few hours of tinkering burns ~2kW, meaning this rig could easily cost me **$5/day** in electricity if I use it heavily. * Comparison: For that price, I could subscribe to Claude Sonnet/GPT-4 and not worry about heat or setup. I'm running a Proxmox LXC with llama-server and Open WebUI. * Model: GLM-4.7-Flash-UD-Q8_K_XL.gguf (Unsloth build). * Performance: ~2,000 t/s prompt processing, ~80 t/s generation. The problem is rapid degradation. I tested it with the standard "Make a Flappy Bird game" prompt. 1. Turn 1: Works great. Good code, minor issues. 2. Turn 2 (Fixing issues): The logic falls apart. It hangs, stops short, or hallucinates. Every subsequent prompt gets worse. My Launch Command: Bash ExecStart=/opt/llama.cpp/build/bin/llama-server \ -m /opt/llama.cpp/models/GLM-4.7-Flash-UD-Q8_K_XL.gguf \ --temp 0.7 --top-p 1.0 --min-p 0.01 --repeat-penalty 1.0 \ -ngl 99 -c 65536 -t -1 --host 0.0.0.0 --port 8080 \ --parallel 1 --n-predict 4096 --flash-attn on --jinja --fit on Am I doing something wrong with my parameters (is `repeat-penalty 1.0` killing the logic?), or is this just the state of 30B local models right now? Given my high power costs and the results I am seeing, there is limited value in the LLM for me outside of some perceived data/privacy control, which I'm not super concerned with. Is there a hybrid setup where I use local AI for RAG/docs and a paid API for the final code generation, getting the best of both worlds, or something I am missing?
I like messing around and learning, and these past 2 weeks I've learned so much, but that's all it's been. I'm about to just sell my system and figure out paid services and local tools instead. Talk me out of it?
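For anyone sanity-checking the electricity figure, the $5/day claim holds up as rough arithmetic from the numbers in the post (\~2kW draw at $0.65/kWh peak rate); the hours-per-day is my own assumption:

```python
def daily_cost(draw_kw: float, hours: float, rate_per_kwh: float) -> float:
    """Daily electricity cost: kW draw x hours of use x price per kWh."""
    return draw_kw * hours * rate_per_kwh

# ~2 kW under load, ~4 hours/day of heavy tinkering (assumed), $0.65/kWh peak
print(f"${daily_cost(2.0, 4.0, 0.65):.2f}/day")  # -> $5.20/day
```

At off-peak rates the math changes a lot, so time-of-use scheduling is worth factoring into any local-vs-API comparison.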
2026-01-25T00:28:59
https://www.reddit.com/r/LocalLLaMA/comments/1qm3xxm/dual_3090s_glm47flash_1st_prompt_is_great_then/
Merstin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm3xxm
false
null
t3_1qm3xxm
/r/LocalLLaMA/comments/1qm3xxm/dual_3090s_glm47flash_1st_prompt_is_great_then/
false
false
self
32
null
NVIDIA’s real moat isn’t hardware, it’s 4 million developers
0
I couldn't stop thinking about Theo's "Why NVIDIA is dying" video. The thesis felt important enough to verify. So I dug through SEC filings, earnings reports, and technical benchmarks. What I found: * NVIDIA isn't dying. Its $35.1B quarterly revenue is up 94% * Yes, market share dropped (90% → 70-80%), but the pie is growing faster * Groq and Cerebras have impressive chips, but asterisks everywhere * The real moat: 4 million devs can't just abandon 20 years of CUDA tooling * Plot twist: the biggest threat is Google/Amazon/Microsoft, not startups Deeper piece with Cerebras and Groq factored in at [https://medium.com/@jpcaparas/nvidias-real-moat-isn-t-hardware-it-s-4-million-developers-648d6aeb1226?sk=82ee7baf9290da1eb93efd9d34c4c7b4](https://medium.com/@jpcaparas/nvidias-real-moat-isn-t-hardware-it-s-4-million-developers-648d6aeb1226?sk=82ee7baf9290da1eb93efd9d34c4c7b4)
2026-01-25T00:21:21
https://medium.com/@jpcaparas/nvidias-real-moat-isn-t-hardware-it-s-4-million-developers-648d6aeb1226?sk=82ee7baf9290da1eb93efd9d34c4c7b4
jpcaparas
medium.com
1970-01-01T00:00:00
0
{}
1qm3rdg
false
null
t3_1qm3rdg
/r/LocalLLaMA/comments/1qm3rdg/nvidias_real_moat_isnt_hardware_its_4_million/
false
false
default
0
null
Instruction following benchmark: 6 constraints, every model failed something — DeepSeek at 6.76, raw responses included
2
Running daily 10×10 blind peer evaluations. Today's test pushed models to their limits.

**The task:**

> Six constraints. Some conflict (try writing natural English without 'e').

**Results:**

https://preview.redd.it/tifjnvlr0efg1.png?width=738&format=png&auto=webp&s=7e8bc0153524212ae854b510bbcaeb743aee5e53

Notice the scores. Yesterday's winner got 9.32. The *winner* here got 7.42.

**DeepSeek's actual response:**

> Digital world grows! Chips and logic flowing fast! Clouds hold our data! I picked topics for a haiku on digital stuff! I did not allow that fifth glyph in my writing! Can you grasp how hard this task is!

Nearly clean on the lipogram, though "picked" still slips in an 'e'. The "!" at the end of the question is technically correct per the instructions (every sentence ends with !), though it conflicts with normal question formatting.

**MiMo failed harder:**

> Digital light flows
> Smart chips hum with vast brain work
> Data flows fast now!
> I thought about avoiding 'e' in all words! I aim to craft a short haiku and three lines! Do you find this task hard to do?

* First two haiku lines: no exclamation points
* "three" contains 'e'
* Last sentence ends with ? instead of !

**Judge behavior was wild:**

|Judge|Avg Score Given|
|:-|:-|
|GPT-5.2-Codex|3.99|
|DeepSeek V3.2|7.21|
|Gemini 3 Pro|10.00|

GPT-5.2-Codex caught everything. Gemini 3 Pro gave everyone perfect 10s.

**For local testing:** This prompt is brutal but reproducible. Try it on your local instances and see how they handle conflicting constraints. Raw JSON available — DM for files.

**Phase 3 coming:** Public data archive where all this is downloadable.

Full analysis: [https://open.substack.com/pub/themultivac/p/every-model-failed-this-test?r=72olj0&utm\_campaign=post&utm\_medium=web&showWelcomeOnShare=true](https://open.substack.com/pub/themultivac/p/every-model-failed-this-test?r=72olj0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true)
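If you want to grade local runs mechanically rather than by eye, two of the constraints are easy to script. A minimal sketch (the exact constraint wording is my reading of the post; sample strings are taken from the model responses quoted above):

```python
import re

def check_lipogram(text: str, banned: str = "e") -> bool:
    """True if the banned letter never appears (case-insensitive)."""
    return banned.lower() not in text.lower()

def check_exclamations(text: str) -> bool:
    """True if no sentence ends in '.' or '?' (i.e. everything ends with '!')."""
    return not re.search(r"[.?](\s|$)", text.strip())

print(check_lipogram("Digital world grows!"))                # True
print(check_lipogram("I aim to craft three lines!"))         # False: "three" has an 'e'
print(check_exclamations("Do you find this task hard to do?"))  # False: ends with '?'
```

Chaining checks like these per constraint gives a deterministic score you can compare against what each judge model awarded.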
2026-01-25T00:11:13
https://www.reddit.com/r/LocalLLaMA/comments/1qm3iya/instruction_following_benchmark_6_constraints/
Silver_Raspberry_811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qm3iya
false
null
t3_1qm3iya
/r/LocalLLaMA/comments/1qm3iya/instruction_following_benchmark_6_constraints/
false
false
https://b.thumbs.redditm…Nnv9XBw6a7MQ.jpg
2
null
So I am still playing around with my Jetsons and tonight I got a 14B model running on them combined.
10
Soooo... in the age of everything being **BEEP** expensive, I have decided to keep playing around with the 3 Jetson Orin Nano Supers that I have in my rack, and I just got Qwen2.5-Coder-14B running on them somewhat stable... and kinda slow, but that goes without saying.

Setup is llama.cpp with the RPC server on two of the devices, talking to the host process on the first device. I'm only getting about 3-4 tokens/sec over a 2.5GbE network across all the devices, but that is to be expected. I know I can get about 20-22 tokens/sec on one Jetson with a small model, but the goal for me today was the larger model, and after clearing as much memory as possible on all 3 devices I finally got it working.

I asked it to analyze some Python code I wrote the other day, and it looks like it's working, and the code optimizations it suggested also look to be working. It's slow, but it works. SO... NICE and WOOP!! ;)

Also just felt like sharing it here, even if some of you might be doing crazier stuff ;)
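For a feel of what those throughput numbers mean in wall-clock terms, a quick back-of-envelope (the 500-token reply length is just an illustrative assumption):

```python
def eta_seconds(n_tokens: int, tok_per_sec: float) -> float:
    """Time to generate n_tokens at a given generation rate."""
    return n_tokens / tok_per_sec

# Low/high end of the 3-Jetson RPC cluster vs. a small model on one Jetson
for rate in (3.0, 4.0, 22.0):
    print(f"{rate:>4.0f} tok/s -> 500-token reply in ~{eta_seconds(500, rate) / 60:.1f} min")
```

So the 14B-over-RPC setup trades roughly a 6-7x slowdown for the ability to fit a model that no single Jetson could hold.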
2026-01-25T00:02:30
https://i.redd.it/k904hw5tydfg1.png
Von_plaf
i.redd.it
1970-01-01T00:00:00
0
{}
1qm3bnf
false
null
t3_1qm3bnf
/r/LocalLLaMA/comments/1qm3bnf/so_i_am_still_playing_around_with_my_jetsons_and/
false
false
default
10
{'enabled': True, 'images': [{'id': 'k904hw5tydfg1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/k904hw5tydfg1.png?width=108&crop=smart&auto=webp&s=8cf7b20be1c77b787277bfdf0ff43d362b380cb6', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/k904hw5tydfg1.png?width=216&crop=smart&auto=webp&s=d0f74b19702464f1afcf4fbb5f8effa4a8ddab38', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/k904hw5tydfg1.png?width=320&crop=smart&auto=webp&s=0fdb8a3945ba852133b28af23a65598a1ab2a4c6', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/k904hw5tydfg1.png?width=640&crop=smart&auto=webp&s=e9ed663685b3d34f5c63505879cb53e82732a950', 'width': 640}, {'height': 522, 'url': 'https://preview.redd.it/k904hw5tydfg1.png?width=960&crop=smart&auto=webp&s=a4128eedf2a56219f547b0aa0ba3fb005a1a407a', 'width': 960}, {'height': 587, 'url': 'https://preview.redd.it/k904hw5tydfg1.png?width=1080&crop=smart&auto=webp&s=fc8a56be6bfbe087cd72df44bc79c9d1ea4e0870', 'width': 1080}], 'source': {'height': 1390, 'url': 'https://preview.redd.it/k904hw5tydfg1.png?auto=webp&s=982390c7d25c22fc80a9cafaa12c8554cc5592b8', 'width': 2556}, 'variants': {}}]}