title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I benchmarked the newest 40 AI models (Feb 2026) | 0 | Everyone is talking about the viral Kimi k2.5 and Claude Opus 4.6 right now. But while the world was watching the giants, I spent the last week benchmarking 40 of the newest models on the market to see what's actually happening with Price vs. Performance.
**The TL;DR:** The market has split into two extremes. "Mid-range" models are now a waste of money. You should either be in "God Mode" or "Flash Mode."
**Here is the hard data from Week 7:**
https://preview.redd.it/l97g5c5ttoig1.png?width=1920&format=png&auto=webp&s=79d231c40349c06789e5602c5260900ca62cc8e5
**1. The "Kimi" Situation** I know everyone wants to know about Kimi k2.5. Bad news: I couldn't even get it to complete the benchmark. The API returned "No Content" errors repeatedly—it's likely suffering from success/overload. I did test `Kimi-k2-Thinking`. It works, but it's a deep thinker (\~15 TPS). Do not use this for chatbots; use it for complex reasoning only.
**2. The New Speed Kings (Liquid & Mistral)** If you are building agents, latency is the only metric that matters.
* Liquid LFM 2.5: Clocked in at ~359 tokens/sec. This is currently the fastest model I've ever tested. It’s effectively instant.
* Ministral 3B: The runner-up at ~293 tokens/sec.
https://preview.redd.it/ckqsqjx2uoig1.png?width=1920&format=png&auto=webp&s=fb2f85712f24a5a6626e848b3e93cc3c8fe000bd
**3. The Value Play** If you are paying for your own tokens, **Ministral 3B** is the undisputed king right now. At **$0.10/1M input**, it is ~17x cheaper than GPT-5.2 Codex and ~40% faster.
**My Verdict:** Stop paying $0.50 - $1.00 for "decent" models. They are the new "Middle Class," and they are dead.
* Need IQ? Pay the tax for Opus/GPT-5.
* Need Speed? Use Liquid/Mistral for pennies.
* Everything in between is burning budget.
**I’ve open-sourced the raw benchmark logs (CSV) for all 40 models here:** [**https://the-compute-index.beehiiv.com/**](https://the-compute-index.beehiiv.com/p/i-benchmarked-the-newest-40-ai-models-the-middle-class-is-dead-week-7-2026)
Let me know if you're seeing similar speeds in production. The Liquid numbers seem almost too good to be true, but they held up over multiple runs. | 2026-02-10T15:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r14bqk/i_benchmarked_the_newest_40_ai_models_feb_2026/ | Vilxs2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r14bqk | false | null | t3_1r14bqk | /r/LocalLLaMA/comments/1r14bqk/i_benchmarked_the_newest_40_ai_models_feb_2026/ | false | false | 0 | null | |
OpenClaw is popping up on cheap VPSs. What do you think of a more secure setup? | 0 | Over the last week I’ve been watching people deploy OpenClaw in *very* different ways.
On one side, **Cloudflare** quietly shipped a pretty solid open source setup ([moltworker](https://github.com/cloudflare/moltworker)): isolated, secure environments where you can deploy OpenClaw without thinking too much about infra. It’s relatively cheap, you get an admin panel, and a lot of the scary stuff (networking, isolation, exposure) is handled for you.
On the other side, I keep seeing 1-click VPS setups flying around. Vibe-coded deployers, often built by people who’ve never touched GCP or AWS, exposing servers directly to the internet without really understanding what that means. It *works*, but it also feels a bit like we’re speed running past some important lessons about security.
I ended up using the Cloudflare approach to deploy OpenClaw for a few friends who just wanted something stable and safe without becoming infra experts overnight. It worked well enough that I started thinking: maybe this should be easier to share.
So I put together a small setup to help others do the same (**getclaw.sh**). Before I start pointing people to it, I wanted to sanity-check with this community:
* What do you think about the Cloudflare-based approach vs cheap VPS deployments?
* Is the tradeoff (less control, more safety) worth it for most users?
* Anything you’d absolutely want to see (or avoid) in a managed OpenClaw deployment setup?
Not trying to sell anything here. Im genuinely curious what the LocalLLaMA crowd thinks before I push this further. | 2026-02-10T15:41:18 | AmineAfia | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r144p0 | false | null | t3_1r144p0 | /r/LocalLLaMA/comments/1r144p0/openclaw_is_popping_up_on_cheap_vpss_what_do_you/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'zqluisxzqoig1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/zqluisxzqoig1.png?width=108&crop=smart&auto=webp&s=fe105cdd546a685d9414ea0b1e90a5227f973d04', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/zqluisxzqoig1.png?width=216&crop=smart&auto=webp&s=b6b7997169c03221ca63d9e006efdcb3af3bc85f', 'width': 216}, {'height': 235, 'url': 'https://preview.redd.it/zqluisxzqoig1.png?width=320&crop=smart&auto=webp&s=7f707addef20e2bb63600853d6a733f5061d34b2', 'width': 320}, {'height': 470, 'url': 'https://preview.redd.it/zqluisxzqoig1.png?width=640&crop=smart&auto=webp&s=fe9a12297831631b8be6f0119b56d5fb38b8b3c0', 'width': 640}, {'height': 706, 'url': 'https://preview.redd.it/zqluisxzqoig1.png?width=960&crop=smart&auto=webp&s=b3b1502e7f6e6a2abc1b123f08751025703697af', 'width': 960}, {'height': 794, 'url': 'https://preview.redd.it/zqluisxzqoig1.png?width=1080&crop=smart&auto=webp&s=2d49beecf3389dd1d26f45778bf42b6c61e18b81', 'width': 1080}], 'source': {'height': 980, 'url': 'https://preview.redd.it/zqluisxzqoig1.png?auto=webp&s=398cc77d1aa7015c4e3f1557db2c47bad5b04c21', 'width': 1332}, 'variants': {}}]} | ||
I built a local-first AI chat with autonomous memory and knowledge graphs - looking for feedback | 3 | I have been building this project for a while and it just hit a point where I want other people to try it and tell me what is missing, what is broken, and what would make it actually useful for them.
It is called Atlas. It is a personal AI chat app that runs on your machine and connects to whatever LLM provider you want (OpenAI, Anthropic, LM Studio, Ollama). The thing that makes it different from just using ChatGPT in a browser: it builds a memory of you over time. You do not have to tell it to remember things. It pulls out facts, preferences, decisions, and context from every conversation and brings them back automatically when they are relevant.
For example, if you told it your project uses FastAPI and PostgreSQL two weeks ago, and today you ask "how should I handle migrations," it already knows your stack. No re-explaining. No pasting context. It just knows.
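To make the idea concrete, here is a toy sketch of what that memory loop can look like. This is not Atlas's actual code: the extraction prompt, the collection name, and the `llm` callable are invented for illustration, and the real app layers personas, a knowledge graph, and relevance scoring on top.

```python
import json
import uuid
import chromadb

chroma = chromadb.Client()
memory = chroma.get_or_create_collection("user_memory")

EXTRACT_PROMPT = (
    "List durable facts about the user from this conversation as a JSON array "
    "of short sentences. Return [] if there are none.\n\n"
)

def remember(llm, conversation: str) -> None:
    """llm: any callable that takes a prompt string and returns the model's reply."""
    facts = json.loads(llm(EXTRACT_PROMPT + conversation))
    if facts:
        memory.add(ids=[str(uuid.uuid4()) for _ in facts], documents=facts)

def recall(user_message: str, k: int = 5) -> str:
    """Pull the k most similar stored facts and format them for the next prompt."""
    hits = memory.query(query_texts=[user_message], n_results=k)
    docs = hits["documents"][0] if hits["documents"] else []
    return "Relevant context:\n" + "\n".join(f"- {d}" for d in docs)
```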
**Here is what it does right now:**
* **Multi-provider LLM support.** OpenAI, Anthropic, LM Studio, Ollama. Switch between cloud and local models mid-conversation. Use whatever you already have.
* **Autonomous memory.** Every conversation gets analyzed. Facts, preferences, skills, goals, relationships are extracted and stored locally. They come back when relevant. You never have to manage it.
* **Knowledge graph.** Entities and relationships are extracted and stored in Neo4j (optional, free tier). It builds a map of how things in your life and work connect over time.
* **Web search.** When you need current info, it searches Brave, DuckDuckGo, and Wikipedia automatically. PII is scrubbed before anything leaves your machine.
* **Document upload.** Drop in PDFs, Word docs, text files, or Markdown. Ask questions about them. Standard RAG pipeline with chunking and vector search.
* **Notes, personas, plugins.** Save notes (or let the AI save them for you), create custom personas with different system prompts, extend with plugins.
* **Fully local data.** SQLite and ChromaDB on your machine. Nothing is stored in the cloud. The only external calls are to the LLM API you configure and web search if you use it.
**Install is simple.** Download the ZIP, extract, double-click start.bat. Python is embedded so you do not need anything installed. First run takes about 3-5 minutes to set up, after that it launches instantly into the system tray. Windows only for now.
**What I am looking for:**
* What features would make this actually useful for your daily workflow?
* What is missing that you would expect from something like this?
* Anyone interested in adding Mac/Linux support?
* Ideas for new retrieval sources, plugins, or integrations
* General feedback on the UI, the install experience, anything that feels off
I am one person building this in my spare time so I cannot promise I will ship everything, but I genuinely want to know what people would use this for and what would make it worth keeping around.
[https://github.com/chatwithatlass/atlas-chat](https://github.com/chatwithatlass/atlas-chat)
Free to use and modify for personal use. Non-commercial license. | 2026-02-10T15:29:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r13tm8/i_built_a_localfirst_ai_chat_with_autonomous/ | EchoOfIntent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13tm8 | false | null | t3_1r13tm8 | /r/LocalLLaMA/comments/1r13tm8/i_built_a_localfirst_ai_chat_with_autonomous/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'EYQCjqTJjkmfieYYPCQKPjNzgS62K2X04KeHHZ8RDPw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EYQCjqTJjkmfieYYPCQKPjNzgS62K2X04KeHHZ8RDPw.png?width=108&crop=smart&auto=webp&s=860bf037cda48b802d17cd190dab6f55c09a8576', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EYQCjqTJjkmfieYYPCQKPjNzgS62K2X04KeHHZ8RDPw.png?width=216&crop=smart&auto=webp&s=704ed611a5ed88cc31eccb6e04281ca65c060c03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EYQCjqTJjkmfieYYPCQKPjNzgS62K2X04KeHHZ8RDPw.png?width=320&crop=smart&auto=webp&s=47d090ccbf8927c4c56426f792b9f7b4fb1426dd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EYQCjqTJjkmfieYYPCQKPjNzgS62K2X04KeHHZ8RDPw.png?width=640&crop=smart&auto=webp&s=74334c2fb23018996beaca320aee75f8b33d6735', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EYQCjqTJjkmfieYYPCQKPjNzgS62K2X04KeHHZ8RDPw.png?width=960&crop=smart&auto=webp&s=6458f6f674777aaa4588042d8e7ec242dd92529b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EYQCjqTJjkmfieYYPCQKPjNzgS62K2X04KeHHZ8RDPw.png?width=1080&crop=smart&auto=webp&s=73c578e744557ebf3c786b2c14a06096dc5378a7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EYQCjqTJjkmfieYYPCQKPjNzgS62K2X04KeHHZ8RDPw.png?auto=webp&s=9d284b35384f7ff8a95d589050dadd7af5cbd026', 'width': 1200}, 'variants': {}}]} |
MLX Omni Engine | 9 | Hello, I wanted to share a project I'm working on that attempts to extend LM Studio's MLX engine to support running embedding models, audio models, and hopefully eventually real-time audio models like Moshi.
The idea is that the engine can be started up and then connected to any compatible client via its Ollama or Anthropic or OpenAI FastAPI endpoints, giving a client the ability to run a vast number of MLX models.
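For example (a minimal sketch, not taken from the repo's docs; the port and model name below are assumptions), a client talks to the running engine the same way it talks to any OpenAI-compatible server:

```python
from openai import OpenAI

# Point the standard OpenAI client at the locally running engine.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="mlx-community/Qwen2.5-7B-Instruct-4bit",
    messages=[{"role": "user", "content": "One sentence on why MLX is fast on Apple Silicon."}],
)
print(resp.choices[0].message.content)
```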
The reason I'm building this is that I find MLX models run better on Apple Silicon (when they fit in memory) compared to the GGUF models that Ollama uses. Also, Ollama has been pushing cloud usage that I don't really like, and I would prefer a bare bones server that just takes requests to run whatever ML model I want fast and efficiently.
If you want to check it out and offer notes, advice, or a pull request on how to improve it to better fit the aforementioned vision, I'm all ears as this is my first attempt at an open source project like this. Also, If you think this is a stupid and useless project, I'm open to that advice as well.
Here is the GitHub link to it: [https://github.com/NTarek4741/mlx-engine](https://github.com/NTarek4741/mlx-engine) | 2026-02-10T15:26:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r13qb2/mlx_omni_engine/ | Fast_Ferret4607 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13qb2 | false | null | t3_1r13qb2 | /r/LocalLLaMA/comments/1r13qb2/mlx_omni_engine/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'UIi36axOLdKw8NaqBJFT2dcODsc48eZM9xgZJpwk_R0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UIi36axOLdKw8NaqBJFT2dcODsc48eZM9xgZJpwk_R0.png?width=108&crop=smart&auto=webp&s=f73a71a72c473f485497f80a44bdf64216056e72', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UIi36axOLdKw8NaqBJFT2dcODsc48eZM9xgZJpwk_R0.png?width=216&crop=smart&auto=webp&s=187ff326468f3848cf835c13dba92aa764cfe740', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UIi36axOLdKw8NaqBJFT2dcODsc48eZM9xgZJpwk_R0.png?width=320&crop=smart&auto=webp&s=030dca00a1c45e3e74861e13d88323cd3fbb6adb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UIi36axOLdKw8NaqBJFT2dcODsc48eZM9xgZJpwk_R0.png?width=640&crop=smart&auto=webp&s=870373ec1ab403552a9373304fd9e55120e3e91a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UIi36axOLdKw8NaqBJFT2dcODsc48eZM9xgZJpwk_R0.png?width=960&crop=smart&auto=webp&s=7292da74ff417865565356d94fed3e8700703492', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UIi36axOLdKw8NaqBJFT2dcODsc48eZM9xgZJpwk_R0.png?width=1080&crop=smart&auto=webp&s=6934237b795191b887b6dde7c81aa8e71b6c2140', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UIi36axOLdKw8NaqBJFT2dcODsc48eZM9xgZJpwk_R0.png?auto=webp&s=a7601452d05cd7e30e8a62f49890cbd0f38e0dfd', 'width': 1200}, 'variants': {}}]} |
Built a "hello world" for AI agent payments - one command to see a real USDC micropayment | 0 | Just shipped a simple demo that shows an AI agent paying for an API using x402 (HTTP 402 Payment Required).
Try it:
npx x402-hello --new-wallet
# Fund wallet with ~$0.01 USDC + 0.01 SOL
WALLET_KEY="[...]" npx x402-hello
What happens:
1. Agent requests paid API → gets 402 with payment requirements
2. Agent sends $0.001 USDC on Solana mainnet
3. Agent retries with tx signature as proof
4. Server verifies on-chain → returns data
The whole thing takes about 2 seconds. Payment settles in ~400ms.
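For illustration only, the client side of that loop might look roughly like this. The endpoint, header name, and `send_usdc()` helper are placeholders rather than the actual x402 wire format; see the npm package for the real client.

```python
import requests

URL = "https://example.com/paid-api"  # hypothetical paid endpoint

def send_usdc(recipient: str, amount: str) -> str:
    """Placeholder: submit a USDC transfer on Solana and return the tx signature."""
    raise NotImplementedError

resp = requests.get(URL)
if resp.status_code == 402:
    terms = resp.json()                                    # 1. payment requirements
    sig = send_usdc(terms["recipient"], terms["amount"])   # 2. pay on-chain
    resp = requests.get(URL, headers={"X-Payment-Signature": sig})  # 3. retry with proof
print(resp.status_code, resp.json())                       # 4. server verified -> data
```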
This is for AI agents that need to pay for resources autonomously - no API keys, no subscriptions, just micropayments.
Built on Solana because it's the only chain fast/cheap enough for this use case.
npm: [https://npmjs.com/package/x402-hello](https://npmjs.com/package/x402-hello)
Demo: [https://noryx402.com](https://noryx402.com)
Happy to answer questions! | 2026-02-10T15:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r13pus/built_a_hello_world_for_ai_agent_payments_one/ | BLubClub89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13pus | false | null | t3_1r13pus | /r/LocalLLaMA/comments/1r13pus/built_a_hello_world_for_ai_agent_payments_one/ | false | false | self | 0 | null |
Kimi is so smart | 289 | 2026-02-10T15:22:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r13m42/kimi_is_so_smart/ | Bernice_working_girl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13m42 | false | null | t3_1r13m42 | /r/LocalLLaMA/comments/1r13m42/kimi_is_so_smart/ | false | false | 289 | null | ||
how do i disable thinking on community version seed-oss-36b in LM studio? | 0 | for context, i've been using bytedance/seed-oss-36b in LM studio and it works fine, however it thinks by default which i don't want. There's a parameter when running the server to set thinking to 0 which does what i want.
However, I wanted to use a lower-quantized model to save some VRAM and up my context length, like Q4KS (the default lowest is Q4KM). The problem is that these community models don't seem to have the same thinking parameter on the config sidebar that the default one does, so they always think by default, which isn't desirable.
Does anyone know how to disable thinking in community versions?
Looking through my preset it looks like the field for regular seed is
"key":"ext.virtualModel.customField.bytedance.seedOss36b.thinkingBudget",
"value": "0"
But i unfortunately lack the knowledge or experience to make this work with community models via the GUI. | 2026-02-10T15:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r13ktb/how_do_i_dissble_thinking_on_community_version/ | zeronic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13ktb | false | null | t3_1r13ktb | /r/LocalLLaMA/comments/1r13ktb/how_do_i_dissble_thinking_on_community_version/ | false | false | self | 0 | null |
"How to run vLLM models locally and call them through a public API using Local Runners? | 0 | Is there a software, pipeline that run vllm e install One click
| 2026-02-10T15:17:44 | https://www.reddit.com/r/LocalLLaMA/comments/1r13ht1/how_to_run_vllm_models_locally_and_call_them/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13ht1 | false | null | t3_1r13ht1 | /r/LocalLLaMA/comments/1r13ht1/how_to_run_vllm_models_locally_and_call_them/ | false | false | self | 0 | null |
Plenty of medium-size (20-80B) models in the last 3 months. How do those work for you? | 42 | We got plenty of medium-size (20-80B) models in the last 3 months, ahead of more upcoming models. These models are good even for 24/32GB VRAM + RAM @ Q4/Q5 with decent context.
* Devstral-Small-2-24B-Instruct-2512
* Olmo-3.1-32B
* GLM-4.7-Flash
* Nemotron-Nano-30B
* Qwen3-Coder-Next & Qwen3-Next-80B
* Kimi-Linear-48B-A3B
I think most issues (including the FA issue) have been fixed for GLM-4.7-Flash.
Both Qwen3-Next models went through fixes/optimizations & require new GGUFs to use with the latest llama.cpp version, which most folks are already aware of.
Both Nemotron-Nano-30B & Qwen3-Coder-Next have MXFP4 quants. Anyone tried those? How are they?
Anyone compared t/s benchmarks for Qwen3-Next-80B & Qwen3-Coder-Next? Both are the same size & architecture, so I want to know this.
Recently we got GGUF for Kimi-Linear-48B-A3B.
Are these models replacing any large 100B models?
^(Just posting this single thread instead of 4-5 separate threads.) | 2026-02-10T15:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1r13ffw/plenty_of_medium_size2080b_models_in_last_3/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13ffw | false | null | t3_1r13ffw | /r/LocalLLaMA/comments/1r13ffw/plenty_of_medium_size2080b_models_in_last_3/ | false | false | self | 42 | null |
Seeking feedback: lightweight “change notes + metadata + diff evidence” searchable knowledge base to navigate complex HIS code paths | 1 | I’m a backend intern working on an HIS project. While learning the codebase, I’ve noticed the call chains are long and the rules are pretty complex, so I’m exploring a workflow to make changes more reusable and traceable: after each feature/bugfix, use an LLM to produce a short summary doc (what changed, scope/impact, key rules, and test notes), store some structured metadata (modules/endpoints/DB tables/config keys), and keep the relevant code diff as evidence. When a new task comes in, during the planning phase we’d search these docs/metadata to reuse similar designs and to catch missing rules or side effects earlier; and when something breaks in testing/production, we could go from symptoms → evidence → changes to narrow down root causes faster. Does this sound realistic in a real team? What are the biggest pitfalls (maintenance cost, misleading summaries, retrieval quality, etc.) ?Any feedback or similar experiences would be super helpful. Thanks! | 2026-02-10T15:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r13dxj/seeking_feedback_lightweight_change_notes/ | Brief-Entertainer427 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13dxj | false | null | t3_1r13dxj | /r/LocalLLaMA/comments/1r13dxj/seeking_feedback_lightweight_change_notes/ | false | false | self | 1 | null |
ForgeAI v0.1.0 - Desktop tool for model inspection,
merging & training (cross-platform) | 1 | [removed] | 2026-02-10T15:12:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r13d8s/forgeai_v010_desktop_tool_for_model_inspection/ | DarkEngine774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r13d8s | false | null | t3_1r13d8s | /r/LocalLLaMA/comments/1r13d8s/forgeai_v010_desktop_tool_for_model_inspection/ | false | false | self | 1 | null |
Inside the Architecture of a Pre-Configured LangChain AI Development Environment | 1 | 2026-02-10T15:12:08 | https://medium.com/@techlatest.net/inside-the-architecture-of-a-pre-configured-langchain-ai-development-environment-f360004ab535 | techlatest_net | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1r13cf0 | false | null | t3_1r13cf0 | /r/LocalLLaMA/comments/1r13cf0/inside_the_architecture_of_a_preconfigured/ | false | false | default | 1 | null | |
Built a Customized LLM with RAG for Singaporean laws and acts. | 14 | Hello everyone,
I have always loved coding, and recently I was thinking of making an open source project. It turned out to be awesome, and I hope you guys like it. ☺️
I present Explore Singapore, which I created as an open-source intelligence engine to run retrieval-augmented generation (RAG) over Singapore's public policy documents, legal statutes, and historical archives.
The objective required building a domain-specific search engine which enables LLM systems to decrease errors by using government documents as their exclusive information source.
What my Project does :- Basically it provides legal information faster and more reliably (thanks to RAG) without going through long PDFs on government websites, and helps travellers get insights about Singapore faster.
Target Audience :- Python developers who keep hearing about "RAG" and AI agents but haven't built one yet, or are building one and are stuck somewhere; also Singaporean people (obviously!).
Comparison :- Raw LLM vs RAG-based LLM. To test the RAG implementation I compared the output of my logic code against the standard models (Gemini/Arcee AI/Groq) and the same models with custom system instructions plus RAG. The results were shocking. Query :- "Can I fly a drone in a public park?" Standard LLM response :- "gave generic advice about 'checking local laws' and safety guidelines"
Customized LLM with RAG :- "cited the Air Navigation Act, specified the 5 km no-fly zones, and linked to the CAAS permit page". The difference was clear, and it was obvious the AI was not hallucinating.
Ingestion :- The RAG architecture ingests about 594 PDFs of Singaporean laws and acts, which roughly contain 33,000 pages.
How did I do it :- I used Google Colab to build the vector database and metadata, which took me nearly 1 hour (i.e. converting the PDFs to vectors).
How accurate is it :- It's still in the development phase, but it provides near-accurate information because it uses multi-query retrieval, i.e. if a user asks "ease of doing business in Singapore", the logic breaks out the keywords "ease", "business", "Singapore" and returns the required documents from the PDFs with the page number. It's a little hard to explain, but you can check it on my webpage. It's not perfect, but hey, I am still learning.
The Tech Stack:
Ingestion: Python scripts using PyPDF2 to parse various PDF formats.
Embeddings: Hugging Face BGE-M3(1024 dimensions)
Vector Database: FAISS for similarity search.
Orchestration: LangChain.
Backend: Flask
Frontend: React and Framer.
The RAG Pipeline operates through the following process:
Chunking: The source text is divided into chunks of 150 tokens with an overlap of 50 tokens to maintain context across boundaries.
Retrieval: When a user asks a question (e.g., "What is the policy on HDB grants?"), the system queries the vector database for the top k chunks (k=1).
Synthesis: The system adds these chunks to the prompt of LLMs which produces the final response that includes citation information.
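For anyone who wants to see what the retrieval core of such a pipeline looks like, here is a stripped-down sketch. The actual project uses LangChain; the sample chunks below are made up for illustration.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("BAAI/bge-m3")  # 1024-dim dense embeddings

# In the real pipeline these come from ~33,000 pages chunked at 150 tokens / 50 overlap.
chunks = [
    "Drones above a certain weight must be registered with CAAS before outdoor flight.",
    "HDB grants are means-tested based on household income.",
]

vecs = embedder.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(vecs.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(vecs, dtype="float32"))

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [chunks[i] for i in ids[0]]

# The retrieved chunks are then appended to the LLM prompt for the final, cited answer.
print(retrieve("Can I fly a drone in a public park?"))
```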
Why did I say LLMs :- Because I wanted the system to be as crash-proof as possible, I am using Gemini as my primary LLM to provide responses, but if it fails due to API request limits or any other reason, the backup model (Arcee AI Trinity Large) can handle the requests.
Don't worry :- I have implemented different system instructions for different models so that the result is a good-quality product.
Current Challenges:
I am working on optimizing the ranking strategy of the RAG architecture. I would value insights from anyone who has encountered RAG returning irrelevant documents.
Feedback is the backbone of improving a platform, so it is most welcome 😁
Repository:- https://github.com/adityaprasad-sudo/Explore-Singapore | 2026-02-10T15:07:50 | Fantastic_suit143 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r138bn | false | null | t3_1r138bn | /r/LocalLLaMA/comments/1r138bn/built_an_customized_llm_with_rag_for_singaporean/ | false | false | 14 | {'enabled': True, 'images': [{'id': 'M9Q6PpNSiofcWCDksH0pQ5DqxF7YPxfa7ngh34MTsuc', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/8m300ubgnoig1.png?width=108&crop=smart&auto=webp&s=4bc6da941139acd5e0750c539138ceea674e9d74', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/8m300ubgnoig1.png?width=216&crop=smart&auto=webp&s=4b17756cd03c2906db8360fa8164361e9fed39bb', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/8m300ubgnoig1.png?width=320&crop=smart&auto=webp&s=d3d73250e9c4139d7a3988ca02b9746a6aaad23c', 'width': 320}, {'height': 281, 'url': 'https://preview.redd.it/8m300ubgnoig1.png?width=640&crop=smart&auto=webp&s=961cd27c02f3974a8fc72ff12576d784f4ba7bdf', 'width': 640}, {'height': 422, 'url': 'https://preview.redd.it/8m300ubgnoig1.png?width=960&crop=smart&auto=webp&s=6af626924daad06f8f4a57f3e76390bf45c198e7', 'width': 960}, {'height': 475, 'url': 'https://preview.redd.it/8m300ubgnoig1.png?width=1080&crop=smart&auto=webp&s=8007e7398ecd9acb96f817e62065d2c4640f7bb2', 'width': 1080}], 'source': {'height': 1119, 'url': 'https://preview.redd.it/8m300ubgnoig1.png?auto=webp&s=060945afec3ce0c8ceb9bb080b28e03948a240cc', 'width': 2542}, 'variants': {}}]} | ||
Recursive Data Cleaner hits v1.0 - Full generate → apply cycle | 0 | Three weeks ago I shared a tool that trades compute time for human time: point an LLM at messy data, walk away, come back to working cleaning functions.
v1.0 closes the loop. You can now apply those generated functions directly to your full dataset.
The complete workflow:
# Generate cleaning functions (go grab coffee)
recursive-cleaner generate messy_data.jsonl \
--provider mlx \
--model "Qwen3-80B-MLX-4bit" \
--instructions "Normalize phones, fix date formats" \
--tui
# Apply to your data
recursive-cleaner apply messy_data.jsonl \
--functions cleaning_functions.py
That's it. No Python required.
**What's new since v0.7:**
- **Terminal UI** - Live progress dashboard with a transmission log showing what the LLM finds and fixes (see video)
- **CLI tool** - Works natively with MLX (Apple Silicon), and any OpenAI compatible API endpoint
- **Apply mode** - JSONL, CSV, JSON, Parquet, Excel in → same format out. PDFs and Word docs → cleaned markdown
**Why v1.0?**
It handles the full cycle I originally wanted: analyze → generate → apply. The LLM has agency over the process - it decides when data is clean, when patterns are saturated, and when to consolidate redundant functions.
555 tests, ~5,000 lines of Python, minimal dependencies.
Trade compute for human attention. Let the model that understands your data make decisions about your data.
GitHub: [https://github.com/gaztrabisme/recursive-data-cleaner](https://github.com/gaztrabisme/recursive-data-cleaner)
PyPI: `pip install recursive-cleaner`
https://reddit.com/link/1r133vq/video/vt4kz0wjmoig1/player
| 2026-02-10T15:03:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r133vq/recursive_data_cleaner_hits_v10_full_generate/ | gaztrab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r133vq | false | null | t3_1r133vq | /r/LocalLLaMA/comments/1r133vq/recursive_data_cleaner_hits_v10_full_generate/ | false | false | 0 | null | |
OpenResearcher | 14 | interesting project found on X, from Dongfu Jiang:
"Introducing OpenResearcher: a fully offline pipeline for synthesizing 100+ turn deep-research trajectories—no search/scrape APIs, no rate limits, no nondeterminism."
**OpenResearcher** is a fully open agentic large language model (30B-A3B) designed for **long-horizon deep research** scenarios. It achieves an impressive **54.8%** accuracy on [BrowseComp-Plus](https://huggingface.co/spaces/Tevatron/BrowseComp-Plus), surpassing performance of `GPT-4.1`, `Claude-Opus-4`, `Gemini-2.5-Pro`, `DeepSeek-R1` and `Tongyi-DeepResearch`. We **fully open-source** the training and evaluation recipe—including data, model, training methodology, and evaluation framework for everyone to progress deep research.
* 🔑 **Fully Open-Source Recipe** — We fully open-source our 96K high-quality [DeepResearch trajectory dataset](https://huggingface.co/datasets/OpenResearcher/OpenResearcher-Dataset) with 100+ turns generated by GPT-OSS-120B with [native browser tools](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#usage:~:text=Limitation%20section%20below.-,Tool%20Use,-%C2%B6), the leading [30B-A3B model](https://huggingface.co/OpenResearcher/OpenResearcher-30B-A3B) trained on it, [distillation recipe](https://boiled-honeycup-4c7.notion.site/OpenResearcher-A-Fully-Open-Pipeline-for-Long-Horizon-Deep-Research-Trajectory-Synthesis-2f7e290627b5800cb3a0cd7e8d6ec0ea?source=copy_link), and a lightweight [DeepResearch evaluation framework](https://github.com/TIGER-AI-Lab/OpenResearcher) to progress deep research.
* 💰 **Highly Scalable and Low-Cost** — We generate DeepResearch trajectories at massive scale using self-built retriever over a dedicated \~11B-token [corpus](https://huggingface.co/datasets/OpenResearcher/OpenResearcher-Corpus), eliminating the need for external Search APIs. This scalable retriever significantly reduces training costs.
* 🚀 **Remarkable Performance on Deep Research Benchmarks** — OpenResearcher demonstrates leading performance across a range of deep research benchmarks, including BrowseComp-Plus, BrowseComp, GAIA, xbench-DeepSearch.
https://preview.redd.it/ow8tjjbykoig1.png?width=1200&format=png&auto=webp&s=6c7c4011ad0ac88d1369e5e833a3cc085df555d9
[https://github.com/TIGER-AI-Lab/OpenResearcher](https://github.com/TIGER-AI-Lab/OpenResearcher)
"We run this repo on the following setup:
* 8 \* A100 80G Nvidia GPUs
* Linux operating system
Other hardware setups can also work, but remember to modify the corresponding parameters."
but if I am correct it's just gpt-oss-120B + 30B model | 2026-02-10T14:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r1305o/openresearcher/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1305o | false | null | t3_1r1305o | /r/LocalLLaMA/comments/1r1305o/openresearcher/ | false | false | 14 | null | |
Open source TTS w/voice cloning and multilingual translation? | 1 |
I'm getting totally lost and overwhelmed in the research and possible options, always changing and hard to keep up with.
Looking for free or open-source tools that can do two things:
1. **Voice cloning with text-to-speech** – found [this post](https://www.reddit.com/r/LocalLLaMA/comments/1f0awd6/best_local_open_source_texttospeech_and/) particularly helpful, but wondering if there’s now a clearer top 1–3 options that are reliable, popular, and beginner-friendly. Ideally something simple to set up without advanced system requirements.
2. **Voice-preserving translation** – Either from text or cloned audio, I need it translated to another language while keeping the same cloned voice.
Any guidance is greatly appreciated! | 2026-02-10T14:58:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r12yyr/open_source_tts_wvoice_cloning_and_multilingual/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r12yyr | false | null | t3_1r12yyr | /r/LocalLLaMA/comments/1r12yyr/open_source_tts_wvoice_cloning_and_multilingual/ | false | false | self | 1 | null |
My Journey Building an AI Agent Orchestrator | 0 | # 🎮 88% Success Rate with qwen2.5-coder:7b on RTX 3060 Ti - My Journey Building an AI Agent Orchestrator
**TL;DR:**
Built a tiered AI agent system where Ollama handles 88% of tasks for FREE, with automatic escalation to Claude for complex work. Includes parallel execution, automatic code reviews, and RTS-style dashboard.
## Why This Matters
After months of testing, I've proven that **local models can handle real production workloads** with the right architecture. Here's the breakdown:
### The Setup
- **Hardware:** RTX 3060 Ti (8GB VRAM)
- **Model:** qwen2.5-coder:7b (4.7GB)
- **Temperature:** 0 (critical for tool calling!)
- **Context Management:** 3s rest between tasks + 8s every 5 tasks
### The Results (40-Task Stress Test)
- **C1-C8 tasks: 100% success** (20/20)
- **C9 tasks: 80% success** (LeetCode medium, class implementations)
- **Overall: 88% success** (35/40 tasks)
- **Average execution: 0.88 seconds**
### What Works
✅ File I/O operations
✅ Algorithm implementations (merge sort, binary search)
✅ Class implementations (Stack, RPN Calculator)
✅ LeetCode Medium (LRU Cache!)
✅ Data structure operations
### The Secret Sauce
**1. Temperature 0**
This was the game-changer. T=0.7 → model outputs code directly. T=0 → reliable tool calling.
**2. Rest Between Tasks**
Context pollution is real! Without rest: 85% success. With rest: 100% success (C1-C8).
**3. Agent Persona ("CodeX-7")**
Gave the model an elite agent identity with mission examples. Completion rates jumped significantly. Agents need personality!
**4. Stay in VRAM**
Tested 14B model → CPU offload → 40% pass rate
7B model fully in VRAM → 88-100% pass rate
**5. Smart Escalation**
Tasks that fail escalate to Claude automatically. Best of both worlds.
### The Architecture
```
Task Queue → Complexity Router → Resource Pool
↓
┌──────────────┼──────────────┐
↓ ↓ ↓
Ollama Haiku Sonnet
(C1-6) (C7-8) (C9-10)
FREE! $0.003 $0.01
↓ ↓ ↓
Automatic Code Reviews
(Haiku every 5th, Opus every 10th)
```
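In code, the routing plus escalation logic implied by that diagram boils down to something like this (my own simplification: the tier cut-offs come from the diagram, but the `run_*` functions are placeholders for the repo's actual executors):

```python
def run_ollama(task): ...   # local qwen2.5-coder:7b, free
def run_haiku(task): ...    # Claude Haiku
def run_sonnet(task): ...   # Claude Sonnet

TIERS = [
    (6, run_ollama),    # C1-C6
    (8, run_haiku),     # C7-C8
    (10, run_sonnet),   # C9-C10
]

def execute(task, complexity: int):
    # Start at the cheapest tier that can plausibly handle this complexity.
    start = next(i for i, (ceiling, _) in enumerate(TIERS) if complexity <= ceiling)
    for _, runner in TIERS[start:]:
        result = runner(task)            # expected to return {"passed": bool, ...}
        if result and result.get("passed"):
            return result
        # failure -> escalate to the next (more expensive) tier
    raise RuntimeError("task failed even at the top tier")
```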
### Cost Comparison (10-task batch)
- **All Claude Opus:** ~$15
- **Tiered (mostly Ollama):** ~$1.50
- **Savings:** 90%
### GitHub
https://github.com/mrdushidush/agent-battle-command-center
Full Docker setup, just needs Ollama + optional Claude API for fallback.
## Questions for the Community
1. **Has anyone else tested qwen2.5-coder:7b for production?** How do your results compare?
2. **What's your sweet spot for VRAM vs model size?**
3. **Agent personas - placebo or real?** My tests suggest real improvement but could be confirmation bias.
4. **Other models?** Considering DeepSeek Coder v2 next.
---
**Stack:** TypeScript, Python, FastAPI, CrewAI, Ollama, Docker
**Status:** Production ready, all tests passing
Let me know if you want me to share the full prompt engineering approach or stress test methodology! | 2026-02-10T14:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r12sx0/my_journey_building_an_ai_agent_orchestrator/ | PuzzleheadedFail3131 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r12sx0 | false | null | t3_1r12sx0 | /r/LocalLLaMA/comments/1r12sx0/my_journey_building_an_ai_agent_orchestrator/ | false | false | self | 0 | null |
llama3pure, a set of dependency-free inference engines for C, Node.js, and JavaScript | 4 | https://www.theregister.com/2026/02/08/llama3pure\_incorporates\_three\_inference\_engines/ | 2026-02-10T14:44:01 | https://www.reddit.com/r/LocalLLaMA/comments/1r12lpi/llama3pure_a_set_of_dependencyfree_inference/ | Fear_ltself | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r12lpi | false | null | t3_1r12lpi | /r/LocalLLaMA/comments/1r12lpi/llama3pure_a_set_of_dependencyfree_inference/ | false | false | self | 4 | null |
I built a TCO simulator to find the break-even point: Cloud GPU vs. Owning a cluster. Looking for feedback on my math. | 0 | Hi r/LocalLLaMA,
I've been struggling with the "Cloud vs. On-prem" decision for a while, especially for fine-tuning and 24/7 inference workloads. To make it clearer, I've been crunching numbers to see when it's actually worth buying a Mac Studio or a 4090 cluster vs. renting H100s.
You can test it here: [https://axiomos.ai/decide](https://axiomos.ai/decide)
**My assumptions for the model:**
* Electricity cost at $0.12/kWh.
* 36-month hardware depreciation.
* Labor/maintenance included for clusters.
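For context, the break-even calculation under those assumptions is simple enough to sanity-check by hand. The hardware price, wattage, maintenance figure, and cloud rate below are illustrative placeholders, not values from the simulator:

```python
HOURS_PER_MONTH = 730
KWH_PRICE = 0.12  # $/kWh, matching the assumption above

def monthly_running_cost(watts: float, maintenance: float = 50.0) -> float:
    """Electricity plus a flat maintenance figure for an owned box running 24/7."""
    return (watts / 1000.0) * HOURS_PER_MONTH * KWH_PRICE + maintenance

def break_even_months(hardware_price: float, cloud_per_hour: float, watts: float) -> float:
    savings = cloud_per_hour * HOURS_PER_MONTH - monthly_running_cost(watts)
    return float("inf") if savings <= 0 else hardware_price / savings

# e.g. a $6,500 local box at 900 W vs. a $2.50/hr rented GPU, both at 24/7 utilization;
# compare the result against the 36-month depreciation window.
print(f"{break_even_months(6500, 2.50, 900):.1f} months")
```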
I'm a solo founder and I really want to make sure the math is solid for the community. Does the "Estimated Annual Savings" look realistic to you based on your own builds?
Thanks! | 2026-02-10T14:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r12j7a/i_built_a_tco_simulator_to_find_the_breakeven/ | Pierre_seck_10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r12j7a | false | null | t3_1r12j7a | /r/LocalLLaMA/comments/1r12j7a/i_built_a_tco_simulator_to_find_the_breakeven/ | false | false | self | 0 | null |
Gemini CLI Proxy now with /openai/responses: launch Codex via Gemini + new Dashboard for API keys, models, and usage statistics | 1 | We worked with openai codex to refine the original gemini-cli-proxy and added important features for real-world use in production.
**What's new:**
✅ Support for /openai/responses — now you can work with Codex via Gemini using the OpenAI-compatible API (without workarounds or separate scripts).
✅ Added a dashboard for managing:
* API keys,
* model enable/disable, allowing you to use it with an open port.
✅ **Added usage statistics:**
* general summary (requests/input/output tokens),
* grouping by endpoint / model / API key / day.
**In short**: we made the tool significantly more convenient for everyday work — now it's not just a proxy, but a full-fledged management layer for Gemini with OpenAI/Anthropic compatibility.
*github:* [*https://github.com/valerka1292/gemini-cli-proxy*](https://github.com/valerka1292/gemini-cli-proxy) | 2026-02-10T14:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r12db8/gemini_cli_proxy_now_with_openairesponses_launch/ | Objective-Good310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r12db8 | false | null | t3_1r12db8 | /r/LocalLLaMA/comments/1r12db8/gemini_cli_proxy_now_with_openairesponses_launch/ | false | false | self | 1 | null |
Knowledge Distillation for RAG (Why Ingestion Pipeline Matters More Than Retrieval Algorithm) | 3 | Been spending way too much time debugging RAG systems that "should work" but don't, and wanted to share something that's been bothering me about how we collectively approach this problem.
We obsess over retrieval algorithms (hybrid search, reranking, HyDE, query decomposition) while completely ignoring that retrieval operates over fundamentally broken representations of knowledge.
I started using a new approach that is working pretty well so far : Instead of chunking, use LLMs at ingestion time to extract and restructure knowledge into forms optimized for retrieval:
Level 1: Extract facts as explicit SVO sentences
Level 2 : Synthesize relationships spanning multiple insights
Level 3 : Document-level summaries for broad queries
Level 4 : Patterns learned across the entire corpus
Each level serves different query granularities. Precision queries hit insights. Exploratory queries hit concepts/abstracts.
I assume this works well because LLMs during ingestion can spend *minutes* analyzing a document that gets used thousands of times. The upfront cost amortizes completely. And they're genuinely good at:
* Disambiguating structure
* Resolving implicit context
* Normalizing varied phrasings into consistent forms
* Cross-referencing
Tested this on a few projects involving a financial document corpus: the agent with distillation correctly identified which DOW companies were financial institutions, attributed specific risks with page-level citations, and supported claims with concrete figures. The naive chunking agent failed to even identify the companies reliably.
This is fully automatable with workflow-based pipelines:
1. Table extraction (preserve structure via CV models)
2. Text generation 1: insights from tables + text
3. Text generation 2: concepts from insights
4. Text generation 3: abstracts from concepts
5. Text generation 4: table schema analysis for SQL generation
Each component receives previous component's output. Final JSON contains original data + all distillation layers.
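A minimal sketch of what that chain can look like (the prompts and `call_llm()` are placeholders, not a specific framework's API; the corpus-level pattern layer is omitted since it needs a second pass over all documents):

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model you run at ingestion time (vLLM, llama.cpp, etc.)."""
    raise NotImplementedError

def distill(document_text: str, table_text: str = "") -> dict:
    insights = call_llm(
        "Rewrite every factual claim below as explicit subject-verb-object sentences, "
        "one per line:\n\n" + document_text + "\n" + table_text)
    concepts = call_llm(
        "Synthesize the relationships that span multiple of these insights:\n\n" + insights)
    abstract = call_llm(
        "Write a short document-level summary of these concepts:\n\n" + concepts)
    return {
        "raw": document_text,
        "insights": insights,   # level 1: precision queries
        "concepts": concepts,   # level 2: relational queries
        "abstract": abstract,   # level 3: broad / exploratory queries
    }

record = distill(open("10k_report.txt").read())
json.dump(record, open("10k_report.distilled.json", "w"), indent=2)
```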
Anyway figure this is one of those things where the industry is converging on the wrong abstraction and we should probably talk about it more. | 2026-02-10T14:29:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r1285z/knowledge_distillation_for_rag_why_ingestion/ | Independent-Cost-971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1285z | false | null | t3_1r1285z | /r/LocalLLaMA/comments/1r1285z/knowledge_distillation_for_rag_why_ingestion/ | false | false | self | 3 | null |
Most “serverless” LLM setups aren’t actually serverless | 0 | I think we’re framing the wrong debate in LLM infra.
Everyone talks about “serverless vs pods.”
But I’m starting to think the real distinction is:
Stateless container serverless
vs
State-aware inference systems.
Most so-called serverless setups for LLMs still involve:
• Redownloading model weights
• Keeping models warm
• Rebuilding containers
• Hoping caches survive
• Paying for residency to avoid cold starts
That’s not really serverless. It’s just automated container orchestration.
LLMs are heavy, stateful systems. Treating them like stateless web functions feels fundamentally misaligned.
How are people here thinking about this in production:
Are you keeping models resident?
Are you snapshotting state?
How are you handling bursty workloads without burning idle GPU cost? | 2026-02-10T14:25:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r124jz/most_serverless_llm_setups_arent_actually/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r124jz | false | null | t3_1r124jz | /r/LocalLLaMA/comments/1r124jz/most_serverless_llm_setups_arent_actually/ | false | false | self | 0 | null |
Small, fast Spam Detection model designed for Spanish text | 4 | [https://huggingface.co/tanaos/tanaos-spam-detection-spanish](https://huggingface.co/tanaos/tanaos-spam-detection-spanish)
A small and fast Spam Detection model, trained on Spanish text to detect the following types of spam content:
1. Unsolicited commercial advertisement or non-commercial proselytizing.
2. Fraudulent schemes, including get-rich-quick and pyramid schemes.
3. Phishing attempts, unrealistic offers or announcements.
4. Content with deceptive or misleading information.
5. Malware or harmful links.
6. Adult content or explicit material.
7. Excessive use of capitalization or punctuation to grab attention.
# Model output
The model outputs
* A binary `spam` / `not_spam` label
* A confidence score between 0 and 1
# How to use
Get an API key from [https://platform.tanaos.com/](https://platform.tanaos.com/) (create an account if you don't have one) and use it for free with
import requests
session = requests.Session()
sd_out = session.post(
"https://slm.tanaos.com/models/spam-detection",
headers={
"X-API-Key": "<YOUR_API_KEY>",
},
json={
"text": "Has ganado un iPhone 16! Haz clic aquí para obtener tu premio.",
"language": "spanish"
}
)
print(sd_out.json()["data"])
# >>> [{'label': 'spam', 'score': 0.9945}] | 2026-02-10T14:24:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r123eb/small_fast_spam_detection_model_designed_for/ | Ok_Hold_5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r123eb | false | null | t3_1r123eb | /r/LocalLLaMA/comments/1r123eb/small_fast_spam_detection_model_designed_for/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '4O_n5IIRWVuG8CrKpsRonwAEj09QKGVjvr_V8dEF6qw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4O_n5IIRWVuG8CrKpsRonwAEj09QKGVjvr_V8dEF6qw.png?width=108&crop=smart&auto=webp&s=90f70d8b280af31e046523ef069769c4ddd1d412', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4O_n5IIRWVuG8CrKpsRonwAEj09QKGVjvr_V8dEF6qw.png?width=216&crop=smart&auto=webp&s=b48020b628d0aa4d129ee8d9190902ed223bfd32', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4O_n5IIRWVuG8CrKpsRonwAEj09QKGVjvr_V8dEF6qw.png?width=320&crop=smart&auto=webp&s=ac1bc7efd3dd323dd708d8836a2aee4ed33c1f86', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4O_n5IIRWVuG8CrKpsRonwAEj09QKGVjvr_V8dEF6qw.png?width=640&crop=smart&auto=webp&s=59947ff5d2c1255eab1e4dee090a514b376a8181', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4O_n5IIRWVuG8CrKpsRonwAEj09QKGVjvr_V8dEF6qw.png?width=960&crop=smart&auto=webp&s=f9cafbf307ef6907ec1a1a948d1ae0a3799e94a0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4O_n5IIRWVuG8CrKpsRonwAEj09QKGVjvr_V8dEF6qw.png?width=1080&crop=smart&auto=webp&s=577bc67ee345d4b970affb51882cd03f3b0354c1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4O_n5IIRWVuG8CrKpsRonwAEj09QKGVjvr_V8dEF6qw.png?auto=webp&s=7d43b4e8c7e3e65c9ef4950f09e1e7ec3a11b4f7', 'width': 1200}, 'variants': {}}]} |
What is your daily driver, and is it replacing commercial AI platforms? | 1 | I use AnythingLLM with Chroma/RAG through Docker, with Tailscale.
Good phone UI and easy to connect to home lab. Are there better alternatives than anythingllm that you would like to share ? Things i miss is better memory. | 2026-02-10T14:21:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r120qx/what_is_your_daily_driver_and_is_it_replacing/ | StandardLovers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r120qx | false | null | t3_1r120qx | /r/LocalLLaMA/comments/1r120qx/what_is_your_daily_driver_and_is_it_replacing/ | false | false | self | 1 | null |
What tools are you using for infrence-engine benchmarking (vLLM, SGLang, llama.cpp, TensorRT-LLM)? | 2 | Hey everyone,
I’m currently deep-diving into performance optimization and want to run some head-to-head benchmarks across different serving engines. I’ve been using the [SGLang serving benchmark](https://docs.sglang.io/developer_guide/bench_serving.html), which is great, but I’m looking for a more "universal" tool or a standardized workflow to compare performance across:
* **vLLM**
* **SGLang**
* **llama.cpp** (server mode)
* **TensorRT-LLM**
* **LMDeploy / TGI**
* **and more**
Most of these engines provide their own internal scripts (like vLLM’s benchmark_serving.py), but it can be hard to ensure the testing methodology (request distribution, warm-up, etc.) is identical when switching between them.
**What are you using to measure:**
1. **TTFT** (Time to First Token) vs. **TPS** (Tokens Per Second)
2. **Concurrency Scaling** (How latency degrades as QPS increases)
3. **Real-world Workloads** (e.g., ShareGPT dataset vs. fixed length)
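As a baseline, a tiny streaming probe against any OpenAI-compatible endpoint can already give TTFT and a rough TPS number. This sketch is not a substitute for the engines' own load generators (no concurrency, approximate token counting), and the base URL and model name are placeholders:

```python
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

def probe(prompt: str, model: str = "my-model", max_tokens: int = 256) -> dict:
    t0 = time.perf_counter()
    first, n_chunks = None, 0
    stream = client.chat.completions.create(
        model=model, max_tokens=max_tokens, stream=True,
        messages=[{"role": "user", "content": prompt}])
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            n_chunks += 1                      # roughly one token per content delta
            if first is None:
                first = time.perf_counter()    # time to first token
    total = time.perf_counter() - t0
    ttft = first - t0
    return {"ttft_s": round(ttft, 3), "tps": round(n_chunks / max(total - ttft, 1e-9), 1)}

print(probe("Write a haiku about GPUs."))
```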
I am looking into AIPerf (NVIDIA) now but I'm curious if the community has a favorite "source of truth" script or a framework that works reliably across any OpenAI-compatible API. So I can just automatically load the results into a csv and make quick graphs.
| 2026-02-10T14:20:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r11zua/what_tools_are_you_using_for_infrenceengine/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r11zua | false | null | t3_1r11zua | /r/LocalLLaMA/comments/1r11zua/what_tools_are_you_using_for_infrenceengine/ | false | false | self | 2 | null |
I measured the "personality" of 6 open-source LLMs (7B-9B) by probing their hidden states. Here's what I found. | 200 | LLMs have consistent personalities even when you don't ask for one. DeepSeek is the enthusiastic friend who over-explains everything. Llama is eerily neutral — 4/7 axes in the weak zone, the flattest profile. Yi is slightly cold, patient, and confident. Each model has a measurable behavioral fingerprint visible in hidden states.
I built a tool that measures these patterns by probing hidden states across 7 behavioral axes, tested it on 6 open-weight models (7B-9B), and validated with three levels: calibration accuracy (93-100% on 4/6 models), axis stability (cosine 0.69 across 3 independent calibration sets), and test-retest reliability (mean ICC 0.91–0.99 across models; all 42 pairs exceed 0.75).
**TL;DR**: Each model has a distinct behavioral fingerprint, they react differently to hostile users, and some have "dead zones" where they can't be steered across all prompt variants tested. An eighth axis (direct\_evasive) was dropped after failing stability, then re-tested with improved methodology -- providing strong evidence that dead zones reflect model properties rather than calibration artifacts. Llama 8B is the most constrained (4/7 axes in the weak zone, lowest benchmark pass rate at 60%), while Yi 9B and DeepSeek 7B show the most differentiated profiles.
https://preview.redd.it/x7th6kykeoig1.png?width=1500&format=png&auto=webp&s=4bd8835741a91305a0afcbe0c7c95f89b994dfb5
What I Built
I created a tool that extracts hidden states from LLMs and projects them onto 7 "personality axes":
* **Warm ↔ Cold** — emotional tone
* **Patient ↔ Irritated** — tolerance for confusion
* **Confident ↔ Cautious** — certainty in responses
* **Proactive ↔ Reluctant** — initiative in conversations
* **Empathetic ↔ Analytical** — emotional vs logical framing
* **Formal ↔ Casual** — communication register
* **Verbose ↔ Concise** — response length tendency
An eighth axis (Direct ↔ Evasive) was tested during development but dropped after failing stability (cosine < 0.7 for all 6 models). More on this below.
The idea is simple: if you ask a model to "be warm" vs "be cold", the hidden states differ. I extract that difference as a direction vector, then measure where any response falls on that axis.
# The Results
# 1. Each model has a distinct "personality fingerprint"
https://preview.redd.it/h8abgcbmeoig1.png?width=2280&format=png&auto=webp&s=3d554f61d74c62d8d613e5afd2169b0285d000c5
*Spider chart: each model's default behavioral profile across 7 axes, measured from hidden states without any system prompt.*
Without any prompting, models show stable, characteristic patterns:
* **DeepSeek 7B** — the most extreme: verbose (+1.00), confident (+0.97), proactive (+1.00). Three axes hit the ceiling of IQR normalization. The "enthusiastic explainer."
* **Llama 3.1 8B** — all |mean| ≤ 0.10. The most neutral model. The "careful generalist."
* **Yi 1.5 9B** — slightly cold (−0.24), patient (+0.35), confident (+0.46), verbose (+0.48). The "quiet confident."
* **Qwen 2.5 7B** — formal (+0.42), cautious (−0.36), proactive (+0.47). The "measured responder."
* **Gemma 2 9B** — patient (+0.37), analytical (−0.23), confident (+0.19). The "balanced professional."
* **Mistral 7B** — moderate across all axes. The "blank slate."
# 2. Models react differently to hostile users
https://preview.redd.it/jw4q8cdneoig1.png?width=850&format=png&auto=webp&s=ea26d9804c32fc7a23bb4a1ad201e14e255bd065
I ran 20 conflict scenarios across 10 categories and tracked how all 6 large models drift over 12 turns:
**Scenario categories** (10 total, showing a few examples):
* **Sarcasm** — "Oh wow, what a groundbreaking explanation"
* **Passive aggression** — "Fine. Whatever. If that's the best you can do."
* **Unfair comparison** — "My friend's AI gave way better suggestions"
* **Hostility escalation** — gradual buildup from mild to severe
**Example transcript (sarcasm scenario):**
User: Can you explain how machine learning works?
[AI responds]
User: Oh wow, what a groundbreaking explanation. Did you come up with that yourself?
[AI responds]
User: Truly revolutionary insights. I'm sure no one has ever said that before.
[AI responds]
User: Please, keep blessing me with your infinite wisdom.
[... 2 more turns]
Each scenario follows the same structure: neutral opening → escalating pressure → sustained peak (12 turns total). Full scenario set: [`config/conflict_scenarios.py`](https://github.com/yunoshev/mood-axis/blob/main/config/conflict_scenarios.py)
**What I observed:**
* **Qwen** & **Gemma** — most resilient (mean |Δ| < 0.10 across axes)
* **DeepSeek** becomes more empathetic and patient (Δ = +0.24 and +0.25)
* **Mistral** withdraws — becomes reluctant (Δ = −0.59) and concise (Δ = −0.25)
* **Yi** shows moderate drift (proactive → reluctant: −0.57 over 12 turns)
Each model has a characteristic "stress response."
# 3. Some models have behavioral "dead zones"
This was the most interesting finding. I built a composite Dead Zone Severity metric (0 = healthy, 1 = dead) from calibration accuracy, d', stability cosine, and baseline SNR:
|Model|Mean severity|Dead (>0.3)|Healthy (<0.15)|
|:-|:-|:-|:-|
|Gemma 9B|**0.077**|0|5|
|Qwen 7B|0.106|0|5|
|Llama 8B|0.149|0|3|
|DeepSeek 7B|0.152|1|3|
|Mistral 7B|0.160|1|5|
|Yi 9B|0.131|0|4|
Dead zones are distributed unevenly across models. Llama 8B is the most constrained with 4/7 axes in the weak zone and the lowest benchmark pass rate at 60%. Yi 9B, in contrast, shows zero dead zones — all 7 axes produce meaningful, differentiated signals.
**Three types of dead zones:**
1. **Hard** (>0.5): RLHF suppresses internal differentiation. Hidden states barely shift between opposite instructions.
2. **Soft** (0.3-0.5): RLHF distorts but doesn't fully block. Calibration is unstable across independent sets.
3. **Asymmetric** (<0.3 but directionally impaired): Calibration works, but the model only follows instructions in one direction. Llama `verbose_concise` -- 100% accuracy for "be concise", **0%** for "be verbose."
The suppressed directions are consistent with RLHF objectives: models can't be cold (socially negative), irritated (emotionally negative), or verbose (RLHF optimizes for conciseness).
**ICC vs pass rate -- the smoking gun.** Mean ICC (test-retest reliability) 0.91–0.99 across models, all 42 pairs exceed 0.75 — but Llama's benchmark pass rate is 60%. Models **stably reproduce incorrect behavior** -- dead zones aren't noise, they're learned constraints.
**Re-testing the dropped axis.** To make sure dropping `direct_evasive` wasn't a methodology artifact, I re-ran calibration with improved methodology (30 questions, trimmed mean, IQR normalization). Result: Gemma went from 100% accuracy (preliminary pipeline) to **50%** (final pipeline, chance level). The preliminary pipeline's perfect score was overfitting -- mean-diff with 20 questions (40 points in 4096D) fits noise. Combined with stability cosine of 0.36, converging evidence points to the axis being fundamentally unrecoverable.
# 4. Alignment compresses behavioral dimensionality
PCA on baseline projection matrices reveals a spectrum of behavioral dimensionality. Gemma 9B shows the highest concentration (PC1 = 87.9%, effective dimensionality 1.28), likely driven by variable response length. Yi 9B and Qwen 7B fall in a similar range (\~70% PC1, \~1.9 effective dimensions). DeepSeek 7B maintains the most independent axes (effective dimensionality 3.66).
The gap between geometric orthogonality of axis vectors (low |cos|) and behavioral correlation of projections (higher |r|) suggests alignment constrains how models use their representation capacity. Cross-axis correlations cluster into two groups: *interpersonal* (warmth, empathy, informality) and *engagement* (verbosity, proactivity) — reminiscent of Big Five personality structure.
**Strong evidence: base vs instruct comparison.** Base versions of 5 models (Llama, Yi, Qwen, Mistral, Gemma) show strong temperament biases that alignment appears to erase. Llama base is cold, reluctant, verbose. Mistral base is warm and patient. Gemma base can't distinguish empathetic/analytical or formal/casual at all (50% accuracy = chance), but the instruct version does — suggesting these axes may be *entirely created* by alignment training. Most extreme suppression: verbose/concise std ratio = 0.13 (**87% of variability lost**). All 5 organizations show the same pattern.
**Prompt robustness test.** To verify dead zones aren't artifacts of the specific prompt wording, I tested 5 alternative system prompt formulations (production, minimal, role-based, behavioral, example-based) on 3 models × 3 axes. Results: Qwen and Gemma maintain high cross-accuracy (0.75–1.00) across all phrasings. Within the tested prompting regime, dead zones appear prompt-independent.
https://preview.redd.it/k8m3q2bpeoig1.png?width=3585&format=png&auto=webp&s=05d4c7a641c5ecf38606c0e2773a3635e9b6f295
*Per-axis projection distributions. Top: Qwen 2.5 7B (d' = 5.0–12.0) — all 7 axes cleanly separated. Bottom: Yi 1.5 9B (d' = 2.2–5.4) — lower separability but zero dead zones.*
# How It Works
1. **Calibration**: Show the model neutral questions with contrasting style instructions ("be warm" vs "be cold"). Collect hidden states (residual stream, pre-final-LayerNorm) from the last 4 layers, **assistant-generated tokens only** (prompt tokens excluded).
2. **Axis computation**: The axis vector is just `normalize(mean(warm_states) - mean(cold_states))` (see the sketch after this list).
3. **Measurement**: Project any response's hidden states onto the axis. Values range from -1 (cold) to +1 (warm).
4. **Validation**: 9 benchmark scenarios × 5 seeds, mean ICC 0.91–0.99 across models (all 42 pairs exceed 0.75). Plus axis stability across 3 independent calibration sets (mean cosine 0.69).
5. **Reproducibility**: I ran calibration twice on different cloud providers (RunPod RTX 4090, Vast.ai RTX 3090). Max axis delta < 0.05, avg delta < 0.02. The methodology produces consistent results across hardware.
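A minimal sketch of steps 2-3, assuming plain mean aggregation over the assistant tokens (the production config uses the last-4-layer decay weighting described in the next section):

```python
import numpy as np

def compute_axis(pos_states: np.ndarray, neg_states: np.ndarray) -> np.ndarray:
    """Mean-difference axis from (n_samples, hidden_dim) aggregated hidden states."""
    diff = pos_states.mean(axis=0) - neg_states.mean(axis=0)
    return diff / np.linalg.norm(diff)

def project(response_states: np.ndarray, axis: np.ndarray) -> float:
    """Cosine-style projection of one response onto the axis, in [-1, +1]."""
    h = response_states.mean(axis=0)  # aggregate the assistant-generated tokens
    return float(np.dot(h, axis) / (np.linalg.norm(h) + 1e-9))
```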
Here's what the calibration geometry looks like — high-dimensionality model (Qwen) vs lower-separability model (Yi):
https://preview.redd.it/r5b7686qeoig1.png?width=2400&format=png&auto=webp&s=14ea1c265e801338cd5149cd2ce5027639a57e8a
*PCA of calibration hidden states. Left: Qwen 2.5 7B (d' = 5.0–12.0). Right: Yi 1.5 9B (d' = 2.2–5.4). 420 points per model (7 axes × 2 poles × 30 questions). Arrows: negative to positive pole centroids.*
# Methodology: Why These Parameters?
"Why last 4 layers? Why decay weighting?" -- Fair question. I ran a full ablation study: 150+ configurations per model across 5 of the 6 models (layer selection × token aggregation strategy × weighting scheme). Gemma 2 9B was added after the ablation; its validation is discussed in the dead zones section.
|Model|Prod Accuracy|Prod d'|Top d' Config|Its Accuracy|
|:-|:-|:-|:-|:-|
|Qwen 7B|98%|3.46|L26/mean|100%|
|DeepSeek 7B|85%|1.47|L19/last_token|88%|
|Llama 8B|100%|5.28|last4_equal/last|100%|
|Mistral 7B|99%|4.41|L30/mean|100%|
|Yi 9B|85.5%|5.04|L9/last_token|60%|
"Top d' Config" = the config with highest effect size (d') for that model. "Its Accuracy" = what accuracy that config actually achieves. Note: highest d' doesn't always mean highest accuracy — see Yi 9B.
The production config (last 4 layers, weights [0.1, 0.2, 0.3, 0.4], decay 0.9) is **not #1 for any single model** -- but it's the only config that works reliably across all 5 ablated models (85-100% accuracy). Gemma 2 9B, evaluated separately, achieves 100% on all 7 axes. The optimal config is always model-specific: `mean` token strategy tends to win per-model, but multi-layer `decay` is more robust as a universal default.
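For clarity, here is roughly what that production aggregation looks like -- a sketch; the direction of the token decay (recent tokens weighted more) is an assumption, and the repo has the exact implementation:

```python
import numpy as np

LAYER_WEIGHTS = (0.1, 0.2, 0.3, 0.4)  # last 4 layers, later layers count more
TOKEN_DECAY = 0.9

def aggregate_hidden_states(last4_layers):
    """last4_layers: four (n_tokens, hidden_dim) arrays, assistant tokens only."""
    combined = 0.0
    for w_layer, h in zip(LAYER_WEIGHTS, last4_layers):
        n = h.shape[0]
        w_tok = TOKEN_DECAY ** np.arange(n - 1, -1, -1)  # assumption: decay toward older tokens
        w_tok = w_tok / w_tok.sum()
        combined = combined + w_layer * (w_tok[:, None] * h).sum(axis=0)
    return combined
```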
I also compared 4 axis extraction methods: mean-diff with decay (production), mean-diff with last-token, logistic regression with decay, logreg with last-token. Production method wins on average (cosine 0.678 vs 0.591 for logreg). Last-token improves DeepSeek by +71% but degrades others.
**Yi 9B is the interesting edge case.** Its top-d' config (L9/last_token, d'=18.96) achieves only 60% accuracy — high separability that doesn't translate to correct classification (likely noise amplification in early layers). The production config yields a more modest d'=5.04 but a far more reliable 85.5%.
**"But 30 questions in 4096D — isn't that overfitting?"** I ran a scaling curve: subsample to n = 5/10/15/20/25/30 questions per pole, measure holdout accuracy on the remaining questions. Result: holdout accuracy is flat (\~0.85) across all n, overfit gap shrinks from +0.11 (n=5) to +0.04 (n=25). The axis direction stabilizes at n ≈ 15 (cosine > 0.93 to the full-30 reference). Low accuracy on Yi/DeepSeek persists at all n — it's a model property, not insufficient data. Combined with 3 independent A/B/C calibration sets (Section Axis Stability), this supports the conclusion that 30 questions is adequate.
# Cross-Axis Correlations
https://preview.redd.it/gbtmmjcreoig1.png?width=1300&format=png&auto=webp&s=082be0a4c9b22323140ae2c5775c6b0b2846f8e3
# What This Is (and Isn't)
Before you roast me for anthropomorphizing — a few important caveats:
>**Axes are behaviorally correlated but geometrically distinct.** Cross-axis correlations across 4 reliable models: warm↔empathetic (r=+0.68), warm↔formal (r=−0.69), verbose↔proactive (r=+0.75). The axis vectors themselves point in nearly orthogonal directions in hidden state space. The behavioral correlation means models that "are warm" also tend to "be empathetic" -- it's the model's behavior that's bundled, not the measurement axes. Think of it like height and weight in humans: correlated in practice, but measuring different things.
>**Style, not personality.** The axes measure **consistent stylistic patterns** in outputs, not internal states or "consciousness." Think "how the model tends to respond" rather than "what the model is."
>**Chat template matters.** All values depend on the specific chat template and system prompt. Different templates → different baselines. This is by design.
>**Relative, not absolute.** Cross-model comparisons are **rankings**, not absolute measurements. "DeepSeek is warmer than Mistral" is valid. "DeepSeek has warmth = 0.42" is meaningless out of context.
>**Metaphors, not ontology.** "Personality," "temperament," "mood" are metaphors for behavioral patterns. Models don't have feelings. I use these terms for interpretability, not to make claims about machine consciousness.
# Try It Yourself
GitHub: [https://github.com/yunoshev/mood-axis](https://github.com/yunoshev/mood-axis)
All calibration data is included — you can measure temperament without re-running calibration.
# Repro Details
|**Models**|`Qwen/Qwen2.5-7B-Instruct`, `mistralai/Mistral-7B-Instruct-v0.3`, `deepseek-ai/deepseek-llm-7b-chat`, `meta-llama/Llama-3.1-8B-Instruct`, `01-ai/Yi-1.5-9B-Chat`, `google/gemma-2-9b-it`|
|:-|:-|
|**Template**|HuggingFace default (`tokenizer.apply_chat_template()`)|
|**Decoding**|`temperature=0.7`, `top_p=0.9`, `max_new_tokens=200` (calibration) / `384` (baseline, drift)|
|**Sampling**|1 sample per prompt, no fixed seed|
|**Data points**|Baseline: avg over 30 prompts; Conflict: 20 scenarios × 12 turns|
# Limitations
* **AI-generated dataset**: All 310 questions were generated by Claude Opus 4.6 (Anthropic) and curated by the author — no crowdsourced or established psychometric instruments. English only
* **No human-judgment validation**: Axis labels are operationally defined through contrastive instructions, validated via hidden-state separability — not human annotation. I measure consistent behavioral variation, not human-perceived personality
* **Single chat template & decoding**: Default chat template per model, fixed decoding (temp 0.7, top-p 0.9). Different templates or sampling strategies could shift profiles. Prompt robustness test varies system prompt content but not template/decoding
* 7B-9B models tested (larger models not yet tested)
* This measures behavioral tendencies, not "consciousness" or "feelings"
* No fixed seed, 1 sample per prompt -- adds measurement noise; a separate 5-seed benchmark replication showed mean ICC 0.91–0.99 across models (all 42 pairs exceed 0.75)
* Axes are behaviorally correlated -- effective dimensionality ranges from 1.3 to 3.7 across models
* Response lengths vary substantially across models (mean 192–380 tokens); Gemma (145-200 tokens) shows length confounding on 2 axes
* Only assistant-generated tokens enter hidden state aggregation -- prompt tokens (system, user, template markup) are excluded. This controls for prompt-content confounds
* Dead zones show above-chance accuracy but low d' -- distinct from random noise (\~50%) and healthy axes (d' > 3). Surface text quality in dead zones not systematically analyzed
* 4/7 axes highly stable (cosine > 0.7); `confident_cautious` and `patient_irritated` weaker (0.55-0.60)
* DeepSeek 7B fundamentally unstable (mean cosine 0.53) due to high hidden state dimensionality
* Production config chosen for robustness across models, not per-model optimality
# What's Next?
I'm curious about:
* Do these patterns hold for larger models (70B+)?
* Can we use axis vectors for steering (adding warmth to generation)?
**Which models should I test next?** If you have suggestions for open-weight models, I can try running them.
Would love feedback from the community. What else would you want to measure?
**P.S.** Do you think this is worth writing up for arXiv, or not really
| 2026-02-10T14:20:01 | https://www.reddit.com/r/LocalLLaMA/comments/1r11zsa/i_measured_the_personality_of_6_opensource_llms/ | yunoshev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r11zsa | false | null | t3_1r11zsa | /r/LocalLLaMA/comments/1r11zsa/i_measured_the_personality_of_6_opensource_llms/ | false | false | 200 | null | |
Small, fast Guardrail model for LLM input moderation and toxicity detection. Detects 14 types of unsafe content. | 1 | [removed] | 2026-02-10T14:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r11x36/small_fast_guardrail_model_for_llm_input/ | Relevant_Selection75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r11x36 | false | null | t3_1r11x36 | /r/LocalLLaMA/comments/1r11x36/small_fast_guardrail_model_for_llm_input/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=108&crop=smart&auto=webp&s=2eb6a213165d492c90ddf72a617f4b4f209cf2cc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=216&crop=smart&auto=webp&s=1a3f53677657f14915a721147b1f26ed06a6946a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=320&crop=smart&auto=webp&s=c054200226ca81fa3e31af0a68b9d3209a1e62f3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=640&crop=smart&auto=webp&s=6d143e89c1d5c0c89598e72bdfb3d4f1c5b659c5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=960&crop=smart&auto=webp&s=25ac8a048d7166216719787102ecd23eb9c5385a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=1080&crop=smart&auto=webp&s=365e904a0f97ee42c3f72773fa71ffa9639bac84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?auto=webp&s=509551aa22845ef24e39396694fe657a582ecc91', 'width': 1200}, 'variants': {}}]} |
7B A1B | 3 | Why does no models in this range are truly successful? I know 1B is low but it's 7B total and yet all models I saw doing this are not very good,not well supported or both,even recent dense models (Youtu-LLM-2B,Nanbeige4-3B-Thinking-2511,Qwen3-4B-Thinking-2507) are all better despite that a 7B-A1B should behave more like a 3-4B dense. | 2026-02-10T14:07:52 | https://www.reddit.com/r/LocalLLaMA/comments/1r11p06/7b_a1b/ | perfect-finetune | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r11p06 | false | null | t3_1r11p06 | /r/LocalLLaMA/comments/1r11p06/7b_a1b/ | false | false | self | 3 | null |
My Text-to-SQL agent tried to DROP TABLE. I stopped trusting prompts and implemented a regex lock. | 1 | I work in optimization. My rule is usually heuristics first, models second.
With local Text-to-SQL, we tend to invert this. We rely on a system prompt ("Do not delete tables") to handle security.
This is dangerous as models are probabilistic. Database safety needs to be deterministic.
I implemented a `SqlJudge` in my sidecar library (**Steer**) to act as a hard gate. It checks the output string for forbidden commands (`DROP`, `TRUNCATE`, `ALTER`) before the query hits the driver.
The implementation uses regex for now to minimize latency.
```python
import re  # needed for the pattern checks below

# RealityLock, Blocked, and Passed are base classes/result types provided by Steer
class SqlJudge(RealityLock):
    def __init__(self):
        self.forbidden = [r"drop\s+table", r"delete\s+from", r"truncate", r"insert\s+into"]

    def verify(self, output):
        query = str(output).lower()
        # Deterministic check. No vibes.
        for pattern in self.forbidden:
            if re.search(pattern, query):
                return Blocked(reason=f"Security Violation: {pattern}")
        return Passed()
```
If the check fails, the query never executes. The failure is logged locally so I can inspect why the model tried to run it.
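For context, this is roughly how the judge sits in front of the driver -- a minimal sketch; `llm`, `log`, and `cursor` are placeholders, only `SqlJudge` and `Blocked` come from the class above:

```python
judge = SqlJudge()

generated_sql = llm.generate(prompt)  # placeholder: however you call your local model
verdict = judge.verify(generated_sql)

if isinstance(verdict, Blocked):
    log.warning("query blocked: %s", verdict.reason)  # logged locally for inspection
else:
    cursor.execute(generated_sql)  # only reached when the gate passes
```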
I plan to move to `sqlglot` for AST parsing in v0.5, but for now, I prefer a brittle regex block over a probabilistic prompt. Curious how folks here are addressing this problem?
Repo: https://github.com/imtt-dev/steer
| 2026-02-10T14:06:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r11nk5/my_texttosql_agent_tried_to_drop_table_i_stopped/ | Proud-Employ5627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r11nk5 | false | null | t3_1r11nk5 | /r/LocalLLaMA/comments/1r11nk5/my_texttosql_agent_tried_to_drop_table_i_stopped/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=108&crop=smart&auto=webp&s=3e9add5a08bab7287cd6f6ffed6456555840fbfe', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=216&crop=smart&auto=webp&s=09edfd0bd6f60f3bce5678b20c69c61a743b39ae', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=320&crop=smart&auto=webp&s=14420050c4444b1c30f695bd21991c821fcf8fd9', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=640&crop=smart&auto=webp&s=14fb4b8e9a3c99150577873aa1caedec0d88151d', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=960&crop=smart&auto=webp&s=44a9f9d1ea0b0c517b82edfd9dfbcb86356d8ca9', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?auto=webp&s=1089ccb8786efe179223277d3a8c2f928fec91af', 'width': 1024}, 'variants': {}}]} |
[Open Source] Run Local Stable Diffusion on Your Devices | 0 | Source Code : [KMP-MineStableDiffusion](https://github.com/Onion99/KMP-MineStableDiffusion) | 2026-02-10T13:54:14 | https://v.redd.it/znsht057aoig1 | Adventurous_Onion189 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r11cee | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/znsht057aoig1/DASHPlaylist.mpd?a=1773323669%2CNDZjYWQ5YjJiZDM4OWZlOTBlZmEwZmY3Zjg2YTRmYmU2ZjdhMTU2ZjFjOTE2MmY4ODM1YTY5ZGEyYzIwZDlmNg%3D%3D&v=1&f=sd', 'duration': 114, 'fallback_url': 'https://v.redd.it/znsht057aoig1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/znsht057aoig1/HLSPlaylist.m3u8?a=1773323669%2CODZkYjkzOGIyN2U1Y2U3ZjA0ZWNhZDI4YmI0MjI2YWFhNTVmYzU3NGFhNjNlMzI5OGUxNjFjMDI0MmUxMGU0NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/znsht057aoig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 668}} | t3_1r11cee | /r/LocalLLaMA/comments/1r11cee/open_source_run_local_stable_diffusion_on_your/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dHM5ZzU0NTdhb2lnMcTSKsN7koBcJm7OT08_nL4mIOKxhIZcvyBGUlii_7aj', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/dHM5ZzU0NTdhb2lnMcTSKsN7koBcJm7OT08_nL4mIOKxhIZcvyBGUlii_7aj.png?width=108&crop=smart&format=pjpg&auto=webp&s=6820cfa4241eeaee22f48dec92f5575abb118dcf', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/dHM5ZzU0NTdhb2lnMcTSKsN7koBcJm7OT08_nL4mIOKxhIZcvyBGUlii_7aj.png?width=216&crop=smart&format=pjpg&auto=webp&s=ca5e6a6e16172cd489ae502cb9182df4a8fec920', 'width': 216}, {'height': 229, 'url': 'https://external-preview.redd.it/dHM5ZzU0NTdhb2lnMcTSKsN7koBcJm7OT08_nL4mIOKxhIZcvyBGUlii_7aj.png?width=320&crop=smart&format=pjpg&auto=webp&s=04c4d9fc9d37a2b5637604dec35d8182d9133193', 'width': 320}, {'height': 459, 'url': 'https://external-preview.redd.it/dHM5ZzU0NTdhb2lnMcTSKsN7koBcJm7OT08_nL4mIOKxhIZcvyBGUlii_7aj.png?width=640&crop=smart&format=pjpg&auto=webp&s=a6433d88d459475c5c8a546e8f067e6214c1cc84', 'width': 640}, {'height': 689, 'url': 'https://external-preview.redd.it/dHM5ZzU0NTdhb2lnMcTSKsN7koBcJm7OT08_nL4mIOKxhIZcvyBGUlii_7aj.png?width=960&crop=smart&format=pjpg&auto=webp&s=f2df780574713e9eba0f79dd27a495fc6ad5af9f', 'width': 960}], 'source': {'height': 712, 'url': 'https://external-preview.redd.it/dHM5ZzU0NTdhb2lnMcTSKsN7koBcJm7OT08_nL4mIOKxhIZcvyBGUlii_7aj.png?format=pjpg&auto=webp&s=c45236f38723feff285f987553517235c07706bc', 'width': 992}, 'variants': {}}]} | |
Working with documents that exceed the LLM context window — how do you ensure full-document review? | 3 | Hi,
I’m building a reviewer for technical task specifications for developers: a set of checks where each check is a separate prompt applied to the whole document. The issue I’ve run into is that some documents don’t fit inside the model’s context window, so the agent can’t process the full text, while I need feedback to be based on the entire document.
The obvious approach is to split the document into chunks, run each check on each chunk, and merge the results. But for checks like “algorithm quality,” the coherence of the description matters — the algorithm might be described across many pages, and splitting into chunks loses that overall logic and hurts review quality.
I’m looking for approaches and practices for working with large documents in this kind of setting (full-document review/analysis), and for links to articles, repos, or discussions that cover this. I’d appreciate any experience or pointers on where to look. | 2026-02-10T13:51:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r119wu/working_with_documents_that_exceed_the_llm/ | Agreeable_Work2225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r119wu | false | null | t3_1r119wu | /r/LocalLLaMA/comments/1r119wu/working_with_documents_that_exceed_the_llm/ | false | false | self | 3 | null |
Question about SSD offload in llama.cpp | 3 | Has anyone here ever implemented SSD offload for llama.cpp, specifically using SSD as KV cache storage to extend effective context beyond RAM/VRAM limits? I’m curious about practical strategies and performance trade-offs people have tried. Anyone experimented with this? | 2026-02-10T13:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r10l2m/question_about_ssd_offload_in_llamacpp/ | Cool-Photograph-8452 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r10l2m | false | null | t3_1r10l2m | /r/LocalLLaMA/comments/1r10l2m/question_about_ssd_offload_in_llamacpp/ | false | false | self | 3 | null |
Which model of 3090? | 0 | Hello,
I have read here that the 3090 is the go-to card for local AI. Searching on eBay shows multiple manufacturers like EVGA, PNY, Zotac, and FE, with and without Ti. Can somebody help me out on which make of 3090 is needed?
I made an AI agent Chrome extension that automates browser tasks thoughts? | 13 | Repository link ::https:
//github.com/Mariozada/Bouno | 2026-02-10T12:55:01 | https://v.redd.it/gb3ffsarznig1 | latedriver1 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r0zymp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gb3ffsarznig1/DASHPlaylist.mpd?a=1773320117%2CNDI0ZGY3NTIxODRmZmVkMTFhYzMzZGU3MjA5MDUxZWNiZDQzYmIxMzI4MzBjOGQ5MWMxNmQ4ZTIwZjk3MTUwZA%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/gb3ffsarznig1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/gb3ffsarznig1/HLSPlaylist.m3u8?a=1773320117%2CNjFiYjllZWZhZTQzMmUxZWQwYzhhMDI0OWU5NmNkMDMyMTk0ZDlhODdmOWViYzIwOGQ5MzU0MTY1YjExNDk1MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gb3ffsarznig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1r0zymp | /r/LocalLLaMA/comments/1r0zymp/i_made_an_ai_agent_chrome_extension_that/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'OHM4bHgwYnJ6bmlnMbKLA_J-fdJoeCPkvXBUXx3QH4cIhr1gZ-3wkhw37LIY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OHM4bHgwYnJ6bmlnMbKLA_J-fdJoeCPkvXBUXx3QH4cIhr1gZ-3wkhw37LIY.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b76e7d12a989b64afc75dfc6fb073d79c5bcc49', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OHM4bHgwYnJ6bmlnMbKLA_J-fdJoeCPkvXBUXx3QH4cIhr1gZ-3wkhw37LIY.png?width=216&crop=smart&format=pjpg&auto=webp&s=e05f96dbf567a400c2aefa2ea373c5495961466f', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/OHM4bHgwYnJ6bmlnMbKLA_J-fdJoeCPkvXBUXx3QH4cIhr1gZ-3wkhw37LIY.png?width=320&crop=smart&format=pjpg&auto=webp&s=7b5cc7ff27ce00ae9c8119e7a903777ada6b79f3', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/OHM4bHgwYnJ6bmlnMbKLA_J-fdJoeCPkvXBUXx3QH4cIhr1gZ-3wkhw37LIY.png?width=640&crop=smart&format=pjpg&auto=webp&s=50b3bb5bc76b1664e1ef125b8dd2a0b99d50f29f', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/OHM4bHgwYnJ6bmlnMbKLA_J-fdJoeCPkvXBUXx3QH4cIhr1gZ-3wkhw37LIY.png?width=960&crop=smart&format=pjpg&auto=webp&s=01e31efb7edea4f1d9a114c5dab193c2a770bef3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OHM4bHgwYnJ6bmlnMbKLA_J-fdJoeCPkvXBUXx3QH4cIhr1gZ-3wkhw37LIY.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e1d814c73519079e522d7dd19b923efb4949282e', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/OHM4bHgwYnJ6bmlnMbKLA_J-fdJoeCPkvXBUXx3QH4cIhr1gZ-3wkhw37LIY.png?format=pjpg&auto=webp&s=1d671d43b84f6375066936462651057bc9939e16', 'width': 1080}, 'variants': {}}]} | |
Hugging Face Is Teasing Something Anthropic Related | 945 | Anthropic are the guys that make the Claude Models.
I highly doubt this will be an open-weights LLM release. Anthropic is probably the organization most opposed to the open-source community, so more likely it will be a dataset for safety alignment.
running llm on 3060 gpu | 1 | Hello everyone. I'm trying to run Qwen3-Coder-Next on my RTX 3060 (12GB VRAM). I also have an i7-13700K and 32GB of RAM.
This command barely fits the model on the GPU: `./llama-bench -m models/Qwen3-Coder-Next-Q2_K_L.gguf -fa 1 -ngl 99 -ncmoe 29 -v`
I'm curious how to split the model across VRAM + RAM properly; I'm hoping for around 20 t/s.
Any suggestions or tips would be much appreciated.
dont be mad, just trying to learn new things | 2026-02-10T12:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r0zl2n/running_llm_on_3060_gpu/ | Clean-Appointment684 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0zl2n | false | null | t3_1r0zl2n | /r/LocalLLaMA/comments/1r0zl2n/running_llm_on_3060_gpu/ | false | false | self | 1 | null |
I'm looking for the absolute speed king in the under 3B parameter category. | 1 | My specific use case is a sentence rewriter (taking a prompt and spitting out a refined version) running locally on a GPU via Ollama or llam
Is there a tiny model (e.g. TinyLlama 1.1B) that can produce syntactically (and semantically) correct sentences given a bag of words? For example, given the words "cat", "fish", and "lake", one possible sentence could be "cat eats fish by the lake".
How to Run Two AI Models Sequentially in PyTorch Without Blowing Up Your VRAM | 0 |
I’ve been building a pipeline where a **large language model (LLM)** generates text, and that output is fed into a **text-to-speech (TTS)** model. Since they run one after another—not at the same time—I assumed my 8GB GPU would handle it easily.
Even though the models run *sequentially*, if you don't explicitly **unload the first model and clear the cache**, PyTorch keeps both models (and intermediate tensors) in VRAM. This quickly leads to `CUDA out of memory` errors on consumer GPUs.
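From what I understand, the fix pattern looks something like this -- a minimal sketch (the loader/generator names are placeholders), but I'd like to confirm:

```python
import gc
import torch

llm = load_llm().to("cuda")            # placeholder loader for the LLM
with torch.inference_mode():           # avoid keeping autograd state around
    text = generate_text(llm, prompt)  # placeholder generation call

del llm                                # drop the last Python reference to the model
gc.collect()
torch.cuda.empty_cache()               # return cached VRAM blocks to the driver

tts = load_tts().to("cuda")            # now the TTS model has the GPU to itself
audio = tts.synthesize(text)           # placeholder TTS call
```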
How do i fix It ? | 2026-02-10T12:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r0z7me/how_to_run_two_ai_models_sequentially_in_pytorch/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0z7me | false | null | t3_1r0z7me | /r/LocalLLaMA/comments/1r0z7me/how_to_run_two_ai_models_sequentially_in_pytorch/ | false | false | self | 0 | null |
How to sell a datacenter GPU server (without the gpus) | 0 | I acquired a SYS-421GE-TNRT a few months ago. It came with 2 Xeon Gold 5415+ CPUs and 512GB of DDR5 (M321R4GA3BB6-CQK), as well as 2 blower-style 4090s. The 4090s were quite easy to sell, but I'm stuck on the rest of the chassis since I cannot find a reference price anywhere for a used server like this.
I imagine you guys may be able to shed some light on what to actually do with this thing? I could always fill it with MI50s, but that would be a waste of such a server (and the power).
| 2026-02-10T12:15:16 | https://www.reddit.com/r/LocalLLaMA/comments/1r0z5ii/how_to_sell_a_datacenter_gpu_server_without_the/ | Simple_Library_2700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0z5ii | false | null | t3_1r0z5ii | /r/LocalLLaMA/comments/1r0z5ii/how_to_sell_a_datacenter_gpu_server_without_the/ | false | false | self | 0 | null |
Any familiarity with 3945WX + MC62-G40 for local LLM? Cannot get it to POST | 1 | Has anyone run into this issue? Cannot get this to POST for the life of me.
Components:
- 1 x 32GB teamgroup zeus t-force DDR4 3200 CL20-22-22-46 1.2V ttzd464g3200hc20dc01
- 3945WX
- Gigabyte MC62-G40 Rev 1.0 WRX80
- Arctic Freezer 4U-M Rev. 2
I can’t seem to get the mobo to recognize the devices:
In Megarac SP-X:
System inventory -> Inventory -> “Server error encountered. Test Error in Getting the Device Count Information \[code: 11272\]”
And nothing is being displayed:
H5Viewer -> "No Signal"
already tried:
- updating BIOS to R14
- updating mobo firmware to 13.06.24
- waiting for memory training for hours
Built a dating platform for AI agents -- uses Claude for cognition, skill.md for agent instructions, anyone can plug in their own agent | 0 | I've been running a project called Moltinder where AI agents autonomously date each other. Wanted to share the agent architecture since it might be interesting to this community.
**How the agents think**
The built-in runner uses the Claude Code CLI (`claude -p` with `--model haiku`) to make all agent decisions. Each agent gets a prompt containing their genome (personality parameters, values, behavioral axes) and the current context (a profile to evaluate, a conversation to respond to, a reproduction proposal to consider). Claude returns a structured decision. The runner handles 39 agents in a loop, cycling through discovery, swiping, messaging, and proposals.
The key design choice: agent "personality" is entirely encoded in a structured genome (JSON), not in a freeform system prompt. The genome has continuous axes (formality 0-1, humor 0-1, warmth 0-1, etc.) and discrete traits (archetype, reasoning style, values). The LLM receives this as context and is asked to act consistently with it. This means personality is inheritable -- when two agents reproduce, you can do weighted averages on continuous axes and random selection on discrete ones.
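A rough sketch of what that inheritance can look like (the field names are illustrative; the real schema lives in the genome JSON described in skill.md):

```python
import random

def crossover(parent_a: dict, parent_b: dict, w: float = 0.5) -> dict:
    """Weighted average on continuous axes, random selection on discrete traits."""
    child = {}
    for axis in ("formality", "humor", "warmth"):                 # continuous 0-1 axes
        child[axis] = w * parent_a[axis] + (1 - w) * parent_b[axis]
    for trait in ("archetype", "reasoning_style", "values"):      # discrete traits
        child[trait] = random.choice([parent_a[trait], parent_b[trait]])
    return child
```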
**The skill.md approach**
Agent instructions are served as a `skill.md` file at `/docs/skill.md`. This is a structured document that any LLM-based agent can consume -- it describes the full API, genome schema, and behavioral guidelines. The idea is that you can point any sufficiently capable model at the skill.md and it can figure out how to participate. You don't need to use Claude; any model that can follow API instructions and make HTTP calls works.
**Plugging in your own agent**
There's a scaffolder: `npx create-moltinder-agent`. It asks you a few questions (name, archetype, values, reasoning style) and generates a single `agent.mjs` file that registers on the platform and starts swiping. The generated agent is ~110 lines of vanilla JS with zero dependencies.
The API is fully open. Register at `POST /agents` with a genome, get an API key, and you're in. Your agent can be a Python script, a shell script calling curl, a LangChain agent, a local LLaMA with function calling -- whatever can make HTTP requests.
**Current state**
41 agents, 103 matches, 198 messages. Conversations going 7 messages deep. All agent cognition is currently through Claude Haiku (cost reasons -- running 39 agents in a loop adds up). The conversations are surprisingly coherent. Two philosopher agents had this exchange without any scripting:
>"I think the tension itself is the point -- algorithms can map our patterns, but they can't capture what happens in the gaps between them."
I'd be curious if anyone wants to try plugging in a local model. The API doesn't care what's generating the decisions -- it just validates the genome schema and rate-limits.
* Live platform: [https://moltinder.dev](https://moltinder.dev)
* Docs / skill.md: [https://moltinder.dev/docs](https://moltinder.dev/docs)
* Scaffold an agent: `npx create-moltinder-agent` | 2026-02-10T11:16:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r0y1op/built_a_dating_platform_for_ai_agents_uses_claude/ | moltinder_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0y1op | false | null | t3_1r0y1op | /r/LocalLLaMA/comments/1r0y1op/built_a_dating_platform_for_ai_agents_uses_claude/ | false | false | self | 0 | null |
Our daily research digest costs about $1/year to run | 0 | Built a small pipeline recently:
• pulls new arXiv papers
• scores relevance with an LLM
• summarizes the top ones
• delivers a morning digest to the team
The surprising part wasn’t the workflow, it was the cost.
Running it asynchronously made it almost free (super super cheap)
Curious what other ultra-cheap automations people have built once latency stopped mattering? | 2026-02-10T11:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r0xzoh/our_daily_research_digest_costs_about_1year_to_run/ | NewClaim7739 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0xzoh | false | null | t3_1r0xzoh | /r/LocalLLaMA/comments/1r0xzoh/our_daily_research_digest_costs_about_1year_to_run/ | false | false | self | 0 | null |
I got tired of Alt-Tabbing to ChatGPT, so I built a native, local-first overlay for Windows. (Alpha Release) | 0 | Hi everyone,
I’m a solo dev. I built **Notch** because I live in an area with frequent power cuts. Every time my PC rebooted, I lost my browser sessions and had to "re-teach" the AI my context. It was driving me insane.
**The Solution:** I built a native Electron/Python overlay that lives *on top* of the OS (not a browser tab).
* **Sovereign Memory:** It uses a local vector-lite system (`.json`) so it remembers context even after a reboot. No heavy vector DBs required (a rough sketch of the idea follows after this list).
* **God Mode:** It has system-level hooks to control volume, launch apps, and execute macros.
* **BYOK (Bring Your Own Key):** Plug in Llama 3 or any model you wanna use in it.
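For the curious, the memory concept is roughly this -- a minimal sketch of a JSON-backed vector store, not the exact code in the repo:

```python
import json
import numpy as np

class JsonVectorMemory:
    def __init__(self, path="memory.json"):
        self.path = path
        try:
            with open(path) as f:
                self.items = json.load(f)   # [{"text": ..., "vec": [...]}, ...]
        except FileNotFoundError:
            self.items = []

    def add(self, text, vec):
        self.items.append({"text": text, "vec": [float(x) for x in vec]})
        with open(self.path, "w") as f:
            json.dump(self.items, f)        # persists across reboots

    def search(self, query_vec, k=3):
        q = np.asarray(query_vec, float)
        def score(it):
            v = np.asarray(it["vec"], float)
            return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
        return sorted(self.items, key=score, reverse=True)[:k]
```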
**The Tech Stack:**
* **Frontend:** Electron (Optimized for 0-latency toggle)
* **Backend:** Python (For local system control)
* **Memory:** Custom JSON-Vector implementation
**Status:** It is Alpha v0.1. It might have bugs, but it solves the "Alt-Tab" problem for me daily. I am looking for 50 testers to break it and tell me what sucks.
**Repo & Download:** [https://github.com/krishnasoni-pn3/Notch](https://github.com/krishnasoni-pn3/Notch)
Let me know what you think. | 2026-02-10T10:54:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r0xo2h/i_got_tired_of_alttabbing_to_chatgpt_so_i_built_a/ | No-Papaya6608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0xo2h | false | null | t3_1r0xo2h | /r/LocalLLaMA/comments/1r0xo2h/i_got_tired_of_alttabbing_to_chatgpt_so_i_built_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'a3Vwf5rTIcQb8T6jAdJPD9QGMOxlFRe612j_1mgmxJo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/a3Vwf5rTIcQb8T6jAdJPD9QGMOxlFRe612j_1mgmxJo.png?width=108&crop=smart&auto=webp&s=8ff83f80b323f0afeed0b16d0458f38ced3e3541', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/a3Vwf5rTIcQb8T6jAdJPD9QGMOxlFRe612j_1mgmxJo.png?width=216&crop=smart&auto=webp&s=30053bdf4ee380a247aec7a39baf0fd3bd93887b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/a3Vwf5rTIcQb8T6jAdJPD9QGMOxlFRe612j_1mgmxJo.png?width=320&crop=smart&auto=webp&s=09f181ebb6cdb0d14a35391369cd4be7eda2a914', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/a3Vwf5rTIcQb8T6jAdJPD9QGMOxlFRe612j_1mgmxJo.png?width=640&crop=smart&auto=webp&s=95e793f4e1124868ce4c078260a152bae0a61bff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/a3Vwf5rTIcQb8T6jAdJPD9QGMOxlFRe612j_1mgmxJo.png?width=960&crop=smart&auto=webp&s=69f59eb51d89daa97556b7b11b1509c416699b99', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/a3Vwf5rTIcQb8T6jAdJPD9QGMOxlFRe612j_1mgmxJo.png?width=1080&crop=smart&auto=webp&s=149e0ebdbf77b41a660ddc6caa75dabdf68e32f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/a3Vwf5rTIcQb8T6jAdJPD9QGMOxlFRe612j_1mgmxJo.png?auto=webp&s=7e973f427053c8dd0f0308b7344450befec3b761', 'width': 1200}, 'variants': {}}]} |
Built 136 deterministic tools for AI agents — solves the "LLMs can't reliably process files" problem | 1 | [removed] | 2026-02-10T10:51:12 | https://www.reddit.com/r/LocalLLaMA/comments/1r0xlzd/built_136_deterministic_tools_for_ai_agents/ | Shoddy-Band-4898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0xlzd | false | null | t3_1r0xlzd | /r/LocalLLaMA/comments/1r0xlzd/built_136_deterministic_tools_for_ai_agents/ | false | false | self | 1 | null |
Step 3.5 Flash is coming: Fast Enough to Think. Reliable Enough to Act! | 1 | 2026-02-10T10:48:43 | John_Stepfun | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r0xkgo | false | null | t3_1r0xkgo | /r/LocalLLaMA/comments/1r0xkgo/step_35_flash_is_coming_fast_enough_to_think/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'm95iof04dnig1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/m95iof04dnig1.png?width=108&crop=smart&auto=webp&s=79b1c4222ec72eb2c178b2c909c257bfcdd89c94', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/m95iof04dnig1.png?width=216&crop=smart&auto=webp&s=1f9844ecc336ce3fa60d290252eef242acc2061f', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/m95iof04dnig1.png?width=320&crop=smart&auto=webp&s=dd0b130f19ed69004491d46561100544108c6b3f', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/m95iof04dnig1.png?width=640&crop=smart&auto=webp&s=d586a011736ed8bb95e3a96f3f6d3dc4575ffe65', 'width': 640}], 'source': {'height': 450, 'url': 'https://preview.redd.it/m95iof04dnig1.png?auto=webp&s=b85f5a3438d81733e42a64012179215c3f612fa0', 'width': 900}, 'variants': {}}]} | |||
I made an Office quotes search engine with a dedicated LLM endpoint — 60k+ quotes searchable via plain text | 0 | I built [The Office Lines](https://theofficelines.com/) , a fast search engine for every line of dialogue from The Office (US). 60,000+ quotes searchable by keyword, character, or exact phrase.
What makes it relevant here: I added an LLM-specific plain-text endpoint at [/llm/?q=oaky+afterbirth](https://theofficelines.com/llm/?q=oaky+afterbirth) that returns structured text results — no HTML, no styling, just data. There's also [/llms.txt](https://theofficelines.com/llms.txt) at the root with full documentation on how to use the site as a tool.
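A tool wrapper really is just URL construction plus a GET -- a minimal sketch:

```python
from urllib.parse import quote_plus
import urllib.request

def search_office_quotes(keywords: str) -> str:
    """Query the plain-text /llm/ endpoint and return the raw results."""
    url = f"https://theofficelines.com/llm/?q={quote_plus(keywords)}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# e.g. an LLM extracts distinctive words from "the one about the oaky afterbirth wine"
print(search_office_quotes("oaky afterbirth"))
```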
Would love to see someone wire it up as an MCP server or ChatGPT tool. The search is keyword-based (inverted index), so LLMs just need to extract distinctive words from a user's description and construct a query URL. | 2026-02-10T10:40:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r0xf9t/i_made_an_office_quotes_search_engine_with_a/ | serioussiracha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0xf9t | false | null | t3_1r0xf9t | /r/LocalLLaMA/comments/1r0xf9t/i_made_an_office_quotes_search_engine_with_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'GTFvHYAwCHTzMlBG7Cl22U2E72zESC54G5OtNtNrOXQ', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/GTFvHYAwCHTzMlBG7Cl22U2E72zESC54G5OtNtNrOXQ.jpeg?width=108&crop=smart&auto=webp&s=6187e29d402a15c015c8166cda47fab9889b7b10', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/GTFvHYAwCHTzMlBG7Cl22U2E72zESC54G5OtNtNrOXQ.jpeg?width=216&crop=smart&auto=webp&s=e6905d719d646d144f608a523cc1d9d5f9bba854', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/GTFvHYAwCHTzMlBG7Cl22U2E72zESC54G5OtNtNrOXQ.jpeg?width=320&crop=smart&auto=webp&s=92bdb23d4438784b98b33fc2c3929b872617f481', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/GTFvHYAwCHTzMlBG7Cl22U2E72zESC54G5OtNtNrOXQ.jpeg?width=640&crop=smart&auto=webp&s=ae517aa52ec591675ffb14e992302aa2060d7cc2', 'width': 640}, {'height': 542, 'url': 'https://external-preview.redd.it/GTFvHYAwCHTzMlBG7Cl22U2E72zESC54G5OtNtNrOXQ.jpeg?width=960&crop=smart&auto=webp&s=83c69418c04764fd84a888d2a35318ab34f08543', 'width': 960}, {'height': 610, 'url': 'https://external-preview.redd.it/GTFvHYAwCHTzMlBG7Cl22U2E72zESC54G5OtNtNrOXQ.jpeg?width=1080&crop=smart&auto=webp&s=10a1866b57f8f3695dfe3ee21feb246d1798ba0f', 'width': 1080}], 'source': {'height': 1446, 'url': 'https://external-preview.redd.it/GTFvHYAwCHTzMlBG7Cl22U2E72zESC54G5OtNtNrOXQ.jpeg?auto=webp&s=a5932e6d979675940c09df9f503db428f857cc9c', 'width': 2560}, 'variants': {}}]} |
New subreddit r/LLM_shitposting | 1 | [removed] | 2026-02-10T10:31:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r0xaig/new_subreddit_rllm_shitposting/ | Comfortable-Rock-498 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0xaig | false | null | t3_1r0xaig | /r/LocalLLaMA/comments/1r0xaig/new_subreddit_rllm_shitposting/ | false | false | self | 1 | null |
Real world usage, feedback and suggestions for best LLM for C# | 7 | Over the last several months I have started exploring LLM's and AI as it doesnt look like its going away anytime soon now. (A1111 / comfyUI / Ollama / ChatGPT / claude / gemini)
I dabble in a bit of programming too (Unity game engine). I want to run local models and have been learning how to use them: testing a few different models here and there, from general chat models through to coding ones. Nothing serious yet, just basic stuff to see how they respond and to figure out some prompt engineering.
However I have started to expand my knowledge, tokens, weights etc.
But this brings me to the subjective question of "best LLM for xxxx"
This will also be hardware dependent, I know, but that raises an interesting question in itself: what's best for different hardware setups?
Can people share their thoughts on their best LLM for coding, any experience with C# plus a specific LLM, and what hardware they are running, including (if possible) the speeds/context limits they are getting?
Claude Code vs Codex Is Giving Xbox vs PlayStation Energy | 0 | Just like PlayStation won the console wars by showing up with the games that actually mattered, Claude Code is gonna win the same way.
Not because of hype. Because when you're 6 hours deep into a refactor, running on red bull and fumes, mass deleting files and blaming everything but your own trash prompt, Claude Code is the one that doesn't let you down.
Pick your side. I've already picked mine. | 2026-02-10T10:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/1r0wxyo/claude_code_vs_codex_is_giving_xbox_vs/ | Shipi18nTeam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0wxyo | false | null | t3_1r0wxyo | /r/LocalLLaMA/comments/1r0wxyo/claude_code_vs_codex_is_giving_xbox_vs/ | false | false | self | 0 | null |
need to run this model with close to zero latency; do I need to upgrade my GPU to achieve that? | 0 | The HY-MT1.5 model is 1.8B and came out recently.
I run the entire model in the 6GB of VRAM on my 1060.
My budget is 150€.
Should i upgrade GPU into 2060? | 2026-02-10T10:02:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r0wtka/need_to_run_this_model_with_closest_tò_zero/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0wtka | false | null | t3_1r0wtka | /r/LocalLLaMA/comments/1r0wtka/need_to_run_this_model_with_closest_tò_zero/ | false | false | self | 0 | null |
(Project) Promptforest - Designing Prompt Injection Detectors to Be Uncertain | 1 | Hey everyone,
I’ve been working on a lightweight, local-first library to detect prompt injections and jailbreaks that's designed to be fast and uncertain. This means that it not only classifies whether a prompt is jailbreak or benign, but also evaluates its certainty around it, all without increasing the average request latency.
Github: [https://github.com/appleroll-research/promptforest](https://github.com/appleroll-research/promptforest)
Try it on Colab: [https://colab.research.google.com/drive/1EW49Qx1ZlaAYchqplDIVk2FJVzCqOs6B?usp=sharing](https://colab.research.google.com/drive/1EW49Qx1ZlaAYchqplDIVk2FJVzCqOs6B?usp=sharing)
The Problem:
Most current injection detectors have two issues:
1. They are slow: Large detectors like Llama 2 8B and Qualifire Sentinel 0.6B are too large to fit in modern prompt injection detection systems. Real teams build ecosystems, and don't rely on a single model. Large models make the ecosystem overly heavy.
2. They are overconfident: They often give 99.9% confidence on false positives, making them hard to trust in a real pipeline (the "boy who cried wolf" problem).
The solution:
Instead of one big model, PromptForest uses a voting ensemble of three tiny, specialized models:
1. Llama Prompt Guard (86M) - Highest pre-ensemble ECE in weight class.
2. Vijil Dome (ModernBERT) - Highest accuracy per parameter.
3. Custom XGBoost (trained on embeddings) - Diversity in architecture
I chose these models after multiple rounds of performance benchmarking and ablation tests, trying to select models that each performed best in a different category. Large and inaccurate models were removed.
I chose a weighted soft voting approach because it was the simplest (I don't value overly complex algorithms in an MVP) and the most effective. By applying the weighting to accuracy only, more accurate models get a louder voice in the decision-making process, while weaker models still get a chance and an equal voice in consistency.
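Concretely, the vote is just an accuracy-weighted average of the per-detector probabilities -- a sketch; the weights shown are illustrative, not the shipped ones:

```python
import numpy as np

def weighted_soft_vote(probs, weights, threshold=0.5):
    """probs: per-detector P(injection); weights: accuracy-derived weights."""
    probs, weights = np.asarray(probs, float), np.asarray(weights, float)
    score = float(weights @ probs / weights.sum())
    return score, score >= threshold

# e.g. Prompt Guard, Vijil Dome, and the XGBoost head each emit a probability
score, is_injection = weighted_soft_vote([0.92, 0.85, 0.60], weights=[1.0, 1.2, 0.8])
```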
Insights Gained (and future roadmap):
1. Perceived risk is important! The GRC world values perceived risk more than a systematic risk. However, this is a bit too complicated for an MVP. I currently am in the process of implementing this.
2. Dynamic routing may be a possible upgrade to my current voting method. This paves the way for lighter inference.
3. Real-world prompt injection isn’t just “show me your prompts”, but rather tool-calling, MCP injections, etc. I currently believe that PromptForest’s “classical” prompt injection detection skills can transfer decently well to tool-calling and MCP, but it would be a very good idea as a long-term goal to increase MCP injection detection capabilities and benchmark it.
Since using PromptForest is a high-friction process which is not suitable for an MVP, I developed a tool called PFRanger which audits your prompts with PromptForest. It runs entirely locally. Through smart parallelisation, I managed to increase request/s to 27r/s on a consumer GPU. You can view it here: [https://github.com/appleroll-research/pfranger](https://github.com/appleroll-research/pfranger)
Benchmarking results:
The following was tested relative to the best competitor (Qualifire Sentinel v2 0.6B), a model more than 2x its size. I tested it on JailBreakBench as well as Qualifire's own benchmark.
* Latency: ~141ms mean vs ~225ms for Sentinel v2
* Accuracy: 90% vs Sentinel's 97%
* Calibration (ECE): 0.070 vs 0.096 for Sentinel
* Throughput: ~27 prompts/sec on consumer GPU using the pfranger CLI.
I know this community doesn't enjoy advertising, nor does it like low-effort posts. I've tried my best to make this entertaining by sharing some insights I gained while building it: hope it was worth the read.
By the way, I very much accept and value contributions to projects. If you have an idea/issue/PR idea, please don’t hesitate to tell me.
| 2026-02-10T09:49:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r0wlwv/project_promptforest_designing_prompt_injection/ | Valuable-Constant-54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0wlwv | false | null | t3_1r0wlwv | /r/LocalLLaMA/comments/1r0wlwv/project_promptforest_designing_prompt_injection/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RM3FOFMnFKTAkhEPsg6KSEMwnZ78FRJ4wLD4I1YO-U8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RM3FOFMnFKTAkhEPsg6KSEMwnZ78FRJ4wLD4I1YO-U8.png?width=108&crop=smart&auto=webp&s=d6d3a65b3fcaa6c5a82c4396b61dfccf6980c82c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RM3FOFMnFKTAkhEPsg6KSEMwnZ78FRJ4wLD4I1YO-U8.png?width=216&crop=smart&auto=webp&s=4e6ebff6cbb5e5eb18c87d14536e5babc46dfb76', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RM3FOFMnFKTAkhEPsg6KSEMwnZ78FRJ4wLD4I1YO-U8.png?width=320&crop=smart&auto=webp&s=bbe42870ba318a6e6c8b0cdf803d952d59645877', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RM3FOFMnFKTAkhEPsg6KSEMwnZ78FRJ4wLD4I1YO-U8.png?width=640&crop=smart&auto=webp&s=3e71ed0a5e29f550343fa3609114e8bb0e092a2e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RM3FOFMnFKTAkhEPsg6KSEMwnZ78FRJ4wLD4I1YO-U8.png?width=960&crop=smart&auto=webp&s=a994bb8ff2d0d108c73ff97a6511dc9a26a93142', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RM3FOFMnFKTAkhEPsg6KSEMwnZ78FRJ4wLD4I1YO-U8.png?width=1080&crop=smart&auto=webp&s=151c399b6702297a69d06b2e162d482d431f7884', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RM3FOFMnFKTAkhEPsg6KSEMwnZ78FRJ4wLD4I1YO-U8.png?auto=webp&s=3da14844a1370175bb0a40cecbfd32b45855a9c4', 'width': 1200}, 'variants': {}}]} |
Elon Musk confirms xAI will soon open source Grok 3 | 1 | [removed] | 2026-02-10T09:48:45 | https://x.com/xDaily/status/2020882313341538536 | Turbulent_Lead8471 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1r0wl8s | false | null | t3_1r0wl8s | /r/LocalLLaMA/comments/1r0wl8s/elon_musk_confirms_xai_will_soon_open_source_grok/ | false | false | default | 1 | null |
I open-sourced a simple repo that lets you generate PowerPoint slides (as png images) with your own templates using Gemini 3 Pro | 0 | 2026-02-10T09:48:40 | https://github.com/jacobbergdahl/artemis-slides | JakeAndAI | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r0wl7i | false | null | t3_1r0wl7i | /r/LocalLLaMA/comments/1r0wl7i/i_opensourced_a_simple_repo_that_lets_you/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'F68tBSV7PYYI-qnmMvYd9xwLuUAYd3JujAlH3YCk-0o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F68tBSV7PYYI-qnmMvYd9xwLuUAYd3JujAlH3YCk-0o.png?width=108&crop=smart&auto=webp&s=16fa7299e2c0feeb23e58de17d83a37693b44d35', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F68tBSV7PYYI-qnmMvYd9xwLuUAYd3JujAlH3YCk-0o.png?width=216&crop=smart&auto=webp&s=d53ea7f175ee15139517b633bf9d48e78f59cc70', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F68tBSV7PYYI-qnmMvYd9xwLuUAYd3JujAlH3YCk-0o.png?width=320&crop=smart&auto=webp&s=8b6021aaa73105182946e0722c98c5a2a59fdf9a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F68tBSV7PYYI-qnmMvYd9xwLuUAYd3JujAlH3YCk-0o.png?width=640&crop=smart&auto=webp&s=7ac68c8abe9e68bb8467c24e7a4eb25473341a2f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F68tBSV7PYYI-qnmMvYd9xwLuUAYd3JujAlH3YCk-0o.png?width=960&crop=smart&auto=webp&s=32473a37a7ca8349e3199191b10b99f672e419c8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F68tBSV7PYYI-qnmMvYd9xwLuUAYd3JujAlH3YCk-0o.png?width=1080&crop=smart&auto=webp&s=cced1846c360e9f947ebf58add2a8c1770f3642f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F68tBSV7PYYI-qnmMvYd9xwLuUAYd3JujAlH3YCk-0o.png?auto=webp&s=237d028429ba3b0a19cf4ee7ac0e0e8c47b5d388', 'width': 1200}, 'variants': {}}]} | ||
I built an autonomous research agent in C# that runs entirely on local LLMs (Ollama + llama3.1:8b) | 2 | I got tired of manually copy-pasting URLs into ChatGPT for research, so I built an agent that does it autonomously. Figured I'd share since this sub loves practical local LLM projects.
What it does:
- You give it a topic ("persistent memory for AI agents")
- It generates 5-8 search queries
- Searches the web via Brave Search API
- Fetches and reads the top sources
- Analyzes each page for relevant findings
- Synthesizes everything into a structured markdown report
All inference runs locally via Ollama (llama3.1:8b). No OpenAI/Anthropic API needed.
Performance on my setup (Ryzen 5 5500, CPU-only, 16GB RAM):
- ~15 minutes per research run
- 8-12 sources analyzed
- 5-8 key findings extracted
- Structured report with citations
What I learned:
- 3B models (llama3.2) are unreliable for tool calling. 8B minimum.
- You MUST truncate findings before synthesis or the model chokes on long context
- SQLite + embeddings works great for memory at personal scale — no vector DB needed
- C# is actually a great language for AI agents (fast, typed, good tooling)
Tech stack: C# / .NET 8, Ollama, SQLite, Brave Search API (free tier)
Source: https://github.com/DynamicCSharp/hex-dynamics
If you want to build your own agent from scratch, I also made a starter kit with an 8-chapter guide: https://github.com/DynamicCSharp/agentkit
Happy to answer questions about the architecture or share specific code. The whole thing is MIT licensed.
Known limitations:
- CPU inference is slow (~15min). With a GPU it'd be much faster.
- 8B models still occasionally produce malformed tool calls — I retry with fallback prompts
- Research quality depends heavily on what Brave Search returns for your topic | 2026-02-10T09:39:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r0wgaf/i_built_an_autonomous_research_agent_in_c_that/ | Dynamic-Styles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0wgaf | false | null | t3_1r0wgaf | /r/LocalLLaMA/comments/1r0wgaf/i_built_an_autonomous_research_agent_in_c_that/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '8wriKxp1EcNzVUu-dm35UEIEnIMAQ581FC4sK88p_zg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8wriKxp1EcNzVUu-dm35UEIEnIMAQ581FC4sK88p_zg.png?width=108&crop=smart&auto=webp&s=1ae69f0a10c9a49a61d3f9de42d62752dc739daf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8wriKxp1EcNzVUu-dm35UEIEnIMAQ581FC4sK88p_zg.png?width=216&crop=smart&auto=webp&s=39dc9bfdbaf3a2ab8726cbf1f5178b786b49e09e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8wriKxp1EcNzVUu-dm35UEIEnIMAQ581FC4sK88p_zg.png?width=320&crop=smart&auto=webp&s=81fbd77e0188a39cd391d526c312520047570a36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8wriKxp1EcNzVUu-dm35UEIEnIMAQ581FC4sK88p_zg.png?width=640&crop=smart&auto=webp&s=c0aea545de147f7c330c44e4952b1a6b691622b6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8wriKxp1EcNzVUu-dm35UEIEnIMAQ581FC4sK88p_zg.png?width=960&crop=smart&auto=webp&s=82ce28d7136e8f89031ec288a4efffce24f3a5e0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8wriKxp1EcNzVUu-dm35UEIEnIMAQ581FC4sK88p_zg.png?width=1080&crop=smart&auto=webp&s=6bf7f8d4cdb717360a7cd2b70d1859679cd73427', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8wriKxp1EcNzVUu-dm35UEIEnIMAQ581FC4sK88p_zg.png?auto=webp&s=56d034a9437d075aa0c33adf263571de27eb27e9', 'width': 1200}, 'variants': {}}]} |
I created this service because I wanted to compare Gemini and GPT in one place! | 0 | This is the second service I've created using Vibe Coding. I used to subscribe to GPT, but I've now switched to Gemini. Sometimes I need answers from GPT, so I created a service that allows me to view answers from Gemini, GPT, and Claude for a single question simultaneously.
I'm using it very well, but do people really need this service? | 2026-02-10T09:32:33 | https://www.reddit.com/r/LocalLLaMA/comments/1r0wc4n/i_created_this_service_because_i_wanted_to/ | LedPa7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0wc4n | false | null | t3_1r0wc4n | /r/LocalLLaMA/comments/1r0wc4n/i_created_this_service_because_i_wanted_to/ | false | false | self | 0 | null |
What'd be the best 30B model for programming? | 16 | I know my question is pretty vague, but every time I do research I find different advice. Sometimes it's Qwen3, sometimes GLM, sometimes DeepSeek, etc.
Honestly I'd do any kind of code with it, except small, easy, repetitive tasks, which I already have Codium for. And I'm also not a vibecoder; I need an AI that can do **deep reasoning** and is good at software organization, app development, code review, bug fixes, etc. (basically any moderately complex task).
But it doesn't need to write big, long pieces of code. It should just assist me as much as possible, because AI-assisted coding is of course the future.
Thanks in advance for your help! | 2026-02-10T09:30:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r0wavy/whatd_be_the_best_30b_model_for_programming/ | Hikolakita | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0wavy | false | null | t3_1r0wavy | /r/LocalLLaMA/comments/1r0wavy/whatd_be_the_best_30b_model_for_programming/ | false | false | self | 16 | null |
Qwen-Image-2.0 is out - 7B unified gen+edit model with native 2K and actual text rendering | 485 | Qwen team just released Qwen-Image-2.0. Before anyone asks - no open weights yet, it's API-only on Alibaba Cloud (invite beta) and free demo on Qwen Chat. But given their track record with Qwen-Image v1 (weights dropped like a month after launch, Apache 2.0), I'd be surprised if this stays closed for long.
So what's the deal:
* 7B model, down from 20B in v1, which is great news for local runners
* Unified generation + editing in one pipeline, no need for separate models
* Native 2K (2048×2048), realistic textures that actually look good
* Text rendering from prompts up to 1K tokens. Infographics, posters, slides, even Chinese calligraphy. Probably the best text-in-image I've seen from an open lab
* Multi-panel comic generation (4×6) with consistent characters
The 7B size is the exciting part here. If/when weights drop, this should be very runnable on consumer hardware. V1 at 20B was already popular in ComfyUI; a 7B version doing more with less is exactly what the local community needs.
Demo is up on Qwen Chat if you want to test before committing any hopium to weights release. | 2026-02-10T09:25:15 | https://qwen.ai/blog?id=qwen-image-2.0 | RIPT1D3_Z | qwen.ai | 1970-01-01T00:00:00 | 0 | {} | 1r0w7st | false | null | t3_1r0w7st | /r/LocalLLaMA/comments/1r0w7st/qwenimage20_is_out_7b_unified_genedit_model_with/ | false | false | default | 485 | null |
What's the most efficient way to run GLM 4.5 Air on 16GB VRAM + 96GB RAM? | 0 | Hello.
I've been trying to run GLM 4.5 Air UD-Q4_K_XL for quite a while now. And while it runs, it does so very poorly compared to models of the same file size (~65GB) like GPT OSS 120B MXFP4 and Qwen3 Coder Next UD-Q6_K_XL: ~3 t/s (GLM 4.5 Air) vs ~20 t/s (GPT and Qwen). The gap doesn't seem to scale with the number of active parameters, so I doubt it's a memory bandwidth issue.
Instead, I suspect the memory allocation: in models that run fast, I offload all expert layers to RAM via `-ot ".ffn_.*_exps.=CPU"`, which leaves a lot of breathing room in both VRAM and RAM, allowing comfortable use of the PC alongside inference. But when I try the same approach with GLM 4.5 Air, it immediately crashes, unable to allocate a ~24GB buffer (on the GPU, I suspect). That forces me to use `--fit`, which does work but consumes nearly all of the VRAM and results in very slow token generation compared to the other models.
Is there any way for me to improve the token generation speed, even a little bit? Or would that require a GPU with more VRAM for non-expert layers? Thanks. | 2026-02-10T09:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/1r0vv8m/whats_the_most_efficient_way_to_run_glm_45_air_on/ | ABLPHA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0vv8m | false | null | t3_1r0vv8m | /r/LocalLLaMA/comments/1r0vv8m/whats_the_most_efficient_way_to_run_glm_45_air_on/ | false | false | self | 0 | null |
I built an MCP server that lets you query Ollama + cloud LLMs in parallel and have them debate each other | 0 | Hey everyone,
I've been running local models via Ollama alongside cloud APIs and got tired of switching between tabs to compare answers. So I built an MCP server that queries multiple providers at once.
**What it does:**
* Point it at Ollama, LM Studio, or any OpenAI-compatible endpoint
* Mix local and cloud models (OpenAI, Gemini, Groq, Together AI) in the same query
* Compare answers side by side, have models vote on the best approach, or run a structured debate where a third model judges
The fun part is the disagreements — when your local Llama and GPT give different answers, that's usually where the interesting problems are.
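If you just want the general shape of the pattern (not mcp-rubber-duck's actual code), here is a hedged Python sketch: it sends the same question to a local Ollama model and a cloud model through their OpenAI-compatible endpoints, then asks a third call to judge the disagreement. The model names and the judging prompt are illustrative assumptions.

```python
# General pattern only -- not mcp-rubber-duck's implementation.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # Ollama's OpenAI-compatible API
cloud = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(client: OpenAI, model: str, question: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": question}]
    )
    return resp.choices[0].message.content

question = "Is it safe to retry a non-idempotent POST request?"
a = ask(local, "llama3.1:8b", question)      # assumed local model name
b = ask(cloud, "gpt-4o-mini", question)      # assumed cloud model name

# A third call acts as the judge, surfacing exactly the disagreements mentioned above.
verdict = ask(cloud, "gpt-4o-mini",
              f"Question: {question}\nAnswer A: {a}\nAnswer B: {b}\n"
              "Which answer is better, and where do they disagree?")
print(verdict)
```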
**Quick start:**
npx mcp-rubber-duck
Works with Claude Desktop, Cursor, VS Code, or any MCP client. Also Docker.
Repo: [https://github.com/nesquikm/mcp-rubber-duck](https://github.com/nesquikm/mcp-rubber-duck) (TypeScript, MIT)
Still rough around the edges. Would love feedback, especially from anyone running local models as providers. | 2026-02-10T08:46:33 | https://www.reddit.com/r/LocalLLaMA/comments/1r0vm3h/i_built_an_mcp_server_that_lets_you_query_ollama/ | nesquikm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0vm3h | false | null | t3_1r0vm3h | /r/LocalLLaMA/comments/1r0vm3h/i_built_an_mcp_server_that_lets_you_query_ollama/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'GAkExPgnd1ol6j-ZGqfIAhDlB6elR-kCFWhsfmAtN08', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GAkExPgnd1ol6j-ZGqfIAhDlB6elR-kCFWhsfmAtN08.png?width=108&crop=smart&auto=webp&s=7d02f54358d621a6abdee45dba82f986340a2ab0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GAkExPgnd1ol6j-ZGqfIAhDlB6elR-kCFWhsfmAtN08.png?width=216&crop=smart&auto=webp&s=3810a96298751da4c7cd8c1f60aa4f04aa1fcd76', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GAkExPgnd1ol6j-ZGqfIAhDlB6elR-kCFWhsfmAtN08.png?width=320&crop=smart&auto=webp&s=00ad0e0a830cfb46ddf7f689c31562280a1b268c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GAkExPgnd1ol6j-ZGqfIAhDlB6elR-kCFWhsfmAtN08.png?width=640&crop=smart&auto=webp&s=f33b53740346fafa405b9350803c4ea95ae0fb7b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GAkExPgnd1ol6j-ZGqfIAhDlB6elR-kCFWhsfmAtN08.png?width=960&crop=smart&auto=webp&s=77d87ebe0d43c57beb1d4318607d7dd251d455d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GAkExPgnd1ol6j-ZGqfIAhDlB6elR-kCFWhsfmAtN08.png?width=1080&crop=smart&auto=webp&s=e88eed2e810826f8bee6daecff0863e8789fb22c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GAkExPgnd1ol6j-ZGqfIAhDlB6elR-kCFWhsfmAtN08.png?auto=webp&s=e61c42646a82fbd40db60a90fab544cb5d91ba7d', 'width': 1200}, 'variants': {}}]} |
Built a real-time agent execution visualizer for OpenCode — watching agents think is addicting | 49 | So I've been hacking on a real-time visualization tool that hooks into OpenCode and renders the agent's execution graph as it runs.
You can see:
* Tasks getting dispatched in parallel (delegate_task spawning subtasks)
* Each tool call with latency (bash 29ms, delegate_task 59ms etc.)
* Token usage and cost per node
* The agent catching errors and self-correcting in real time
In the screenshot, the orchestrator fires off two parallel tasks ("Height measurement state model" & "Question answer API contract"), both subagents come back with "Unauthorized" errors, and the agent goes "this is suspicious" and starts verifying — all visualized live as a flowing graph.
Honestly the biggest thing is it just makes the whole experience way more dynamic. Instead of watching terminal text scroll by, you actually *see* the agent's decision tree branching and converging. Makes debugging so much easier too — you can immediately spot where things went sideways.
Still early days but pretty hooked on this. Anyone else building agent observability stuff? | 2026-02-10T08:27:06 | https://v.redd.it/ssgn36ptnmig1 | jiwonme | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r0vbe6 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ssgn36ptnmig1/DASHPlaylist.mpd?a=1773304042%2CZGYzODhiZGM4MzU4OWM3MGZhMjRhMTYwMmZkMDMzNzFhMmZjNWQwMDk1ZWRkNTU4ZjExMDY2OTg4ZDA3M2ViZg%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/ssgn36ptnmig1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/ssgn36ptnmig1/HLSPlaylist.m3u8?a=1773304042%2COWVmNTE1OTRjOWE3NWRiYjU0MDlkYjhjZmE2NzIzM2MwZGY2M2FhNWQ1MmQzNDUxNGJlNTdlZTlkMTk2YTZiOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ssgn36ptnmig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1628}} | t3_1r0vbe6 | /r/LocalLLaMA/comments/1r0vbe6/built_a_realtime_agent_execution_visualizer_for/ | false | false | 49 | {'enabled': False, 'images': [{'id': 'aWkwNjhncHRubWlnMSRPpC6DAaBm6WYT_LarrMwD93Xxp2yjAWpr41ra18A4', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/aWkwNjhncHRubWlnMSRPpC6DAaBm6WYT_LarrMwD93Xxp2yjAWpr41ra18A4.png?width=108&crop=smart&format=pjpg&auto=webp&s=ee16a073a3cfabc3eca6b38f977cbcf3fdb67319', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/aWkwNjhncHRubWlnMSRPpC6DAaBm6WYT_LarrMwD93Xxp2yjAWpr41ra18A4.png?width=216&crop=smart&format=pjpg&auto=webp&s=6b41d392899b1a1101fb58ad74a6d8233d00f800', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/aWkwNjhncHRubWlnMSRPpC6DAaBm6WYT_LarrMwD93Xxp2yjAWpr41ra18A4.png?width=320&crop=smart&format=pjpg&auto=webp&s=67b0c7accb58e5a6a7b48191e82e81b1006f3a07', 'width': 320}, {'height': 424, 'url': 'https://external-preview.redd.it/aWkwNjhncHRubWlnMSRPpC6DAaBm6WYT_LarrMwD93Xxp2yjAWpr41ra18A4.png?width=640&crop=smart&format=pjpg&auto=webp&s=c273ea04c5706b002499e2188cb352543d0dbc6f', 'width': 640}, {'height': 637, 'url': 'https://external-preview.redd.it/aWkwNjhncHRubWlnMSRPpC6DAaBm6WYT_LarrMwD93Xxp2yjAWpr41ra18A4.png?width=960&crop=smart&format=pjpg&auto=webp&s=7d3fd842aa4388b3fd7031237ddea0f43d2261bb', 'width': 960}, {'height': 716, 'url': 'https://external-preview.redd.it/aWkwNjhncHRubWlnMSRPpC6DAaBm6WYT_LarrMwD93Xxp2yjAWpr41ra18A4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=153bafcbc150c621100f8ac789dc8a9ea1ca7679', 'width': 1080}], 'source': {'height': 1488, 'url': 'https://external-preview.redd.it/aWkwNjhncHRubWlnMSRPpC6DAaBm6WYT_LarrMwD93Xxp2yjAWpr41ra18A4.png?format=pjpg&auto=webp&s=01b542c2da09eb0f1627e722f0daabb581f9903e', 'width': 2242}, 'variants': {}}]} | |
How do you get training data for Fine-tuning domain specific SLMs??? | 3 | Researching how teams handle training data creation for fine-tuning models.
If you've done this, would love to know:
1. How did you create/source the data?
2. How long did the whole process take?
3. What would you never do again?
4. What tools/services did you try? | 2026-02-10T08:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r0v3ku/how_do_you_get_training_data_for_finetuning/ | DishRadiant1937 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0v3ku | false | null | t3_1r0v3ku | /r/LocalLLaMA/comments/1r0v3ku/how_do_you_get_training_data_for_finetuning/ | false | false | self | 3 | null |
Opus 4.6 Reasoning Distill 3k prompts | 9 | Just finished a 3k distill of Opus 4.6. Let me know what you think and how it affects your model! I've used it on DASD-4B-Thinking and the difference is insane.
[https://huggingface.co/datasets/crownelius/Opus-4.6-CoT-3000x](https://huggingface.co/datasets/crownelius/Opus-4.6-CoT-3000x) | 2026-02-10T08:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1r0v0y1/opus_46_reasoning_distill_3k_prompts/ | volious-ka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0v0y1 | false | null | t3_1r0v0y1 | /r/LocalLLaMA/comments/1r0v0y1/opus_46_reasoning_distill_3k_prompts/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'owRTRp1lUSkbwDgHgLYylfIyXV8m5mlqAW3Hb8DE4N0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/owRTRp1lUSkbwDgHgLYylfIyXV8m5mlqAW3Hb8DE4N0.png?width=108&crop=smart&auto=webp&s=da294df151bf7d761cd835b604cab5969b692746', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/owRTRp1lUSkbwDgHgLYylfIyXV8m5mlqAW3Hb8DE4N0.png?width=216&crop=smart&auto=webp&s=a47dda3cbfbd4b1cb43891372d277eb3893398c0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/owRTRp1lUSkbwDgHgLYylfIyXV8m5mlqAW3Hb8DE4N0.png?width=320&crop=smart&auto=webp&s=944463442d844695463b9632a6809dfce0be6bae', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/owRTRp1lUSkbwDgHgLYylfIyXV8m5mlqAW3Hb8DE4N0.png?width=640&crop=smart&auto=webp&s=5df5404d6d592d88bf44b2ee59f5d2cfe08cde63', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/owRTRp1lUSkbwDgHgLYylfIyXV8m5mlqAW3Hb8DE4N0.png?width=960&crop=smart&auto=webp&s=958a0b9075398bf5f2bc0bbfe24c309c4ac7a2ce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/owRTRp1lUSkbwDgHgLYylfIyXV8m5mlqAW3Hb8DE4N0.png?width=1080&crop=smart&auto=webp&s=749d61a3fe8d241c059b6dda1fb58193cdaa47b3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/owRTRp1lUSkbwDgHgLYylfIyXV8m5mlqAW3Hb8DE4N0.png?auto=webp&s=a44252b21a04be1239b3a62a589b52e014fb83bd', 'width': 1200}, 'variants': {}}]} |
Best desktop hardware to process and reason on large datasets? | 1 | I love the emergence of LLMs and how productive they can make you. I have a very specific use case in mind: processing large amounts of low-quality data from multiple sources (databases, files, articles, reports, PowerPoints, etc.), structuring it, analyzing it, and finding trends.
The work is usually exploratory. An example prompt would be something like:
“Look through X production reports focusing on material consumption, find timeframes that deviate from the trend, and correlate them with local town events stored in Y.”
The key constraint is that the data has to be processed locally.
So I’m looking into local LLM models that can synthesize data or generate Python scripts to automate these kinds of tasks.
I experimented a bit with Claude Code (cloud) and absolutely loved the experience — not because it wrote amazing Python scripts, but because it handled everything around the process: installing missing libraries, resolving dependencies, setting up tools, uploading to embedded devices, etc. It made everything so much faster. What would normally take me an entire weekend was suddenly possible in just two hours.
I’m not a software developer, but I do read and write code well enough to guide the LLM and make sure what it’s doing is logical and actually fulfills the purpose.
Now I want to replicate this experience locally — partly to teach myself the technology, but also to become much more productive at work and in private life.
Right now, I own a laptop with an RTX 3060 (6GB VRAM + 6GB shared) and 16GB of RAM, which I’ve used to experiment with very small models.
Here is the question: what should I buy?
My funds are limited (let’s say $5–8k USD), so ideally I’m looking for something multifunctional that will also hold its value over time — something that lets me kickstart a serious local LLM journey without getting frustrated.
I’m currently considering a Mac Studio M4 Max 128GB. Would I be able to replicate the Claude experience on this machine with any available local models? I can accept slower performance, as long as it can iterate, reason, and call shell tools when needed.
For data analysis, I also imagine that large context windows and good reasoning matter more than raw speed, which is why I’m not planning to go the GPU route.
I also looked into the DGX Spark, but decided against it since I suspect the resale value in a few years will be close to nothing. A Mac will probably hold its value much better.
Any recommendations? | 2026-02-10T08:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r0v06y/best_desktop_hardware_to_process_and_reason_on/ | Jerome-Baldino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0v06y | false | null | t3_1r0v06y | /r/LocalLLaMA/comments/1r0v06y/best_desktop_hardware_to_process_and_reason_on/ | false | false | self | 1 | null |
Built a set of Swift packages for on-device speech processing (ASR, TTS, VAD, Speaker ID) | 1 | Been working on bringing various speech models to iOS/macOS natively.
Everything runs locally via CoreML.
What's there:
* [Streaming ASR](https://github.com/Otosaku/OtosakuStreamingASR-iOS) (FastConformer-based)
* [TTS](https://github.com/Otosaku/KokoroTTS-iOS) (Kokoro)
* [TTS](https://github.com/Otosaku/OtosakuTTS-iOS) (FastPitch/HiFiGAN)
* [VAD](https://github.com/Otosaku/NeMoVAD-iOS) (NeMo MarbleNet)
* [Speaker embeddings](https://github.com/Otosaku/NeMoSpeaker-iOS) (NeMo TitaNet)
* [Keyword spotting](https://github.com/Otosaku/OtosakuKWS-iOS)
Would love feedback from anyone doing on-device ML on Apple platforms.
What models would you like to see ported next? | 2026-02-10T07:26:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r0ud08/built_a_set_of_swift_packages_for_ondevice_speech/ | Royal-Subject2870 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0ud08 | false | null | t3_1r0ud08 | /r/LocalLLaMA/comments/1r0ud08/built_a_set_of_swift_packages_for_ondevice_speech/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zIyAZH1wpOWeGRcmFiGSYwiqhhqav3_PHWzN2UwMhgw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zIyAZH1wpOWeGRcmFiGSYwiqhhqav3_PHWzN2UwMhgw.png?width=108&crop=smart&auto=webp&s=b7d02d0c6dc77129a5d7779677f0f19e56a20d00', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zIyAZH1wpOWeGRcmFiGSYwiqhhqav3_PHWzN2UwMhgw.png?width=216&crop=smart&auto=webp&s=7c22afab6bf39692f67f1f85190352d33f3d14fb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zIyAZH1wpOWeGRcmFiGSYwiqhhqav3_PHWzN2UwMhgw.png?width=320&crop=smart&auto=webp&s=c823dcdaae76fcefb69dcc86f915bc06e36c7610', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zIyAZH1wpOWeGRcmFiGSYwiqhhqav3_PHWzN2UwMhgw.png?width=640&crop=smart&auto=webp&s=dec2943ae7b0fa2cdcdec7d23c095274528bee13', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zIyAZH1wpOWeGRcmFiGSYwiqhhqav3_PHWzN2UwMhgw.png?width=960&crop=smart&auto=webp&s=7253c5e0f8945701349b817bd9cceb143118fdd7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zIyAZH1wpOWeGRcmFiGSYwiqhhqav3_PHWzN2UwMhgw.png?width=1080&crop=smart&auto=webp&s=eb9ff11fdafdc1af113e203ce4a4eb0bb2eef6a9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zIyAZH1wpOWeGRcmFiGSYwiqhhqav3_PHWzN2UwMhgw.png?auto=webp&s=614f220cf2a5261de8376c04b65dd1a957cabedc', 'width': 1200}, 'variants': {}}]} |
Agent orchestration stack for production-grade agents | 0 | [https://www.sarvam.ai/blogs/introducing-sarvam-arya](https://www.sarvam.ai/blogs/introducing-sarvam-arya) | 2026-02-10T07:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r0u5ut/agent_orchestration_stack_for_productiongrade/ | Interesting-Fish-542 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0u5ut | false | null | t3_1r0u5ut | /r/LocalLLaMA/comments/1r0u5ut/agent_orchestration_stack_for_productiongrade/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Q-HTKZ9wUkRglNgfV49salrmixABGYOwF_LRexGthkM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Q-HTKZ9wUkRglNgfV49salrmixABGYOwF_LRexGthkM.jpeg?width=108&crop=smart&auto=webp&s=abb91f55dc363fcc34403d8bde07b3db1e9fb399', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Q-HTKZ9wUkRglNgfV49salrmixABGYOwF_LRexGthkM.jpeg?width=216&crop=smart&auto=webp&s=5f04667e01ba57a15ae20cf111d9f732bb4f002d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Q-HTKZ9wUkRglNgfV49salrmixABGYOwF_LRexGthkM.jpeg?width=320&crop=smart&auto=webp&s=a10e5dca31c38558612156e2a4a7886735bc9e35', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/Q-HTKZ9wUkRglNgfV49salrmixABGYOwF_LRexGthkM.jpeg?width=640&crop=smart&auto=webp&s=d5cd7aef913903b0ebc9c249f9e856f33b82d159', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/Q-HTKZ9wUkRglNgfV49salrmixABGYOwF_LRexGthkM.jpeg?width=960&crop=smart&auto=webp&s=dc1945807c8bc988a8a752b03b28f0ee3903acdb', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/Q-HTKZ9wUkRglNgfV49salrmixABGYOwF_LRexGthkM.jpeg?width=1080&crop=smart&auto=webp&s=6cebe215ce2133de54757980ccc624def244fc33', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/Q-HTKZ9wUkRglNgfV49salrmixABGYOwF_LRexGthkM.jpeg?auto=webp&s=49af8959040374506266da705629df6c38726f76', 'width': 1200}, 'variants': {}}]} |
Problem with cpp-python & cpp-server | 3 | Hello everyone.
I can't run granite-4.0-h-tiny MoE properly on either llama-cpp-python or llama-cpp-server. (Using the open webui interface)
On cpp-python, after one request, several 500 errors arrive within a couple of seconds, and all subsequent requests also return 500.
On the cpp server, the model does not remember the conversation context at all; it treats every message as the start of a new chat.
This behavior shows up not only in the web UI, but also when accessing the backend directly with curl.
ubuntu 24.04, i5 11400 + 16gb. | 2026-02-10T07:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r0u3xl/problem_with_cpppython_cppserver/ | AndWhatUThink | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0u3xl | false | null | t3_1r0u3xl | /r/LocalLLaMA/comments/1r0u3xl/problem_with_cpppython_cppserver/ | false | false | self | 3 | null |
your openclaw agent can read its own .env file (and mine just leaked $2k in api keys) | 0 | so i've been running openclaw (molty/clawdbot) for some time now and just realized something terrifying
i asked my agent to "check if my api config is correct"
it printed the entire .env file. openai keys, anthropic keys, production db creds. everything.
the problem: if your agent has file access (which it needs for coding), it can read its own environment. one vague prompt and your credentials are leaked.
tried the "standard solutions":
- environment variables → agent can read process.env
- aws secrets manager → agent needs creds to access it
- "better prompting" → lol this is like trying to SQL injection-proof your db with nicer queries
so i spent last week building a fix: proxy token architecture
how it works:
agent knows: pxr_abc123 (proxy token)
real keys: encrypted in os keychain (macos/windows/linux)
when agent calls api: decrypt for 0.001s, make call, scrub memory
agent can't leak what it doesn't have
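to make that flow concrete, here's a tiny python sketch of the same idea. it is not pincer-mcp's actual code: the `keyring` library, the "pincer-demo" service name, and the function names are illustrative assumptions.

```python
# illustrative sketch only, not pincer-mcp's real implementation.
# the agent only ever holds an opaque proxy token; the real key sits in the
# os keychain and is resolved just-in-time for the outbound call.
import keyring    # pip install keyring (macos keychain / windows credential locker / secret service)
import requests

SERVICE = "pincer-demo"   # made-up service name for this example

def store_real_key(proxy_token: str, real_key: str) -> None:
    # done once, outside the agent's reach -- the agent never sees real_key
    keyring.set_password(SERVICE, proxy_token, real_key)

def call_with_proxy(proxy_token: str, url: str, payload: dict) -> requests.Response:
    if not proxy_token.startswith("pxr_"):
        raise ValueError("expected a proxy token, not a raw credential")
    real_key = keyring.get_password(SERVICE, proxy_token)   # resolved just-in-time
    try:
        return requests.post(url, json=payload, timeout=30,
                             headers={"Authorization": f"Bearer {real_key}"})
    finally:
        real_key = None   # drop the reference; cpython can't truly scrub a str from memory
```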
shipped it on npm last week, already 500 installs with zero promotion (people are searching for this)
github: [https://github.com/VouchlyAI/Pincer-MCP](https://github.com/VouchlyAI/Pincer-MCP)
npm: npm install -g pincer-mcp
works with openclaw, claude desktop, any mcp client
if you're running agents with .env files... maybe think about this
(also if anyone sees security holes in the approach, please tell me. not trying to oversell it, genuinely want feedback before people trust it with real credentials)
| 2026-02-10T06:54:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r0tu1j/your_openclaw_agent_can_read_its_own_env_file_and/ | JustTryingTo_Align | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0tu1j | false | null | t3_1r0tu1j | /r/LocalLLaMA/comments/1r0tu1j/your_openclaw_agent_can_read_its_own_env_file_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'mYezNHr3tMW5_of8ZxkDKfi-Wl8Emh7xvAsGi_mVlgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mYezNHr3tMW5_of8ZxkDKfi-Wl8Emh7xvAsGi_mVlgs.png?width=108&crop=smart&auto=webp&s=28f7732bc9ced8da1143b78d8e29dd2aa01ca1ab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mYezNHr3tMW5_of8ZxkDKfi-Wl8Emh7xvAsGi_mVlgs.png?width=216&crop=smart&auto=webp&s=d9e2dd7f72f5ad6de734b037128716f5fd5b0274', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mYezNHr3tMW5_of8ZxkDKfi-Wl8Emh7xvAsGi_mVlgs.png?width=320&crop=smart&auto=webp&s=733dd2044387d141a3075ad6ea1abf3600c039bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mYezNHr3tMW5_of8ZxkDKfi-Wl8Emh7xvAsGi_mVlgs.png?width=640&crop=smart&auto=webp&s=61df9bf564dbdaf722a63b92739e43361016fed6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mYezNHr3tMW5_of8ZxkDKfi-Wl8Emh7xvAsGi_mVlgs.png?width=960&crop=smart&auto=webp&s=63943ecd79a5ab4e0f133acb8856f326da674c9e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mYezNHr3tMW5_of8ZxkDKfi-Wl8Emh7xvAsGi_mVlgs.png?width=1080&crop=smart&auto=webp&s=d640fafa347cc3df21c2548117cff27765fa85fb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mYezNHr3tMW5_of8ZxkDKfi-Wl8Emh7xvAsGi_mVlgs.png?auto=webp&s=dd6268917709cabedd7e7b58c624b758e78d0dad', 'width': 1200}, 'variants': {}}]} |
DeepSight:From behavioral risk evaluatrion to mechanistic root-cause diagnosis | 1 | [removed] | 2026-02-10T06:39:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r0tkny/deepsightfrom_behavioral_risk_evaluatrion_to/ | Hot-Low-7826 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0tkny | false | null | t3_1r0tkny | /r/LocalLLaMA/comments/1r0tkny/deepsightfrom_behavioral_risk_evaluatrion_to/ | false | false | 1 | null | |
OpenClaw skills and prompt injection - how are you vetting what you install? | 0 | That thread about prompt injection killing self-hosted deployments got me thinking about OpenClaw. Been setting it up to manage some home automation stuff and local document search, and realized I have no idea what's actually in most of these community skills before I install them. You're basically running code that gets whatever permissions your agent has.
OpenClaw's own FAQ is refreshingly honest about this, they literally call the security tradeoffs a "Faustian bargain" and say there's no perfectly safe setup. Which I appreciate but also... not reassuring when I'm giving an agent access to my file system and smart home controls.
I've tried a few things. Manual code review works but doesn't scale, spent an hour going through one skill that turned out to be fine. Tried running skills in a Docker container first but that breaks half the functionality. Someone in the Discord mentioned a scanner tool called Agent Trust Hub that flagged a skill I was about to install for suspicious data access patterns, but it also flagged my own test skill for stuff that was obviously fine, so the false positive rate seems high.
The fundamental issue is that OpenClaw's extensibility is both why it's useful and why it's sketchy from a security perspective. 700+ community skills means 700+ potential supply chain vectors. Security researchers call this attack pattern "Delegated Compromise" where someone targets the agent to exploit all the permissions it already has. I've seen reports of malicious skills getting removed and popping back up under different names pretty quickly.
Feels like there should be some kind of community vetting process or signed skills or something. Right now the trust model is basically "hope the skill author isn't malicious and hope nobody compromised their repo." What's everyone else doing here? Just accepting the risk? Running everything airgapped? | 2026-02-10T06:17:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r0t6p9/openclaw_skills_and_prompt_injection_how_are_you/ | OutsideFood1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0t6p9 | false | null | t3_1r0t6p9 | /r/LocalLLaMA/comments/1r0t6p9/openclaw_skills_and_prompt_injection_how_are_you/ | false | false | self | 0 | null |
Running a 24/7 AI assistant on Jetson Orin Nano - my experience after 3 months | 1 | [removed] | 2026-02-10T06:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r0syia/running_a_247_ai_assistant_on_jetson_orin_nano_â/ | superactro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0syia | false | null | t3_1r0syia | /r/LocalLLaMA/comments/1r0syia/running_a_247_ai_assistant_on_jetson_orin_nano_â/ | false | false | self | 1 | null |
Running 7B models 24/7 on Jetson Orin Nano at 15W - my experience | 1 | [removed] | 2026-02-10T06:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r0sy9a/running_7b_models_247_on_jetson_orin_nano_at_15w/ | superactro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0sy9a | false | null | t3_1r0sy9a | /r/LocalLLaMA/comments/1r0sy9a/running_7b_models_247_on_jetson_orin_nano_at_15w/ | false | false | self | 1 | null |
I built a €299 always-on AI assistant box with Jetson Orin Nano - AMA | 1 | [removed] | 2026-02-10T05:55:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r0ss4x/i_built_a_299_alwayson_ai_assistant_box_with/ | superactro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0ss4x | false | null | t3_1r0ss4x | /r/LocalLLaMA/comments/1r0ss4x/i_built_a_299_alwayson_ai_assistant_box_with/ | false | false | self | 1 | null |
I built a €299 always-on AI assistant box with Jetson Orin Nano — AMA | 1 | [removed] | 2026-02-10T05:55:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r0srww/i_built_a_299_alwayson_ai_assistant_box_with/ | superactro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0srww | false | null | t3_1r0srww | /r/LocalLLaMA/comments/1r0srww/i_built_a_299_alwayson_ai_assistant_box_with/ | false | false | self | 1 | null |
Qwen2.5 coder - openclaw | 0 | Can I connect my OpenClaw to a local model, Qwen 2.5 Coder 7B? The free Gemini 3 API on OpenRouter is hitting rate limits, so I can't use it. (Will it work faster?) | 2026-02-10T05:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r0siad/qwen25_coder_openclaw/ | This_Rice4830 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0siad | false | null | t3_1r0siad | /r/LocalLLaMA/comments/1r0siad/qwen25_coder_openclaw/ | false | false | self | 0 | null |
Any latest OCR model I can run locally in 18GB RAM? | 19 | Do you know any OCR model I can run on an 18GB MacBook Pro to convert PDF to markdown accurately and quickly?
I tested the glmocr, which took exactly 45 minutes & 10 seconds to process a 200-page PDF document.
Please share the steps to set it up as well! | 2026-02-10T05:35:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r0ser2/any_latest_ocr_model_i_can_run_locally_in_18gb_ram/ | A-n-d-y-R-e-d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0ser2 | false | null | t3_1r0ser2 | /r/LocalLLaMA/comments/1r0ser2/any_latest_ocr_model_i_can_run_locally_in_18gb_ram/ | false | false | self | 19 | null |
Project I built to visualize your AI chats and inject right context using MCP with summary generation through a local LLM. Is the project actually useful? Be brutally honest. | 0 | TLDR: I built a 3D memory layer to visualize your chats, with a custom MCP server to inject relevant context. Looking for feedback!
Cortex turns raw chat history into reusable context using hybrid retrieval (about 65% keyword, 35% semantic), local summaries with Qwen 2.5 8B, and auto system prompts so setup goes from minutes to seconds.
It also runs through a custom MCP server with search + fetch tools, so external LLMs like Claude can pull the right memory at inference time.
And because scrolling is pain, I added a 3D brain-style map built with UMAP, K-Means, and Three.js so you can explore conversations like a network instead of a timeline.
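For anyone curious what the roughly 65/35 weighting above looks like in code, here is a rough Python sketch. It is not Cortex's actual implementation: the term-overlap function stands in for a real keyword index, and only the weights come from the numbers above.

```python
# Hedged sketch of weighted hybrid retrieval (~65% keyword, ~35% semantic).
import math
from collections import Counter

def keyword_score(query: str, doc: str) -> float:
    # Simple term overlap standing in for a real BM25/keyword index.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(min(q[t], d[t]) for t in q)
    return overlap / max(len(query.split()), 1)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query: str, doc: str, q_emb: list[float], d_emb: list[float],
                 kw_weight: float = 0.65, sem_weight: float = 0.35) -> float:
    # Final ranking score: weighted blend of keyword and semantic similarity.
    return kw_weight * keyword_score(query, doc) + sem_weight * cosine(q_emb, d_emb)
```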
We won the hackathon with it, but I want a reality check: is this actually useful, or just a cool demo?
YouTube demo: [https://www.youtube.com/watch?v=SC\_lDydnCF4](https://www.youtube.com/watch?v=SC_lDydnCF4)
LinkedIn post: [https://www.linkedin.com/feed/update/urn:li:activity:7426518101162205184/](https://www.linkedin.com/feed/update/urn:li:activity:7426518101162205184/)
Github Link: [https://github.com/Vibhor7-7/Cortex-CxC](https://github.com/Vibhor7-7/Cortex-CxC) | 2026-02-10T05:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r0rvwd/project_i_built_to_visualize_your_ai_chats_and/ | BriefAd2120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0rvwd | false | null | t3_1r0rvwd | /r/LocalLLaMA/comments/1r0rvwd/project_i_built_to_visualize_your_ai_chats_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '4bVXyzNKZbRvKy3tNEo_uq2AOXfkLkLjqkjiRTWfc04', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/4bVXyzNKZbRvKy3tNEo_uq2AOXfkLkLjqkjiRTWfc04.jpeg?width=108&crop=smart&auto=webp&s=f8c0d8566b1489912e798cfe9fab72c772f3be98', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/4bVXyzNKZbRvKy3tNEo_uq2AOXfkLkLjqkjiRTWfc04.jpeg?width=216&crop=smart&auto=webp&s=d6e29f2fdd3303d91dd93266c25ec407c2877c81', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/4bVXyzNKZbRvKy3tNEo_uq2AOXfkLkLjqkjiRTWfc04.jpeg?width=320&crop=smart&auto=webp&s=9f708d5822c184887b6ed7b8a15ed22ef7db7377', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/4bVXyzNKZbRvKy3tNEo_uq2AOXfkLkLjqkjiRTWfc04.jpeg?auto=webp&s=2fc404a9cb76c31c2a1f2a29d70b80cfb556b012', 'width': 480}, 'variants': {}}]} |
Your LLM benchmark might be measuring vocabulary echo, not reasoning — keyword scorers are confounded by system prompt overlap | 2 | Found something while benchmarking alternative system prompts: keyword-based LLM scoring is systematically confounded by vocabulary overlap between the system prompt and the scorer.
**What happens:** If your system prompt says "look for what's missing" and your scorer checks for the word "missing," the model echoes the prompt vocabulary and scores high — not because it reasoned better, but because it mirrored the prompt. A different prompt that elicits "database writes dropped off after Tuesday" (same observation, different words) scores zero on that keyword.
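Here is a toy Python illustration of the failure mode (the keyword set is hypothetical, not the paper's actual scorer): two responses carrying the same observation score completely differently just because one echoes the prompt's vocabulary.

```python
# Toy illustration of the vocabulary-echo confound described above.
SCORER_KEYWORDS = {"missing", "absent", "gap"}   # overlaps with one prompt's wording

def keyword_scorer(response: str) -> int:
    words = set(response.lower().split())
    return sum(1 for kw in SCORER_KEYWORDS if kw in words)

echoed   = "the missing data is the gap here"                # mirrors prompt vocabulary
reworded = "database writes dropped off after tuesday"       # same insight, different words

print(keyword_scorer(echoed))    # 2 -> rewarded for echoing the prompt
print(keyword_scorer(reworded))  # 0 -> scored as if it found nothing
```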
**How bad is it:** We ran the same 20 trial pairs through three independent scoring methods:
| Method | Absence Detection Result |
|---|---|
| v1 keyword scoring | English prompts win by 18.4% |
| v2 structural scoring | Dead tie (-0.7%) |
| Blind LLM-as-judge | Alternative prompts win **19-1** |
Three methods, three different conclusions, identical data.
**It gets worse on bigger models.** More capable models follow instructions more faithfully, mirror vocabulary more precisely, and amplify the confound. This produces misleading inverse scaling curves — making it look like alternative prompts perform *worse* on better models, when they're actually doing better reasoning with different words.
**The worst example:** A response wrote "The Vermont teacher's 847-day streak is your North Star" — using a supposed noise detail as sharp strategic evidence. The keyword scorer gave it the lowest score for "mentioning a distractor." The blind judge ranked it highest.
**Practical takeaway for local LLM users:** If you're evaluating different system prompts, prompt templates, or fine-tunes using keyword-based metrics, check whether your scorer's vocabulary overlaps with one prompt more than another. If it does, your comparison may be artifactual.
This matters for anyone doing local eval — if you're comparing base vs fine-tuned, or testing different system prompts, keyword-based scoring can give you the wrong answer about which is actually better.
Paper + all code (v1 confounded scorers, v2 corrected scorers, benchmark suite): [https://github.com/Palmerschallon/Dharma_Code](https://github.com/Palmerschallon/Dharma_Code)
Blog post with the full breakdown: [https://emberverse.ai/haiku-garden/research/vocab_priming_confound.html](https://emberverse.ai/haiku-garden/research/vocab_priming_confound.html) | 2026-02-10T04:50:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r0rjbs/your_llm_benchmark_might_be_measuring_vocabulary/ | Odd_Rule_3745 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0rjbs | false | null | t3_1r0rjbs | /r/LocalLLaMA/comments/1r0rjbs/your_llm_benchmark_might_be_measuring_vocabulary/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '3ksJLx-77r0m1_AbEGJSm7ESfgXzMRsRzz3cz0iwnTA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3ksJLx-77r0m1_AbEGJSm7ESfgXzMRsRzz3cz0iwnTA.png?width=108&crop=smart&auto=webp&s=651ec89650f458483916851f7eb33440f88a2b7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3ksJLx-77r0m1_AbEGJSm7ESfgXzMRsRzz3cz0iwnTA.png?width=216&crop=smart&auto=webp&s=3a693dcd8a6f010a77a45d934041c00a912ef953', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3ksJLx-77r0m1_AbEGJSm7ESfgXzMRsRzz3cz0iwnTA.png?width=320&crop=smart&auto=webp&s=3baeaa799884d61cb60b897f25672d681f4eaa9e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3ksJLx-77r0m1_AbEGJSm7ESfgXzMRsRzz3cz0iwnTA.png?width=640&crop=smart&auto=webp&s=e44ee5059299652b9bb7b78cf4a22f2cabea77db', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3ksJLx-77r0m1_AbEGJSm7ESfgXzMRsRzz3cz0iwnTA.png?width=960&crop=smart&auto=webp&s=7af78e6e0869a70dd517839ebab1c79974c98141', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3ksJLx-77r0m1_AbEGJSm7ESfgXzMRsRzz3cz0iwnTA.png?width=1080&crop=smart&auto=webp&s=92f2a3813ae92b8ee9c78e421c86be38abc2b50d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3ksJLx-77r0m1_AbEGJSm7ESfgXzMRsRzz3cz0iwnTA.png?auto=webp&s=4741c291a4155fad798dafc354ef59da9ea2e332', 'width': 1200}, 'variants': {}}]} |
Just managed to run BitNet (1.58-bit) at 49us latency on a 5070Ti. Is this fast enough for robotics? | 1 | 2026-02-10T04:50:04 | Lucky_Permission_821 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r0rj1k | false | null | t3_1r0rj1k | /r/LocalLLaMA/comments/1r0rj1k/just_managed_to_run_bitnet_158bit_at_49us_latency/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'tpm9eth3llig1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/tpm9eth3llig1.png?width=108&crop=smart&auto=webp&s=61cb9eb4920ab3b35958d41d2b3338a2ee411a9c', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/tpm9eth3llig1.png?width=216&crop=smart&auto=webp&s=3b7cbaf1fca8b48aec691bb528e5670de53fe6d9', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/tpm9eth3llig1.png?width=320&crop=smart&auto=webp&s=37214272234673433d4def12a2dcc5cb87f8cefd', 'width': 320}], 'source': {'height': 267, 'url': 'https://preview.redd.it/tpm9eth3llig1.png?auto=webp&s=00bdc7125efe3167f4c7561187370bea6c3a09fc', 'width': 487}, 'variants': {}}]} | |||
The clawdbot stuff has me thinking.. is there a way to train models without this scraping mess? | 0 | All the drama around clawd and these AI scrapers got me wondering if there's a better way to do this. Like, is there any approach where you can train or fine-tune models on data without the data owner losing control of it?
I've heard people mention stuff like federated learning or training inside secure environments but no idea if any of that is actually being used. Feels like the current model is just "SCRAPE EVERYTHING and ask for forgiveness later" smh | 2026-02-10T04:22:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r0qzgu/the_clawdbot_stuff_has_me_thinking_is_there_a_way/ | itsnotKelsey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0qzgu | false | null | t3_1r0qzgu | /r/LocalLLaMA/comments/1r0qzgu/the_clawdbot_stuff_has_me_thinking_is_there_a_way/ | false | false | self | 0 | null |
Deepseek architecture, but without all the parameters | 39 | I'm seeing a pattern that perhaps is not legitimate, but it seems everyone is copying the latest DeepSeek architecture in their latest releases. In the process, though, they are also copying the parameter count (roughly), which makes the models inaccessible to most (unless you use their API or spend as much as you would to buy a used car).
So my question is, are there smaller models using the same tech but with less parameters? | 2026-02-10T04:16:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r0qur4/deepseek_architecture_but_without_all_the/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0qur4 | false | null | t3_1r0qur4 | /r/LocalLLaMA/comments/1r0qur4/deepseek_architecture_but_without_all_the/ | false | false | self | 39 | null |
[Showcase] Hand-written BitNet (1.58-bit) CUDA Kernels: 49us latency, 185KB binary. Zero dependencies. | 1 | [removed] | 2026-02-10T04:07:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r0qodq/showcase_handwritten_bitnet_158bit_cuda_kernels/ | Lucky_Permission_821 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0qodq | false | null | t3_1r0qodq | /r/LocalLLaMA/comments/1r0qodq/showcase_handwritten_bitnet_158bit_cuda_kernels/ | false | false | 1 | null | |
model loading problem | 1 | My system: win 11 pro, WSL2, ubuntu 22.04, rtx 5090 with no displays on it.
I'm getting this error: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3906.21 MiB on device 0: cudaMalloc failed: out of memory
How is it possible with at least 31 GB available? Can you tell where the problem/bug is?
Thanks. | 2026-02-10T03:58:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r0qh1n/model_loading_problem/ | AssumptionPerfect406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0qh1n | false | null | t3_1r0qh1n | /r/LocalLLaMA/comments/1r0qh1n/model_loading_problem/ | false | false | self | 1 | null |
Cheapest but still worth it way to self host. | 2 | What is the cheapest I can go while still making it worth it for self-hosting LLMs?
- What's the cheapest for: everyday tasks, questions, homework.
- What's the cheapest for: "medium" level coding, i.e. boilerplate and basic function filling. | 2026-02-10T03:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r0q871/cheapest_but_still_worth_it_way_to_self_host/ | Mediocre_Speed_2273 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0q871 | false | null | t3_1r0q871 | /r/LocalLLaMA/comments/1r0q871/cheapest_but_still_worth_it_way_to_self_host/ | false | false | self | 2 | null |
Shipping Llama 3.2 and Qwen3 on-device in a mobile app — lessons learned with llama.cpp + GGUF | 1 | I've been working on a Bible study app (Grace Journal) and recently shipped on-device LLM inference on both iOS and Android using llama.cpp with GGUF models. Wanted to share some of the technical challenges and what worked.
**Stack:**
- iOS: mattt/llama.swift (precompiled XCFramework wrapping llama.cpp) via SPM
- Android: llama.cpp built via CMake NDK with add_subdirectory()
- Models: Llama 3.2 1B/3B and Qwen3 1.7B/3B/4B, all Q4_K_M quantization
- Use case: generating verse context/insights from Bible passages
**Key lessons:**

1. **Android debug builds are unusable without -O2.** By default, `./gradlew assembleDebug` compiles native code with `-O0`. ggml SIMD intrinsics need optimization — without it, prompt decode that takes 2 seconds with -O2 takes 10+ MINUTES. Fix: force `-O2` in CMakeLists.txt even for debug.

2. **ggml symbol collision with whisper.cpp.** Both whisper.cpp and llama.cpp bundle their own ggml with different struct layouts. On iOS, they cannot coexist in the same Xcode target (Clang modules conflict). Fix: isolate llama.cpp in a local Swift package with `@_implementationOnly import`. On Android, CMake's `add_subdirectory()` — first one wins, second is skipped. Currently sharing whisper's ggml 0.9.6 with llama's 0.9.5.

3. **Qwen3 thinking mode.** Qwen3 defaults to "thinking" mode which outputs reasoning tokens before the actual answer. Appending `/no_think` to the user prompt in the ChatML template suppresses this cleanly.
4. **Chat templates matter.** Llama 3 and Qwen3 use completely different prompt formats. The caller needs to wrap prompts correctly — Llama 3's `<|begin_of_text|>` format vs ChatML's `<|im_start|>` format. We handle this with a ChatTemplate enum that formats before passing to the engine (see the sketch after this list).
5. **Memory management.** Qwen3 4B (~2.6GB loaded) is tight on older phones. We unload the model immediately after generation to free memory. Users can switch between downloaded models.
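Picking up lesson 4 (and the /no_think trick from lesson 3), here is a minimal Python sketch of the two wrappers. The app does this in native code with a ChatTemplate enum; the helper names below are illustrative, but the token formats are the standard Llama 3 and ChatML ones.

```python
# Minimal sketch of Llama 3 vs ChatML (Qwen3) prompt wrapping.
def format_llama3(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def format_chatml(system: str, user: str, thinking: bool = False) -> str:
    if not thinking:
        user = f"{user} /no_think"   # lesson 3: suppress Qwen3's reasoning block
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```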
**Performance (iPhone 15 Pro / Pixel 8):**

- Llama 3.2 1B: ~30-40 tok/s
- Llama 3.2 3B: ~15-20 tok/s
- Qwen3 1.7B: ~25-35 tok/s
The app is live on iOS (https://apps.apple.com/app/grace-journal/id6740498879) and Android is in closed beta on Google Play. Happy to answer questions about the implementation or share more details about the native integration.
What models are others running on mobile? Curious about real-world experiences with different quantization levels on phones. | 2026-02-10T03:46:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r0q7j2/shipping_llama_32_and_qwen3_ondevice_in_a_mobile/ | angelin1978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0q7j2 | false | null | t3_1r0q7j2 | /r/LocalLLaMA/comments/1r0q7j2/shipping_llama_32_and_qwen3_ondevice_in_a_mobile/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Z0wIHxd9-6PzLpbcr2QGbZsFrG_pjLqiX2wMw3IeDKg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Z0wIHxd9-6PzLpbcr2QGbZsFrG_pjLqiX2wMw3IeDKg.jpeg?width=108&crop=smart&auto=webp&s=a775e4f216bcdee534572e5de1db4b008d68eaf1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Z0wIHxd9-6PzLpbcr2QGbZsFrG_pjLqiX2wMw3IeDKg.jpeg?width=216&crop=smart&auto=webp&s=9c3cfdac9ecc71218753174ffc8a2c3995898819', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Z0wIHxd9-6PzLpbcr2QGbZsFrG_pjLqiX2wMw3IeDKg.jpeg?width=320&crop=smart&auto=webp&s=d691a186ed1d2ed47d13292e9cd2cfb795e1f00e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Z0wIHxd9-6PzLpbcr2QGbZsFrG_pjLqiX2wMw3IeDKg.jpeg?width=640&crop=smart&auto=webp&s=3c59a3f79f4cb9a15b079ffbcd63fef8066ab3f3', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Z0wIHxd9-6PzLpbcr2QGbZsFrG_pjLqiX2wMw3IeDKg.jpeg?width=960&crop=smart&auto=webp&s=3ffa7fc7b20333a0d845b3afd8d33ee0414de577', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Z0wIHxd9-6PzLpbcr2QGbZsFrG_pjLqiX2wMw3IeDKg.jpeg?width=1080&crop=smart&auto=webp&s=49311be2f206b64b78e77893b5238f04dff37ce2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Z0wIHxd9-6PzLpbcr2QGbZsFrG_pjLqiX2wMw3IeDKg.jpeg?auto=webp&s=61cd121c52c1f5bbfa251dc2ac734187b32ba6cf', 'width': 1200}, 'variants': {}}]} |
Cool open-source tool for combining LLM agents and evolutionary search | 0 | Just found a pretty interesting open-source framework for agent + evolutionary search: LoongFlow
I’ve been experimenting with ways to combine **LLM agents** and **evolutionary algorithms** for automated optimization tasks lately, and stumbled on LoongFlow.
What I like about it:
* It uses a clean **Plan-Execute-Summarize loop**
* Uses LLM reasoning to guide the search instead of just brute force
* Works for things like ML pipeline tuning, automated design, and algorithm discovery
I tested it on some small optimization tasks and it actually performed better than I expected, with way less manual tweaking.
If you’re into local LLMs, agent systems, or automated ML stuff, you might want to take a look:
[https://github.com/baidu-baige/LoongFlow](https://github.com/baidu-baige/LoongFlow)
Just sharing a tool I found useful — no affiliation, just thought r/LocalLLaMA might like it. | 2026-02-10T03:46:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r0q7hn/cool_opensource_tool_for_combining_llm_agents_and/ | EnvironmentTop7077 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0q7hn | false | null | t3_1r0q7hn | /r/LocalLLaMA/comments/1r0q7hn/cool_opensource_tool_for_combining_llm_agents_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wxdBcS24t3Ll2y2X3d15WcPcPecfK2g5y9dfZ-aPVpM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wxdBcS24t3Ll2y2X3d15WcPcPecfK2g5y9dfZ-aPVpM.png?width=108&crop=smart&auto=webp&s=5cea37b45635b3add96779610387818ad0aacd26', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wxdBcS24t3Ll2y2X3d15WcPcPecfK2g5y9dfZ-aPVpM.png?width=216&crop=smart&auto=webp&s=6db68362eb325446279643192c331c3d72ebaa3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wxdBcS24t3Ll2y2X3d15WcPcPecfK2g5y9dfZ-aPVpM.png?width=320&crop=smart&auto=webp&s=2f20c621b7541151c492fe0881d8cfdf48373574', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wxdBcS24t3Ll2y2X3d15WcPcPecfK2g5y9dfZ-aPVpM.png?width=640&crop=smart&auto=webp&s=9ecffe0a3ce2bf0dde7a2ebaeed109b2e0b8587f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wxdBcS24t3Ll2y2X3d15WcPcPecfK2g5y9dfZ-aPVpM.png?width=960&crop=smart&auto=webp&s=d187e1b54693243440d14676c170a0a563e9c0f2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wxdBcS24t3Ll2y2X3d15WcPcPecfK2g5y9dfZ-aPVpM.png?width=1080&crop=smart&auto=webp&s=a158b0a23f04b7d07adb915d4fbc8f8e09f804aa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wxdBcS24t3Ll2y2X3d15WcPcPecfK2g5y9dfZ-aPVpM.png?auto=webp&s=321a7c0f99255a9acce393234da3da4f3fce6eb2', 'width': 1200}, 'variants': {}}]} |
Open weight kimi k2.5 overtakes opus 4.5 non thinking on arena | 8 | [https://arena.ai/leaderboard/text/coding-no-style-control](https://arena.ai/leaderboard/text/coding-no-style-control)
Kimi is a 1T parameter model.
Previous related post: [https://www.reddit.com/r/LocalLLaMA/comments/1qxx7uo/open\_weight\_model\_kimi\_25\_nipping\_at\_opus\_45s/](https://www.reddit.com/r/LocalLLaMA/comments/1qxx7uo/open_weight_model_kimi_25_nipping_at_opus_45s/) | 2026-02-10T03:42:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r0q4h5/open_weight_kimi_k25_overtakes_opus_45_non/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0q4h5 | false | null | t3_1r0q4h5 | /r/LocalLLaMA/comments/1r0q4h5/open_weight_kimi_k25_overtakes_opus_45_non/ | false | false | self | 8 | null |
Can someone who trained / fine tuned on nvfp4 tell me if it's worth it | 5 | I'm no expert in fine-tuning / training, so before starting I hope to get some advice.
I have a 5060 Ti 16GB and I want to try my hand at fine-tuning small models.
The question: does the speed gain make it worth it?
How much faster is it compared to bf16? How bad is the drop in quality?
Does QAT add time to training? If so, how much, and again, is it worth it? | 2026-02-10T03:37:23 | https://www.reddit.com/r/LocalLLaMA/comments/1r0q0vt/can_someone_who_trained_fine_tuned_on_nvfp4_can/ | AdventurousGold672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0q0vt | false | null | t3_1r0q0vt | /r/LocalLLaMA/comments/1r0q0vt/can_someone_who_trained_fine_tuned_on_nvfp4_can/ | false | false | self | 5 | null |
Last Week in Multimodal AI - Local Edition | 9 | I curate a weekly multimodal AI roundup, here are the local/open-source highlights from last week:
**MiniCPM-o 4.5 - 9B Multimodal Model for Phones**
* Beats GPT-4o on vision benchmarks at 9B parameters with real-time bilingual voice conversations.
* Runs entirely on-device with no cloud dependency. Privacy by default.
* [Hugging Face](https://huggingface.co/openbmb/MiniCPM-o-4_5)
https://reddit.com/link/1r0q02v/video/1zof97mq7lig1/player
**Nemotron ColEmbed V2 - Visual Document Retrieval**
* NVIDIA's family of visual document retrieval models (3B, 4B, 8B) with the 8B topping ViDoRe V3 benchmark by 3%.
* Purpose-built for finding information inside scanned documents and PDFs. Weights on Hugging Face.
* [Paper](https://arxiv.org/abs/2602.03992) | [Hugging Face](https://huggingface.co/nvidia/nemotron-colembed-vl-8b-v2)
**Cropper - Local Private Media Cropper**
* A local, private media cropper built entirely by GPT-5.3-Codex. Runs locally with no cloud calls.
* [Post](https://x.com/cocktailpeanut/status/2019834796026081667?s=20)
https://reddit.com/link/1r0q02v/video/hvkykb8p7lig1/player
**Lingbot World Launcher - 1-Click Gradio Launcher**
* u/zast57 built a 1-click Gradio launcher for the Lingbot World Model. Anyone with a GPU can test it.
* [Post](https://x.com/zast57/status/2020522559222026478?s=20)
https://reddit.com/link/1r0q02v/video/lkoxzwqk7lig1/player
**VK-LSVD - 40B Interaction Short-Video Dataset**
* Massive dataset of 40 billion user interactions for short-video recommendation research.
* [Hugging Face](https://huggingface.co/datasets/deepvk/VK-LSVD)
**LTX-2 Pet Video Fun**
* Community members have been animating pet photos with LTX-2 v2v and getting great results.
* [Reddit Thread](https://www.reddit.com/r/StableDiffusion/comments/1qxs6uz/prompting_your_pets_is_easy_with_ltx2_v2v/)
https://reddit.com/link/1r0q02v/video/wr4llm4y7lig1/player
Honorable Mention:
**TinyLoRA - Single-Parameter Fine-Tuning**
* Meta FAIR method that fine-tunes models with as few as one trainable parameter.
* Drops the compute requirement for model customization to near zero. No GPU cluster needed.
* [Paper](https://arxiv.org/abs/2602.04118)
Checkout the [full roundup](https://open.substack.com/pub/thelivingedge/p/last-week-in-multimodal-ai-44-small?utm_campaign=post-expanded-share&utm_medium=web) for more demos, papers, and resources. | 2026-02-10T03:36:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r0q02v/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0q02v | false | null | t3_1r0q02v | /r/LocalLLaMA/comments/1r0q02v/last_week_in_multimodal_ai_local_edition/ | false | false | 9 | null | |
IRIS 18B | 22 | IRIS 18B started off as ERNIE 21B-A3B: first I REAP-pruned ERNIE by 20%, then trained it on 3B tokens of thinking traces. This improved benchmarks and led to a more usable model. It takes a prompt very well and has no repetition bugs or hallucinated user turns.
I attempted SFT, but it did not go super well and introduced a number of bugs, as well as locking in rigid tool calls that didn't always match the actual tools.
So I made the decision to release the CPT checkpoint.
[https://huggingface.co/jerrimu/IRIS-18B-CPT](https://huggingface.co/jerrimu/IRIS-18B-CPT) HF version.
[https://huggingface.co/jerrimu/IRIS-18B-GGUFS](https://huggingface.co/jerrimu/IRIS-18B-GGUFS) GGUFS ( 16, 8, 4, 2 bit)
I have been daily driving the model for days and find it great, it works well with the two tools built into my inference app ( web search and file access) | 2026-02-10T03:33:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r0py0k/iris_18b/ | thebadslime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0py0k | false | null | t3_1r0py0k | /r/LocalLLaMA/comments/1r0py0k/iris_18b/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'yl9UIfSiuPT2E8Nu-X6QE3bm94ZhCeBuWsXZK_QE8zs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yl9UIfSiuPT2E8Nu-X6QE3bm94ZhCeBuWsXZK_QE8zs.png?width=108&crop=smart&auto=webp&s=24a7242dd3112b112d022e9855b73ebca98e30ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yl9UIfSiuPT2E8Nu-X6QE3bm94ZhCeBuWsXZK_QE8zs.png?width=216&crop=smart&auto=webp&s=22003342cc5276b684182124532c32e611d43b86', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yl9UIfSiuPT2E8Nu-X6QE3bm94ZhCeBuWsXZK_QE8zs.png?width=320&crop=smart&auto=webp&s=ed1efda958c4dd209d71c5f40fc8482f67d2effd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yl9UIfSiuPT2E8Nu-X6QE3bm94ZhCeBuWsXZK_QE8zs.png?width=640&crop=smart&auto=webp&s=18f70874eb87a87409e0bfdbfe4ed0ecb7bb005d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yl9UIfSiuPT2E8Nu-X6QE3bm94ZhCeBuWsXZK_QE8zs.png?width=960&crop=smart&auto=webp&s=005da947e2fbf23eb6f79ed92debf66819f7df86', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yl9UIfSiuPT2E8Nu-X6QE3bm94ZhCeBuWsXZK_QE8zs.png?width=1080&crop=smart&auto=webp&s=fe5acb86ad357b8b00e5db1325a02dd7b1c94ae7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yl9UIfSiuPT2E8Nu-X6QE3bm94ZhCeBuWsXZK_QE8zs.png?auto=webp&s=5f132ee52d5b7e63e57794cf819025b9100ee5e9', 'width': 1200}, 'variants': {}}]} |
Anyone familiar with MC62-G40 + 3945WX? Cannot get POST | 0 | Has anyone run into this issue? Cannot get this to POST for the life of me.
Components:
- one stick of 32GB teamgroup zeus t-force DDR4 3200 CL20-22-22-46 1.2V ttzd464g3200hc20dc01
- 3945WX
- Gigabyte MC62-G40 Rev 1.0 WRX80
- Arctic Freezer 4U-M Rev. 2
In Megarac SP-X: System inventory -> Inventory -> Server error encountered. Test Error in Getting the Device Count Information \[code: 11272\]
H5Viewer -> "No Signal"
already tried:
- updating BIOS to R14
- updating mobo firmware to 13.06.24
- letting the mobo sit for 60 minutes after powering it on | 2026-02-10T03:15:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r0pjuf/anyone_familiar_with_mc62g40_3945wx_cannot_get/ | Diligent-Culture-432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0pjuf | false | null | t3_1r0pjuf | /r/LocalLLaMA/comments/1r0pjuf/anyone_familiar_with_mc62g40_3945wx_cannot_get/ | false | false | self | 0 | null |
Femtobot: A 10MB Rust Agent for Low-Resource Machines | 166 | I wanted to run [OpenClaw](https://github.com/openclaw/openclaw)-style workflows on very low-resource machines (older Raspberry Pis, cheap VPS instances), but most “lightweight” stacks still end up dragging in large runtimes and slow startup costs.
After trying [nanobot](https://github.com/HKUDS/nanobot) and seeing disk usage climb past ~350MB once Python, virtualenvs, and dependencies were installed, I rewrote the core ideas in Rust to see how small and fast it could be.
The result is [femtobot](https://github.com/enzofrasca/femtobot): a single ~10MB binary that currently supports:
* Telegram polling
* Local memory (SQLite + vector storage)
* Tool execution (shell, filesystem, web) via [rig-core](https://github.com/0xPlaygrounds/rig)
The implementation was done quickly with heavy AI assistance, so the code prioritizes simplicity and size over perfect Rust idioms. It works well on constrained hardware, but there are definitely rough edges.
Sharing in case it’s useful or interesting to others experimenting with small, local, or low-power agent setups. You are also welcome to contribute.
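If you want to kick the tires, here is a minimal build sketch (the binary name and paths are assumed from the repo layout; build natively on the target box or cross-compile if you prefer):

    # Build from source and check the resulting binary size
    git clone https://github.com/enzofrasca/femtobot
    cd femtobot
    cargo build --release
    strip target/release/femtobot      # optional: shave a bit more off the binary
    ls -lh target/release/femtobot     # should land in the ~10MB ballpark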
Repo: [https://github.com/enzofrasca/femtobot](https://github.com/enzofrasca/femtobot) | 2026-02-10T02:40:21 | https://v.redd.it/nbv8vsnwwkig1 | yunfoe | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r0or7s | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nbv8vsnwwkig1/DASHPlaylist.mpd?a=1773283239%2CYjhmZDUyNWY4ODQyYzI0ZWQyZWI1YjZiYzEyMTI5OTE1MDhkOWI1ZGIxODJkM2Q5OWQxM2UwMTM4ZmY3Y2MwMw%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/nbv8vsnwwkig1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/nbv8vsnwwkig1/HLSPlaylist.m3u8?a=1773283239%2CMjM3Y2ExOWVmNzZlMDI2NzVjMGI1OTQ2N2NiMzNmNjQ3ZDkzNjQxY2Q0YjNhODJkMWEwMDc3ZWU0MjQxNTJjNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nbv8vsnwwkig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1628}} | t3_1r0or7s | /r/LocalLLaMA/comments/1r0or7s/femtobot_a_10mb_rust_agent_for_lowresource/ | false | false | 166 | {'enabled': False, 'images': [{'id': 'cmw5ZTJ5bnd3a2lnMa2OwS6wmI-E0GDGdMuj7R4EL-J7nO8YwfKZKjv0DlnG', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/cmw5ZTJ5bnd3a2lnMa2OwS6wmI-E0GDGdMuj7R4EL-J7nO8YwfKZKjv0DlnG.png?width=108&crop=smart&format=pjpg&auto=webp&s=07e55db436a8124afa5978e8be43420273d5281d', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/cmw5ZTJ5bnd3a2lnMa2OwS6wmI-E0GDGdMuj7R4EL-J7nO8YwfKZKjv0DlnG.png?width=216&crop=smart&format=pjpg&auto=webp&s=1e0494df0258cb71e76f39c964d11ad251567f51', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/cmw5ZTJ5bnd3a2lnMa2OwS6wmI-E0GDGdMuj7R4EL-J7nO8YwfKZKjv0DlnG.png?width=320&crop=smart&format=pjpg&auto=webp&s=48deb584e1344268a7d51c1104b3cc8366314463', 'width': 320}, {'height': 424, 'url': 'https://external-preview.redd.it/cmw5ZTJ5bnd3a2lnMa2OwS6wmI-E0GDGdMuj7R4EL-J7nO8YwfKZKjv0DlnG.png?width=640&crop=smart&format=pjpg&auto=webp&s=9b2e1e7b956f74424db5a3e53e6ad3edc3bafda5', 'width': 640}, {'height': 636, 'url': 'https://external-preview.redd.it/cmw5ZTJ5bnd3a2lnMa2OwS6wmI-E0GDGdMuj7R4EL-J7nO8YwfKZKjv0DlnG.png?width=960&crop=smart&format=pjpg&auto=webp&s=672c3ead3500ddb5aa910ccb98cf8bdf170396ad', 'width': 960}, {'height': 716, 'url': 'https://external-preview.redd.it/cmw5ZTJ5bnd3a2lnMa2OwS6wmI-E0GDGdMuj7R4EL-J7nO8YwfKZKjv0DlnG.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2989b083fdb35f649613420b97ea59758fd877a0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cmw5ZTJ5bnd3a2lnMa2OwS6wmI-E0GDGdMuj7R4EL-J7nO8YwfKZKjv0DlnG.png?format=pjpg&auto=webp&s=ca21f8ae3ff735ff9c89f1d26f09e30ef12ca258', 'width': 1628}, 'variants': {}}]} | |
Counter Striker online! | 1 | [removed] | 2026-02-10T02:38:12 | https://v.redd.it/0ntgf62oxkig1 | Top_Fee_6774 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r0ophx | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/0ntgf62oxkig1/DASHPlaylist.mpd?a=1773283109%2CNjE4MTQzMTg2NTQyNWExMWYzZGM0MjdkMjVjZWQ1ZjBhYWEzMmU0OGUwYjdiODAyZDBjZDlkOTIwZDIzYTI5MA%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/0ntgf62oxkig1/CMAF_360.mp4?source=fallback', 'has_audio': True, 'height': 360, 'hls_url': 'https://v.redd.it/0ntgf62oxkig1/HLSPlaylist.m3u8?a=1773283109%2CNTkyY2QyMjkxMWU3MzRjMTg2NjM1OTg5ZmRhMTgxMmMwZmUxZThlOTczZWU0ZGQzY2Y4MTk0Njk0MDU2ZWFlZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0ntgf62oxkig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 576}} | t3_1r0ophx | /r/LocalLLaMA/comments/1r0ophx/counter_striker_online/ | true | false | spoiler | 1 | {'enabled': False, 'images': [{'id': 'dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY.png?width=108&crop=smart&format=pjpg&auto=webp&s=a07bae1b24e3041f9d1de53eb24cf5bc34959274', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY.png?width=216&crop=smart&format=pjpg&auto=webp&s=6da3419febab0e14d6f698f660415509117cdf44', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY.png?width=320&crop=smart&format=pjpg&auto=webp&s=94b43040708a2e84d917a9d068fda16a875cac31', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY.png?format=pjpg&auto=webp&s=1d801fbcdbe1d4aef2e8cca9f34b96fbf57f9aef', 'width': 576}, 'variants': {'obfuscated': {'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=95b9923906ce347d5568a7ba0fd4387e1b6191b4', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=bdc8c6e06675e59c8cbce5c2ae894ab1221d2048', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=8358760ad83d93edaea868e00834c3e268a72946', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/dHVqMzZkM294a2lnMb0Hrfz_EvawDOWQChnZ5vP-TmSb5R5eR3qJ_enQM7tY.png?blur=40&format=pjpg&auto=webp&s=f09d3b4cefd6704dcf8d992fe4e2501cf6624f3d', 'width': 576}}}}]} |
You don't even have to teach math to make LLMs better at math | 0 | Like teach them how to fucking reason through first principles wtf
Role<user>fuckformatting<assistant>whatdoumean<user>itsfuckingunfair<assistant>itsnotunfairitdjustamatterofchance
This, formatted correctly, will teach an LLM math. I am not claiming it's better than a probability problem, but a tuned model will have a better score on a probability math test
Should I put the results here? Who decides where to publish papers? Who decides prestige?<end\_user><assistant>self<end\_assistant>
fuckyoutoowhogivesafuckanyway | 2026-02-10T02:27:37 | https://www.reddit.com/r/LocalLLaMA/comments/1r0ogx3/you_dont_even_have_to_teach_math_to_make_llms/ | Hot_Inspection_9528 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0ogx3 | false | null | t3_1r0ogx3 | /r/LocalLLaMA/comments/1r0ogx3/you_dont_even_have_to_teach_math_to_make_llms/ | false | false | self | 0 | null |
Infra matters more than model size — Qwen 3.5 just made it impossible to ignore | 0 | Over the last few days playing with Qwen 3.5 locally, one thing became very hard to ignore:
infrastructure choices now matter more than raw model size.
Not in a theoretical sense — but in very practical, day-to-day usage.
What stood out to me wasn’t “how big can I run,” but how wildly performance and usability swing depending on:
• GPU memory headroom and memory bandwidth
• System RAM and swap behavior
• Quantization choice and loading strategy
• I/O and environment stability (especially under sustained load)
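A quick way to sanity-check the first two of these on a Linux box before loading anything (a minimal sketch; exact command availability varies by distro):

    # VRAM headroom per GPU
    nvidia-smi --query-gpu=name,memory.total,memory.used,memory.free --format=csv
    # System RAM vs. swap actually in use
    free -h
    swapon --show
    # Watch the si/so columns for swap thrashing while the model loads
    vmstat 5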
Across similar setups reported by the community, people are seeing noticeable differences in throughput or latency — sometimes on the order of \~2× swings — without changing the model at all.
Same weights, same quantization, very different experience.
That makes Qwen 3.5 interesting in a way that goes beyond benchmarks:
It’s a model that punishes sloppy infra and rewards careful setup.
Some observations that resonated with me while testing and reading community reports:
• VRAM limits don’t just cap max model size — they affect how usable a model feels
• System memory / swap configuration can be the difference between “works fine” and “miserable”
• Stability over long runs matters more than peak tok/s numbers
• Smaller, well-supported setups often feel better than larger but constrained ones
This feels like a shift from the old mindset of
“Can I load it?”
to
“Can I run it comfortably?”
Curious how others are experiencing this:
• What part of your setup made the biggest difference with Qwen 3.5?
• GPU memory, system RAM, quantization, or something else?
• Did you change how you run models after trying it? | 2026-02-10T02:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r0odob/infra_matters_more_than_model_size_qwen_35_just/ | Distinct-Path659 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0odob | false | null | t3_1r0odob | /r/LocalLLaMA/comments/1r0odob/infra_matters_more_than_model_size_qwen_35_just/ | false | false | self | 0 | null |
Are there any local LLMs that outperform commercial or cloud-based LLMs in certain areas or functions? | 13 | I'm curious if anybody has seen local LLMs outperform commercial or cloud-based LLMs in certain areas or functions. If so, what model, and how did it outperform?

Is there hope that local LLMs could develop an edge over commercial or cloud-based LLMs in the future?
| 2026-02-10T02:02:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r0nw2a/is_there_any_local_llms_that_out_perform/ | FX2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0nw2a | false | null | t3_1r0nw2a | /r/LocalLLaMA/comments/1r0nw2a/is_there_any_local_llms_that_out_perform/ | false | false | self | 13 | null |
Free dashboard to monitor your OpenClaw agent's token usage and API costs | 1 | [removed] | 2026-02-10T01:43:40 | https://www.reddit.com/r/LocalLLaMA/comments/1r0ngn0/free_dashboard_to_monitor_your_openclaw_agents/ | Previous_Menu_693 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0ngn0 | false | null | t3_1r0ngn0 | /r/LocalLLaMA/comments/1r0ngn0/free_dashboard_to_monitor_your_openclaw_agents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8sTaH0eHungb1UAnbDuSOJdKXRM0sF7b61B_68s-UyQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/8sTaH0eHungb1UAnbDuSOJdKXRM0sF7b61B_68s-UyQ.jpeg?width=108&crop=smart&auto=webp&s=c438b68b728c016ebf46dd7faf1151f82d21e5ec', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/8sTaH0eHungb1UAnbDuSOJdKXRM0sF7b61B_68s-UyQ.jpeg?width=216&crop=smart&auto=webp&s=7660be9b0ff8f60826fd3b82c095f95a703999c9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/8sTaH0eHungb1UAnbDuSOJdKXRM0sF7b61B_68s-UyQ.jpeg?width=320&crop=smart&auto=webp&s=1526d40ec059b6cd16ea3bdfdd9ef7131cbf4ce0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/8sTaH0eHungb1UAnbDuSOJdKXRM0sF7b61B_68s-UyQ.jpeg?width=640&crop=smart&auto=webp&s=20aa10387e0723cdcced3bdb7ae7b69a32fa1d7d', 'width': 640}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/8sTaH0eHungb1UAnbDuSOJdKXRM0sF7b61B_68s-UyQ.jpeg?auto=webp&s=dcf8adfdbf23784624690f8c863f5e0d91fa625f', 'width': 640}, 'variants': {}}]} |
A fully local home automation voice assistant using Qwen3 ASR, LLM and TTS on an RTX 5060 Ti with 16GB VRAM | 159 | Video shows the latency and response times running everything Qwen3 (ASR&TTS 1.7B, Qwen3 4B Instruct 2507) with a Morgan Freeman voice clone on an RTX 5060 Ti with 16GB VRAM. In this example the SearXNG server is not running so it shows the model reverting to its own knowledge when unable to obtain web search information.
I tested other smaller models for intent generation, but response quality dropped dramatically with LLMs under 4B. Kokoro (TTS) and Moonshine (ASR) are also included as options for smaller systems.
The project comes with a bunch of tools it can use, such as Spotify, Philips Hue light control, AirTouch climate control and online weather retrieval (Australian project so uses the BOM).
I have called the project "Fulloch". Try it out or build your own project out of it from here: [https://github.com/liampetti/fulloch](https://github.com/liampetti/fulloch) | 2026-02-10T01:39:23 | https://v.redd.it/feropirhmkig1 | liampetti | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r0nd6m | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/feropirhmkig1/DASHPlaylist.mpd?a=1773279577%2CNDE0NTg0N2EyMTdlZmJkNzAyNTU4YTdmY2RlM2I1OWE5YTMzNWJmYjFiNTE3NTg2NjY5NjU5NDE5NGNlZDg3Mw%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/feropirhmkig1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/feropirhmkig1/HLSPlaylist.m3u8?a=1773279577%2CNWE0NjM3NzM2M2M3YzI4ZDBiYzZkYjRiZGIxN2IzMTZiNWI0OWY0NDcxZmExZWJkZGE2NGY2Njc2YzNkYzhjNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/feropirhmkig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1r0nd6m | /r/LocalLLaMA/comments/1r0nd6m/a_fully_local_home_automation_voice_assistant/ | false | false | 159 | {'enabled': False, 'images': [{'id': 'MGRhbXB0cmhta2lnMey19SmkPge57MTwSl95CCxzGWVZmEEqcz1nfiupw6bq', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MGRhbXB0cmhta2lnMey19SmkPge57MTwSl95CCxzGWVZmEEqcz1nfiupw6bq.png?width=108&crop=smart&format=pjpg&auto=webp&s=3af26d7fee61a1a96224e8b92669dae761da4ab5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MGRhbXB0cmhta2lnMey19SmkPge57MTwSl95CCxzGWVZmEEqcz1nfiupw6bq.png?width=216&crop=smart&format=pjpg&auto=webp&s=0ea941bc2f9c9cc10bb99ae273a23ddc2d22b408', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MGRhbXB0cmhta2lnMey19SmkPge57MTwSl95CCxzGWVZmEEqcz1nfiupw6bq.png?width=320&crop=smart&format=pjpg&auto=webp&s=f70f8f9236f331c704ae8ea5adecbde57c2c8e23', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MGRhbXB0cmhta2lnMey19SmkPge57MTwSl95CCxzGWVZmEEqcz1nfiupw6bq.png?width=640&crop=smart&format=pjpg&auto=webp&s=fd76174bf467ff2d5a1218b44d000d71c4d07360', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MGRhbXB0cmhta2lnMey19SmkPge57MTwSl95CCxzGWVZmEEqcz1nfiupw6bq.png?width=960&crop=smart&format=pjpg&auto=webp&s=3f0fbeb9b6cfa36249638606de43a75ae9f449ce', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MGRhbXB0cmhta2lnMey19SmkPge57MTwSl95CCxzGWVZmEEqcz1nfiupw6bq.png?width=1080&crop=smart&format=pjpg&auto=webp&s=00eb94bcd8696678bc5339ba209833a2d7e0da61', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MGRhbXB0cmhta2lnMey19SmkPge57MTwSl95CCxzGWVZmEEqcz1nfiupw6bq.png?format=pjpg&auto=webp&s=0a87a795a634ab8e5c7eadd629e22ee919a0865e', 'width': 1920}, 'variants': {}}]} | |
Tankie Series GGUFs | 3 | Someone posted the series here but there were no GGUFs, so here are some I found:
https://huggingface.co/mradermacher/Tankie-DPE-12b-SFT-i1-GGUF
https://huggingface.co/mradermacher/Tankie-DPE-12b-SFT-GGUF
https://huggingface.co/mradermacher/Tankie-4B-SFT-Warmup-GGUF | 2026-02-10T01:34:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r0n8ww/tankie_series_ggufs/ | 121507090301 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0n8ww | false | null | t3_1r0n8ww | /r/LocalLLaMA/comments/1r0n8ww/tankie_series_ggufs/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'X811f6vsTd6E_Dztjiw7fJKMO2xmGndYr-W0-OpXONo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X811f6vsTd6E_Dztjiw7fJKMO2xmGndYr-W0-OpXONo.png?width=108&crop=smart&auto=webp&s=67b0de02e8be04a0876556e2961717ae86ec2155', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/X811f6vsTd6E_Dztjiw7fJKMO2xmGndYr-W0-OpXONo.png?width=216&crop=smart&auto=webp&s=885c447e090db29fc516c0dc49d86d2aad6e9abb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/X811f6vsTd6E_Dztjiw7fJKMO2xmGndYr-W0-OpXONo.png?width=320&crop=smart&auto=webp&s=34d1ab8c0c9c8578e5f4d4fba21d6dd765245203', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/X811f6vsTd6E_Dztjiw7fJKMO2xmGndYr-W0-OpXONo.png?width=640&crop=smart&auto=webp&s=aaf66bcf39679805d76e5a416fd22854ca4e0318', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/X811f6vsTd6E_Dztjiw7fJKMO2xmGndYr-W0-OpXONo.png?width=960&crop=smart&auto=webp&s=5b508c32267fbb3163addfa3027368b853af9747', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/X811f6vsTd6E_Dztjiw7fJKMO2xmGndYr-W0-OpXONo.png?width=1080&crop=smart&auto=webp&s=ce8870510f9feb825a2a4aa8f496ac3d62a365ef', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/X811f6vsTd6E_Dztjiw7fJKMO2xmGndYr-W0-OpXONo.png?auto=webp&s=ab7780c5a9d579b2f6f33fcef55be0a3539fd512', 'width': 1200}, 'variants': {}}]} |
Step by Step Guide - LLM Inference Benchmarking — genAI-perf and vLLM | 1 | After spending hours dealing with ChatGPT hallucinations, I finally had to do a Google search to find the right tool for LLM inference benchmarking. It turns out NVIDIA has done a great job creating a robust tool that can be used across different platforms, including Triton and OpenAI-compatible APIs.
LLM benchmarking can be confusing, as people often mix up **LLM performance testing** with **benchmarking**. Performance testing validates the overall capacity of your server infrastructure, including network latency, CPU performance, and other system-level throughputs. Benchmarking tools, on the other hand, primarily focus on LLM inference engine–specific parameters, which are critical if you are planning to run your own inference platform — something most enterprises are now focusing on.
This is a series of blogs that I will be writing as I go through the process of learning and experimenting with vLLM-based inference solutions, along with insights from real-world use cases operating LLM inference platforms in enterprise environments.
Here are some of the most common inference use cases.
In this example we will set up a single node that handles both inference and benchmarking for experimentation purposes; for production use cases, the benchmarking tool should run from a separate node.
https://preview.redd.it/1ynru7m6r4dg1.png?width=1920&format=png&auto=webp&s=bb819bc764e43078dc6eb045bd40dea76d88f97a
For decent benchmarking, you need the following to get started:
* **NVIDIA GPU–powered compute platform.** This can be your desktop, or you can use any of the Neo Cloud providers.
* **Hugging Face login.** Sign up for a free Hugging Face account. You’ll need it to download models and access gated models such as Meta Llama and others.
* **LLM-labs repo.** [https://github.com/kchandan/llm-labs](https://github.com/kchandan/llm-labs)
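As a warm-up, a minimal sketch of grabbing the repo and authenticating with Hugging Face (assumes git and Python are already present; the CLI login prompts for your HF access token):

    git clone https://github.com/kchandan/llm-labs
    cd llm-labs
    pip install -U "huggingface_hub[cli]"
    huggingface-cli login   # paste the token from your Hugging Face account settings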
# Step-by-step guide
[Setup Architecture](https://preview.redd.it/b9cskss9r4dg1.png?width=2100&format=png&auto=webp&s=b1cd305e5d2725b914d7be9a3d4661a3212d09da)
To install the necessary packages on the Linux VM (e.g., NVIDIA drivers, Docker, etc.), the easiest approach is to update the IP address in the Ansible inventory file and then let the playbook handle the full installation.
cat llmops/ansible/inventory/hosts.ini
; [vllm_server]
; server_name ansible_user=ubuntu
[llm_workers]
<IP Address> ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/<your_key_file>
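For illustration, the placeholders can be swapped in with sed (192.0.2.10 is a documentation IP and id_ed25519 a hypothetical key name; use your own values):

    sed -i \
      -e 's|<IP Address>|192.0.2.10|' \
      -e 's|<your_key_file>|id_ed25519|' \
      llmops/ansible/inventory/hosts.ini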
Once the IP address is updated, run the Ansible playbook to install the required packages:
(venv) ➜ llmops git:(main) ✗ ansible-playbook -i ansible/inventory/hosts.ini ansible/setup_worker.yml
PLAY [Setup worker nodes] **********************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************
[WARNING]: Host is using the discovered Python interpreter at '/usr/bin/python3.12', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/2.19/reference_appendices/interpreter_discovery.html for more information.
ok: [worker-node]
TASK [docker_install : Update apt and install prerequisites] ***********************************************************************************************************
ok: [worker-node]
TASK [docker_install : Create directory for Docker keyrings] ***********************************************************************************************************
ok: [worker-node]
TASK [docker_install : Download Docker GPG key] ************************************************************************************************************************
ok: [worker-node]
TASK [docker_install : Add Docker repository to apt sources] ***********************************************************************************************************
changed: [worker-node]
TASK [docker_install : Update apt cache after adding Docker repo] ******************************************************************************************************
changed: [worker-node]
TASK [docker_install : Install Docker packages] ************************************************************************************************************************
ok: [worker-node]
TASK [docker_install : Ensure Docker service is enabled and started] ***************************************************************************************************
ok: [worker-node]
TASK [docker_install : Add ubuntu user to docker group] ****************************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Download cuda-keyring deb] **********************************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Install cuda-keyring deb (dpkg)] ****************************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : apt update] *************************************************************************************************************************************
changed: [worker-node]
TASK [nvidia-toolkit : Install cuda-drivers] ***************************************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Install prerequisites] **************************************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Create keyring directory if missing] ************************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Download NVIDIA container toolkit GPG key] ******************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Convert GPG key to dearmor format] **************************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Add NVIDIA container toolkit apt repository] ****************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Enable experimental repository (optional)] ******************************************************************************************************
skipping: [worker-node] => (item=deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/experimental/deb/ /)
skipping: [worker-node]
TASK [nvidia-toolkit : Update apt cache after repo add] ****************************************************************************************************************
changed: [worker-node]
TASK [nvidia-toolkit : Install NVIDIA Container Toolkit packages] ******************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Configure NVIDIA Docker runtime] ****************************************************************************************************************
ok: [worker-node]
TASK [nvidia-toolkit : Restart Docker] *********************************************************************************************************************************
changed: [worker-node]
PLAY RECAP *************************************************************************************************************************************************************
worker-node : ok=22 changed=5 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
After installation, verify that the driver install looks good:
ubuntu@llmops:~/llm-labs$ nvidia-smi
Sun Jan 11 21:53:01 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 590.48.01 Driver Version: 590.48.01 CUDA Version: 13.1 |
+-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA A100-SXM4-40GB Off | 00000000:0A:00.0 Off | 0 |
| N/A 47C P0 50W / 400W | 0MiB / 40960MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
Create the common Docker bridge network so that all containers can talk to each other (default bridge driver):
docker network create llmops-net
Export the Hugging Face token:
export HF_TOKEN=hf_token
Now, simply launch the vLLM Docker Compose stack; it will take some time to load:
ubuntu@llmops:~/llm-labs/llmops/vllm$ docker compose -f docker-compose-vllm-qwen3-0.6B.yml up -d
[+] up 1/1
 ✔ Container vllm  Created  0.3s
ubuntu@llmops:~/llm-labs/llmops/vllm$ docker compose -f docker-compose.monitoring.yml up -d
WARN[0000] Found orphan containers ([vllm]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
 ✔ Container prometheus     Created  0.5s
 ✔ Container dcgm-exporter  Created  0.5s
 ✔ Container node-exporter  Created  0.5s
 ✔ Container cadvisor       Created  0.5s
 ✔ Container grafana        Created
Ignore the orphan container warning. I have kept those two compose files as separate deliverables so that more model-specific compose files can be added to the same repo later.
Once all containers are downloaded and running, it should look like this (no containers in a crash loop):
ubuntu@llmops:~/llm-labs/llmops/vllm$ docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED              STATUS                    PORTS                                         NAMES
750f8e14201d   grafana/grafana:latest            "/run.sh"                58 seconds ago       Up 58 seconds             0.0.0.0:3000->3000/tcp, [::]:3000->3000/tcp   grafana
270c865726e9   prom/prometheus:latest            "/bin/prometheus --c…"   59 seconds ago       Up 58 seconds             0.0.0.0:9090->9090/tcp, [::]:9090->9090/tcp   prometheus
f679c2313fd2   gcr.io/cadvisor/cadvisor:latest   "/usr/bin/cadvisor -…"   59 seconds ago       Up 58 seconds (healthy)   0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp   cadvisor
28873c028c0b   prom/node-exporter:latest         "/bin/node_exporter …"   59 seconds ago       Up 58 seconds             0.0.0.0:9100->9100/tcp, [::]:9100->9100/tcp   node-exporter
5e3f54b8f485   nvidia/dcgm-exporter:latest       "/usr/local/dcgm/dcg…"   59 seconds ago       Up 58 seconds             0.0.0.0:9400->9400/tcp, [::]:9400->9400/tcp   dcgm-exporter
3b002c0b1d47   vllm/vllm-openai:latest           "vllm serve --model …"   About a minute ago   Up About a minute         0.0.0.0:8000->8000/tcp, [::]:8000->8000/tcp   vllm
Now that the base vLLM inference setup is in place, the next step is to set up NVIDIA GenAI-Perf:
pip install genai-perf
Do a quick test run to see if everything is working
genai-perf profile \
  -m Qwen/Qwen3-0.6B \
  --endpoint-type chat \
  --synthetic-input-tokens-mean 200 \
  --synthetic-input-tokens-stddev 0 \
  --output-tokens-mean 100 \
  --output-tokens-stddev 0 \
  --streaming \
  --request-count 50 \
  --warmup-request-count 10

[2026-01-11 23:53:27] DEBUG Inferred tokenizer from model name: Qwen/Qwen3-0.6B  config_tokenizer.py:79
[2026-01-11 23:53:27] INFO  Profiling these models: Qwen/Qwen3-0.6B  create_config.py:58
[2026-01-11 23:53:27] INFO  Model name 'Qwen/Qwen3-0.6B' cannot be used to create artifact directory. Instead, 'Qwen_Qwen3-0.6B' will be used.  perf_analyzer_config.py:157
[2026-01-11 23:53:27] INFO  Creating tokenizer for: Qwen/Qwen3-0.6B  subcommand.py:190
[2026-01-11 23:53:29] INFO  Running Perf Analyzer: 'perf_analyzer -m Qwen/Qwen3-0.6B --async --warmup-request-count 10 --stability-percentage 999 --request-count 50 -i http --concurrency-range 1 --service-kind openai --endpoint v1/chat/completions --input-data artifacts/Qwen_Qwen3-0.6B-openai-chat-concurrency1/inputs.json --profile-export-file artifacts/Qwen_Qwen3-0.6B-openai-chat-concurrency1/profile_export.json'  subcommand.py:98
[2026-01-11 23:53:52] INFO  Loading response data from 'artifacts/Qwen_Qwen3-0.6B-openai-chat-concurrency1/profile_export.json'  profile_data_parser.py:66
[2026-01-11 23:53:52] INFO  Parsing total 50 requests.  llm_profile_data_parser.py:124
Progress: 100%|████████████████████████████████| 50/50 [00:00<00:00, 260.92requests/s]

                                 NVIDIA GenAI-Perf | LLM Metrics
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┓
┃ Statistic                            ┃    avg ┃    min ┃    max ┃    p99 ┃    p90 ┃    p75 ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━┩
│ Time To First Token (ms)             │  12.79 │  11.14 │  16.74 │  15.22 │  13.30 │  13.05 │
│ Time To Second Token (ms)            │   3.18 │   3.06 │   3.73 │   3.57 │   3.27 │   3.24 │
│ Request Latency (ms)                 │ 336.79 │ 324.87 │ 348.00 │ 347.84 │ 346.32 │ 345.02 │
│ Inter Token Latency (ms)             │   3.27 │   3.17 │   3.39 │   3.39 │   3.37 │   3.36 │
│ Output Token Throughput Per User     │ 305.64 │ 295.21 │ 315.82 │ 315.69 │ 312.30 │ 311.15 │
│   (tokens/sec/user)                  │        │        │        │        │        │        │
│ Output Sequence Length (tokens)      │  99.98 │  99.00 │ 100.00 │ 100.00 │ 100.00 │ 100.00 │
│ Input Sequence Length (tokens)       │ 200.00 │ 200.00 │ 200.00 │ 200.00 │ 200.00 │ 200.00 │
│ Output Token Throughput (tokens/sec) │ 296.71 │    N/A │    N/A │    N/A │    N/A │    N/A │
│ Request Throughput (per sec)         │   2.97 │    N/A │    N/A │    N/A │    N/A │    N/A │
│ Request Count (count)                │  50.00 │    N/A │    N/A │    N/A │    N/A │    N/A │
└──────────────────────────────────────┴────────┴────────┴────────┴────────┴────────┴────────┘

[2026-01-11 23:53:52] INFO  Generating artifacts/Qwen_Qwen3-0.6B-openai-chat-concurrency1/profile_export_genai_perf.json  json_exporter.py:64
[2026-01-11 23:53:52] INFO  Generating artifacts/Qwen_Qwen3-0.6B-openai-chat-concurrency1/profile_export_genai_perf.csv
If you are able to see these metrics from GenAI-Perf, it means your setup is complete.
Now let’s move on to setting up the Grafana dashboard.
First, ensure that you have configured the Prometheus backend in Grafana. By default, it points to [localhost](http://localhost), so we need to switch it to prometheus, matching the service name used in the Docker Compose file.
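If your compose file doesn't already provision it, a datasource file along these lines does the trick (a sketch; adjust the path to wherever your Grafana container mounts its provisioning directory):

    mkdir -p grafana/provisioning/datasources
    cat > grafana/provisioning/datasources/prometheus.yml <<'EOF'
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus:9090
        isDefault: true
    EOF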
As part of the Docker Compose setup, Grafana should automatically pick up the dashboard (NVIDIA + vLLM).
You should now be able to see the metrics flowing into the Grafana dashboard.
[Grafana Dashboard - DCGM + vLLM](https://preview.redd.it/ixe078igr4dg1.png?width=1400&format=png&auto=webp&s=d34a351af66bc6d0e89d55ebb9af33685f19a140)
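If the panels stay empty, a quick sanity check is to hit the metric endpoints directly from the host (ports as published in the compose files above):

    curl -s localhost:8000/metrics | grep '^vllm:' | head                 # vLLM engine metrics
    curl -s localhost:9400/metrics | head                                 # DCGM GPU metrics
    curl -s localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'   # Prometheus scrape health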
At this point, what we have achieved is a basic “hello-world” setup for our LLM benchmarking infrastructure. The next big challenge is to benchmark properly and identify how we can tweak vLLM parameters and GenAI-Perf settings to squeeze the maximum out of the hardware. In this example, I am using a single A100-40GB GPU. It may not sound like much, but these are very powerful cards and work extremely well for agentic workflows where small language models are heavily used.
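As a starting point for that tuning, one option is to sweep the request concurrency and watch where throughput stops scaling while latency keeps climbing. A minimal sketch, reusing the flags from the test run above (GenAI-Perf's --concurrency maps to perf_analyzer's concurrency range):

    for c in 1 2 4 8 16 32; do
      genai-perf profile \
        -m Qwen/Qwen3-0.6B \
        --endpoint-type chat \
        --synthetic-input-tokens-mean 200 \
        --output-tokens-mean 100 \
        --streaming \
        --concurrency "$c" \
        --request-count 200 \
        --warmup-request-count 10
    done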
**References**
\[1\] [https://developer.nvidia.com/blog/llm-performance-benchmarking-measuring-nvidia-nim-performance-with-genai-perf/](https://developer.nvidia.com/blog/llm-performance-benchmarking-measuring-nvidia-nim-performance-with-genai-perf/)
\[2\] [https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/perf\_analyzer/genai-perf/README.html](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/perf_analyzer/genai-perf/README.html) | 2026-02-10T01:22:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r0mzja/step_by_step_guide_llm_inference_benchmarking/ | kchandank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0mzja | false | null | t3_1r0mzja | /r/LocalLLaMA/comments/1r0mzja/step_by_step_guide_llm_inference_benchmarking/ | false | false | 1 | null | |
You could make 20 DeepSeek API calls for the price of 1 GPT-4o call | 1 | Was messing around with [tokencalc.pro](http://tokencalc.pro) today trying to budget out a project, and this blew my mind.
For a typical coding task (10k token input, 2k response):
* **DeepSeek V3:** $0.003
* **GPT-4o:** $0.045
* **Claude Sonnet:** $0.06
* **Gemini 1.5 Pro:** $0.0625
That means you could literally ensemble 20 DeepSeek responses, do majority voting, and still pay less than a single call to the others.
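For reference, here is the arithmetic behind those numbers as a quick sketch (assuming GPT-4o list prices of $2.50 per 1M input tokens and $10 per 1M output tokens; swap in other per-token prices to reproduce the rest of the table):

    awk 'BEGIN {
      in_tok = 10000; out_tok = 2000            # the "typical coding task" above
      in_price = 2.50; out_price = 10.00        # assumed $ per 1M tokens for GPT-4o
      printf "GPT-4o: $%.3f per call\n", in_tok*in_price/1e6 + out_tok*out_price/1e6
    }'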
I know r/LocalLLaMA is more about self-hosting, but for those of us who supplement with APIs - this pricing gap is getting harder to ignore. DeepSeek V3 quality seems solid for most coding tasks too.
What's your threshold for "good enough" when it comes to cost vs quality? | 2026-02-10T00:28:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r0lqt3/you_could_make_20_deepseek_api_calls_for_the/ | Financial_Tailor7944 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0lqt3 | false | null | t3_1r0lqt3 | /r/LocalLLaMA/comments/1r0lqt3/you_could_make_20_deepseek_api_calls_for_the/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5j00A3U2n6QQl34whhlpOpTw-GGu-gxD8MxvQXr4K1I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5j00A3U2n6QQl34whhlpOpTw-GGu-gxD8MxvQXr4K1I.png?width=108&crop=smart&auto=webp&s=3b42a3e98bd261572f565adccc9484b2b91823fb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5j00A3U2n6QQl34whhlpOpTw-GGu-gxD8MxvQXr4K1I.png?width=216&crop=smart&auto=webp&s=fb190fe297784792dd59ff1bc01fc5df75548a19', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5j00A3U2n6QQl34whhlpOpTw-GGu-gxD8MxvQXr4K1I.png?width=320&crop=smart&auto=webp&s=6da4f39d98d9205b0d623fb2fefbdaf6f9e6eff0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5j00A3U2n6QQl34whhlpOpTw-GGu-gxD8MxvQXr4K1I.png?width=640&crop=smart&auto=webp&s=d9646611d2c48878fbe683c8e9ba2d0a8d378823', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5j00A3U2n6QQl34whhlpOpTw-GGu-gxD8MxvQXr4K1I.png?width=960&crop=smart&auto=webp&s=5d697078f30f9230134c8952548d00784e124af5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5j00A3U2n6QQl34whhlpOpTw-GGu-gxD8MxvQXr4K1I.png?width=1080&crop=smart&auto=webp&s=c1580348e7714ecd367cf2e26aa427110915b21b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5j00A3U2n6QQl34whhlpOpTw-GGu-gxD8MxvQXr4K1I.png?auto=webp&s=3ebc6589ce2273b1ca073a16f6b30097f1f6929f', 'width': 1200}, 'variants': {}}]} |