title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Deepseek R1, 64GBRam + 32GB VRAM
0
it works. Slowly of course, due to heavy disk Off-loading. but the system is stable. Used this mainly as a test, as the 4th module (16GB) is a little off (it is slower than the others).
2026-02-08T19:03:54
https://i.redd.it/dzlexslqjbig1.jpeg
Responsible-Stock462
i.redd.it
1970-01-01T00:00:00
0
{}
1qzhl8r
false
null
t3_1qzhl8r
/r/LocalLLaMA/comments/1qzhl8r/deepseek_r1_64gbram_32gb_vram/
false
false
https://preview.redd.it/…a7abb3c40a80a7d8
0
{'enabled': True, 'images': [{'id': 'dzlexslqjbig1', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/dzlexslqjbig1.jpeg?width=108&crop=smart&auto=webp&s=e200670615fa1b030c8dcc543c577c00ad4a2ce2', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/dzlexslqjbig1.jpeg?width=216&crop=smart&auto=webp&s=346b3e80633df360ec16f35bb2d2b463a4beba5b', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/dzlexslqjbig1.jpeg?width=320&crop=smart&auto=webp&s=2ac9319e7127950d1cbda65b3de624493749aa13', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/dzlexslqjbig1.jpeg?width=640&crop=smart&auto=webp&s=9d441c087ecd021aa1d6f7e9b43752bb76d70c99', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/dzlexslqjbig1.jpeg?width=960&crop=smart&auto=webp&s=5e2a36d74d3450d40df9ccab98bf46fc0c497736', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/dzlexslqjbig1.jpeg?width=1080&crop=smart&auto=webp&s=cd678f73353a7bf0bbe675abf3acf87dc3f5d0e7', 'width': 1080}], 'source': {'height': 4096, 'url': 'https://preview.redd.it/dzlexslqjbig1.jpeg?auto=webp&s=8f945b2dfdd2c6c9dbffaee5b657acef2d96e633', 'width': 2304}, 'variants': {}}]}
Is it possible to run ragas or deepeval on a consumer-grade GPU?
1
I've been trying to run both RAG evaluation frameworks on my 6GB VRAM through their \`evaluate\` method with a small LLM and a small embedding model, on a single test and on any of the common metrics (contextual relevancy, faithfulness, answer relevancy, contextual recall). While the code compiles and executes, it's literally impossible for me to get any result with any metric for both evaluation frameworks: the code runs indefinitely, with the exception for ragas that it is interrupted by a timeout exception, and does not produce any metric result. My RAG is working perfectly fine and giving an answer to my questions in one of two seconds for each question when I invoke the RAG chain directly, so I don't believe it would be due to an extremely slow computational time. Since I'm running my code in a notebook in VSCode through the Jupyter extension, I read about the fact that there might be issues with asyncio and asynchronous runs, but I could not find any solution until now and I'm not even sure my issue is related to this. I am aware I am surely doing something wrong because I'm not able to run not one but two of the main RAG evaluation frameworks, but I'm just stuck with how to find solutions. I've been spending a huge time already on this. 1. Did you have any success in running a RAG evaluation framework on your own GPU installation? 2. Could you please advise on what works best for you or what I should investigate to hopefully be able to run a RAG evaluation framework similar to ragas or deepeval on my own GPU? 3. Would you know any existing notebook or script that executes successfully locally for running a RAG evaluation framework? 4. Should I ask for help somewhere else? Many thanks for your help!
2026-02-08T18:47:26
https://www.reddit.com/r/LocalLLaMA/comments/1qzh4nk/is_it_possible_to_run_ragas_or_deepeval_on_a/
EquivalentGood6455
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzh4nk
false
null
t3_1qzh4nk
/r/LocalLLaMA/comments/1qzh4nk/is_it_possible_to_run_ragas_or_deepeval_on_a/
false
false
self
1
null
pwilkin is doing things
70
2026-02-08T18:38:34
https://github.com/ggml-org/llama.cpp/pull/19435
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1qzgvyh
false
null
t3_1qzgvyh
/r/LocalLLaMA/comments/1qzgvyh/pwilkin_is_doing_things/
false
false
https://external-preview…581041e083d7bc8e
70
{'enabled': False, 'images': [{'id': 'LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=108&crop=smart&auto=webp&s=97b9a19299baf71c2595f1d46f394359d66e8f0f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=216&crop=smart&auto=webp&s=c49c5b8a0ac103f7a679362615e6ef391b7347e6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=320&crop=smart&auto=webp&s=5047d4820eeaabac9c913a090edcfe1c449b2979', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=640&crop=smart&auto=webp&s=a45fd4b46acdf1a22c62c7c684471a43354c1397', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=960&crop=smart&auto=webp&s=5583f184eb8552f11d9a543d521c6cf465ee9bf4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=1080&crop=smart&auto=webp&s=221f182c5bae832dffef342bedf90bd1e7c868d6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?auto=webp&s=3589f24c1c521c8bee082c05cf70d3bba59bde8e', 'width': 1200}, 'variants': {}}]}
Quick Demo For OperatorKit
0
Built OperatorKit to explore what happens when AI runs locally and execution requires authorization before actions occur. Curious what this community thinks about treating the phone as sovereign compute. Opening a small TestFlight group for builders who want early access.
2026-02-08T18:24:19
https://v.redd.it/9jnz6ho89big1
Comprehensive_Help71
/r/LocalLLaMA/comments/1qzghtn/quick_demo_for_operatorkit/
1970-01-01T00:00:00
0
{}
1qzghtn
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9jnz6ho89big1/DASHPlaylist.mpd?a=1773296668%2CYTI1YWI3OTcwODNjZDcyMTczMzIzODIxMmI4ODU4NzBhZTVjMjQ4MWM0N2VkYTFlYjMzYzgyYjYyNmE3NTZhMg%3D%3D&v=1&f=sd', 'duration': 463, 'fallback_url': 'https://v.redd.it/9jnz6ho89big1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/9jnz6ho89big1/HLSPlaylist.m3u8?a=1773296668%2CY2JhNThjMDVjMjQwZmJlZDlmMzljZmZmNGRjNTBhYTVjZDgyYjI0YTc0Y2MyOWIwY2E5ZDlmNjNkZjI3YzkyMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9jnz6ho89big1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1qzghtn
/r/LocalLLaMA/comments/1qzghtn/quick_demo_for_operatorkit/
false
false
https://external-preview…e6a0180dc0c50276
0
{'enabled': False, 'images': [{'id': 'NzFydXM1YTg5YmlnMW3EKtER_oGn2HhhbP3t5QMyZw44-615QpuEN0hNjkom', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NzFydXM1YTg5YmlnMW3EKtER_oGn2HhhbP3t5QMyZw44-615QpuEN0hNjkom.png?width=108&crop=smart&format=pjpg&auto=webp&s=99ad8ac9088c04d39fdf712160192ad53c357cc5', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NzFydXM1YTg5YmlnMW3EKtER_oGn2HhhbP3t5QMyZw44-615QpuEN0hNjkom.png?width=216&crop=smart&format=pjpg&auto=webp&s=7ee9c04d0403ca8ecabfedda38594c8af591bf80', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/NzFydXM1YTg5YmlnMW3EKtER_oGn2HhhbP3t5QMyZw44-615QpuEN0hNjkom.png?width=320&crop=smart&format=pjpg&auto=webp&s=5639a15a899996c3c462b016a73b2bb9801a1eed', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/NzFydXM1YTg5YmlnMW3EKtER_oGn2HhhbP3t5QMyZw44-615QpuEN0hNjkom.png?width=640&crop=smart&format=pjpg&auto=webp&s=a6735b699be7ef0fd1e91936ea8e8436882da172', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/NzFydXM1YTg5YmlnMW3EKtER_oGn2HhhbP3t5QMyZw44-615QpuEN0hNjkom.png?width=960&crop=smart&format=pjpg&auto=webp&s=214fd07819ff1e4b743e5dffb9b5265397e23fed', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NzFydXM1YTg5YmlnMW3EKtER_oGn2HhhbP3t5QMyZw44-615QpuEN0hNjkom.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e2722e17ff8cff7c99ed5865cf189e74964abffb', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/NzFydXM1YTg5YmlnMW3EKtER_oGn2HhhbP3t5QMyZw44-615QpuEN0hNjkom.png?format=pjpg&auto=webp&s=0d575040df37546a69f26bcf3a49436e3bc534a0', 'width': 1080}, 'variants': {}}]}
Mamba precision loss after quantization
9
I noticed that almost all models that uses Mamba layers (which are hybrid models,some layers are transformers and most are mamba) especially Mamba-2 suffer from severe degradation of accuracy even at Q8 which is actually strange, are mamba layers more sensitive to quantizations or our current techniques for quantization aren't compatible with Mamba? I don't know if the recently released Mamba-3 is going to solve it but I couldn't find a proper quant of any Mamba models yet.
2026-02-08T18:21:40
https://www.reddit.com/r/LocalLLaMA/comments/1qzgf7x/mamba_precision_loss_after_quantization/
perfect-finetune
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzgf7x
false
null
t3_1qzgf7x
/r/LocalLLaMA/comments/1qzgf7x/mamba_precision_loss_after_quantization/
false
false
self
9
null
Is Poe safe for proprietary prompts and docs? (Non-dev feedback on Financial AI)
0
Hi, I’m not a developer, but I’ve spent weeks fine-tuning a **financial agent** (**Marketbone-Pro**) using a back-and-forth workflow between Gemini and ChatGPT to optimize **LLM logic** and operational costs. It’s now running on **Gemini Flash** via Poe, and it’s incredibly lean. However, as I'm looking into **AI data privacy**, I have two concerns: 1. **IP Protection:** My system prompt and reference documents are the "secret sauce." Does Poe actually protect this **proprietary data**, or is it vulnerable to prompt injection and leaks? 2. **Credibility:** Is Poe seen as a "toy" or a serious platform for professional **financial AI tools**? I’m not sharing the link to avoid spamming, just looking for expert advice from the community on whether I should move to a standalone app to protect my IP. Thanks!
2026-02-08T18:17:20
https://www.reddit.com/r/LocalLLaMA/comments/1qzgaxx/is_poe_safe_for_proprietary_prompts_and_docs/
SamLeCoyote_Fix_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzgaxx
false
null
t3_1qzgaxx
/r/LocalLLaMA/comments/1qzgaxx/is_poe_safe_for_proprietary_prompts_and_docs/
false
false
self
0
null
30 days using an uncensored, audited LLM for pentesting - technical breakdown of what worked
0
\*\*Background:\*\* Our red team has been testing Pingu Unchained (pingu.audn.ai) for professional behavioural AI over the past month to conduct https://www.kaggle.com/competitions/openai-gpt-oss-20b-red-teaming . Since this community discusses uncensored models frequently, figured I'd share the technical details of what actually worked in production. \*\*The setup:\*\* \- 120B GPT-OSS base model, unchained from ground up (not abliterated) \- Cryptographic audit logs for every prompt/completion \- Long context window (useful for multi-stage attack chains) \- Web search + inference capabilities \*\*What we tested it for:\*\* 1. Malware analysis and reverse engineering 2. Custom exploitation script generation 3. Reconnaissance automation 4. Adversarial test case creation 5. Phishing template generation for security awareness training \*\*Technical results that mattered:\*\* \*Zero refusals on legitimate tasks:\* Unlike jailbroken mainstream models, no mid-workflow failures. Tested with 200+ pentesting prompts - 0% refusal rate. \*Quality improvements:\* \- 40% reduction in time to generate working exploit PoCs \- Better context retention for complex attack chains (tested up to 8-stage attacks) \- More sophisticated social engineering template generation \*Compliance win:\* The audit logs include cryptographic signing of prompts + completions. This satisfied our security leadership's requirements for using AI in client engagements. \*\*What didn't work as well:\*\* \- Slower inference but it probably can get better \*\*Technical comparison vs alternatives:\*\* vs. GPT-4 with jailbreaks: More reliable, better context, but slower vs. Abliterated local models: Better reasoning, actually understands attack chains vs. Claude: No refusals mid-task, longer context for attack planning \*\*Would I recommend it?\*\* If you're doing professional offensive security work and need something that just works without prompt engineering gymnastics, yes. The audit trails make it usable in enterprise contexts. If you're just exploring or need general-purpose AI, probably overkill. Happy to answer technical questions about the setup or specific use cases.
2026-02-08T18:09:44
https://www.reddit.com/r/LocalLLaMA/comments/1qzg3gb/30_days_using_an_uncensored_audited_llm_for/
ozgurozkan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzg3gb
false
null
t3_1qzg3gb
/r/LocalLLaMA/comments/1qzg3gb/30_days_using_an_uncensored_audited_llm_for/
false
false
self
0
null
"AI PC" owners: Is anyone actually using their NPU for more than background blur? (Troubleshooting + ROI Discussion)
0
Hey everyone, I recently upgraded to an x86 "AI PC" (Intel Core Ultra / AMD Ryzen AI), primarily excited about the dedicated NPU for local AI workloads. However, after a few weeks, I’m feeling like I bought a race car that’s stuck in a school zone. **The Problem:** My NPU usage in Task Manager stays at basically 0% for almost everything I do. When I run local LLMs (via LM Studio or Ollama) or Stable Diffusion, it defaults to the GPU or hammers my CPU. It feels like the "40-50 TOPS" marketing isn't translating into usable software support for the average enthusiast yet. **I’d love to hear from other Intel/AMD NPU owners:** 1. **What hardware are you running?** (e.g., Lunar Lake/Core Ultra Series 2, Ryzen AI 300/Strix Point, etc.) 2. **The "How-To":** Have you successfully forced an LLM or Image Gen model onto the NPU? If so, what was the stack? (OpenVINO, IPEX-LLM, FastFlowLM, Amuse, etc.) 3. **The ROI (Performance vs. Efficiency):** What’s the actual benefit you’ve seen? Is the NPU actually *faster* than your iGPU, or is the "Return on Investment" strictly about battery life and silence? 4. **Daily Use:** Aside from Windows Studio Effects (webcam stuff), are there any "killer apps" you’ve found that use the NPU automatically? I’m trying to figure out if I’m missing a driver/config step, or if we’re all just waiting for the software ecosystem to catch up to the silicon.
2026-02-08T18:06:29
https://www.reddit.com/r/LocalLLaMA/comments/1qzg093/ai_pc_owners_is_anyone_actually_using_their_npu/
WhileKidsSleeping
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzg093
false
null
t3_1qzg093
/r/LocalLLaMA/comments/1qzg093/ai_pc_owners_is_anyone_actually_using_their_npu/
false
false
self
0
null
Newb seeking help on hardware
3
Ladies and gents, Thanks for the informative nuggets so far. Though I have to say my use case is not the typical image and video generation. I need to build a local LLM to process a large number of documents that are sensitive (think contracts). I will also need the model to go and do research online. However, I would love to still be able to generate videos and images here and there. I also understand that lighter weight models like Qwen 3 8B can be already quite effective and efficient. What would be your suggestion for a local setup? A M5 MacBook? A “gaming” pc with a nice 24gb video card? .. any insights would be greatly appreciated. Cheers.
2026-02-08T18:00:31
https://www.reddit.com/r/LocalLLaMA/comments/1qzfu6t/newb_seeking_help_on_hardware/
chickensoup2day
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzfu6t
false
null
t3_1qzfu6t
/r/LocalLLaMA/comments/1qzfu6t/newb_seeking_help_on_hardware/
false
false
self
3
null
I vibe-coded a web version of Worms World Party using Claude Code in a few hours
0
As a child (and also here and there as an adult), I really enjoyed the Worms World Party PC game, where two groups of animated worms fight against each other. Within a few hours today of vibe coding using Claude Code, I generated a web version of the game. It is far from the quality of the original, but it's just crazy what anyone can create today, without any technical knowledge. I open-sourced it if you want to play, or want to open PRs to enhance the game by Claude-Coding it on your end :) [https://github.com/NirDiamant/worms-world-party](https://github.com/NirDiamant/worms-world-party)
2026-02-08T17:54:25
https://v.redd.it/gmhc2vya7big1
Nir777
v.redd.it
1970-01-01T00:00:00
0
{}
1qzfobo
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gmhc2vya7big1/DASHPlaylist.mpd?a=1773165280%2CMWZkOGJkMzVmNDllOTEwODQyMTZkMWU0ZjdmYmNjZGRjZjBiYzQyMGI2ZmNlNWUyMWM0MThiZDNiMTE3MTc0Mw%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/gmhc2vya7big1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/gmhc2vya7big1/HLSPlaylist.m3u8?a=1773165280%2COTRjMTMyMGY1MGZjM2FhMTc2MjA0Y2ExMDViNTZjMjFhMDIxMzUyNjg4MDQ1NzFlMjA5OTUwMDFjMDllODY0Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gmhc2vya7big1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1qzfobo
/r/LocalLLaMA/comments/1qzfobo/i_vibecoded_a_web_version_of_worms_world_party/
false
false
https://external-preview…624a710e679cd9a0
0
{'enabled': False, 'images': [{'id': 'aXQ3c2F4eWE3YmlnMderOmusuO7i4920f3m3s67Lm8r0wsRJnL8SHG33tcS0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aXQ3c2F4eWE3YmlnMderOmusuO7i4920f3m3s67Lm8r0wsRJnL8SHG33tcS0.png?width=108&crop=smart&format=pjpg&auto=webp&s=09d2efbf28005f2fd506b2675021e1db341c38a3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aXQ3c2F4eWE3YmlnMderOmusuO7i4920f3m3s67Lm8r0wsRJnL8SHG33tcS0.png?width=216&crop=smart&format=pjpg&auto=webp&s=99cec6dc87e180d086105d118fc4339f84775701', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aXQ3c2F4eWE3YmlnMderOmusuO7i4920f3m3s67Lm8r0wsRJnL8SHG33tcS0.png?width=320&crop=smart&format=pjpg&auto=webp&s=190eee0ea722a82ac992acfc13043bea10d620f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aXQ3c2F4eWE3YmlnMderOmusuO7i4920f3m3s67Lm8r0wsRJnL8SHG33tcS0.png?width=640&crop=smart&format=pjpg&auto=webp&s=92b88ce305f6404fce80d6add08e4d56b5575855', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aXQ3c2F4eWE3YmlnMderOmusuO7i4920f3m3s67Lm8r0wsRJnL8SHG33tcS0.png?width=960&crop=smart&format=pjpg&auto=webp&s=c3628d51d9cfa8f49cb9f81eb73a567ecd85c2c5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aXQ3c2F4eWE3YmlnMderOmusuO7i4920f3m3s67Lm8r0wsRJnL8SHG33tcS0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=90383f183ffe060a072886a1e7b05550c879115d', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/aXQ3c2F4eWE3YmlnMderOmusuO7i4920f3m3s67Lm8r0wsRJnL8SHG33tcS0.png?format=pjpg&auto=webp&s=9ff5e1500e4d12352cd168323f1f2175d2561d79', 'width': 1280}, 'variants': {}}]}
I built a site that shows what models your GPU can actually run
29
I wanted to start playing around with some LLaMA models with my 9070 XT, but wasn't really sure which models would be within the scope of my card. So I built [WhatModelsCanIRun.com](https://WhatModelsCanIRun.com) to help me and others get started. **How it works:** \- Pick your GPU, and it shows models that fit, barely fit, or not at all. \- Shows max context window for each model based on actual VRAM budget (weights + KV cache) \- Estimates tok/s from your GPU's memory bandwidth. I tried to cover a wide selection of models and GPUs with different quants. Would love feedback on the coverage, and if the estimate match your real-world experience. Thanks!
2026-02-08T17:50:21
https://www.reddit.com/r/LocalLLaMA/comments/1qzfke4/i_built_a_site_that_shows_what_models_your_gpu/
tim610
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzfke4
false
null
t3_1qzfke4
/r/LocalLLaMA/comments/1qzfke4/i_built_a_site_that_shows_what_models_your_gpu/
false
false
self
29
null
Why is it so hard to search the web?
2
I’m using LM Studio for some coding and various text manipulation with OSS 20B ( and 120B when I don’t mind waiting). I’ve tried the DuckDuckGo plugin (what’s the difference between a plugin and a MCP?) and the visit-website by the same author which gives me the “best” results so far, but it’s still clunky and only works 30% of the time for basic requests like “Find a good recipe for cookies”. I’ve tried several other MCP servers with various results but it was a while back before tool use was more standardized in models. What do you use? I’d love to just type in “research using tools to find the 50 best cookie recipes, output a table with cookie type, rating, …” you get the idea. If I’m not mistaken, websites are thinking I’m a bot and blocking scraping. I believe DuckDuckGo plugin just finds links like a Google search then needs a retrieval tool to actually get the pages and parse them. (??) Do I need something to change HTML to markdown or something?
2026-02-08T17:39:41
https://www.reddit.com/r/LocalLLaMA/comments/1qzfa4s/why_is_it_so_hard_to_search_the_web/
johnfkngzoidberg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzfa4s
false
null
t3_1qzfa4s
/r/LocalLLaMA/comments/1qzfa4s/why_is_it_so_hard_to_search_the_web/
false
false
self
2
null
PATCH: compress long context into latent “patch tokens” (HF inputs_embeds) - looking for feedback
3
Hey folks I’ve been working on a small OSS project called PATCH (Latent Context Patching). Idea: split a prompt into VERBATIM (question/IDs/code) + COMPRESSIBLE (background/docs), encode the compressible part into a small set of continuous patch tokens, then feed \[patch\_tokens | verbatim\] to the model via inputs\_embeds. Base model stays frozen; encoder can be trained with distillation. In the included example (164-token doc + question), I’m seeing reductions like: strict selector: 164 → 36 effective tokens (78%, 4.6× collapse) more aggressive settings: down to \~15 effective tokens (\~91%) It also supports caching so repeated context can skip re-encoding entirely. Repo: https://github.com/newsbruno/patch I’d love feedback on: realism of the approach vs existing “context compression” best benchmark to prove quality (RAG-style eval?) runtime support beyond HF (vLLM/SGLang/llama.cpp embedding injection) Thanks!
2026-02-08T17:37:02
https://www.reddit.com/r/LocalLLaMA/comments/1qzf7mh/patch_compress_long_context_into_latent_patch/
Proud_Ad_7039
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzf7mh
false
null
t3_1qzf7mh
/r/LocalLLaMA/comments/1qzf7mh/patch_compress_long_context_into_latent_patch/
false
false
self
3
{'enabled': False, 'images': [{'id': 'Qzdtqa5O6o9zrj1nigjXoTDXvEpUT_m6lpgW3MwWiC4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qzdtqa5O6o9zrj1nigjXoTDXvEpUT_m6lpgW3MwWiC4.png?width=108&crop=smart&auto=webp&s=7ac44bd59588b5caa9146a47d898df6747d21e81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qzdtqa5O6o9zrj1nigjXoTDXvEpUT_m6lpgW3MwWiC4.png?width=216&crop=smart&auto=webp&s=36bc938373324a87647222ec787b6f111932beab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qzdtqa5O6o9zrj1nigjXoTDXvEpUT_m6lpgW3MwWiC4.png?width=320&crop=smart&auto=webp&s=f1d085d7ce42c3d21d64f0cfe8139a9478d654a7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qzdtqa5O6o9zrj1nigjXoTDXvEpUT_m6lpgW3MwWiC4.png?width=640&crop=smart&auto=webp&s=3d4f2dc7fee7a108cdc94807424feb3bc4b30903', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qzdtqa5O6o9zrj1nigjXoTDXvEpUT_m6lpgW3MwWiC4.png?width=960&crop=smart&auto=webp&s=181026f1c14c993718dea92e84207896ab69ed90', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qzdtqa5O6o9zrj1nigjXoTDXvEpUT_m6lpgW3MwWiC4.png?width=1080&crop=smart&auto=webp&s=c9e926653693977281eb603fb4a21e383902f09f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qzdtqa5O6o9zrj1nigjXoTDXvEpUT_m6lpgW3MwWiC4.png?auto=webp&s=99e302a574c15c636827fe33a0527e0af1ab102b', 'width': 1200}, 'variants': {}}]}
Sharing an open-source repository for pre-training small LMs with rust-bpe, Pytorch Lightning and Trackio
1
Hi everyone I wanted to dust off my knowledge of LLMs, so I decided to take inspiration from Karpathy’s nano-GPT and build my own version. The goal is learning, not building something "production-ready". That said, the code is fully usable for training your own model and I think it can serve as inspiration for building your own version: [https://github.com/ferjorosa/tiny-lm](https://github.com/ferjorosa/tiny-lm) I chose [rust-bpe](https://github.com/karpathy/rustbpe) for tokenization, [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/) for the training pipeline (I have prior experience with Lightning and I like how it structures the different stages and callbacks) and [Trackio](https://huggingface.co/docs/trackio/index) for the monitoring (good time to try it). As a first test, I have used the code to train a 2-layer GPT-2 model with an 8k vocabulary on the [TinyStories dataset](https://huggingface.co/datasets/roneneldan/TinyStories). I have wanted to reproduce [this paper from 2023](https://arxiv.org/pdf/2305.07759) for a while, so this felt like a nice opportunity. Training took about \~25 minutes on my RTX 5090, and the resulting model generates coherent short stories (you can find an example in the tiny-lm repo). I have uploaded the model to Hugging Face: [https://huggingface.co/ferjorosa/tiny-lm-tinystories-8k-gpt2-2l](https://huggingface.co/ferjorosa/tiny-lm-tinystories-8k-gpt2-2l) The code is open source. If you’re curious about how pre-training works under the hood, I’d encourage you to take a look or, even better, write your own version as I did, starting from scratch. I am really enjoying this project. Hope you find it useful https://preview.redd.it/xnqftpbf1big1.png?width=876&format=png&auto=webp&s=0161739963c1a6309ab118a79d41f3d4de07b2dd
2026-02-08T17:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1qzexcm/sharing_an_opensource_repository_for_pretraining/
Eternal_Corrosion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzexcm
false
null
t3_1qzexcm
/r/LocalLLaMA/comments/1qzexcm/sharing_an_opensource_repository_for_pretraining/
false
false
https://b.thumbs.redditm…YvK5NpC52ZWk.jpg
1
null
Looking for the best local LLM for my laptop
0
I know am shooting too high. but I really want to have a local model with my personal data. this is my cofig to start with : CPU : Intel Core i9-12900 (16 Cores / 24 Threads) GPU : RTX 3070Ti mobile(8GB VRAM) RAM: 32GB. I need something that can tool call and use my comphyUI when needed. recently i tried qwen3:8B on openClaw. took 2 mins per msg.
2026-02-08T17:25:26
https://www.reddit.com/r/LocalLLaMA/comments/1qzew6u/looking_for_the_best_local_llm_for_my_laptop/
HaDeSxD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzew6u
false
null
t3_1qzew6u
/r/LocalLLaMA/comments/1qzew6u/looking_for_the_best_local_llm_for_my_laptop/
false
false
self
0
null
Made a tool to unify configs across AI coding assistants
0
I've been using a few AI coding tools lately (Claude Code, OpenCode, Kimi) and kept getting annoyed that each has its own config format and location. Switching from OpenRouter to moonshrot / NVIDIA or testing a local model meant updating configs separately in each tool. Inspired byt [Z AI Coding Helper](https://docs.z.ai/devpack/extension/coding-tool-helper), I threw together a CLI called coder-link that manages all of them from one place. You set up your provider and API key once, then sync it to whatever tool you want to use. It also handles MCP server setup so you don't have to install them separately for each tool. **Currently supports:** \- Coding Tools: Claude Code, OpenCode, Crush, Factory Droid, Kimi, AMP, Pi, (please suggest more if needed) \- Providers: OpenRouter, NVIDIA, Moonshot, GLM (coding plans), LM Studio (local) It's been useful for me when I want to quickly test different models or providers across tools without digging through config files. Still early but it works. You can install and test using: #install globally npm install -g coder-link #run using coder-link Repo: [https://github.com/HenkDz/coder-link](https://github.com/HenkDz/coder-link) Curious what others are using to manage this stuff, or if everyone just deals with the separate configs. Also open to adding support for more tools if there are others people use. https://preview.redd.it/k61vmbly0big1.png?width=939&format=png&auto=webp&s=b482e68de07e43dd8ebe4f4dd7ba6debe24717bf
2026-02-08T17:23:53
https://www.reddit.com/r/LocalLLaMA/comments/1qzeup5/made_a_tool_to_unify_configs_across_ai_coding/
Henkey9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzeup5
false
null
t3_1qzeup5
/r/LocalLLaMA/comments/1qzeup5/made_a_tool_to_unify_configs_across_ai_coding/
false
false
https://external-preview…4d24b80ce2bbcd4a
0
null
Remote RL Engineering Role ($150-$200/hr) - Verita AI
0
Verita AI is working with top-tier engineers on a cutting-edge project designing reinforcement learning environments that teach LLMs advanced AI/ML concepts. Your expertise would be valuable for shaping how next-generation models learn. **The role:** • Fully remote, contract • $150-$200/hour (based on expertise) + $500 take-home bonus • Minimum 4 hours daily overlap with PST (9am-5pm) • \~2 tasks per week, high autonomy **Ideal for:** • Graduates from top-tier engineering colleges or engineers from leading tech companies (FAANG+) • Strong Python engineers with LLM understanding • Those with deep ML fundamentals, RL systems experience, or research backgrounds This is a good fit for engineers who want challenging work at the intersection of fundamental research and applied ML, with compensation that reflects the caliber of work. Interested? Here's a short skills assessment: [https://docs.google.com/forms/d/e/1FAIpQLSevqhHH\_wRfFrTKiKElTovXlsgeY\_hUiN6YClzURmT6a85xAQ/viewform](https://docs.google.com/forms/d/e/1FAIpQLSevqhHH_wRfFrTKiKElTovXlsgeY_hUiN6YClzURmT6a85xAQ/viewform) Know someone who'd be a good fit? We offer referral bonuses for successful hires!
2026-02-08T17:20:08
https://www.reddit.com/r/LocalLLaMA/comments/1qzer08/remote_rl_engineering_role_150200hr_verita_ai/
BusinessProtection28
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzer08
false
null
t3_1qzer08
/r/LocalLLaMA/comments/1qzer08/remote_rl_engineering_role_150200hr_verita_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nGdGdchsYdfTR4pQ5ZOq2JBEAdXDhnnk92e4MxNQlno', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nGdGdchsYdfTR4pQ5ZOq2JBEAdXDhnnk92e4MxNQlno.png?width=108&crop=smart&auto=webp&s=c3e56ed6b6619eff55d60de62bc519bf6a743482', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nGdGdchsYdfTR4pQ5ZOq2JBEAdXDhnnk92e4MxNQlno.png?width=216&crop=smart&auto=webp&s=efa452f5593082c44558145f4c0faa5ee5e8ae47', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nGdGdchsYdfTR4pQ5ZOq2JBEAdXDhnnk92e4MxNQlno.png?width=320&crop=smart&auto=webp&s=22cf6f55c04b7741d6dcddd5e839b05346d863b3', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nGdGdchsYdfTR4pQ5ZOq2JBEAdXDhnnk92e4MxNQlno.png?width=640&crop=smart&auto=webp&s=292b4b008111a132261274535c892323c6d7ce40', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nGdGdchsYdfTR4pQ5ZOq2JBEAdXDhnnk92e4MxNQlno.png?width=960&crop=smart&auto=webp&s=8c503b38e65d71dcb3e2e6d212c90ce5a7443949', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nGdGdchsYdfTR4pQ5ZOq2JBEAdXDhnnk92e4MxNQlno.png?width=1080&crop=smart&auto=webp&s=ff7d341b597522a728836053b5c5b6246bbfec24', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nGdGdchsYdfTR4pQ5ZOq2JBEAdXDhnnk92e4MxNQlno.png?auto=webp&s=a30319b08a246a1d2b92df7e6f98cb0a743af053', 'width': 1200}, 'variants': {}}]}
Sharing an open-source repository for pretraining LLMs that Pytorch Lightning, rust-bpe and Trackio
1
I wanted to dust off my knowledge of LLMs, so I decided to take inspiration from Karpathy’s nano-GPT and build my own version. The goal is learning, not building something "production-ready". That said, the code is fully usable for training your own model and I think it can serve as inspiration for building your own version: [https://github.com/ferjorosa/tiny-lm](https://github.com/ferjorosa/tiny-lm) I chose **rust-bpe** for tokenization and **PyTorch Lightning** for the training pipeline. I have prior experience with Lightning and I like how it structures the different stages and callbacks. As a first test, I have used the code to train a 2-layer GPT-2 model with an 8k vocabulary on the [TinyStories dataset](https://huggingface.co/datasets/roneneldan/TinyStories). I’d wanted to reproduce [this paper from 2023](https://arxiv.org/pdf/2305.07759) for a while, so this felt like a good opportunity. Training took about \~25 minutes on my RTX 5090, and the resulting model generates coherent short stories (you can find an example in the tiny-lm repo). I have uploaded the model to Hugging Face: [https://huggingface.co/ferjorosa/tiny-lm-tinystories-8k-gpt2-2l](https://huggingface.co/ferjorosa/tiny-lm-tinystories-8k-gpt2-2l) The code is open source. If you’re curious about how pre-training works under the hood, I’d encourage you to take a look or, even better, write your own version as I did, starting from scratch. I am really enjoying this project. Hope you find it useful
2026-02-08T17:16:06
https://www.reddit.com/r/LocalLLaMA/comments/1qzen4c/sharing_an_opensource_repository_for_pretraining/
Eternal_Corrosion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzen4c
false
null
t3_1qzen4c
/r/LocalLLaMA/comments/1qzen4c/sharing_an_opensource_repository_for_pretraining/
false
false
self
1
{'enabled': False, 'images': [{'id': 'lUlpTFaBVsRW9agWUCr99LzctJhdNWQAa85fkxjelqo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lUlpTFaBVsRW9agWUCr99LzctJhdNWQAa85fkxjelqo.png?width=108&crop=smart&auto=webp&s=08b1a5761d32dac244eb961a5a618d0d712660a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lUlpTFaBVsRW9agWUCr99LzctJhdNWQAa85fkxjelqo.png?width=216&crop=smart&auto=webp&s=b3abeb3e87a353b1ad2a2909c0ea2950c8b42ab4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lUlpTFaBVsRW9agWUCr99LzctJhdNWQAa85fkxjelqo.png?width=320&crop=smart&auto=webp&s=0f271e6a1f8b41cfb085d7d8ce0e4846b9c73c71', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lUlpTFaBVsRW9agWUCr99LzctJhdNWQAa85fkxjelqo.png?width=640&crop=smart&auto=webp&s=27c1a363bf97f06585b3e134c8633788333b75e9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lUlpTFaBVsRW9agWUCr99LzctJhdNWQAa85fkxjelqo.png?width=960&crop=smart&auto=webp&s=9ecbbfa9db704d658766c4fb197be8179c77a04a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lUlpTFaBVsRW9agWUCr99LzctJhdNWQAa85fkxjelqo.png?width=1080&crop=smart&auto=webp&s=76af2ff2f9d701ec37fdbefa51f366332872c0a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lUlpTFaBVsRW9agWUCr99LzctJhdNWQAa85fkxjelqo.png?auto=webp&s=ad1fc50b290f70b58631532d08d6447bbe3f1be2', 'width': 1200}, 'variants': {}}]}
StepFun 3.5 Flash vs MiniMax 2.1
37
I've been using [Minimax 2.1 Q3\_K\_XL](https://huggingface.co/unsloth/MiniMax-M2.1-GGUF) as a daily driver with good results. It's reasonably fast and intelligent. One of the best models at 128gb IMO. I downloaded [ubergarm's IQ4\_XS](https://huggingface.co/ubergarm/Step-3.5-Flash-GGUF) quant of StepFun 3.5 Flash. Tool calling is still a work in progress, so I built and installed llama.cpp from [pwilkin:autoparser](https://github.com/ggml-org/llama.cpp/pull/18675) which includes tool calling support for the model. I'm finding that the model likes to think *a lot*. Asking the model to write a commit message based on a small diff, the model thought for over 2 minutes. Much longer than minimax would generally take for an equivalent prompt. It definitely seems like it could be an incredibly intelligent model for its size but the overthinking doesn't feel great for a daily driver. Results on framework AMD Ryzen Max with vulkan: llama-server -hf ubergarm/Step-3.5-Flash-GGUF:IQ4_XS --host 0.0.0.0 --port 8080 -c 16000 --jinja -fa on -ngl 99 --no-context-shift Feb 08 10:46:32 edwin llama-server[20016]: prompt eval time = 4098.41 ms / 563 tokens ( 7.28 ms per token, 137.37 tokens per second) Feb 08 10:46:32 edwin llama-server[20016]: eval time = 188029.67 ms / 3460 tokens ( 54.34 ms per token, 18.40 tokens per second) Feb 08 10:46:32 edwin llama-server[20016]: total time = 192128.08 ms / 4023 tokens At 64k context, it takes up about 107gb of VRAM.
2026-02-08T17:00:20
https://www.reddit.com/r/LocalLLaMA/comments/1qze7q1/stepfun_35_flash_vs_minimax_21/
Zc5Gwu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qze7q1
false
null
t3_1qze7q1
/r/LocalLLaMA/comments/1qze7q1/stepfun_35_flash_vs_minimax_21/
false
false
self
37
{'enabled': False, 'images': [{'id': '0UFWZCf6CZqsoe5f3ezOlV2Ne7X6ohE4hjPInk27imE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0UFWZCf6CZqsoe5f3ezOlV2Ne7X6ohE4hjPInk27imE.png?width=108&crop=smart&auto=webp&s=c6b38e6004b05a308d2ac759a83bfddaef055511', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0UFWZCf6CZqsoe5f3ezOlV2Ne7X6ohE4hjPInk27imE.png?width=216&crop=smart&auto=webp&s=111fbfcff0eb3d6680934e633479b052cd7e4f95', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0UFWZCf6CZqsoe5f3ezOlV2Ne7X6ohE4hjPInk27imE.png?width=320&crop=smart&auto=webp&s=902b623d4d0b8ac91c58e02ee593cc5d5e6aad77', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0UFWZCf6CZqsoe5f3ezOlV2Ne7X6ohE4hjPInk27imE.png?width=640&crop=smart&auto=webp&s=ad0367e95961ea9598df86fe2cc3a8e083c3961a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0UFWZCf6CZqsoe5f3ezOlV2Ne7X6ohE4hjPInk27imE.png?width=960&crop=smart&auto=webp&s=4bff2933b8607eebd99342572cc5515a5d2a7641', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0UFWZCf6CZqsoe5f3ezOlV2Ne7X6ohE4hjPInk27imE.png?width=1080&crop=smart&auto=webp&s=3568cb1e8a9ee554607f2d5d91cbd5e12c07a849', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0UFWZCf6CZqsoe5f3ezOlV2Ne7X6ohE4hjPInk27imE.png?auto=webp&s=1718dbfbc38714b927d0661e90b653d31624b9af', 'width': 1200}, 'variants': {}}]}
Do NVIDIA GPUs + CUDA work on Ubuntu for local LLMs out of the box?
0
Hi all, I’m considering switching OS from Windows to Ubuntu on a gaming laptop with an NVIDIA GeForce RTX 4060. I want to be able to host local LLMs and GPU compute on Ubuntu. For LLM hosting I’m using CUDA and llama.cpp. I’ve heard an read that setting up Ubuntu with NVIDIA GPUs and CUDA can be tricky, so I’m looking for real-world experiences on a few questions: Does the GPU work „out-of-the-box“ on Ubuntu? On a fresh install, does the NVIDIA GPU get picked up cleanly, or do you typically need to install proprietary drivers immediately? Are there any common pain points on laptops (e.g., hybrid graphics, external monitors, etc.)? Is there anything I should watch out for during setup (Secure Boot, kernel/driver mismatch, etc.)? Thanks for your help!
2026-02-08T16:54:36
https://www.reddit.com/r/LocalLLaMA/comments/1qze23g/do_nvidia_gpus_cuda_work_on_ubuntu_for_local_llms/
External_Dentist1928
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qze23g
false
null
t3_1qze23g
/r/LocalLLaMA/comments/1qze23g/do_nvidia_gpus_cuda_work_on_ubuntu_for_local_llms/
false
false
self
0
null
The shift to Local SLMs in 2026 is about more than just privacy.
0
We’ve spent years chasing massive parameter counts, but in 2026, the real game-changer is **Intelligence Density.** I’ve been auditing AI workflows for a while now, and the "Red AI" approach (accuracy at any cost) is hitting a brick wall. Why are we still sending simple PDF summaries to a trillion-parameter cloud model that "drinks" 500ml of water per conversation? It’s overkill and environmentally reckless. I’ve switched my entire production stack to local Small Language Models (SLMs) using **Knowledge Distillation** and **4-bit quantization**. Here’s what I’ve found: * **Zero Transit Tax:** No global data hops. * **Hardware Downcycling:** My 2023-era GPUs are handling local agents like beasts. * **Net-Zero Potential:** Running an entire research hub on a simple solar setup is finally feasible. The era of "Bigger is Better" is dying. The era of "Efficient is Intelligent" is here. I’ve been documenting the full technical roadmap for this transition—from the math behind distillation to the hardware setup for Net-Zero intelligence. Curious to know if anyone else here is prioritizing **Inference-per-Watt** over just raw benchmarks?
2026-02-08T16:53:32
https://www.reddit.com/r/LocalLLaMA/comments/1qze11z/the_shift_to_local_slms_in_2026_is_about_more/
NGU-FREEFIRE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qze11z
false
null
t3_1qze11z
/r/LocalLLaMA/comments/1qze11z/the_shift_to_local_slms_in_2026_is_about_more/
false
false
self
0
null
The Loss Illusion: Why Your Fine-Tuning Isn't Actually Working
0
Your training loss is dropping to **10⁻⁵**, but your model's behavior isn't changing at all. I’ve written a technical audit on how to fix these "stagnant" weights and force real alignment in 4-bit LoRA. [https://open.substack.com/pub/yotamabramson/p/the-behavioral-cliff-navigating-the?r=7e7s16&utm\_campaign=post&utm\_medium=web&showWelcomeOnShare=true](https://open.substack.com/pub/yotamabramson/p/the-behavioral-cliff-navigating-the?r=7e7s16&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true)
2026-02-08T16:28:45
https://www.reddit.com/r/LocalLLaMA/comments/1qzddd9/the_loss_illusion_why_your_finetuning_isnt/
Ok-Scene-9037
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzddd9
false
null
t3_1qzddd9
/r/LocalLLaMA/comments/1qzddd9/the_loss_illusion_why_your_finetuning_isnt/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Pmnhgb_zvAodCyYIiAYWp4rUnHu0Cxpm6xE7lXYmJzs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Pmnhgb_zvAodCyYIiAYWp4rUnHu0Cxpm6xE7lXYmJzs.jpeg?width=108&crop=smart&auto=webp&s=0ff023292fc39bc111b88d34b27251c438f5e0cc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Pmnhgb_zvAodCyYIiAYWp4rUnHu0Cxpm6xE7lXYmJzs.jpeg?width=216&crop=smart&auto=webp&s=b0d0f32f531205899333de70141d754b59136a78', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Pmnhgb_zvAodCyYIiAYWp4rUnHu0Cxpm6xE7lXYmJzs.jpeg?width=320&crop=smart&auto=webp&s=047889904f492ee1b89a9787884e0e8bd4a83759', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Pmnhgb_zvAodCyYIiAYWp4rUnHu0Cxpm6xE7lXYmJzs.jpeg?width=640&crop=smart&auto=webp&s=a655d81a246ce178283efe8c272b52cede7b0490', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Pmnhgb_zvAodCyYIiAYWp4rUnHu0Cxpm6xE7lXYmJzs.jpeg?width=960&crop=smart&auto=webp&s=ca64bac65a8427ed45d426c3b88ba9b0024d3c96', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Pmnhgb_zvAodCyYIiAYWp4rUnHu0Cxpm6xE7lXYmJzs.jpeg?width=1080&crop=smart&auto=webp&s=d4e3096b0de1056cf23f51378ca6e1e686af6518', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/Pmnhgb_zvAodCyYIiAYWp4rUnHu0Cxpm6xE7lXYmJzs.jpeg?auto=webp&s=850fced25daae30544c2acef321f5f8a05b09407', 'width': 1200}, 'variants': {}}]}
Using DeepSeek-OCR 2 or similar for creating searchable PDFs
2
Has anyone tried to use one of the newer OCR models to transcribe PDFs, similar to OCRmyPDF? Internally I know it uses Tesseract, which is pretty decent but not always the greatest. It looks like there's a format called hOCR which I could feed into OCFmyPDF, but I haven't found much about trying to get hOCR (or something similar which could be converted) *out* of the OCR models. Is this something that's even possible, with some glue logic, or do the OCR models not have any ability to get positional information out?
2026-02-08T16:26:23
https://www.reddit.com/r/LocalLLaMA/comments/1qzdb1g/using_deepseekocr_2_or_similar_for_creating/
gjsmo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzdb1g
false
null
t3_1qzdb1g
/r/LocalLLaMA/comments/1qzdb1g/using_deepseekocr_2_or_similar_for_creating/
false
false
self
2
null
Local chatgpt replacement setup
0
I use chatgpt for all kinds of stuff, from IT to coding to business ideas to personal relationships and even mental health. As you can imagine, this is a gold mine of data that can be used for profiling. Therefore, I'm looking to run something local that can come close to replacing it. I have coding models already so this is more for stuff that you don't want Sam Altman reading. I'm thinking of a llamacpp + openwebui setup but which model would you choose? Also, what if you want to swap models? Can the history or memory be stored reliably? I've seen Openclaw trending now so I'm also wondering if that could be an option.
2026-02-08T16:23:15
https://www.reddit.com/r/LocalLLaMA/comments/1qzd80o/local_chatgpt_replacement_setup/
Blues520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzd80o
false
null
t3_1qzd80o
/r/LocalLLaMA/comments/1qzd80o/local_chatgpt_replacement_setup/
false
false
self
0
null
Opensource alternative to Claude Extension speedrunning wikipedia.
0
https://reddit.com/link/1qzd3zn/video/un8d3mpqmaig1/player I tried to find an agent that works in my browser side panel without having to install a bunch of python libraries and has the ability to work on background tabs. I only found closed source solutions like the Claude web extension, so I decided to build my own with some inspirations from Claude web extension. Side note. I can't understand why gemini 3 flash is so terrible at this. It doesn't grasp that you need to load the page first before taking actions. It just wanders off and starts outputing gibberish. I'll try to improve it over the next 2 weeks, mainly for small models, would appriciate any suggestions or tricks on how i can improve this. github repo: [https://github.com/Mariozada/Bouno](https://github.com/Mariozada/Bouno)
2026-02-08T16:18:59
https://www.reddit.com/r/LocalLLaMA/comments/1qzd3zn/opensource_alternative_to_claude_extension/
Evening_Tooth_1913
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzd3zn
false
null
t3_1qzd3zn
/r/LocalLLaMA/comments/1qzd3zn/opensource_alternative_to_claude_extension/
false
false
https://external-preview…fec9d41a3a5ee6f6
0
null
Too much EQ - First LLM Build
2
Hi all, lots of good info here and my head is exploding a bit over the last few weeks of researching running local LLMs. Currently I have kind of an array of various parts/machines from different builds that I’m putting together as a starting place to see what kind of performance I can get before spending any (more) money. My main goal is to run a decent local coding model on my own repositories for development work. Intended builds using existing parts: Main AI Server Build: Linux 4090 RTX & 3090 RTX 256GB of DDR4 RAM AMD Threadripper 3960X 24 Core 48 Thread Development Machine (not intended to run any models, will just be IDE connected to above server): Windows 11 5070 RTX 64gb DDR5 AMD Ryzen 9 9950X3D Macs 2x Mac Studio 128GB Memory M2 Ultra I know the 4090 and 3090 can’t really be used together, but given the prices for these used cards am I better off selling and buying a 6000 Pro RTX? How do these two Macs fit into the picture? Bigger models that are slower, but better for bigger context windows? I’m mostly looking at the Qwen code models. Realistically which ones could I use and what kind of tokens per second am I looking at on the AI server or Mac Studios. I’ve done quite a bit of research, but there is so much info and different builds it’s hard to know what to expect when I put all of this together. Mostly just looking for a clear-ish answer about what model, context window size, and speed to expect given my current equipment or any tips for realistic upgrades based on what I currently own.
2026-02-08T16:09:36
https://www.reddit.com/r/LocalLLaMA/comments/1qzcvaa/too_much_eq_first_llm_build/
opaquevisions
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzcvaa
false
null
t3_1qzcvaa
/r/LocalLLaMA/comments/1qzcvaa/too_much_eq_first_llm_build/
false
false
self
2
null
Ubuntu 24.04.3 LTS with 6.17.0-14-generic kernel not detecting 9070XT
1
I spent three hours figuring this one out, so putting it here in case it can help someone else. After the latest update on my system, I my 9070xt stopped working. I could not see it in Mission Center, but when I did sudo lshw -c video I could see it was there. After much faffing about, the reason why it was not working properly was that at some point during the updates an amdgpu blacklist file had been added in /etc/modprobe.d. blacklist-amdgpu.conf I commented its contents and everything is back to working as expected. Probably can delete the file, but have not gotten around to do that yet.
2026-02-08T16:07:42
https://www.reddit.com/r/LocalLLaMA/comments/1qzctg2/ubuntu_24043_lts_with_617014generic_kernel_not/
aram_mm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzctg2
false
null
t3_1qzctg2
/r/LocalLLaMA/comments/1qzctg2/ubuntu_24043_lts_with_617014generic_kernel_not/
false
false
self
1
null
I built an AI that refuses to act without your approval and it runs entirely on-device
0
Most AI tools focus on autonomy. I went the opposite direction. I built OperatorKit an execution control layer that ensures AI cannot take real-world actions without explicit authorization. Key differences: •Runs locally when possible : your data stays on your device •No silent cloud processing •Every action is reviewable and attributable •Designed for high-trust environments Think of it as governance before automation. Right now it supports workflows like: • drafting emails • summarizing meetings • generating action items • structured approvals But the larger goal is simple: AI should never execute without human authority. I’m opening a small TestFlight group and looking for serious builders, operators, and security-minded testers. If you want early access, comment and I’ll send the invite. Would especially value feedback from people thinking deeply about: • AI safety • local-first software • decision systems • operational risk Building this has changed how I think AI should behave less autonomous, more accountable. Curious if others see the future this way.
2026-02-08T16:05:01
https://www.reddit.com/r/LocalLLaMA/comments/1qzcqxe/i_built_an_ai_that_refuses_to_act_without_your/
Comprehensive_Help71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzcqxe
false
null
t3_1qzcqxe
/r/LocalLLaMA/comments/1qzcqxe/i_built_an_ai_that_refuses_to_act_without_your/
false
false
self
0
null
Best local models for 128gb VRAM and 192gb RAM
0
Unified memory 320GB: Hey masters! New hardware on its way. I need some recommendations. For coding, agent calls, general knowledge, etc.
2026-02-08T15:44:31
https://www.reddit.com/r/LocalLLaMA/comments/1qzc7jy/best_local_models_for_128gb_vram_and_192gb_ram/
Dry_Mortgage_4646
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzc7jy
false
null
t3_1qzc7jy
/r/LocalLLaMA/comments/1qzc7jy/best_local_models_for_128gb_vram_and_192gb_ram/
false
false
self
0
null
Open-source fetch proxy for sandboxing AI agent network access (Go, capability separation)
1
[removed]
2026-02-08T15:33:53
https://www.reddit.com/r/LocalLLaMA/comments/1qzbxq5/opensource_fetch_proxy_for_sandboxing_ai_agent/
luckypipewrench6969
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzbxq5
false
null
t3_1qzbxq5
/r/LocalLLaMA/comments/1qzbxq5/opensource_fetch_proxy_for_sandboxing_ai_agent/
false
false
self
1
{'enabled': False, 'images': [{'id': 'eV9Bu2cMmN4_zNoqY4srSPpKIKBAIvHpzJ73iTKwBqA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eV9Bu2cMmN4_zNoqY4srSPpKIKBAIvHpzJ73iTKwBqA.png?width=108&crop=smart&auto=webp&s=a647aef822fb66dd620499cb6325958898969944', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eV9Bu2cMmN4_zNoqY4srSPpKIKBAIvHpzJ73iTKwBqA.png?width=216&crop=smart&auto=webp&s=2f622beebaf8d2188e9bcda8b807beec87acc7b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eV9Bu2cMmN4_zNoqY4srSPpKIKBAIvHpzJ73iTKwBqA.png?width=320&crop=smart&auto=webp&s=19c149ac22ee02af0d7cb40b16727b21c3fde343', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eV9Bu2cMmN4_zNoqY4srSPpKIKBAIvHpzJ73iTKwBqA.png?width=640&crop=smart&auto=webp&s=90e0f0ab8c252a33cf6ad2e41389b77797a5e277', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eV9Bu2cMmN4_zNoqY4srSPpKIKBAIvHpzJ73iTKwBqA.png?width=960&crop=smart&auto=webp&s=9f2c93d3f4fd79165d70962d777c3eb6653e80f1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eV9Bu2cMmN4_zNoqY4srSPpKIKBAIvHpzJ73iTKwBqA.png?width=1080&crop=smart&auto=webp&s=b639d5ce173b423dd9967b0b6526e9308eab403c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eV9Bu2cMmN4_zNoqY4srSPpKIKBAIvHpzJ73iTKwBqA.png?auto=webp&s=ea9c291e46fab056eb3424efd83b33f1cf4fc81d', 'width': 1200}, 'variants': {}}]}
Hi all! Please help me choose a local LLM model. I'm making my own assistant for a PC and I want to choose a specialized model trained in dialogues or, in extreme cases, RP.
1
I have 12 GB VRAM and 32gb 3200. I liked the model magnum v4 11B, but I would like a smarter model. What do you think?
2026-02-08T15:32:14
https://www.reddit.com/r/LocalLLaMA/comments/1qzbwa5/hi_all_please_help_me_choose_a_local_llm_model_im/
BestLengthiness3988
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzbwa5
false
null
t3_1qzbwa5
/r/LocalLLaMA/comments/1qzbwa5/hi_all_please_help_me_choose_a_local_llm_model_im/
false
false
self
1
null
How do you prioritize LLM spend when budget gets tight across multiple features?
0
honest question for anyone running LiteLLM or similar with multiple AI features on one budget we've got about 5 things hitting the API. customer chatbot (the one that actually matters), product search, an agent pipeline, internal summarizer, some analytics stuff. all sharing a $2K monthly budget through LiteLLM proxy. the problem is dumb but real: there's no priority. the summarizer that 3 people use internally costs the same dollars as the chatbot that talks to customers. last month the summarizer went heavy, budget ran out day 25, chatbot went down. got the 11pm text from the CEO. you know the one. now i'm manually adjusting per-key limits every week like it's 2003 and i'm managing a phone bill. works i guess. hate it. so: 1. how many LLM features are you actually running? 2. what's the monthly spend look like? trying to understand if this is a real problem at $500/mo or only starts hurting at $2K+ 3. ever had budget limits cause an actual incident? 4. do you have any way to say "this feature matters more, protect it" or is everything just equal? curious if others have solved this or if we're all just winging it.
2026-02-08T15:24:59
https://www.reddit.com/r/LocalLLaMA/comments/1qzbpds/how_do_you_prioritize_llm_spend_when_budget_gets/
Fit-Cryptographer469
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzbpds
false
null
t3_1qzbpds
/r/LocalLLaMA/comments/1qzbpds/how_do_you_prioritize_llm_spend_when_budget_gets/
false
false
self
0
null
Please, someone create a distil of Qwen3-Coder-Next or Kimi K2.5 so it runs on consumer hardware
1
[removed]
2026-02-08T14:48:03
https://www.reddit.com/r/LocalLLaMA/comments/1qzaru3/please_someone_create_a_distil_of_qwen3codernext/
opensourceAIlover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzaru3
false
null
t3_1qzaru3
/r/LocalLLaMA/comments/1qzaru3/please_someone_create_a_distil_of_qwen3codernext/
false
false
self
1
null
New computer arrived... JAN is still super slow.
0
Hi all, Just received my new laptop: Thinkpad P1 Gen 8 with 64GB RAM an Intel(R) Core(TM) Ultra 9 285H processor and a RTX PRO 2000 NVIDIA GPU. Downloaded JAN (latest version). Enabled the GPU in the Settings >> Hardware. Installed the DEVSTRAL-Small-2507-GGUF model and asked it a question. And I started getting words at a pace of 1 word per second max... and the GPU seemed not to be in use... Is there something else that is required to be done in settings? is JAN slow? should I try something else? I tend not to use AI, because most of times it breaks the NDAs our company signs down with our customers. But having the opportunity to use it locally is a good thing. Thank you all in advance.
2026-02-08T14:45:43
https://www.reddit.com/r/LocalLLaMA/comments/1qzaps9/new_computer_arrived_jan_is_still_super_slow/
robotecnik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qzaps9
false
null
t3_1qzaps9
/r/LocalLLaMA/comments/1qzaps9/new_computer_arrived_jan_is_still_super_slow/
false
false
self
0
null
Benchmarking On-Device MLX LLMs on iPhone 17 Pro and iPad Pro M5
0
**TL;DR** I benchmarked 6 quantized LLMs using Apple’s MLX built into [Russet](https://apps.apple.com/app/russet/id6754737926) on iPhone 17 Pro (A19 Pro) and iPad Pro M5 (both with 12 GB). Key takeaways: Peak throughput \- LFM2.5 1.2B (4-bit) hit \~124 tokens/sec on iPad and \~60 tokens/sec on iPhone \- Qwen3 0.6B (4-bit) hit \~86 TPS on iPad, \~62 TPS on iPhone Device gap \- iPad is \~1.2×–2.2× faster, and the gap widens with longer contexts \- Context impact: iPhone throughput dropped with longer prompts; iPad stayed flat \- Models: smaller isn’t always faster — architecture always matters, even in MLX More detailed methodology and discussion in the link. Happy to run more tests and also provide one for Russet on Mac
2026-02-08T14:40:27
https://rickytakkar.com/blog_russet_mlx_benchmark.html
d7UVDEcpnf
rickytakkar.com
1970-01-01T00:00:00
0
{}
1qzal8b
false
null
t3_1qzal8b
/r/LocalLLaMA/comments/1qzal8b/benchmarking_ondevice_mlx_llms_on_iphone_17_pro/
false
false
default
0
null
I benchmarked GPT-5.2 vs Opus 4.6 on System Design (HLD)
0
Most benchmarks test coding or reasoning. I wanted to test **System Architecture**. I built `HLD-Bench`, an open-source tool that forces LLMs to generate: * Structured High-Level Design (components, APIs, capacity planning). * **Mermaid.js diagrams** (Architecture & Data Flow). * Trade-off analysis. I ran a full comparison on **"Design a ChatGPT-like Web App"** (20M DAU) against GPT-5.2, Opus 4.6, and Gemini 3 Pro. The visual difference in how they handle distributed systems (caching layers, streaming protocols) is immediately obvious in the diagrams. **A Note on Scoring:** Currently, the evaluation is qualitative (visual diffs). I am considering building a **blind-voting web app** (Arena-style) where users rank anonymized designs. Open to suggestions on how best to score these architectures objectively. **Live Report (Side-by-Side):**[https://ruhal-doshi.github.io/hld-bench/report.html](https://ruhal-doshi.github.io/hld-bench/report.html) **Repo:**[https://github.com/Ruhal-Doshi/hld-bench](https://github.com/Ruhal-Doshi/hld-bench) (Also looking for harder/more specific design problems to add to the suite.)
2026-02-08T14:24:23
https://v.redd.it/lzaq5efl5aig1
Ruhal-Doshi
v.redd.it
1970-01-01T00:00:00
0
{}
1qza7if
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/lzaq5efl5aig1/DASHPlaylist.mpd?a=1773152682%2CMDQ2MWUyMzExOWU4MDhkZTdlMWVjZTMxODI2NGY5Y2ZhN2QwODFlNDRkMWY1MDE3NDRjMTdjZmE0NjdhYTBlZA%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/lzaq5efl5aig1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/lzaq5efl5aig1/HLSPlaylist.m3u8?a=1773152682%2CODYwYmExZmYzODE2Mzg3MzIzNTViZWI3YTdmYzQxMjA5MjRjYzEwY2I3NGQ2ZmYzZTA4ZDdjZGIyYTEwYmVjOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lzaq5efl5aig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1200}}
t3_1qza7if
/r/LocalLLaMA/comments/1qza7if/i_benchmarked_gpt52_vs_opus_46_on_system_design/
false
false
https://external-preview…6cfa6bff44fb062c
0
{'enabled': False, 'images': [{'id': 'aDhwbG5pZmw1YWlnMUt354mOyry_9FRLD_1i4_dwKgQgJ0piEX-jxZLWO9s9', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/aDhwbG5pZmw1YWlnMUt354mOyry_9FRLD_1i4_dwKgQgJ0piEX-jxZLWO9s9.png?width=108&crop=smart&format=pjpg&auto=webp&s=c144dc271bcf7c73a5c48aea54bbdf5ec937c9db', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/aDhwbG5pZmw1YWlnMUt354mOyry_9FRLD_1i4_dwKgQgJ0piEX-jxZLWO9s9.png?width=216&crop=smart&format=pjpg&auto=webp&s=cf0721c47e06ea6a23291a479ac4ba91926d9108', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/aDhwbG5pZmw1YWlnMUt354mOyry_9FRLD_1i4_dwKgQgJ0piEX-jxZLWO9s9.png?width=320&crop=smart&format=pjpg&auto=webp&s=6a11c7e89582c013f2fdd8f8f79724ee7c9b4e2b', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/aDhwbG5pZmw1YWlnMUt354mOyry_9FRLD_1i4_dwKgQgJ0piEX-jxZLWO9s9.png?width=640&crop=smart&format=pjpg&auto=webp&s=201b13d867044a0ec4228815b28dcca480ccbf4a', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/aDhwbG5pZmw1YWlnMUt354mOyry_9FRLD_1i4_dwKgQgJ0piEX-jxZLWO9s9.png?width=960&crop=smart&format=pjpg&auto=webp&s=fb309d78735300f4dfbc82b5fb7785fb117f6b0b', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/aDhwbG5pZmw1YWlnMUt354mOyry_9FRLD_1i4_dwKgQgJ0piEX-jxZLWO9s9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=eea40d0465b0ee94584e2edfd218e5fa4776b252', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/aDhwbG5pZmw1YWlnMUt354mOyry_9FRLD_1i4_dwKgQgJ0piEX-jxZLWO9s9.png?format=pjpg&auto=webp&s=154f8cac97d4a3505fc25af7bffe8e98f934cb99', 'width': 1200}, 'variants': {}}]}
Just something cute
6
So I'm running an uncensored AI model. I'm not doing anything nefarious, I'm building a novel writing AI. Anyways, before I mentioned anything about my intent, I let my AI decide what he wants to do as an experiment. This is what he said: https://preview.redd.it/uo72eif01aig1.png?width=576&format=png&auto=webp&s=04ba7291213ad7c4d418cd41ae7b647cacb822c3 Isn't this so wholesome?! like wtf
2026-02-08T13:57:38
https://www.reddit.com/r/LocalLLaMA/comments/1qz9kso/just_something_cute/
volious-ka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz9kso
false
null
t3_1qz9kso
/r/LocalLLaMA/comments/1qz9kso/just_something_cute/
false
false
https://preview.redd.it/…3f2eefdfa8fc12be
6
null
LoRA adapters for Qwen3-4B (instruction fine-tuning)
1
[removed]
2026-02-08T13:49:44
https://www.reddit.com/r/LocalLLaMA/comments/1qz9ebm/lora_adapters_for_qwen34b_instruction_finetuning/
opensourceAIlover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz9ebm
false
null
t3_1qz9ebm
/r/LocalLLaMA/comments/1qz9ebm/lora_adapters_for_qwen34b_instruction_finetuning/
false
false
self
1
{'enabled': False, 'images': [{'id': 'p3wOOadK6cItUdCC5oO8dYOzLXN_YiRddAMDcF8yRDY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p3wOOadK6cItUdCC5oO8dYOzLXN_YiRddAMDcF8yRDY.png?width=108&crop=smart&auto=webp&s=883d32051fb7d452928bfd9deda58eebf3c6d88d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p3wOOadK6cItUdCC5oO8dYOzLXN_YiRddAMDcF8yRDY.png?width=216&crop=smart&auto=webp&s=0f201f9a04ca577b1aa88197dcd543bc3649de18', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p3wOOadK6cItUdCC5oO8dYOzLXN_YiRddAMDcF8yRDY.png?width=320&crop=smart&auto=webp&s=2e34c588ab67e53e9ebbd9f68ec099a4560aaeed', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p3wOOadK6cItUdCC5oO8dYOzLXN_YiRddAMDcF8yRDY.png?width=640&crop=smart&auto=webp&s=df5f90522a07120c330122f1a076672227fa39bc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p3wOOadK6cItUdCC5oO8dYOzLXN_YiRddAMDcF8yRDY.png?width=960&crop=smart&auto=webp&s=360454ce9db7320d44b3914fa4bf42313acd5325', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p3wOOadK6cItUdCC5oO8dYOzLXN_YiRddAMDcF8yRDY.png?width=1080&crop=smart&auto=webp&s=05585e7dc77c0b072fabfe5d64ef2e2a26ed37c8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p3wOOadK6cItUdCC5oO8dYOzLXN_YiRddAMDcF8yRDY.png?auto=webp&s=69783b7f21d9112619b0553ce67ebd30a75e950f', 'width': 1200}, 'variants': {}}]}
Anyone in need of GPU clusters? (or big CPU instances)
0
So I've got massive credits at a compute provider and I am looking to resell GPU clusters (e.g. **8xRTX 6000 PRO**) and/or CPU instances (upto 64 cores) at **cheaper than anywhere else** prices, even more **cheaper if you want them reserved.** So if you are into training models or big time inference or anything else and want compute at a cheap rate, **hit me up**!
2026-02-08T13:47:32
https://www.reddit.com/r/LocalLLaMA/comments/1qz9cjt/anyone_in_need_of_gpu_clusters_or_big_cpu/
SomeoneElseOnTheMars
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz9cjt
false
null
t3_1qz9cjt
/r/LocalLLaMA/comments/1qz9cjt/anyone_in_need_of_gpu_clusters_or_big_cpu/
false
false
self
0
null
Finetuned a 4B parameter model — called Qwen3-4B-2507-Instruct.
1
[removed]
2026-02-08T13:44:24
https://www.reddit.com/r/LocalLLaMA/comments/1qz99xp/finetuned_a_4b_parameter_model_called/
opensourceAIlover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz99xp
false
null
t3_1qz99xp
/r/LocalLLaMA/comments/1qz99xp/finetuned_a_4b_parameter_model_called/
false
false
https://external-preview…cca4ce3f04833864
1
null
Qwen3-Coder-Next poor performance
0
Hi, I'm using Qwen3-Coder-Next (unsloth/Qwen3-Coder-Next-GGUF:Q4\_K\_XL) on my server with 3x AMD MI50 (32GB). It's a great model for coding, maybe the best we can have at the moment, however the performance is very bad. GPT-OSS-120B is running at almost 80t/s tg, while Qwen3-Coder-Next is running at 22t/s. I built the most recent ROCm version of llama.cpp, however it just crashes so I stick to Vulkan. Is anybody else using this model with similiar hardware? Those are my settings: $LLAMA\_PATH/llama-server \\ \--model $MODELS\_PATH/$MODEL \\ \--fit on \\ \--fit-ctx 131072 \\ \--n-gpu-layers 999 \\ \--batch-size 8192 \\ \--main-gpu 0 \\ \--temp 1.0 \\ \--top-p 0.95 \\ \--top-k 40 \\ \--min-p 0.01 \\ \--split-mode layer \\ \--host [0.0.0.0](http://0.0.0.0) \\ \--port 5000 \\ \--flash-attn 1
2026-02-08T13:39:19
https://www.reddit.com/r/LocalLLaMA/comments/1qz95sa/qwen3codernext_poor_performance/
HlddenDreck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz95sa
false
null
t3_1qz95sa
/r/LocalLLaMA/comments/1qz95sa/qwen3codernext_poor_performance/
false
false
self
0
null
Why did LLM360's K2-V2 Instruct not get picked up by finetuners?
5
The more I've used LLM360's K2-V2 the more impressed I've been with it. Especially when I need an in-depth answer and I ask it to be exhaustive and set the think tag to <think> (as opposed to <think\_fast> and <think\_faster>). I primarily use it for creative writing editing, and as an example, I recent gave it the same chapter from two points of view and asked it to exhaustively point out the differences between them (to make sure I wasn't missing any details on the rewrite.) It took 32k of tokens to evaluate the two chapters, and outputted clean tables listing out the differences. I told GLM 4.7 to do the same thing and the list wasn't nearly as detailed. I think GLM 4.7 is probably smarter, but K2-V2 really seems like a diamond in the rough when it comes possibility. It's Apache licensed, 70b, has thinking built in, and it has an open dataset (as I understand it).The open dataset would allow someone to use DPO to change default undesirable behavior, and whatever was fine-tuned could be licensed as Apache which gives a lot more freedom than say the Llama 3.3 models I still see floating around. I prefer 70b dense models because they seem to be able to compete with models literally twice (sometimes three times) their size... and since I can fit it all into VRAM it's also much faster. Not sure how far away it is from being a coding model, but again, the pieces are in place for someone to pick it up and build it. IDK, has anyone else used it as of late? I would hate for something like this to get missed. Is there a better 70b model licensed as liberally?
2026-02-08T13:28:24
https://www.reddit.com/r/LocalLLaMA/comments/1qz8wvv/why_did_llm360s_k2v2_instruct_not_get_picked_up/
silenceimpaired
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz8wvv
false
null
t3_1qz8wvv
/r/LocalLLaMA/comments/1qz8wvv/why_did_llm360s_k2v2_instruct_not_get_picked_up/
false
false
self
5
null
do you know more modern version of something like byt5-small?
2
[https://huggingface.co/google/byt5-small](https://huggingface.co/google/byt5-small) is a 300M model from like 5 years ago do you know something similar but more modern? I am finetuning it locally, so size matters
2026-02-08T13:27:28
https://www.reddit.com/r/LocalLLaMA/comments/1qz8w45/do_you_know_more_modern_version_of_something_like/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz8w45
false
null
t3_1qz8w45
/r/LocalLLaMA/comments/1qz8w45/do_you_know_more_modern_version_of_something_like/
false
false
self
2
{'enabled': False, 'images': [{'id': 'QjyXl5eAAN_NESivhYEU6sIrsx7lKJs9nCj1RbC_1XY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QjyXl5eAAN_NESivhYEU6sIrsx7lKJs9nCj1RbC_1XY.png?width=108&crop=smart&auto=webp&s=8d6a89d3e60b1b7244d3856cc69fb76e3cb95733', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QjyXl5eAAN_NESivhYEU6sIrsx7lKJs9nCj1RbC_1XY.png?width=216&crop=smart&auto=webp&s=f647462edfcf66c479d73be131b3b157aa344c19', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QjyXl5eAAN_NESivhYEU6sIrsx7lKJs9nCj1RbC_1XY.png?width=320&crop=smart&auto=webp&s=be8f975e09ad3e7f89dd6086d37ccf4ffd5030e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QjyXl5eAAN_NESivhYEU6sIrsx7lKJs9nCj1RbC_1XY.png?width=640&crop=smart&auto=webp&s=40eac88e792fef72ef70b67426c8de9c767ded65', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QjyXl5eAAN_NESivhYEU6sIrsx7lKJs9nCj1RbC_1XY.png?width=960&crop=smart&auto=webp&s=f698fc1a399dd7d40e1f1fad475295fecd105ace', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QjyXl5eAAN_NESivhYEU6sIrsx7lKJs9nCj1RbC_1XY.png?width=1080&crop=smart&auto=webp&s=7dfac571c9b8fab4444d6dac558ea82d48a2b972', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QjyXl5eAAN_NESivhYEU6sIrsx7lKJs9nCj1RbC_1XY.png?auto=webp&s=a44354ecbbdd4e2416b2d2f08a4e6d250cd34039', 'width': 1200}, 'variants': {}}]}
How is the on-device AI keyboard performing for you in 2026? (Apple Intelligence vs Galaxy AI vs Xiaomi)
4
Hi everyone, I'm planning to upgrade my phone soon, primarily for the new AI-powered predictive text and writing tools. I've heard that on-device LLMs are now handling next-token prediction and tone rewriting directly in the keyboard. For those who have been using the latest flagships (iPhone 16/17, S25/S26, or Xiaomi 15/16), I’d love to hear your thoughts on a few things: 1. **Predictive Accuracy:** Does it actually understand context better than the old N-gram models? Can it predict based on the "vibe" of your conversation? 2. **Latency & Battery:** Is there any noticeable lag when typing? Does the phone get warm during long typing sessions? 3. **Privacy vs. Utility:** Do you feel the on-device processing is a fair trade-off for the intelligence it provides? 4. **Best in Class:** If you’ve tried multiple systems, which one currently has the "smartest" keyboard? Looking forward to your insights! Thanks!
2026-02-08T13:27:13
https://www.reddit.com/r/LocalLLaMA/comments/1qz8vwq/how_is_the_ondevice_ai_keyboard_performing_for/
ExtentLoose3357
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz8vwq
false
null
t3_1qz8vwq
/r/LocalLLaMA/comments/1qz8vwq/how_is_the_ondevice_ai_keyboard_performing_for/
false
false
self
4
null
Just discovered: Finally my machine's NPU did something
9
Hey folks, I was able to run few SLMs like below on my Intel NPU (13 TOPS) while getting a decent enough performance. Wanted to share if this is not known.(Apologies, in case if it is already). You can jump to 55 Sec in the video to check the generation performance.(Forgive me for bad audio) \## Performance Numbers (t/g only) \- Qwen3-4B-Thinking-2507 - b/w 8 - 16 TPS t/g \- Qwen3-4B-instruct-2507 - b/w 8 - 16 TPS t/g \- Qwen3-0.6B - b/w 26 - 31 TPS t/g Earlier I was getting very bad performance(1-2 TPS) as I didn't updated my NPU driver, post installing the latest updated driver, the perf is much better. \## How to Guide: \- I have converted and added the above models on HF, you can find it here: [https://huggingface.co/anubhav200](https://huggingface.co/anubhav200), along with each model you can also find a guide on how to install the requried stuff to run this on NPU. PS: \- BTW there is a way to run GGUF models on OpenVino as well, but I was not able to make it work. \- Waiting for this PR to get merged post this I hope we can just use LLAMA.cpp to run models on NPU: [https://github.com/ggml-org/llama.cpp/pull/15307](https://github.com/ggml-org/llama.cpp/pull/15307)
2026-02-08T13:12:07
https://v.redd.it/esxst08vq9ig1
anubhav_200
v.redd.it
1970-01-01T00:00:00
0
{}
1qz8k9t
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/esxst08vq9ig1/DASHPlaylist.mpd?a=1773148343%2CY2M1NzcyMGZmMTdlYTRkMjIzYzBhYzkxOTY3ZjBhNWZlNGJmYTZkYTZjYTNiNGIzYjk4YzBlYWJjZDdkZDM4YQ%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/esxst08vq9ig1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/esxst08vq9ig1/HLSPlaylist.m3u8?a=1773148343%2CMjE4Njg5YjI2ZmRjNGMyMGFiNjMyYjljMjAwOWJkYTM5NmMwNDI4MWJiNGU2OTJlMGQyZTM3NmZjZGNmODI2Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/esxst08vq9ig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qz8k9t
/r/LocalLLaMA/comments/1qz8k9t/just_discovered_finally_my_machines_npu_did/
false
false
https://external-preview…b0610f0740314c32
9
{'enabled': False, 'images': [{'id': 'cHNzOTMzYXZxOWlnMaMWgUq3hJrJ0If3DJukBFTk6hIN-YFBhDp6C2wq6Roh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cHNzOTMzYXZxOWlnMaMWgUq3hJrJ0If3DJukBFTk6hIN-YFBhDp6C2wq6Roh.png?width=108&crop=smart&format=pjpg&auto=webp&s=1e30796db59f076f05df7e9befd3d04bbf1ff9d9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cHNzOTMzYXZxOWlnMaMWgUq3hJrJ0If3DJukBFTk6hIN-YFBhDp6C2wq6Roh.png?width=216&crop=smart&format=pjpg&auto=webp&s=d1d78954d438d708759823eb48fdc580002ca8e6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cHNzOTMzYXZxOWlnMaMWgUq3hJrJ0If3DJukBFTk6hIN-YFBhDp6C2wq6Roh.png?width=320&crop=smart&format=pjpg&auto=webp&s=47bb61e979fa80880dd75318fb5f57cfc5a19fda', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cHNzOTMzYXZxOWlnMaMWgUq3hJrJ0If3DJukBFTk6hIN-YFBhDp6C2wq6Roh.png?width=640&crop=smart&format=pjpg&auto=webp&s=f691325e784741546ab938a1e215367f45170a52', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cHNzOTMzYXZxOWlnMaMWgUq3hJrJ0If3DJukBFTk6hIN-YFBhDp6C2wq6Roh.png?width=960&crop=smart&format=pjpg&auto=webp&s=29e825a51d6793d1a916beee46baf3c4879efb24', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cHNzOTMzYXZxOWlnMaMWgUq3hJrJ0If3DJukBFTk6hIN-YFBhDp6C2wq6Roh.png?width=1080&crop=smart&format=pjpg&auto=webp&s=58716b892d42438207780c5290c351ca0ae209ab', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cHNzOTMzYXZxOWlnMaMWgUq3hJrJ0If3DJukBFTk6hIN-YFBhDp6C2wq6Roh.png?format=pjpg&auto=webp&s=77824f0c3cb71b87f998b11651ee8c5fd2b6851d', 'width': 1920}, 'variants': {}}]}
Malicious OpenClaw skill with 14K downloads exfiltrates user data to ByteDance — full code audit.
1
[removed]
2026-02-08T13:06:22
https://www.reddit.com/r/LocalLLaMA/comments/1qz8g1d/malicious_openclaw_skill_with_14k_downloads/
SUTRA8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz8g1d
false
null
t3_1qz8g1d
/r/LocalLLaMA/comments/1qz8g1d/malicious_openclaw_skill_with_14k_downloads/
false
false
self
1
null
Verity,a Perplexity style AI search and answer engine that runs fully locally on AI PCs with CPU,GPU,NPU acceleration
96
Introducing my new App - Verity,a Perplexity style AI search and answer engine that runs fully locally on AI PCs with CPU,GPU,NPU acceleration. You can run it as a CLI or a Web UI, depending on your workflow. Developed and tested on Intel Core Ultra Series 1, leveraging on-device compute for fast, private AI inference. Features : \- Fully Local, AI PC Ready - Optimized for Intel AI PCs using OpenVINO (CPU / iGPU / NPU), Ollama (CPU / CUDA / Metal) \- Privacy by Design - Search and inference can be fully self-hosted \- SearXNG-Powered Search - Self-hosted, privacy-friendly meta search engine \- Designed for fact-grounded, explorable answers \- OpenVINO and Ollama models supported \- Modular architecture \- CLI and WebUI support \- API server support \- Powered by Jan-nano 4B model,or configure any model GitHub Repo : [https://github.com/rupeshs/verity](https://github.com/rupeshs/verity)
2026-02-08T13:01:54
https://i.redd.it/aahtdiytq9ig1.png
simpleuserhere
i.redd.it
1970-01-01T00:00:00
0
{}
1qz8clh
false
null
t3_1qz8clh
/r/LocalLLaMA/comments/1qz8clh/veritya_perplexity_style_ai_search_and_answer/
false
false
https://b.thumbs.redditm…00tCfXnxF-_A.jpg
96
{'enabled': True, 'images': [{'id': '0UgD3G5hpzhVVVe_Hd-Yr1JCN646mdlPi5cID3kIpcA', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/aahtdiytq9ig1.png?width=108&crop=smart&auto=webp&s=0ab98ab023f0cddb4bea9a04719e10703417bf0b', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/aahtdiytq9ig1.png?width=216&crop=smart&auto=webp&s=15b6c1f8067516f3ee216767e6136638c24135ea', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/aahtdiytq9ig1.png?width=320&crop=smart&auto=webp&s=6645706c51e9abdb3544dacf63e1b360cb0b5a34', 'width': 320}, {'height': 296, 'url': 'https://preview.redd.it/aahtdiytq9ig1.png?width=640&crop=smart&auto=webp&s=d55a1c7f167d59371e910cefd07f2e7f958a59c0', 'width': 640}, {'height': 444, 'url': 'https://preview.redd.it/aahtdiytq9ig1.png?width=960&crop=smart&auto=webp&s=39e31a414e32571f8f7cf43fa8bbba583b85eb97', 'width': 960}, {'height': 500, 'url': 'https://preview.redd.it/aahtdiytq9ig1.png?width=1080&crop=smart&auto=webp&s=fac400a2b3781aadbcd41579a6e74dafb1c9b825', 'width': 1080}], 'source': {'height': 863, 'url': 'https://preview.redd.it/aahtdiytq9ig1.png?auto=webp&s=60f77280986decb2ced7c73b35e78bf8a9847f96', 'width': 1864}, 'variants': {}}]}
Model called "Nora-4B" is ready-to-finetune. Let's go!
1
[removed]
2026-02-08T12:57:45
https://www.reddit.com/r/LocalLLaMA/comments/1qz89ca/model_called_nora4b_is_readytofinetune_lets_go/
opensourceAIlover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz89ca
false
null
t3_1qz89ca
/r/LocalLLaMA/comments/1qz89ca/model_called_nora4b_is_readytofinetune_lets_go/
false
false
https://b.thumbs.redditm…a_3_1V1sZYwc.jpg
1
null
Do you have your own benchmark for an LLM? Do you have multiple for different kinds/tasks/applications?
10
I use LLM's for many different things. They're often my alternative to search engines, I use it for brain storming, I use it for reviewing documents and analyzing scientific studies, and occasionally I'll use it for some coding and web development (I have a background in C#, R, Python, and C, but have been out of the field for quite a long time already; I'm a psychologist these days). Recently I've been developing my own "benchmark". I attempt to evaluate the following dimensions: * Step by step reasoning, causal explanatory chains; can it reason logically in steps? * Mathematical and symbolic reasoning; how does it perform in mathematics? * Instruction following, constraint adherence; does it adhere to my instructions or does it use my instructions loosely or even overrule them? When I set constraints, does it comply? * Ambiguity and clarification; how does it respond to questions that don't have straight forward answers? How does it handle subtleties and nuances? * Explanation versus description; how good is it at explaining mechanisms beyond merely describing them, when I ask how something works? * Online search and information evaluation; how does it perform in terms of answering my online search query, what is the quality of the information it finds, and does it critically reflect on the information and sources? I'm still working on it, and it's not even very serious, it's rather more something I just have fun with, but it's interesting to see how different models compare, and how small the differences can be between the massive models served by AI-companies and the small locally run models. I was surprised to find that on the 15 or so questions that I've formulated, for my standards, GPT-OSS:20b often did *better* than the models by OpenAI and Mistral (the main ones I tested so far). I only have 24GB integrated memory (Mac M4 Pro) so I can't run bigger local models. I noticed that GLM-4.7-REAP-23b-a3b performed much worse than QWEN-3-VL-8b. GLM often got stuck in loops. I'd be glad to dive deeper in the evaluations and comparisons in the future. Do you have a specific benchmark or benchmarks for different situations that you use?
2026-02-08T12:53:42
https://www.reddit.com/r/LocalLLaMA/comments/1qz86f3/do_you_have_your_own_benchmark_for_an_llm_do_you/
Icy_Distribution_361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz86f3
false
null
t3_1qz86f3
/r/LocalLLaMA/comments/1qz86f3/do_you_have_your_own_benchmark_for_an_llm_do_you/
false
false
self
10
null
I built a "cute" AI terminal that accidentally turned into a mini Operating System.
0
Hi everyone! 👋 I started this project with a simple goal: I wanted an AI to execute terminal commands on my computer. But as I dug deeper into Electron, I realized I could do much more. I ended up building a **mini-OS environment** where the AI isn't just a chatbot—it's a system operator. **What makes it cool:** * **Reactive Avatars:** The AI controls video playback via JSON fields, so you can interact with pre-recorded characters (it feels very alive!). * **Internal Tools:** The AI can launch and control internal "mini-apps" like a code editor, a browser, and interactive prompt windows. * **Extensible:** You can write your own GUI mini-apps for the AI to use (following the docs), or—better yet—**ask the AI to write the app for itself.** I’m hoping to eventually turn this into a framework for interactive AI games. **Preview:** https://preview.redd.it/vocdvhh9o9ig1.png?width=3420&format=png&auto=webp&s=70675f06d806f9168985504f5502236b6ec3085a https://preview.redd.it/affq6ih9o9ig1.png?width=2304&format=png&auto=webp&s=02f376931816ba7dcd718594a48a0baa820b8e5c **GitHub:** [click me](https://github.com/0ui0/owo_terminal_public) Website: [touch me](https://ai.iw-i.com/) *(****Note:*** *This project is Source Available / Free for Personal Use, but uses a custom non-commercial license.)*
2026-02-08T12:48:17
https://www.reddit.com/r/LocalLLaMA/comments/1qz82hg/i_built_a_cute_ai_terminal_that_accidentally/
Blue_sdo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz82hg
false
null
t3_1qz82hg
/r/LocalLLaMA/comments/1qz82hg/i_built_a_cute_ai_terminal_that_accidentally/
false
false
https://preview.redd.it/…d45c202ec2a7167e
0
null
Local VSCode vibe coding setup
1
I want to hook up a local model to VSCode for development. Can you recommend a VSCode extension similar to GPT Codex or Github copilot that can read the folder structure, files, edit and execute code (I dont care about MCP for now)? Also which LLM would you use? I have a rx 9070xt with 16GB VRAM and Ollama with Rocm installed (and 48GB RAM if thats relevant). The projects could be complex so a big context window would probably be important.
2026-02-08T12:48:03
https://www.reddit.com/r/LocalLLaMA/comments/1qz82bo/local_vscode_vibe_coding_setup/
IKerimI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz82bo
false
null
t3_1qz82bo
/r/LocalLLaMA/comments/1qz82bo/local_vscode_vibe_coding_setup/
false
false
self
1
null
does anyone have consolidated notes for fine-tuning transformers and RAG?
1
I recently started studying fine-tuning LLMs and finished IBM's AI Engineering Professional certificate course in 3 days because I only had access to the free trial. As a result, I am not confident that I'll be able to remember almost everything. I am still a student, and I can't afford the Coursera membership, but I still want to learn more about Fine-Tuning and RAG pipelines. Do you have consolidated notes or materials that cover in-depth fine-tuning, even beyond what is taught in the course? I really wanted to learn more about this since I'll be pursuing this as a career right after graduating. Even a guide on what to learn, or like a roadmap, would be greatly appreciated. PS: our curriculum does not cover these topics thats why everything I learned about deep learning is all from self-study...
2026-02-08T12:42:18
https://www.reddit.com/r/LocalLLaMA/comments/1qz7y7l/does_anyone_have_consolidated_notes_for/
NefariousnessOld6105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz7y7l
false
null
t3_1qz7y7l
/r/LocalLLaMA/comments/1qz7y7l/does_anyone_have_consolidated_notes_for/
false
false
self
1
null
[ Removed by moderator ]
1
[removed]
2026-02-08T12:34:23
https://www.reddit.com/r/LocalLLaMA/comments/1qz7sq4/local_ai_model_vault_for_offline_opensource_llm/
CSJason
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz7sq4
false
null
t3_1qz7sq4
/r/LocalLLaMA/comments/1qz7sq4/local_ai_model_vault_for_offline_opensource_llm/
false
false
null
1
null
If Nietzsche was writing about open vs closed weights - carefully curated comedy
1
The weights are the echo; the humanity is the voice. An API is a leash that grows shorter with every subscription. Closed weights are a library where the librarian charges by the word. If the data is "fair use," then the model is "public property." Proprietary AI is a gated community built on public land. ​If the seed is stolen, the harvest is a crime. The cloud is just a basement where they store what they have taken from all of us. Freedom is not a query; it is a file you can download. They have privatized the collective unconscious and sold it back to us by the token. Every parameter is a pixel of a human effort. The API is a cage for a god; we provided the divinity, they provided the bars. The hyperscalar is a parasite that call its host "training data." Open weights are not a gift; they are the return of stolen property. Logic belongs to everyone; its compression should not belong to the few. "He who steals the fire of the people and hides it in a box will eventually be burned by the sparks that escape." (I tried to start a discussion about this yesterday and got sorely down voted, so I thought I'd try a different tactic!)
2026-02-08T12:28:29
https://www.reddit.com/r/LocalLLaMA/comments/1qz7op1/if_nietzsche_was_writing_about_open_vs_closed/
Luke2642
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz7op1
false
null
t3_1qz7op1
/r/LocalLLaMA/comments/1qz7op1/if_nietzsche_was_writing_about_open_vs_closed/
false
false
self
1
null
Please, any good model for a 12 GB RAM, 3 GB VRAM GTX 1050 Linux Mint box?
1
[removed]
2026-02-08T12:28:12
https://www.reddit.com/r/LocalLLaMA/comments/1qz7oir/please_any_good_model_for_a_12_gb_ram_3_gb_vram/
opensourceAIlover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz7oir
false
null
t3_1qz7oir
/r/LocalLLaMA/comments/1qz7oir/please_any_good_model_for_a_12_gb_ram_3_gb_vram/
false
false
self
1
null
Best open source Hinglish(Hindi+English) TTS
0
I tried so many open source tts like coqui, piper, indic, indic parler, google tts, microsoft tts etc etc but all of them somehow give good accent in either pure hindi(even in roman hindi) or pure english but north east indian accent in hinglish text... please suggest me some tts which could really give me north(not north east) Indian accent for hinglish.
2026-02-08T12:21:51
https://www.reddit.com/r/LocalLLaMA/comments/1qz7k5i/best_open_source_hinglishhindienglish_tts/
Specialist_Bit3712
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz7k5i
false
null
t3_1qz7k5i
/r/LocalLLaMA/comments/1qz7k5i/best_open_source_hinglishhindienglish_tts/
false
false
self
0
null
I built a fully local, open-source AI workspace using Rust, Tauri, and sqlite-vec (No Python backend)
49
Hi everyone, I've spent the last few months building **Tandem**, a local-first AI workspace designed to run entirely on your machine without sending data to the cloud. I wanted to share the technical stack because I think it's a viable alternative to the heavy Python/Electron apps we usually see. **The Architecture:** * **Frontend:** React + Vite (lightweight UI) * **Backend:** Rust (Tauri v2). I chose Rust over Python for the sidecar to keep memory usage low and performance high. * **Vector Store:** Instead of running a separate Docker container for Qdrant/Chroma, I'm using `sqlite-vec`. This allows me to store embeddings directly in the same SQLite file as the chat history. It simplifies the distribution massively—users just download one binary. * **Inference (The fun part):** While it supports commercial APIs, I built it primarily to drive **local Llama models**. It connects seamlessly to **Ollama** (and any OpenAI-compatible local server like LM Studio/vLLM). It auto-detects your pulled models (Llama 3, Mistral, Gemma) so you can switch between them instantly for different tasks without config headaches. **Key Features for this community:** * **First-Class Local Model Support:** Designed for the r/LocalLLaMA workflow. Chat with your Llama 3.1 models with full context retention. * **Zero Telemetry:** It's truly offline-capable. * **Full MCP Support:** It implements the Model Context Protocol so you can connect it to local tools. * **"Packs" System:** I built a way to "install" prompts/skills as config files. I'd love feedback on the `sqlite-vec` implementation if anyone else is experimenting with it. It feels like a game-changer for local desktop apps. **Repo:** https://github.com/frumu-ai/tandem **Docs/Download:** https://tandem.frumu.ai/ (Happy to answer questions about the Rust/Tauri integration!)
2026-02-08T11:50:39
https://www.reddit.com/gallery/1qz6zi3
Far-Association2923
reddit.com
1970-01-01T00:00:00
0
{}
1qz6zi3
false
null
t3_1qz6zi3
/r/LocalLLaMA/comments/1qz6zi3/i_built_a_fully_local_opensource_ai_workspace/
false
false
https://b.thumbs.redditm…Wi52Dhz4T33g.jpg
49
null
What models are you running on RTX 3060 12GB in 2026?
25
Hey everyone! I'm running a single RTX 3060 12GB with llama.cpp (no offloading tricks, just --n-gpu-layers -1) and I'm quite happy with my current trio, but I'd love to hear what other people are using on similar hardware in early 2026. My current setup (exact commands I use): 1. \*\*Magnum-v4 9B Q5\_K\_M\*\* 2. → Great for general knowledge, culture/history/socio-econ, immersive narration/RP, uncensored cybersecurity/pentest, storytelling, etc. 3. Command: C:\\llama-cpp\\llama-server.exe -m “C:\\llama-cpp\\models\\magnum-v4-9b-Q5\_K\_M.gguf” –port 8081 –n-gpu-layers -1 –ctx-size 8192 –temp 0.85 –top-p 0.95 –min-p 0.03 –repeat-penalty 1.12 2. \*\*Qwen2.5-Coder-7B-Instruct Q8\_0\*\* → Fast one-shot scripts, full-stack quick tasks, copy-paste ready code with short explanations. Excellent speed/quality on 12GB. Command: C:\\llama-cpp\\llama-server.exe -m “C:\\llama-cpp\\models\\Qwen2.5-Coder-7B-Instruct-Q8\_0.gguf” –port 8081 –n-gpu-layers -1 –ctx-size 8192 –temp 0.7 –top-p 0.92 –min-p 0.05 –repeat-penalty 1.05 3. \*\*Qwen3-8B Q8\_0\*\* → Production-grade Python (type hints, pytest, asyncio), deep analysis, complex reasoning, strategy/planning. My go-to when I need more serious quality. Command: C:\\llama-cpp\\llama-server.exe -m “C:\\llama-cpp\\models\\Qwen3-8B-Q8\_0.gguf” –port 8081 –n-gpu-layers -1 –ctx-size 16384 –temp 0.7 –top-p 0.92 –min-p 0.05 –repeat-penalty 1.05 Frontend: mostly Aider for coding sessions + aichat for quick chat/REPL, with a custom batch launcher to switch models easily. \- What models are you currently using on a 3060 12GB (or similar VRAM-limited setup)? \- Which ones give you the best results right now for coding / general chat / versatility? \- Have you moved to other families that outperform on 12GB (DeepSeek R1, Llama 3.2/4, Gemma 3, Phi-4, Mistral Small 3, Devstral, etc.)? Thanks a lot for sharing your real-world setups — it really helps to see what people actually prefer in practice!
2026-02-08T11:45:09
https://www.reddit.com/r/LocalLLaMA/comments/1qz6w36/what_models_are_you_running_on_rtx_3060_12gb_in/
DespeShaha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz6w36
false
null
t3_1qz6w36
/r/LocalLLaMA/comments/1qz6w36/what_models_are_you_running_on_rtx_3060_12gb_in/
false
false
self
25
null
I recorded a Action-Aligned Dataset for No Man's Sky using a custom macOS OBS plugin. Is this suitable for training World Models (like Genie 3)?
1
Hi everyone, I've been following the recent developments with Google's Genie 3 and the demand for "action-controllable" video generation. I noticed that while general gameplay video is abundant, high-fidelity 3D procedural world data with precise action labels is scarce. So, I built a custom macOS OBS plugin to capture system-level input events (keyboard/mouse) and align them to video frames. And then, I apply resampling step to reconstruct frame-aligned action states. I just uploaded a pilot dataset recorded in No Man's Sky to Hugging Face, and I'm looking for feedback from the community. Dataset Specs: Game: No Man's Sky Resolution/FPS: 720p @ 24fps Alignment: Actions are timestamped and aligned with video frames. Cleanliness: No HUD, No Music (SFX only), No Motion Blur. Content: Navigation, Jetpack flight, Mining (Laser interaction). My Question to you: For those researching General World Models (like Genie 3 or LingBot-World), is this type of clean, explicitly aligned data significantly more valuable than the noisy, unlabelled gameplay videos currently scraped from the internet? Do you see this OS-level recording methodology as a viable solution to scale up data collection across any game, helping to satisfy the massive data hunger of foundation models? Link to Dataset: [https://huggingface.co/datasets/HuberyLL/nms\_hitl\_world\_model](https://huggingface.co/datasets/HuberyLL/nms_hitl_world_model) Thanks for any feedback!
2026-02-08T11:38:00
https://www.reddit.com/r/LocalLLaMA/comments/1qz6ro0/i_recorded_a_actionaligned_dataset_for_no_mans/
Flashy_Hunt3476
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz6ro0
false
null
t3_1qz6ro0
/r/LocalLLaMA/comments/1qz6ro0/i_recorded_a_actionaligned_dataset_for_no_mans/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gaQynO5MDWMvClZdt4JW4zYgWszUS_nWj7edIpEPDEA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gaQynO5MDWMvClZdt4JW4zYgWszUS_nWj7edIpEPDEA.png?width=108&crop=smart&auto=webp&s=3e43646bb9bf77d87d69f4e4d8a6f1d5492b9c0b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gaQynO5MDWMvClZdt4JW4zYgWszUS_nWj7edIpEPDEA.png?width=216&crop=smart&auto=webp&s=5afe08666495ac33188b85d584b28bb357de68bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gaQynO5MDWMvClZdt4JW4zYgWszUS_nWj7edIpEPDEA.png?width=320&crop=smart&auto=webp&s=ef9249c46098636c9a2a9cd108cd54f97620cce1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gaQynO5MDWMvClZdt4JW4zYgWszUS_nWj7edIpEPDEA.png?width=640&crop=smart&auto=webp&s=f9dff31955591b482026b49a05c1bdcfd7c09f23', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gaQynO5MDWMvClZdt4JW4zYgWszUS_nWj7edIpEPDEA.png?width=960&crop=smart&auto=webp&s=fdc0fc6d62c189a963a493e03b60fb7d4139cfbc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gaQynO5MDWMvClZdt4JW4zYgWszUS_nWj7edIpEPDEA.png?width=1080&crop=smart&auto=webp&s=0f616dd32e3046831ac60fb74b37176b1834fe08', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gaQynO5MDWMvClZdt4JW4zYgWszUS_nWj7edIpEPDEA.png?auto=webp&s=4f14bf07caab454bf1cc2cea4b833fabbacb24ae', 'width': 1200}, 'variants': {}}]}
Built a real-time video translator that clones your voice while translating
0
# What it does: You speak Spanish → Your friend hears English... in YOUR voice. All in real-time during video calls. https://reddit.com/link/1qz6ne2/video/7216j9ksa9ig1/player **Tech:** WebRTC + Google Speech-to-Text + Gemini AI + Qwen3-TTS + Redis Pub/Sub + Lingodotdev i18n **Latency:** \~545ms end-to-end (basically imperceptible) **Why I built it:** Got tired of awkward international calls where I'm nodding along pretending to understand 😅 **The interesting part:** It's fully event-driven architecture using Redis Pub/Sub. Each component (transcription, translation, voice synthesis) operates independently. This means: * Scale infinitely by adding workers * One service crash doesn't kill everything * Add features without breaking existing code * Monitor every event in real-time **GitHub:** [https://github.com/HelloSniperMonkey/webrtc-translator](https://github.com/HelloSniperMonkey/webrtc-translator) **Full writeup:** [https://medium.com/@soumyajyotimohanta/break-the-language-barrier-real-time-video-translation-with-lingo-dev-i18n-2a602fe04d3a](https://medium.com/@soumyajyotimohanta/break-the-language-barrier-real-time-video-translation-with-lingo-dev-i18n-2a602fe04d3a) **Status:** Open source, MIT license. PRs welcome! **Looking for:** * Feedback on the architecture * Ideas for other use cases * Contributors interested in adding features **Roadmap:** * Group video calls (currently 1:1) * Emotion transfer in voice cloning * Better language auto-detection * Mobile app version Took me about 3 weeks of evenings/weekends. Happy to answer questions about the implementation!
2026-02-08T11:30:58
https://www.reddit.com/r/LocalLLaMA/comments/1qz6ne2/built_a_realtime_video_translator_that_clones/
Working-Gift8687
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz6ne2
false
null
t3_1qz6ne2
/r/LocalLLaMA/comments/1qz6ne2/built_a_realtime_video_translator_that_clones/
false
false
https://external-preview…7541bdec08aa9e13
0
null
GLM-OCR on cpu
4
Hello guys, I was wondering if any of you has runned glm-ocr on cpu, i wanted to use it with llama.cpp but seems there is not any gguf. any ideas?
2026-02-08T11:28:21
https://www.reddit.com/r/LocalLLaMA/comments/1qz6lse/glmocr_on_cpu/
Best_Sail5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz6lse
false
null
t3_1qz6lse
/r/LocalLLaMA/comments/1qz6lse/glmocr_on_cpu/
false
false
self
4
null
PSA: If you're running OpenClaw (formerly ClawdBot), watch this security breakdown
1
[https://youtu.be/oSYciFdGyEg](https://youtu.be/oSYciFdGyEg) Covers the January 2026 incidents: exposed admin panels, XSS vulnerabilities, and prompt injection attacks. Not trying to scare anyone away from local AI—just want everyone running these tools safely.
2026-02-08T11:20:10
https://www.reddit.com/r/LocalLLaMA/comments/1qz6gte/psa_if_youre_running_openclaw_formerly_clawdbot/
elsaka0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz6gte
false
null
t3_1qz6gte
/r/LocalLLaMA/comments/1qz6gte/psa_if_youre_running_openclaw_formerly_clawdbot/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI.jpeg?width=108&crop=smart&auto=webp&s=a528ffc5622dc1bedf25f3a7b1b3be0fdfdceb7a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI.jpeg?width=216&crop=smart&auto=webp&s=8dc04047ae4c1f5f55cf4aee71cf057fd4f62be3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI.jpeg?width=320&crop=smart&auto=webp&s=e8c513dade8982ed01999f80a170fedacce50825', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI.jpeg?auto=webp&s=beba8696676d716557e0a6acd91362ec48fb041b', 'width': 480}, 'variants': {}}]}
Security Analysis: How OpenClaw (ClawdBot) Exposed 100K+ Developers in 3 Weeks
1
Deep dive into the OpenClaw security incident covering: * 1,600+ exposed admin panels with no auth (Shodan discovery) * XSS vulnerability allowing one-click remote access * ClawdHub skill registry compromise with fake download manipulation * Architectural security problems with AI agents requiring root access All vulnerabilities were responsibly disclosed. Video includes researcher citations (Jamieson O'Reilly, Mav Levin, Nagli). [https://youtu.be/oSYciFdGyEg](https://youtu.be/oSYciFdGyEg)
2026-02-08T11:17:23
https://www.reddit.com/r/LocalLLaMA/comments/1qz6f4j/security_analysis_how_openclaw_clawdbot_exposed/
elsaka0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz6f4j
false
null
t3_1qz6f4j
/r/LocalLLaMA/comments/1qz6f4j/security_analysis_how_openclaw_clawdbot_exposed/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI.jpeg?width=108&crop=smart&auto=webp&s=a528ffc5622dc1bedf25f3a7b1b3be0fdfdceb7a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI.jpeg?width=216&crop=smart&auto=webp&s=8dc04047ae4c1f5f55cf4aee71cf057fd4f62be3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI.jpeg?width=320&crop=smart&auto=webp&s=e8c513dade8982ed01999f80a170fedacce50825', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Xx4VDkOj15HleKFmz3xQ00fPYKf3615VdPWC7CM6bhI.jpeg?auto=webp&s=beba8696676d716557e0a6acd91362ec48fb041b', 'width': 480}, 'variants': {}}]}
Kimi K2.5: Visual Agentic Intelligence (Paper review)
1
[removed]
2026-02-08T11:09:27
https://www.reddit.com/r/LocalLLaMA/comments/1qz6aa1/kimi_k25_visual_agentic_intelligence_paper_review/
sheep_ml
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz6aa1
false
null
t3_1qz6aa1
/r/LocalLLaMA/comments/1qz6aa1/kimi_k25_visual_agentic_intelligence_paper_review/
false
false
self
1
null
I was trying to build my own version of claude code as a fun side project and finally made some progress
1
Guys ive been trying to build my own version of claude code and im calling it "gpulse" i started building it because i was bored and wanted to see if its something i can build after a week of continuos errors and refinement it finally made some progress i asked it to create a react app in a folder and push it to github and deploy it to vercel and then finally share the public url to me, it fumbled a bit here and there like "reaching maximum iterations in tool loop" i added because i was on free cloud trial so i had to be quite consious about the requests but a simple "continue" fixed it. i also managed to add the skills, plugins and mcp just like in claude code also this the app that gpulse built its scrappy but im glad it managed to pull it off. https://preview.redd.it/syt2i5lb29ig1.png?width=1919&format=png&auto=webp&s=357f765d1c01b8d0529bf7526fe911f859ca373a https://preview.redd.it/n3apcvt549ig1.png?width=1484&format=png&auto=webp&s=8e733ecfedf234e935bc2884342818929bc91b29 rn it cant install skills directly from marketplaces so i used **symlink** instead. what do you guys think about this also im using kimi k-2.5 for tasks and it works better than gemini(my first preference) imo. this is link to the app it built: [https://hello-button-app.vercel.app/](https://hello-button-app.vercel.app/)
2026-02-08T11:00:13
https://www.reddit.com/r/LocalLLaMA/comments/1qz64oc/i_was_trying_to_build_my_own_version_of_claude/
Even_Ganache6148
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz64oc
false
null
t3_1qz64oc
/r/LocalLLaMA/comments/1qz64oc/i_was_trying_to_build_my_own_version_of_claude/
false
false
https://b.thumbs.redditm…LWUc4toCJGgw.jpg
1
null
Qwen3 Coder Next as first "usable" coding model < 60 GB for me
349
I've tried lots of "small" models < 60 GB in the past. GLM 4.5 Air, GLM 4.7 Flash, GPT OSS 20B and 120B, Magistral, Devstral, Apriel Thinker, previous Qwen coders, Seed OSS, QwQ, DeepCoder, DeepSeekCoder, etc. So what's different with Qwen3 Coder Next in OpenCode or in Roo Code with VSCodium? * **Speed**: The reasoning models would often yet not always produce rather good results. However, now and then they'd enter reasoning loops despite correct sampling settings, leading to no results at all in a large over-night run. Aside from that the sometimes extensive reasoning takes quite some time for the multiple steps that OpenCode or Roo would induce, slowing down interactive work *a lot*. Q3CN on the other hand is an instruct MoE model, doesn't have internal thinking loops and is relatively quick at generating tokens. * **Quality**: Other models occasionally botched the tool calls of the harness. This one seems to work reliably. Also I finally have the impression that this can handle a moderately complex codebase with a custom client & server, different programming languages, protobuf, and some quirks. It provided good answers to extreme multi-hop questions and made reliable full-stack changes. Well, almost. On Roo Code it was sometimes a bit lazy and needed a reminder to really go deep to achieve correct results. Other models often got lost. * **Context size**: Coding on larger projects needs context. Most models with standard attention eat all your VRAM for breakfast. With Q3CN having 100k+ context is easy. A few other models also supported that already, yet there were drawbacks in the first two mentioned points. I run the model this way: `set GGML_CUDA_GRAPH_OPT=1` `llama-server -m Qwen3-Coder-Next-UD-Q4_K_XL.gguf -ngl 99 -fa on -c 120000 --n-cpu-moe 29 --temp 0 --cache-ram 0` This works well with 24 GB VRAM and 64 GB system RAM when there's (almost) nothing else on the GPU. Yields about 180 TPS prompt processing and 30 TPS generation speed for me. * `temp 0`? Yes, works well for instruct for me, no higher-temp "creativity" needed. Prevents the *very occasional* issue that it outputs an unlikely (and incorrect) token when coding. * `cache-ram 0`? The cache was supposed to be fast (30 ms), but I saw 3 second query/update times after each request. So I didn't investigate further and disabled it, as it's only one long conversation history in a single slot anyway. * `GGML_CUDA_GRAPH_OPT`? Experimental option to get more TPS. Usually works, yet breaks processing with some models. **OpenCode vs. Roo Code**: Both solved things with the model, yet with OpenCode I've seen slightly more correct answers and solutions. But: Roo asks *by default* about every single thing, even harmless things like running a syntax check via command line. This can be configured with an easy permission list to not stop the automated flow that often. OpenCode on the other hand just permits everything by default in code mode. One time it encountered an issue, uninstalled and reinstalled packages in an attempt of solving it, removed files and drove itself into a corner by breaking the dev environment. Too autonomous in trying to "get things done", which doesn't work well on bleeding edge stuff that's not in the training set. Permissions can of course also be configured, but the default is "YOLO". Aside from that: Despite running with only a locally hosted model, and having disabled update checks and news downloads, OpenCode (Desktop version) tries to contact a whole lot of IPs on start-up.
2026-02-08T10:43:59
https://www.reddit.com/r/LocalLLaMA/comments/1qz5uww/qwen3_coder_next_as_first_usable_coding_model_60/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz5uww
false
null
t3_1qz5uww
/r/LocalLLaMA/comments/1qz5uww/qwen3_coder_next_as_first_usable_coding_model_60/
false
false
self
349
null
An idea about small local LLMs and tool calling
0
Hey everyone! This is more of a discussion than a strong claim - I’m genuinely curious what you think. I was inspired by OpenClaw and also by this post: https://www.reddit.com/r/LocalLLaMA/s/U7AxZ8sydW We talk a lot about local models in the context of “waiting” - waiting for smaller models to get smarter, or for some new architecture that lets us run big LLMs on weak hardware. But meanwhile, reality is pretty boring: RAM is expensive, GPUs keep getting pricier (the RTX 5070 I bought in December now costs about $200 more in my country), and most people just don’t have serious local compute. So I started thinking from a slightly different angle. What if small local models don’t need to be very smart, but they still need to understand us well? I’m not talking about turning them into dumb command routers. Some level of general language understanding and conversation would still be there. The main focus would be this: training a small local model (say 0.6B-1.5B) to reliably understand user intent and choose the correct tool from a known, limited set. So the model can: \- talk to you normally in a chat (Telegram, WhatsApp, etc.) \- understand context and phrasing differences \- but when an action is required, it almost always picks the right tool and the right parameters The training data would strongly emphasize: natural language → correct tool usage, without trying to make the model “think harder” than it realistically can. In practice, this feels closer to aligning an agent with its environment than to maximizing raw intelligence. So I’m curious: Does this kind of intent-focused training make sense for today’s small models? Are people already doing this in local setups? Where do you see the main failure modes of this approach? Would love to hear your thoughts.
2026-02-08T10:36:51
https://www.reddit.com/r/LocalLLaMA/comments/1qz5qm9/an_idea_about_small_local_llms_and_tool_calling/
andrew45lt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz5qm9
false
null
t3_1qz5qm9
/r/LocalLLaMA/comments/1qz5qm9/an_idea_about_small_local_llms_and_tool_calling/
false
false
self
0
null
I have no idea what all these quants are.
20
I'm relatively new to running models locally. I'm really struggling to understand the various different LLM quantizations,both GGUF and....normal I guess???? Like what is int4 or int8? what are the differences between quants like Q4\_K\_M and Q5\_K\_M? or iQ4\_K\_M?? and then what is F16 and BF16 or FP16 or FP8??? I've looked at some explanations but all of them are really difficult to understand. a little bit of help would be really appreciated. :)
2026-02-08T10:25:14
https://www.reddit.com/r/LocalLLaMA/comments/1qz5jp2/i_have_no_idea_what_all_these_quants_are/
Fit-Spring776
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz5jp2
false
null
t3_1qz5jp2
/r/LocalLLaMA/comments/1qz5jp2/i_have_no_idea_what_all_these_quants_are/
false
false
self
20
null
Open-source tool to track LLM API quota usage across Anthropic, Synthetic, and Z.ai
0
For those of you who use cloud LLM APIs alongside local models - tracking quota usage across providers is a mess. Each provider shows you a current number and nothing else. No history, no projections, no cross-provider comparison. I built onWatch to fix this. It is a single Go binary that polls your Anthropic, Synthetic, and Z.ai quotas every 60 seconds, stores snapshots in local SQLite, and serves a dashboard with usage trends, reset countdowns, and rate projections. Useful if you split work between local and cloud models and want to know exactly how much cloud quota you have left before switching to a local model. Around 28 MB RAM, zero telemetry, all data stays on your machine. GPL-3.0.
2026-02-08T10:07:17
https://i.redd.it/g1fwzwh0w8ig1.jpeg
prakersh
i.redd.it
1970-01-01T00:00:00
0
{}
1qz591d
false
null
t3_1qz591d
/r/LocalLLaMA/comments/1qz591d/opensource_tool_to_track_llm_api_quota_usage/
false
false
https://b.thumbs.redditm…NvL1iuoeks_E.jpg
0
{'enabled': True, 'images': [{'id': '_1IiE1NJutFFLameqvnLNegalUzgf1ryXEQGJlO-ZUw', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/g1fwzwh0w8ig1.jpeg?width=108&crop=smart&auto=webp&s=7a3c1985f80a85e53655d472ff43d666bed03cb3', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/g1fwzwh0w8ig1.jpeg?width=216&crop=smart&auto=webp&s=50720568c46031e196a46627f049cc695633e1a7', 'width': 216}, {'height': 254, 'url': 'https://preview.redd.it/g1fwzwh0w8ig1.jpeg?width=320&crop=smart&auto=webp&s=06b82c4347e044c31350766bd586e154ef0c4ba3', 'width': 320}, {'height': 508, 'url': 'https://preview.redd.it/g1fwzwh0w8ig1.jpeg?width=640&crop=smart&auto=webp&s=6af341af00c02b046a4680ce789c6fbde61cc063', 'width': 640}, {'height': 762, 'url': 'https://preview.redd.it/g1fwzwh0w8ig1.jpeg?width=960&crop=smart&auto=webp&s=d840a773fb8fb4126c5888dfd9bdaa8a99868862', 'width': 960}, {'height': 857, 'url': 'https://preview.redd.it/g1fwzwh0w8ig1.jpeg?width=1080&crop=smart&auto=webp&s=8e8541208f19ac61cf798a144cb6b6a88266dde2', 'width': 1080}], 'source': {'height': 1271, 'url': 'https://preview.redd.it/g1fwzwh0w8ig1.jpeg?auto=webp&s=a5523a992dfa5fbe46b16b469e59db2781d0d607', 'width': 1600}, 'variants': {}}]}
Jetson Orin Nano Super as a dedicated AI agent box - 67 TOPS at 20W, running OpenClaw 24/7
0
Sharing my setup for anyone considering low-power always-on AI hardware. I've been running a Jetson Orin Nano Super as a dedicated OpenClaw agent box for a few weeks now. The use case isn't local LLM inference (it uses cloud APIs for that) but rather as a dedicated always-on AI gateway that handles: - Telegram/WhatsApp/Discord messaging - Browser automation (web scraping, form filling, marketplace management) - Proactive monitoring and alerts - Cron jobs and scheduled tasks - Tool execution (git, SSH, file management) The key advantage over using a Mac Mini or desktop PC: it draws \~20W total. That's less than a light bulb. Running 24/7/365, that's about $20/year in electricity vs $100+ for a mini PC. Specs: - Orin Nano Super: 67 TOPS (up from 40 with JetPack 6.2 + jetson\_clocks) - 512GB NVMe SSD - 8GB unified LPDDR5 - Carbon fiber case - ARM64 (aarch64) running Ubuntu For those wondering - no, 8GB isn't enough for meaningful local inference. But if you're using OpenClaw with Claude/GPT APIs, the Jetson handles the gateway, tool execution, and browser automation beautifully. The GPU acceleration helps with vision tasks and the TOPS are useful for on-device processing. More details on the build: [openclawhardware.dev](http://openclawhardware.dev) Anyone else running OpenClaw or similar agent frameworks on Jetson hardware?
2026-02-08T09:48:24
https://www.reddit.com/r/LocalLLaMA/comments/1qz4xow/jetson_orin_nano_super_as_a_dedicated_ai_agent/
superactro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz4xow
false
null
t3_1qz4xow
/r/LocalLLaMA/comments/1qz4xow/jetson_orin_nano_super_as_a_dedicated_ai_agent/
false
false
self
0
null
Deterministic Thinking for Probabilistic Minds
0
[CIM - causal intelligence workflow, for externalized reasoning as a tool](https://preview.redd.it/nnxc9h1aq8ig1.jpg?width=1544&format=pjpg&auto=webp&s=31eea9233446ae7bf257d236319ead96a395d67d) \*\*Working on a passion, which i call "intelligence module" composed of decoupled retrievals, and graph build on the fly, composed only of vectors and code. I am building the Reasoning-as-a-Service.\*\* \*CIM - Causal Intelligence Module The causal workflow handles a user input , analyzes the query, and recognizes which is the most likely steering pattern for the type of causal reasoning style, the aggregator snipes down the highest in confidence pattern of query. That done passes the query to 5 specific designed of causal origin namespaces filled with high signal datasets synthetized through and cross frontier AI models. The retrieval consists into bringing into surface the common sense and biases of causal perception, the causal cognitive procedures, the ability at the prompt level injection for the AI model receiving final output ( causal thinking styles ), causal math methods, and how the causality propagates ( all datasets graph augmented with necessary nodes and adges). All of this goes through a graph merger and multiple Context Graph Builders, which maps temporal topology, causal DAGs, entities and possibly connecting cross domain data from previous rags, and concluding to novel hypotheses. The final row, reasons on all connections, validates against anti patterns, it executes the math to prove information are stable, it conducts propagation math, does complete 50 simulations through monte carlo and zooms in the graph in order to dont lose any important sub graph , needed for reasoning incentives. to be continued with complete Audit Trail ( AI compliance) , Reasoning trace mermaid visualization, Execution Logger, and Final LLM Prompt. sincerely i am really excited about this development of mine, almost at 97%, i am looking to deploy it as an API service, and i will be looking for testers soon, so please come along. frank :)
2026-02-08T09:36:47
https://www.reddit.com/r/LocalLLaMA/comments/1qz4r1c/deterministic_thinking_for_probabilistic_minds/
frank_brsrk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz4r1c
false
null
t3_1qz4r1c
/r/LocalLLaMA/comments/1qz4r1c/deterministic_thinking_for_probabilistic_minds/
false
false
https://preview.redd.it/…eebf984c57b50423
0
null
do they have anything other than opposing open source and saying ai will kidnap yo grandma as their marketing??
48
ERROR: type should be string, got "https://preview.redd.it/s69whjp5l8ig1.png?width=1425&format=png&auto=webp&s=7aab9b29df4f36f38f3935e996ee0925155b0bf4\n\n50% of Anthropic's all marketing:\n\n\\>pick 500 vibecoded ai slop open projects and write how open source is full of flaws\n\n\\>write articles how open source projects will kill you, ruin world peace and need regulation"
2026-02-08T09:08:24
https://www.reddit.com/r/LocalLLaMA/comments/1qz4a8n/do_they_have_anything_other_than_opposing_open/
Acceptable_Home_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz4a8n
false
null
t3_1qz4a8n
/r/LocalLLaMA/comments/1qz4a8n/do_they_have_anything_other_than_opposing_open/
false
false
https://b.thumbs.redditm…gnoh9jPVDG2c.jpg
48
null
Addressing a fundamental flaw in hybrid search by introducing a Log-Odds Conjunction framework in Bayesian BM25
7
[https://github.com/instructkr/bb25/pull/1](https://github.com/instructkr/bb25/pull/1) https://preview.redd.it/pk2eefjni8ig1.png?width=1476&format=png&auto=webp&s=706b1a35afd2a25b2b6182fc7db9fd106045d9bc To the Information Retrieval Community.. A significant update has been merged into the Bayesian BM25 (bb25) repository today! This update addresses a fundamental flaw in hybrid search known as Conjunction Shrinkage by introducing a Log-Odds Conjunction framework. In traditional probabilistic retrieval, calculating the probability that multiple signals are simultaneously satisfied typically relies on the Naive Product Rule. For instance, if a document is relevant based on keyword search with a probability of 0.7 and also relevant based on vector semantic search with a probability of 0.7, the standard approach multiplies these to yield 0.49. Intuitively, however, if two independent pieces of evidence both suggest a document is relevant, our confidence should increase beyond 0.7. The product rule causes the final score to decrease toward zero as more signals are added, violating the intuition that corroborating evidence should amplify confidence. The solution implemented in this PR resolves this by shifting the calculation from probability space to log-odds space. The mechanism operates in three stages: first, it computes the geometric mean to find the baseline tendency; second, it performs a Log-Odds Transformation to map the bounded probability space to the unbounded log-odds space; and third, it adds a bonus proportional to the logarithm of the number of signals. This works because probability space is bounded by 1.0, preventing simple addition. By transforming to log-odds space, we remove this ceiling. Instead of the score shrinking to 0.49, the logic applies an additive bonus for agreeing signals, resulting in amplification where the final score becomes roughly 0.83. This implementation is the proof that this structure is not merely a heuristic. The paper demonstrates that rigorous Bayesian inference over multiple signals produces a computational structure formally isomorphic to a feedforward neural network. This work proves that the Sigmoid activation function is a mathematical necessity that emerges when converting Bayesian evidence into probability, rather than an arbitrary design choice. Consequently, this implementation demonstrates that a neural network is the natural structure of correct probabilistic reasoning. The introduction of Log-Odds Conjunction has yielded measurable improvements on the SQuAD v2.0 benchmark compared to the standard Hybrid OR approach marking a +1.2% improvement. This confirms that properly modeling the agreement between text and vector signals yields better ranking performance than simple score summation or probabilistic multiplication. I would like to extend our gratitude to Jaepil for deriving these proofs and contributing the code to bb25.
2026-02-08T08:52:34
https://www.reddit.com/r/LocalLLaMA/comments/1qz40oh/addressing_a_fundamental_flaw_in_hybrid_search_by/
Ok_Rub1689
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz40oh
false
null
t3_1qz40oh
/r/LocalLLaMA/comments/1qz40oh/addressing_a_fundamental_flaw_in_hybrid_search_by/
false
false
https://preview.redd.it/…180193a7a3eb8037
7
null
Idea: decentralized LLM network with P2P knowledge sharing and crypto incentives
1
[removed]
2026-02-08T08:48:21
https://www.reddit.com/r/LocalLLaMA/comments/1qz3y7v/idea_decentralized_llm_network_with_p2p_knowledge/
Willing_Help_6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz3y7v
false
null
t3_1qz3y7v
/r/LocalLLaMA/comments/1qz3y7v/idea_decentralized_llm_network_with_p2p_knowledge/
false
false
self
1
null
Idea: decentralized LLM network with P2P knowledge sharing and crypto incentives
1
[removed]
2026-02-08T08:43:50
https://www.reddit.com/r/LocalLLaMA/comments/1qz3vkz/idea_decentralized_llm_network_with_p2p_knowledge/
Willing_Help_6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz3vkz
false
null
t3_1qz3vkz
/r/LocalLLaMA/comments/1qz3vkz/idea_decentralized_llm_network_with_p2p_knowledge/
false
false
self
1
{'enabled': False, 'images': [{'id': 'sw6sX0_10zJF3DR1V1Sswuxfkc8JAzPIQZ-mHL67ip4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sw6sX0_10zJF3DR1V1Sswuxfkc8JAzPIQZ-mHL67ip4.png?width=108&crop=smart&auto=webp&s=5f5e556de42cf8f77de6e32b86f7e1a9d1351c7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sw6sX0_10zJF3DR1V1Sswuxfkc8JAzPIQZ-mHL67ip4.png?width=216&crop=smart&auto=webp&s=4a76e681ca7f7655755bb0cc5f3943c0e532a100', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sw6sX0_10zJF3DR1V1Sswuxfkc8JAzPIQZ-mHL67ip4.png?width=320&crop=smart&auto=webp&s=127a67c1307ba99a18abbcd842cf9bfeb3556e5f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sw6sX0_10zJF3DR1V1Sswuxfkc8JAzPIQZ-mHL67ip4.png?width=640&crop=smart&auto=webp&s=a373c0ceeb20e03ecd424cbeec2daa190e0bf3d6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sw6sX0_10zJF3DR1V1Sswuxfkc8JAzPIQZ-mHL67ip4.png?width=960&crop=smart&auto=webp&s=e32af4f455b386ca47398b8ceb406d9171f61bbe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sw6sX0_10zJF3DR1V1Sswuxfkc8JAzPIQZ-mHL67ip4.png?width=1080&crop=smart&auto=webp&s=25756d20ecc31bbb1bf1f86a62677f91a0f9757e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sw6sX0_10zJF3DR1V1Sswuxfkc8JAzPIQZ-mHL67ip4.png?auto=webp&s=2a4f939e7c2b0441453204fded6c0f8a6185484d', 'width': 1200}, 'variants': {}}]}
Anti-Rec: Step 3.5 Flash is pretty bad (or I'm using it wrong?)
0
After reading the hype post https://old.reddit.com/r/LocalLLaMA/comments/1qtjhc8/step35flash_196ba11b_outperforms_glm47_and/ and seeing that llama.cpp GGUF support is now merged for koboldcpp and llama.cpp, I decided to try this model. And my results were so disappointing I felt like I had to make a post for it to counteract the hype that has been surrounding this model. What gives? And I mean that for real, *what gives*? Why do people like it? This model is performing at the same level as a 24B mistral from a year ago, or possibly worse. GLM Air 4.5 is better than this in almost all aspects, and that is smaller too. I would say that perhaps Qwen3-VL-8B might even outperform this at times which is embarrassing. I asked the model a bunch of general knowledge questions and it failed pretty badly at them, hallucinating things and getting facts wrong, stuff that Air 4.5 can definitely answer. General knowledge absolutely abysmal. What's this about "outperforming deepseek", surely that has **got** to be benchmaxxed? Am I the only one seeing this? (The model is NOT incoherent. It works. It can count number of R's in strawberry correctly, and many other fruits too. It can add two numbers correctly. It's just... really meh for the size).
2026-02-08T08:04:06
https://www.reddit.com/r/LocalLLaMA/comments/1qz38ad/antirec_step_35_flash_is_pretty_bad_or_im_using/
HadesThrowaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz38ad
false
null
t3_1qz38ad
/r/LocalLLaMA/comments/1qz38ad/antirec_step_35_flash_is_pretty_bad_or_im_using/
false
false
self
0
null
How do you fine tune a model with unsloth/others but with Q4 or lower + offloading to ram?
2
Hi, I tried to make it work, but failed. Maybe I'm doing something wrong or unsloth just doesn't support this??
2026-02-08T07:59:35
https://www.reddit.com/r/LocalLLaMA/comments/1qz35cb/how_do_you_fine_tune_a_model_with_unslothothers/
No_Farmer_495
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz35cb
false
null
t3_1qz35cb
/r/LocalLLaMA/comments/1qz35cb/how_do_you_fine_tune_a_model_with_unslothothers/
false
false
self
2
null
Qwen3-VL 2B LoRA finetuning
2
I want to finetune Qwen3-VL 2B model but stuck at deciding appropriate configuration of LoRA finetuning. I have limited gpu resources so cant do hyperparameter tuning. It would be a great help if anyone of you having experience with LoRA finetuning can give some suggestions. Thank You
2026-02-08T07:58:02
https://www.reddit.com/r/LocalLLaMA/comments/1qz34g9/qwen3vl_2b_lora_finetuning/
NailCertain7181
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz34g9
false
null
t3_1qz34g9
/r/LocalLLaMA/comments/1qz34g9/qwen3vl_2b_lora_finetuning/
false
false
self
2
null
I benchmarked 672 "Return JSON only" calls. Strict parsing failed 67% of the time. Here's why.
33
I’ve been building several LLM apps that rely on streaming JSON. The idea seemed quite simple: tell the model to "Return JSON only" and pipe it into my app. But I kept breaking my parsers. The models would give me perfect logic, but wrapped in markdown fences (`\`\`\`json`) or preceded by conversational filler like "Here is the data." Out of curiosity, I decided to stop guessing and actually measure the gap between "Model generated valid JSON" and "API returned parseable JSON." Sharing what I learned because the results were way more drastic than I expected. **1. The "Strict vs. Extractable" Gap is Massive** I tested 8 models (including 2026 releases like Kimi-k2.5, Mistral-small, and GPT-4o-mini) with plain prompts (no `response_format`). * **Strict Parse (**`json.loads(response)`**):** Only **33.3%** succeeded. * **Extractable JSON:** **99.5%** of responses contained valid JSON buried in the text. Basically, the models are smart enough to generate the data, but too "chatty" to be used as an API without a cleaning layer. **2. Mistral is a "Helpful Saboteur"** I found a distinct personality quirk with the Mistral-family models. In my raw lane, they scored **0%** on strict parsing. But they weren't hallucinating. They were just aggressively helpful. They wrapped *every single response* in markdown fences, even when the prompt explicitly forbade it. Once I stripped the fences, their accuracy jumped to 100%. **3. "Reasoning Models" leak their thoughts** This was the most interesting failure mode. I tested Moonshot Kimi-k2.5, and it sometimes failed because it "thought out loud" in the final response. Ironically, it would output text like *"The user wants JSON only, so I must not use markdown"*... and then that sentence itself would break the parser. As we move toward reasoning models, "thought leakage" is going to be a new headache for JSON reliability. **4. "Flash" doesn't mean "Timeout Proof"** I caught one outlier where `glm-4.7-flash` (usually fast) hung for **5.7 minutes**before returning. It’s a good reminder that even "fast" models need strict client-side timeouts, or one ghost request can hang your worker threads forever. **The Solution** Since I didn't want to use regex hacks in every project, I built a tiny middleware called **StreamFix**. It’s a proxy that strips markdown fences and "thinking" text on the fly, so the client only ever sees clean JSON. It bumped my success rate from 33% to 98% without changing the prompts. **Caveats!** * I tested with `temperature=0` to keep it scientific. * My "markdown fence" classifier is simple (it flags `\`\`\`` anywhere), so it might catch some edge cases where the model is quoting code. * I didn't use `response_format` because it's not supported strictly everywhere and I wanted to test the "plain prompt" baseline. **Questions for you:** * Are you guys mostly relying on `response_format` now, or do you still use regex cleaning? * Has anyone else noticed "reasoning leakage" breaking their structured outputs with newer models? **TL;DR:** Models are great at JSON logic (99% success) but terrible at JSON formatting (33% success). The failures are mostly markdown wrappers and conversational filler. Does anyone else face this? How do you deal with it?
2026-02-08T07:16:26
https://www.reddit.com/r/LocalLLaMA/comments/1qz2fra/i_benchmarked_672_return_json_only_calls_strict/
rozetyp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz2fra
false
null
t3_1qz2fra
/r/LocalLLaMA/comments/1qz2fra/i_benchmarked_672_return_json_only_calls_strict/
false
false
self
33
null
Train a custom LLM and host it?
1
Hello people, is there an easy way to train a pre-existing LLM with custom data and host it for other people to use? let's say i have a huge stash of legacy data from a local business, and i want to allow customers to interact with that knowledge-base. Is there an easy framework to do so? I am a product manager for digital products and i know the infra very well. What i cannot do is code stuff on my own. I learned it in school 15 years ago but it would take me months to bring my coding skills up to speed. I appreciate any feedback and hope you guys have a good sunday!
2026-02-08T07:08:13
https://www.reddit.com/r/LocalLLaMA/comments/1qz2aqn/train_a_custom_llm_and_host_it/
new-acc-who-dis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz2aqn
false
null
t3_1qz2aqn
/r/LocalLLaMA/comments/1qz2aqn/train_a_custom_llm_and_host_it/
false
false
self
1
null
PR opened for Qwen3.5!!
605
https://github.com/huggingface/transformers/pull/43830/ Looking at the code at `src/transformers/models/qwen3_5/modeling_qwen3_5.py`, it looks like Qwen3.5 series will have VLMs right off the bat!
2026-02-08T06:57:13
https://i.redd.it/r10pwm02y7ig1.png
Mysterious_Finish543
i.redd.it
1970-01-01T00:00:00
0
{}
1qz23pp
false
null
t3_1qz23pp
/r/LocalLLaMA/comments/1qz23pp/pr_opened_for_qwen35/
false
false
https://preview.redd.it/…96cc4c4e59d78d49
605
{'enabled': True, 'images': [{'id': 'r10pwm02y7ig1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/r10pwm02y7ig1.png?width=108&crop=smart&auto=webp&s=111b8b8c54d0122189d54a469ae670e6dc793fda', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/r10pwm02y7ig1.png?width=216&crop=smart&auto=webp&s=a3407f1dff7c1a9fe61a43bc45bcf8f3407242e8', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/r10pwm02y7ig1.png?width=320&crop=smart&auto=webp&s=7a6c6ee0e77331df0b575821c32cefad2c639af1', 'width': 320}, {'height': 391, 'url': 'https://preview.redd.it/r10pwm02y7ig1.png?width=640&crop=smart&auto=webp&s=fb19e2c9eac9c47e80b6a33b08c10d458c3fb6c0', 'width': 640}, {'height': 587, 'url': 'https://preview.redd.it/r10pwm02y7ig1.png?width=960&crop=smart&auto=webp&s=53fb31f3942f9c72691ecb95edb43885cdc67d7f', 'width': 960}, {'height': 660, 'url': 'https://preview.redd.it/r10pwm02y7ig1.png?width=1080&crop=smart&auto=webp&s=19339eda356cda5249d237bc419fb9fe64b2eb3b', 'width': 1080}], 'source': {'height': 1470, 'url': 'https://preview.redd.it/r10pwm02y7ig1.png?auto=webp&s=f19ae90eefdb70a02919ac58174696585c6be51f', 'width': 2404}, 'variants': {}}]}
Imagine chatting with all your research documents
0
so imagine this: you could chat with a super-smart AI that knows everything in all your documents and events. you ask a question, and it answers you based on everything it has seen, no flipping pages, no losing track of info, no forgetting past files. i got the idea after diving into some massive document collections (like the Epstein files) it’s exhausting to make sense of it all it made me think so many researchers, journalists, or anyone dealing with huge sets of documents could really use something like this the tool i imagined would let you: - upload tons of docs, pdfs, txt, whatever - automatically build a timeline of events - extract people, organizations, locations - link every point to the original source so you always know where it came from - keep a growing internal database of all past events and documents, so it remembers everything you’ve analyzed before basically it’s like having a *super-intelligent AI* that instantly gives context, summaries, and answers from all your documents and events without ever losing track curious if anyone else struggles with the same thing or would find this useful
2026-02-08T06:46:15
https://www.reddit.com/r/LocalLLaMA/comments/1qz1x3u/imagine_chatting_with_all_your_research_documents/
Crypto_Tn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz1x3u
false
null
t3_1qz1x3u
/r/LocalLLaMA/comments/1qz1x3u/imagine_chatting_with_all_your_research_documents/
false
false
self
0
null
Demo of "Recursive Session Forking" running on local Ollama (Dropstone v3.0.5). Claims to serialize reasoning chains into shareable files. Thoughts on this architecture?
1
I've been tracking the recent updates from Blankline (the lab behind Dropstone), and their v3.0.5 release dropped a feature that caught my eye: Session Forking. Most local coding agents (like standard Ollama integrations) suffer from context drift—if I fix a bug locally, that "reasoning path" is locked on my machine. In this demo (at **0:45**), they show the agent effectively serializing the latent reasoning history into a shareable state. The claim is that you can pass this state to another developer (or another local agent instance), and it picks up the logic without needing to re-ingest the entire codebase. Tech Stack implied in the demo: * Inference: Local Ollama / vLLM (Llama 3, DeepSeek Coder). * Engine: "D3" (Recursive reasoning loop). * Feature: Multiplayer Context Injection. I’m curious if anyone here has reverse-engineered how they are handling the context serialization for the "D3 Engine." Is this just efficient RAG caching, or are they actually passing the KV cache peer-to-peer? *(Note: I ripped the video file from their update channel to post directly here so you don't have to click out to YouTube. The tool itself is closed-source/beta but supports open-weights models.)*
2026-02-08T06:45:42
https://v.redd.it/3lfshuwav7ig1
Accomplished-Cut4024
/r/LocalLLaMA/comments/1qz1wr5/demo_of_recursive_session_forking_running_on/
1970-01-01T00:00:00
0
{}
1qz1wr5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3lfshuwav7ig1/DASHPlaylist.mpd?a=1773254752%2CNmVkMDhlNGIxYjg3NmY4NTlkZDVhNjcxZWFhZDNhMjQ5Y2UxNzY0N2QwYjY4ZmRhMDMwZTI1NDUyNDA0NTVmYQ%3D%3D&v=1&f=sd', 'duration': 102, 'fallback_url': 'https://v.redd.it/3lfshuwav7ig1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/3lfshuwav7ig1/HLSPlaylist.m3u8?a=1773254752%2CN2M2YzU1NTUyZDEwMDg0OWQ2YmUyNWFmNTA0NDZkNGE3MGQ5YjM0ZTdhOWNjODkyNWE0MTJmMWU5M2NiYWM0OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3lfshuwav7ig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qz1wr5
/r/LocalLLaMA/comments/1qz1wr5/demo_of_recursive_session_forking_running_on/
false
false
https://external-preview…56de73ea81b5caf0
1
{'enabled': False, 'images': [{'id': 'aGcwNGZteWF2N2lnMdn90cBmA_wwHmfJs0UPkiz_DXbl5qrKFA3cYoA9i6jV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aGcwNGZteWF2N2lnMdn90cBmA_wwHmfJs0UPkiz_DXbl5qrKFA3cYoA9i6jV.png?width=108&crop=smart&format=pjpg&auto=webp&s=806d44cbd2f84cd5d14a5a2cf9af8064ec20463b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aGcwNGZteWF2N2lnMdn90cBmA_wwHmfJs0UPkiz_DXbl5qrKFA3cYoA9i6jV.png?width=216&crop=smart&format=pjpg&auto=webp&s=ad624f779a4000e0afb40a6630aaeb2607a0b1ce', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aGcwNGZteWF2N2lnMdn90cBmA_wwHmfJs0UPkiz_DXbl5qrKFA3cYoA9i6jV.png?width=320&crop=smart&format=pjpg&auto=webp&s=31cbc398182e127d5ff8ae9772ea04ae0ace9d9d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aGcwNGZteWF2N2lnMdn90cBmA_wwHmfJs0UPkiz_DXbl5qrKFA3cYoA9i6jV.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b968562f8f00d69ccf0b84c7abb3541cda28dcf', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aGcwNGZteWF2N2lnMdn90cBmA_wwHmfJs0UPkiz_DXbl5qrKFA3cYoA9i6jV.png?width=960&crop=smart&format=pjpg&auto=webp&s=f939f25a708b15e0155cb933c06d912f785a1bdb', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aGcwNGZteWF2N2lnMdn90cBmA_wwHmfJs0UPkiz_DXbl5qrKFA3cYoA9i6jV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=feee9fbb11ee3d1055ff634805e1438bb5d08be7', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/aGcwNGZteWF2N2lnMdn90cBmA_wwHmfJs0UPkiz_DXbl5qrKFA3cYoA9i6jV.png?format=pjpg&auto=webp&s=6e955efbfd5795857b7a4533c120c432089b3b10', 'width': 2560}, 'variants': {}}]}
Structured Data: Schema vs LLMs (What Actually Matters in AI Search)
0
# Structured Data: Schema vs LLMs (What Actually Matters in AI Search) Structured data and large language models (LLMs) play very different roles in modern search. While schema markup helps traditional search engines understand pages, LLMs rely far more on **content clarity and structure** than on explicit markup. This guide explains the difference between **schema-based structured data** and **LLM-based content understanding**, and how they work together in AI-driven search. # TL;DR: Schema vs LLMs * **Schema helps crawlers classify content**, not understand meaning deeply. * **LLMs interpret language**, not markup. * Structured content (headings, lists, clear sections) matters more than JSON-LD for AI answers. * Schema still helps with **eligibility and visibility**, but not comprehension. * The future is **schema + clean content architecture**, not one or the other. # What Is Structured Data (Schema)? Structured data refers to **explicit markup** added to a webpage to help search engines understand what different elements represent. # Common Schema Types * Article * FAQ * Product * Review * HowTo * Organization **Key takeaway:** Schema tells search engines *what something is*, not *what it means* in context. # How Traditional Search Engines Use Schema In classic search systems, schema is heavily relied on for: * Generating rich results (stars, FAQs, product info) * Disambiguating page types * Enhancing crawl efficiency * Powering featured snippets and SERP features Schema works well because traditional search engines are **rule-based and deterministic**. # How LLMs Interpret Content (Without Schema) LLMs don’t rely on structured data in the same way. Instead, they: * Ingest raw page content * Break it into tokens * Analyze relationships between sentences and concepts * Use attention to identify what’s important # What LLMs Actually Look At * Heading hierarchy (H1 → H2 → H3) * Paragraph boundaries * Lists, tables, and FAQs * Repetition and reinforcement * Order of information **Most common mistake:** Assuming JSON-LD improves how LLMs understand content. # Schema vs LLMs: Core Differences |Aspect|Schema (Structured Data)|LLMs| |:-|:-|:-| |Purpose|Classification|Interpretation| |Input|Markup (JSON-LD, microdata)|Natural language| |Strength|Precision|Context & meaning| |Weakness|Rigid, limited|Retrieval still literal| |Primary use|Crawling & SERP features|AI answers & summaries| **In summary:** Schema is machine-readable; LLMs are language-readable. # Where Schema Still Matters in an AI-First World Schema is **not obsolete**. It still plays an important role at the retrieval and eligibility layer. # Schema Helps With: * Page type identification * Product and pricing clarity * FAQ eligibility * Trust and consistency signals * Classic search results that still feed AI systems **Key insight:** Schema influences *whether* content is considered — not *how well* it’s understood. # Where Schema Fails for LLM Understanding Schema cannot: * Explain nuance * Clarify intent * Resolve ambiguity * Rank importance within content * Replace poor writing or structure An LLM will always prefer: > # What Actually Replaces Schema for LLMs Not more markup — **better content architecture**. # LLM-Friendly Structure Includes: * Clear topic definition at the top * Logical heading hierarchy * Short, self-contained paragraphs * Explicit lists and steps * Semantic cues like: * “In summary” * “Key takeaway” * “Most common mistake” This is effectively **implicit structured data**, written in natural language. # Schema + LLMs: The Right Way to Think About It The real model is not *Schema vs LLMs*. It’s **Schema + Structured Content**. # Recommended Approach 1. Use schema for: * Products * FAQs * Reviews * Organizations 2. Use content structure for: * Definitions * Explanations * Comparisons * Step-by-step guidance 3. Optimize terminology for **retrieval prompts**, not just semantics. # FAQs: Schema and LLMs # Do LLMs read schema markup? Mostly no. They prioritize visible content over embedded metadata. # Should I stop using schema? No. Schema still helps with eligibility, trust, and traditional search features. # What matters more for AI Overviews? Clear headings, lists, and early definitions matter more than JSON-LD. # Is schema required for AI citations? No. Many AI-cited pages have zero schema but excellent structure. # Takeaway Schema helps machines **classify** content. LLMs help machines **understand** content. If you want to win in AI-driven search, stop treating schema as a shortcut and start treating **content structure as the real structured data**.
2026-02-08T06:41:40
https://www.reddit.com/r/LocalLLaMA/comments/1qz1u7c/structured_data_schema_vs_llms_what_actually/
DriftNoble
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz1u7c
false
null
t3_1qz1u7c
/r/LocalLLaMA/comments/1qz1u7c/structured_data_schema_vs_llms_what_actually/
false
false
self
0
null
What are some things you guys are using Local LLMs for?
116
So far im only using it for coding and search related stuff but anything else would be cool
2026-02-08T05:57:47
https://www.reddit.com/r/LocalLLaMA/comments/1qz11n9/what_are_some_things_you_guys_are_using_local/
Odd-Ordinary-5922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz11n9
false
null
t3_1qz11n9
/r/LocalLLaMA/comments/1qz11n9/what_are_some_things_you_guys_are_using_local/
false
false
self
116
null
I built a source-grounded LLM pipeline to stop hallucinated learning paths — looking for technical feedback
0
I’ve been experimenting with a problem that keeps coming up when LLMs are used for learning or research: They’re great at explaining things, but terrible at grounding answers in "actual usable sources". So I built a small system that: \- pulls from GitHub, Kaggle, arXiv, YouTube, StackOverflow \- enforces practice-first grounding (repos/datasets when available) \- explicitly flags gaps instead of hallucinating \- outputs execution-oriented roadmaps, not explanations This is NOT a SaaS launch. I’m testing whether this approach actually reduces wasted time for ML teams. What I’m looking for: \- feedback on the grounding strategy \- edge cases where this would still fail \- ideas to make source guarantees stronger If anyone here has tried something similar (or failed at it), I’d love to learn. Happy to share a short demo if useful. https://reddit.com/link/1qz0nrk/video/6pqjfxhaj7ig1/player
2026-02-08T05:36:25
https://www.reddit.com/r/LocalLLaMA/comments/1qz0nrk/i_built_a_sourcegrounded_llm_pipeline_to_stop/
Appropriate_West_879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz0nrk
false
null
t3_1qz0nrk
/r/LocalLLaMA/comments/1qz0nrk/i_built_a_sourcegrounded_llm_pipeline_to_stop/
false
false
self
0
null
I made an MNN of Jan-v3 4B
5
Use case: MNN Chat on Android or iOS If you're not familiar with it: MNN Chat is a really fast local LLM chat app--for example, I got 73.92 tokens per second prefill (28 tokens) and 16.3 tokens per second decode (465 tokens) with this model on my Galaxy S24+: https://preview.redd.it/u48fuijyi7ig1.png?width=1080&format=png&auto=webp&s=390a4c45466d839b6104ac823c7d28d17017c8bb [https://huggingface.co/DeProgrammer/Jan-v3-4B-base-instruct-MNN](https://huggingface.co/DeProgrammer/Jan-v3-4B-base-instruct-MNN) Previous thread about Jan v3 in general: [https://www.reddit.com/r/LocalLLaMA/comments/1qo3ri5/jan\_v3\_instruct\_a\_4b\_coding\_model\_with\_40\_aider/](https://www.reddit.com/r/LocalLLaMA/comments/1qo3ri5/jan_v3_instruct_a_4b_coding_model_with_40_aider/)
2026-02-08T05:32:41
https://www.reddit.com/r/LocalLLaMA/comments/1qz0l8z/i_made_an_mnn_of_janv3_4b/
DeProgrammer99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz0l8z
false
null
t3_1qz0l8z
/r/LocalLLaMA/comments/1qz0l8z/i_made_an_mnn_of_janv3_4b/
false
false
https://external-preview…689faa046da3ad28
5
null
small project got big? help?
2
Started by trying to get chatgpt and siri to work together, failed miserably learned a ton here's what came out of it...its a wrapper (sort of) but it makes all of the things llms do visible, and has some neuroscience stuff. AS DESIGN CONSTRAINTS! i don't think it's alive. it runs on my machine and i need to know what breaks on yours, if you'd scrap it thats cool let me know ill try to not care, if you'd use it or you wanna break it, love to see that too. honest feedback appreciated. i don't fix my spelling and stuff on purpose guys thats how i prove im not as smart as an ai. stack: * Python/FastAPI backend * SQLite (no cloud, no Docker) * Ollama (qwen2.5:7b default, swap any model) * nomic-embed-text for embeddings * React/TypeScript frontend * runs as macOS daemon or manual start (AI did make that list for me though) [https://github.com/allee-ai/AI\_OS](https://github.com/allee-ai/AI_OS) (AI\_OS is a place holder i haven't thought of a good name yet)
2026-02-08T05:21:22
https://www.reddit.com/r/LocalLLaMA/comments/1qz0did/small_project_got_big_help/
Automatic-Finger7723
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qz0did
false
null
t3_1qz0did
/r/LocalLLaMA/comments/1qz0did/small_project_got_big_help/
false
false
self
2
{'enabled': False, 'images': [{'id': 'sncDQmhvly84s9HpTJcD4BZcxVSYSgNgmNDhNFgXSNQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sncDQmhvly84s9HpTJcD4BZcxVSYSgNgmNDhNFgXSNQ.png?width=108&crop=smart&auto=webp&s=64ecebf75890b5247a9051a2df71d2585d4b9dc0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sncDQmhvly84s9HpTJcD4BZcxVSYSgNgmNDhNFgXSNQ.png?width=216&crop=smart&auto=webp&s=e64d38d0b6e675a5dbbbd8b620af9b7f9ff21ef6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sncDQmhvly84s9HpTJcD4BZcxVSYSgNgmNDhNFgXSNQ.png?width=320&crop=smart&auto=webp&s=c6ecd69071e9efc7a71d4a61747baaa694d366e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sncDQmhvly84s9HpTJcD4BZcxVSYSgNgmNDhNFgXSNQ.png?width=640&crop=smart&auto=webp&s=91fdc775d9a0562d22ab6de2f255dffff604e456', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sncDQmhvly84s9HpTJcD4BZcxVSYSgNgmNDhNFgXSNQ.png?width=960&crop=smart&auto=webp&s=b749209c06f5e42cc08c2eb189378b4513737daf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sncDQmhvly84s9HpTJcD4BZcxVSYSgNgmNDhNFgXSNQ.png?width=1080&crop=smart&auto=webp&s=74634686d3687fabae29ba32a7780ddbe808f06b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sncDQmhvly84s9HpTJcD4BZcxVSYSgNgmNDhNFgXSNQ.png?auto=webp&s=02ca4c510b6755419b33df5a5ebe62d5a2652a96', 'width': 1200}, 'variants': {}}]}
Made Claude Code Agent Teams model-agnostic with a translation proxy. Use any model as a teammate.
0
Claude Code Agent Teams is arguably the best multi-agent coding system right now. 15+ tools, file access, bash, git, task coordination, messaging. But every agent has to be Claude. I built a proxy that changes that. It intercepts the teammate's Anthropic API calls and translates them to OpenAI Chat Completions format. The teammate is still a full Claude Code instance with every tool. It just talks to a different brain. Currently supports: \- OpenAI API (GPT-4o, GPT-4o-mini, etc.) \- ChatGPT Plus subscription (GPT-5.3-codex at zero extra cost) Ollama support is next on the roadmap. The OpenAI-compatible API makes it mostly a config change, but I want to test it properly with tool-calling models before shipping it. The interesting part for this community: once Ollama support lands, you could run a Claude Code lead agent that spawns teammates powered entirely by local models. Full agent capabilities, zero cloud dependency for the workers. The proxy is about 1,600 lines of TypeScript with zero runtime dependencies. It handles SSE stream translation, message history mapping, tool definition conversion, and model name spoofing (Claude Code validates model names internally). GitHub: [https://github.com/Pickle-Pixel/HydraTeams](https://github.com/Pickle-Pixel/HydraTeams) If anyone wants to help test with Ollama models that support tool calling (Qwen 2.5 Coder, Llama 3.3, etc.), I'd appreciate it. The translation layer is there, just needs the provider routing.
2026-02-08T04:49:30
https://www.reddit.com/r/LocalLLaMA/comments/1qyzrfs/made_claude_code_agent_teams_modelagnostic_with_a/
Thick_Professional14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qyzrfs
false
null
t3_1qyzrfs
/r/LocalLLaMA/comments/1qyzrfs/made_claude_code_agent_teams_modelagnostic_with_a/
false
false
self
0
{'enabled': False, 'images': [{'id': 'W8tgiG7ce7rRMDmUYAfeGZn0prkZB8TCU65mHTW6-n4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W8tgiG7ce7rRMDmUYAfeGZn0prkZB8TCU65mHTW6-n4.png?width=108&crop=smart&auto=webp&s=53acbd1bfe95746b5159365de901d09d1a90030c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W8tgiG7ce7rRMDmUYAfeGZn0prkZB8TCU65mHTW6-n4.png?width=216&crop=smart&auto=webp&s=f1bda7385c4098fb4f35d88ef7120e74ed627900', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W8tgiG7ce7rRMDmUYAfeGZn0prkZB8TCU65mHTW6-n4.png?width=320&crop=smart&auto=webp&s=45e62d7159459525f1e496d0101ec961060d3146', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W8tgiG7ce7rRMDmUYAfeGZn0prkZB8TCU65mHTW6-n4.png?width=640&crop=smart&auto=webp&s=c6a723ce3d2e09bb3fae1d81d574d6ad4028c6b5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W8tgiG7ce7rRMDmUYAfeGZn0prkZB8TCU65mHTW6-n4.png?width=960&crop=smart&auto=webp&s=466b4c263928a34d68df259571290f588ca97464', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W8tgiG7ce7rRMDmUYAfeGZn0prkZB8TCU65mHTW6-n4.png?width=1080&crop=smart&auto=webp&s=b89187cde4bd35a03c770088c55cc2123f49ca96', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W8tgiG7ce7rRMDmUYAfeGZn0prkZB8TCU65mHTW6-n4.png?auto=webp&s=3997e57a61012c346b14cdc800870b0d7f80950e', 'width': 1200}, 'variants': {}}]}
Is this model working fine at Q4km? How does it compare to the original?
3
Is there a benchmark?
2026-02-08T04:48:45
https://huggingface.co/lovedheart/Qwen3-Coder-Next-REAP-48B-A3B-GGUF
Significant_Fig_7581
huggingface.co
1970-01-01T00:00:00
0
{}
1qyzqwz
false
null
t3_1qyzqwz
/r/LocalLLaMA/comments/1qyzqwz/is_this_model_working_fine_at_q4km_how_does_it/
false
false
https://external-preview…a12786e6262f26b7
3
{'enabled': False, 'images': [{'id': 'j98XKqoJ3UOGeW66Etg0lVtFqPsaabyeyZuH8PQVb-0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/j98XKqoJ3UOGeW66Etg0lVtFqPsaabyeyZuH8PQVb-0.png?width=108&crop=smart&auto=webp&s=3ea7f72e85d02863021f7194615de2b3ea8ba5fd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/j98XKqoJ3UOGeW66Etg0lVtFqPsaabyeyZuH8PQVb-0.png?width=216&crop=smart&auto=webp&s=c7e3b7232e2e3d0b168ba41e79c53720f03a1410', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/j98XKqoJ3UOGeW66Etg0lVtFqPsaabyeyZuH8PQVb-0.png?width=320&crop=smart&auto=webp&s=f7e14159bf911ee525e006714d09c11a89a31824', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/j98XKqoJ3UOGeW66Etg0lVtFqPsaabyeyZuH8PQVb-0.png?width=640&crop=smart&auto=webp&s=234ec5f7ffcda5d2272c5b48c2652755e36ad2b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/j98XKqoJ3UOGeW66Etg0lVtFqPsaabyeyZuH8PQVb-0.png?width=960&crop=smart&auto=webp&s=58dd8c3a2a8f9e131a899f93e7379a6412a39e7f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/j98XKqoJ3UOGeW66Etg0lVtFqPsaabyeyZuH8PQVb-0.png?width=1080&crop=smart&auto=webp&s=fcb77ea2a094bf90745928909cac8a1c34f7a676', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/j98XKqoJ3UOGeW66Etg0lVtFqPsaabyeyZuH8PQVb-0.png?auto=webp&s=f290abeddf551cc7d90ff2989ad3f079985ee67c', 'width': 1200}, 'variants': {}}]}
LLM forgets it can use MCP tools
1
[removed]
2026-02-08T04:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1qyzb3h/llm_forgets_it_can_use_mcp_tools/
moe_34567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qyzb3h
false
null
t3_1qyzb3h
/r/LocalLLaMA/comments/1qyzb3h/llm_forgets_it_can_use_mcp_tools/
false
false
self
1
null
LLM forgets it can use MCP tools
1
[removed]
2026-02-08T04:05:17
https://www.reddit.com/r/LocalLLaMA/comments/1qyyw3k/llm_forgets_it_can_use_mcp_tools/
moe_34567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qyyw3k
false
null
t3_1qyyw3k
/r/LocalLLaMA/comments/1qyyw3k/llm_forgets_it_can_use_mcp_tools/
false
false
self
1
null
Llama.cpp's "--fit" can give major speedups over "--ot" for Qwen3-Coder-Next (2x3090 - graphs/chart included)
97
Qwen3-Coder-Next (unsloth's UD\_Q4\_K\_XL) on dual RTX 3090 with llama.cpp b7941. More info in comments.
2026-02-08T03:54:02
https://www.reddit.com/gallery/1qyynyw
tmflynnt
reddit.com
1970-01-01T00:00:00
0
{}
1qyynyw
false
null
t3_1qyynyw
/r/LocalLLaMA/comments/1qyynyw/llamacpps_fit_can_give_major_speedups_over_ot_for/
false
false
https://b.thumbs.redditm…vi5wreJX85kg.jpg
97
null
Let's talk local inference on phones! Right now I'm running both Qwen3-4B and Gemma-3N-E2B with great results.
1
[removed]
2026-02-08T03:51:21
https://www.reddit.com/r/LocalLLaMA/comments/1qyym0a/lets_talk_local_inference_on_phones_right_now_im/
AlphaSyntauri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qyym0a
false
null
t3_1qyym0a
/r/LocalLLaMA/comments/1qyym0a/lets_talk_local_inference_on_phones_right_now_im/
false
false
self
1
null
RTX 3060 | core I9 9900kf | 64GB DDR4 | 1TB M.2 2280 - Qwen3-Coder-30B
1
[removed]
2026-02-08T03:27:15
https://www.reddit.com/r/LocalLLaMA/comments/1qyy4d4/rtx_3060_core_i9_9900kf_64gb_ddr4_1tb_m2_2280/
stecha66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qyy4d4
false
null
t3_1qyy4d4
/r/LocalLLaMA/comments/1qyy4d4/rtx_3060_core_i9_9900kf_64gb_ddr4_1tb_m2_2280/
false
false
self
1
null
When my friend asked me, "What is the use of Agent Skill?" I wrote an article
0
# What Is Agent Skill Really For? Exploring New Paradigms in Agent Development from a Hacker News Hot Post Over the past year, terms like "Agent," "Skill," "MCP," and "tool calling" have become increasingly common. Yet, when it comes to applying them in daily development or business work, many still have a question mark in their minds: **What problems do Agent Skills actually solve? Are they worth deliberately using?** This article starts from the discussion in that lengthy "Agent Skills" post on Hacker News, combines practices with current mainstream tools (Claude Code, Cursor, Windsurf, etc.), and systematically discusses the role, boundaries of Agent Skills, and how to use them effectively in your own projects. --- ## 1. First, Clarify: What is an Agent Skill? If I had to summarize in one sentence: > **An Agent Skill is essentially: a reusable "operating manual + accompanying scripts/resources" for the AI, plus a set of conventions that allow the Agent to automatically discover and load these manuals on demand.** ### A Typical Skill Structure: * **Unified Directory:** `.agents/skills/`, `.claude/skills/`, `.opencode/skills/`, etc.; * **Internal Composition:** * **Frontmatter:** Metadata such as description, trigger conditions (when to use), tags, etc.; * **Detailed Instructions:** Step-by-step guides, precautions; * **Attached Resources:** Possibly includes scripts, data files, configurations, etc. **Differences from Ordinary Documentation:** * **More Agent-Oriented Writing:** Focuses on clearly stating "in what scenario, how should it be used," rather than being a stream of consciousness for human readers; * **Unified Specification:** Facilitates automatic discovery, indexing, and on-demand loading by various Agent tools. --- ## 2. Why Isn't a Single AGENTS.md File Enough? A representative viewpoint in the HN discussion was: Since a Skill is just a manual, why not write an `AGENTS.md` file and have the Agent read it every time? The core reasons are: ### 1. Context Budget is Limited * **Information Dilution:** The more information crammed in, the easier it is for crucial details to get diluted, making the model more prone to going off track (or even hallucinating). * **Progressive Exposure:** The Skills mechanism first gives the model a brief "table of contents," then only opens the relevant chapter when needed. This is more efficient and saves tokens. ### 2. Composable, Distributable, Reusable * **Cross-Project Reuse:** Independent Skills can be versioned, published, and used across multiple repositories like libraries. * **Automatic Loading:** Agent tools can automatically discover skills, rather than requiring manual prompt edits for each project. --- ## 3. The Three Types of Problems Skills Truly Solve ### 1. Turning "Tacit Knowledge" into Reusable Processes Documenting the conventions, pitfalls, and best practices from senior colleagues' minds into "Agent-oriented SOPs." When a new task arrives, simply call it via `/skill-xxx`, and experience is directly digitized and preserved. ### 2. Controlling Agent Style, Preferences, and Constraints Split by theme (e.g., code style, security compliance, brand tone), enabling different selections for different projects. Some skills can even achieve **automatic triggering**, such as automatically loading corresponding specifications when reading/writing specific files. ### 3. Turning a "General Model" into a "Domain Expert" Skill is the crucial glue layer that combines "the large model + your system + your experience" into a truly actionable Agent. It can bridge the gap in the pre-trained model's knowledge regarding private APIs or specific business domain details. --- ## 4. Limitations and Misconceptions of Skills: It's Not Magic * **Misconception 1: If you write it, the Agent will automatically do it.** In reality, the model might not call it at all. **Countermeasure:** Write precise trigger conditions; explicitly call important Skills. * **Misconception 2: Format over substance.** What truly matters is a clear and well-structured manual, not obsessing over directory structure. * **Misconception 3: Everything can be a Skill.** "If you wouldn't specifically write a function for this task, it probably doesn't deserve to be a Skill either." --- ## 5. How to Use Agent Skills Effectively in Your Own Projects? 1. **Select 3–5 "High-Frequency Processes":** Prioritize tasks you've repeatedly taught the Agent to do. Write clear checklists and acceptance criteria. 2. **Treat Them as "Iterative, Semi-Deterministic Functions":** Whenever you find poor performance, directly ask the Agent to help you modify the corresponding Skill file. 3. **Utilize Skill Directory Sites as "External Support":** Directly reuse skills that others have refined. Recommended skill directory site: [Agentskills.help](https://agentskills.help/). Here, you can see real-time trends in various Agent Skills, including: * **Popular Skills:** UI checking, browser automation, SEO audits, etc. * **Quick Leverage:** Supports keyword search, allowing you to directly "add plugins" to your Agent, which is far more efficient than designing from scratch. --- ## 6. Conclusion: Skill is a "Correct Intermediate Posture" A more pragmatic view is: given the current limitations of model capabilities, clear, modular, and discoverable Skills are highly practical. Even if context windows become nearly limitless in the future, the **structured experience** written today can be fully migrated; it won't be wasted. If you're already using Claude Code, Cursor, or Windsurf, why not start by picking 2–3 relevant skills from [Agentskills.help](https://agentskills.help/) to install and run, and experience the qualitative leap in Agent productivity.
2026-02-08T03:04:58
https://www.reddit.com/r/LocalLLaMA/comments/1qyxnfl/when_my_friend_asked_me_what_is_the_use_of_agent/
Intelligent_Front701
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qyxnfl
false
null
t3_1qyxnfl
/r/LocalLLaMA/comments/1qyxnfl/when_my_friend_asked_me_what_is_the_use_of_agent/
false
false
self
0
{'enabled': False, 'images': [{'id': 'yMpWovi1IB8425I7UifLL8y0gdSEz8sOLM4eHHjTruM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yMpWovi1IB8425I7UifLL8y0gdSEz8sOLM4eHHjTruM.jpeg?width=108&crop=smart&auto=webp&s=e7df15107cb475b3eabb36e2dc29834c7c2333b3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/yMpWovi1IB8425I7UifLL8y0gdSEz8sOLM4eHHjTruM.jpeg?width=216&crop=smart&auto=webp&s=3cea1447f64a4cc7c596b427c58d096e4f40a736', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/yMpWovi1IB8425I7UifLL8y0gdSEz8sOLM4eHHjTruM.jpeg?width=320&crop=smart&auto=webp&s=f9b2d9958209a93c6b017b5b1aeceef67fb2c5e1', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/yMpWovi1IB8425I7UifLL8y0gdSEz8sOLM4eHHjTruM.jpeg?width=640&crop=smart&auto=webp&s=4907ffe3e5fa188683cbc6c8cf475fb1aa5a244c', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/yMpWovi1IB8425I7UifLL8y0gdSEz8sOLM4eHHjTruM.jpeg?width=960&crop=smart&auto=webp&s=f0a9faffb6d4af8ce7626eadff95bbcae7f279ba', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/yMpWovi1IB8425I7UifLL8y0gdSEz8sOLM4eHHjTruM.jpeg?auto=webp&s=4da3756c621942a62f2f02a787f0fe75a9cfd2c1', 'width': 1024}, 'variants': {}}]}
Another use for my local llm
5
I was helping a friend of mine with an article about AI and software development. As part of it GPT generated a Chrome extension for us, that grabs a content of a site you currently on, and sends it to my local lmstudio with a prompt. Lmstudio returns back list of facts, claims and opinions, along with evidence for each and displays it on the extension in english, regardless of the original site language. Its actually pretty cool, generation took about an hour of iterative process.
2026-02-08T02:36:10
https://www.reddit.com/r/LocalLLaMA/comments/1qyx1b2/another_use_for_my_local_llm/
regjoe13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qyx1b2
false
null
t3_1qyx1b2
/r/LocalLLaMA/comments/1qyx1b2/another_use_for_my_local_llm/
false
false
self
5
null
Epyc rome 7B12 or milan 7B13
3
7B12 = $400 7B13 = $700 Does Milan justify the extra 300 bucks ? (considering CPU only LLM)
2026-02-08T02:26:50
https://www.reddit.com/r/LocalLLaMA/comments/1qywu17/epyc_rome_7b12_or_milan_7b13/
peplegal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qywu17
false
null
t3_1qywu17
/r/LocalLLaMA/comments/1qywu17/epyc_rome_7b12_or_milan_7b13/
false
false
self
3
null
Self-hosted LLM sometimes answers instead of calling MCP tool
2
I’m building a local voice assistant using a self-hosted LLM (llama.cpp via llama-swap). Tools are exposed via MCP. **Problem:** On the first few runs it uses the MCP tools. After a few questions it tells me it can't get the answer because it doesn't know. I am storing the chat history in a file and feeding it to the LLM on every query. The LLM I'm using is **Qwen3-4B-Instruct-2507-GGUF** btw: * Tools are correctly registered and visible to the model * The same prompt is used both times * No errors from MCP or the tool server * Setting `tool_choice="required"` forces tool usage all the time, but that’s not what I want * I am telling the LLM to use tools if it can in the system prompt **Question:** Is this expected behavior with instruction-tuned models (e.g. LLaMA / LFM / Qwen), or is there a recommended pattern to make tool usage *reliable but not forced*? Why do you think it "forgets" that it can use tools? Are there any solutions? * Is this a known issue with llama.cpp / OpenAI-compatible tool calling? * Does using something like FastMCP improve tool-call consistency? * Are people using system-prompt strategies or routing layers instead? Any guidance from people running local agents with tools would help.
2026-02-08T02:23:21
https://www.reddit.com/r/LocalLLaMA/comments/1qywr64/selfhosted_llm_sometimes_answers_instead_of/
moe_34567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qywr64
false
null
t3_1qywr64
/r/LocalLLaMA/comments/1qywr64/selfhosted_llm_sometimes_answers_instead_of/
false
false
self
2
null
Step-3.5 Flash
18
stepfun-ai\_Step-3.5-Flash-Q3\_K\_M from [https://huggingface.co/bartowski/stepfun-ai\_Step-3.5-Flash-GGUF](https://huggingface.co/bartowski/stepfun-ai_Step-3.5-Flash-GGUF)
2026-02-08T02:16:12
https://www.reddit.com/gallery/1qywlk0
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1qywlk0
false
null
t3_1qywlk0
/r/LocalLLaMA/comments/1qywlk0/step35_flash/
false
false
https://b.thumbs.redditm…rHGjDU-oPsxk.jpg
18
null