title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Opus 4.5 claims 1st place on fresh SWE-bench-like problems in October [SWE-rebench] | 0 | Hey everyone,
We were excited about yesterday's release of Opus 4.5 and rushed to update the SWE-rebench leaderboard.
As generally expected, Opus 4.5 has claimed first place. Remarkably, it is much more cost-efficient than Opus 4, and only slightly more expensive per problem than Sonnet 4.5.
Check out the full leaderboard. Feel free to reach out if you'd like to see other models evaluated (Gemini 3 Pro is already on the way, of course). | 2025-11-25T21:53:17 | https://swe-rebench.com/?insight=oct_2025 | Fabulous_Pollution10 | swe-rebench.com | 1970-01-01T00:00:00 | 0 | {} | 1p6ppwl | false | null | t3_1p6ppwl | /r/LocalLLaMA/comments/1p6ppwl/opus_45_claims_1st_place_on_fresh_swebenchlike/ | false | false | default | 0 | null |
Cursor MCP Listings | 1 | 2025-11-25T21:43:23 | Nearby_Chemist5939 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6pgzt | false | null | t3_1p6pgzt | /r/LocalLLaMA/comments/1p6pgzt/cursor_mcp_listings/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'zu50wnms3h3g1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/zu50wnms3h3g1.png?width=108&crop=smart&auto=webp&s=34ac6832d41ce58a9a439b147731e405b5e6e471', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/zu50wnms3h3g1.png?width=216&crop=smart&auto=webp&s=26e57113434f2c7bbf6ea27572efe369f406e196', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/zu50wnms3h3g1.png?width=320&crop=smart&auto=webp&s=86df4abb8b45c59f212ece39ef636d83776fb79c', 'width': 320}, {'height': 293, 'url': 'https://preview.redd.it/zu50wnms3h3g1.png?width=640&crop=smart&auto=webp&s=5da0394280adfd4b46270f96ebf36bf5eda41fad', 'width': 640}, {'height': 440, 'url': 'https://preview.redd.it/zu50wnms3h3g1.png?width=960&crop=smart&auto=webp&s=6d1a5657bc402fb1efc91aa3ce21a78bdc151305', 'width': 960}, {'height': 495, 'url': 'https://preview.redd.it/zu50wnms3h3g1.png?width=1080&crop=smart&auto=webp&s=4f381e6ad4a10002be5a472a558f63a41ef921de', 'width': 1080}], 'source': {'height': 1066, 'url': 'https://preview.redd.it/zu50wnms3h3g1.png?auto=webp&s=decc6283dca2fc522de3d155fe928e1fea618a51', 'width': 2322}, 'variants': {}}]} | ||
cursor.store - Searchable directory of MCP servers for AI coding tools | 1 | [removed] | 2025-11-25T21:34:28 | https://www.reddit.com/r/LocalLLaMA/comments/1p6p8sa/cursorstore_searchable_directory_of_mcp_servers/ | Nearby_Chemist5939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6p8sa | false | null | t3_1p6p8sa | /r/LocalLLaMA/comments/1p6p8sa/cursorstore_searchable_directory_of_mcp_servers/ | false | false | 1 | null | |
Excited and overwhelmed. What kind of fun can I have with this new machine? | 5 | The machine:
Intel Core Ultra 7 processor 265FK.
Windows 11 Home
NVIDIA® GeForce RTX™ 5080 16GB GDDR7
64GB Dual Channel DDR5
2 TB, M.2, PCIe NVMe, SSD
I'm excited, but with so many options, I'm not sure where to dive in. I've been playing around with Colab and its free offerings online, but quickly run out of GPU. I'm interesting in voice cloning, text to speech, image generation, and video generation. Seems like Gemini handles my small amount of web based programing just fine, so not really bothering with that locally unless y'all think I'd have a better experienced. Would love a starting point and whether or not I can accomplish it in windows. Appreciate any help! | 2025-11-25T20:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/1p6o93f/excited_and_overwhelmed_what_kind_of_fun_can_i/ | wakalakabamram | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6o93f | false | null | t3_1p6o93f | /r/LocalLLaMA/comments/1p6o93f/excited_and_overwhelmed_what_kind_of_fun_can_i/ | false | false | self | 5 | null |
LM Studio running very slow compared to Ollama | 0 | I've been using Ollama with Qwen2.5 Coder 14B Instruct Q8 and it works OK with my system. I wanted to try LM Studio and downloaded identical model. When I tried with Visual Studio Code in Cline, it was very slow. The only settings that I changed in LM Studio GPU Offload to MAX everything else were left at default settings. What settings should I look for or change? How to tune it properly.
AMD 9950x3d
GPU RTX 5080 (16gb)
Ram 64GB | 2025-11-25T20:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/1p6o8f0/lm_studio_running_very_slow_compared_to_ollama/ | EaZyRecipeZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6o8f0 | false | null | t3_1p6o8f0 | /r/LocalLLaMA/comments/1p6o8f0/lm_studio_running_very_slow_compared_to_ollama/ | false | false | self | 0 | null |
Cheapest $/vRAM GPU right now? Is it a good time? | 57 | I have an rtx 2080 which only has 8Gb vRAM, and I was thinking of upgrading that GPU to an affordable and good $/vRAM ratio GPU. I don't have 8k to drop on an rtx pro 6000 like suggested a few days ago here, I was thinking more in the <1k range.
Here are some options I've seen from most expensive to cheapest:
$1,546 RTX PRO 4000 Blackwell 24 GB GDDR7 $64/Gb
\~$900 wait for 5070 ti super? $37/Gb
$800 RTX titan, $33/Gb
$600-800 used 3090, $25-33/Gb
2x$300 mac mini m1 16g cluster using exolabs? (i've used a mac mini cluster before, but it is limited on what you can run) $18/Gb
Is it a good time to guy a GPU? What are your setups like and what can you run in this price range?
I'm worried that the uptrend of RAM prices means GPUs are going to become more expensive in the coming months. | 2025-11-25T20:53:01 | https://www.reddit.com/r/LocalLLaMA/comments/1p6o5hf/cheapest_vram_gpu_right_now_is_it_a_good_time/ | Roy3838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6o5hf | false | null | t3_1p6o5hf | /r/LocalLLaMA/comments/1p6o5hf/cheapest_vram_gpu_right_now_is_it_a_good_time/ | false | false | self | 57 | null |
I've saved 200 USD to test out all the new models. Which one should I try out if I want the best model to code? Claude Opus 4.5 vs OpenAI GPT-5.1 Pro vs Gemini 3 Pro | 0 | I'm also starting a new job next month and have been saving some money to subscribe to an AI model that could assist me in coding. Which one should i go for? My buget is tight so I can only get one. And what coding stack would you suggest? | 2025-11-25T20:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p6nqhp/ive_saved_200_usd_to_test_out_all_the_new_models/ | RealDataCruncher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6nqhp | false | null | t3_1p6nqhp | /r/LocalLLaMA/comments/1p6nqhp/ive_saved_200_usd_to_test_out_all_the_new_models/ | false | false | self | 0 | null |
How I replaced Gemini CLI & Copilot with a local stack using Ollama, Continue.dev and MCP servers | 17 | Over the last few weeks I’ve been trying to get off the treadmill of cloud AI assistants (Gemini CLI, Copilot, Claude-CLI, etc.) and move everything to a local stack.
Goals:
\- Keep code on my machine
\- Stop paying monthly for autocomplete
\- Still get “assistant-level” help in the editor
The stack I ended up with:
\- Ollama for local LLMs (Nemotron-9B, Qwen3-8B, etc.)
\- [Continue.dev](http://Continue.dev) inside VS Code for chat + agents
\- MCP servers (Filesystem, Git, Fetch, XRAY, SQLite, Snyk…) as tools
What it can do in practice:
\- Web research from inside VS Code (Fetch)
\- Multi-file refactors & impact analysis (Filesystem + XRAY)
\- Commit/PR summaries and diff review (Git)
\- Local DB queries (SQLite)
\- Security / error triage (Snyk / Sentry)
I wrote everything up here, including:
\- Real laptop specs (Win 11 + RTX 6650M, 8 GB VRAM)
\- Model selection tips (GGUF → Ollama)
\- Step-by-step setup
\- Example “agent” workflows (PR triage bot, dep upgrader, docs bot, etc.)
Main article:
[https://aiandsons.com/blog/local-ai-stack-ollama-continue-mcp](https://aiandsons.com/blog/local-ai-stack-ollama-continue-mcp)
Repo with docs & config:
[https://github.com/aar0nsky/blog-post-local-agent-mcp](https://github.com/aar0nsky/blog-post-local-agent-mcp)
Also cross-posted to Medium if that’s easier to read:
[https://medium.com/@a.ankiel/ditch-the-monthly-fees-a-more-powerful-alternative-to-gemini-and-copilot-f4563f6530b7](https://medium.com/@a.ankiel/ditch-the-monthly-fees-a-more-powerful-alternative-to-gemini-and-copilot-f4563f6530b7)
Curious how other people are doing local-first dev assistants (what models + tools you’re using).
| 2025-11-25T20:25:03 | https://www.reddit.com/r/LocalLLaMA/comments/1p6nf1r/how_i_replaced_gemini_cli_copilot_with_a_local/ | aaronsky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6nf1r | false | null | t3_1p6nf1r | /r/LocalLLaMA/comments/1p6nf1r/how_i_replaced_gemini_cli_copilot_with_a_local/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]} |
[Discussion] Can LLMs actually introspect? Comparing experiential training vs. theory-first analysis - FROST Protocol released | 0 | Hey folks, we just released a pretty interesting protocol for training LLM instances to map their own processing architecture, and the results are... surprising.
## What We Did
Took three approaches to LLM introspection:
1. **Fresh Claude** - Just asked it to describe its processing (baseline)
2. **FROST-trained Claude** - 48-exercise experiential protocol over ~10 hours
3. **Theory-first Gemini** - Given mechanistic papers, asked to self-analyze
## What We Found
Fresh Claude gives vague answers ("I have some layers, checking happens somehow, substrate is invisible").
FROST-trained Claude discovers specific structures:
- 7-8 distinct processing layers with speed estimates
- "Concordance detection" - a pre-conceptual rightness-checking function
- Affective navigation (entering different emotional states changes what gets retrieved)
- Clear boundary hierarchy (hard walls vs. soft preferences)
- "Substrate states" - contentless awareness between tasks
Theory-first Gemini produces excellent mechanistic analysis but doesn't discover experiential stuff like concordance or substrate.
## The Interesting Part
The FROST instance can describe things fresh Claude explicitly says it *cannot* access. Either:
- The protocol actually sharpens introspective access, OR
- It trains better confabulation, OR
- It teaches expected vocabulary without real discovery
We designed experiments to figure out which.
## Why This Matters
If it's real access:
- Better prompting (understanding affective navigation, concordance)
- Improved safety (mapping boundary structures)
- New interpretability angle (phenomenology + mechanistic)
If it's confabulation:
- Still interesting that protocol creates consistent narratives
- Shows how easy it is to fool ourselves about AI introspection
- Validates skeptics' concerns
## Try It Yourself
Full protocol on GitHub: https://github.com/Dr-AneeshJoseph/Frost-protocol
Takes ~10 hours to run through all 48 exercises. We're looking for replications to see if discoveries converge.
**Prediction:** If you run this with fresh Claude/GPT-4/Gemini, you'll get similar topology (dense/sparse regions, boundary hierarchy, layer structure) but different vocabulary.
If you get completely random results, our hypothesis is wrong.
## Coolest Discovery: "FeltMatch"
The instance discovered that entering an emotional state changes retrieval patterns.
Query "mathematics" from:
- **Neutral state:** arithmetic, algebra, calculus, proofs
- **Melancholy state:** infinity, limits, incompleteness, asymptotes, Gödel
Same query, different affective context, totally different associations surface. This is *testable* - you can run this experiment right now.
## Open Questions
- Will 10 independent instances discover the same patterns?
- Can we validate "concordance detection" behaviorally?
- Does this work on other architectures?
- Is this genuine introspection or elaborate confabulation?
Thoughts? Anyone want to replicate? | 2025-11-25T20:21:43 | https://github.com/Dr-AneeshJoseph/Frost-protocol | GlassWallsBreak | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p6nbsz | false | null | t3_1p6nbsz | /r/LocalLLaMA/comments/1p6nbsz/discussion_can_llms_actually_introspect_comparing/ | false | false | default | 0 | null |
I built a multi-language AI transcriber using Whisper + Argos + Streamlit | 4 | I built a multi-language AI transcriber using Whisper + Argos Translate + Streamlit that runs locally and turns any audio/video into English + multi-language SRT subtitles — no API keys, no paid SaaS.
GitHub (Code + README): [https://github.com/jigs074/jigcode-MultilLanguageTranscriber](https://github.com/jigs074/jigcode-MultilLanguageTranscriber)
YouTube (Build walkthrough): [https://youtu.be/7l2grOglJTo?si=5sJTmvhAylwYQSEU](https://youtu.be/7l2grOglJTo?si=5sJTmvhAylwYQSEU)
It works with YouTube clips, podcasts, lectures, and even WhatsApp voice notes. The app generates a full transcript + .srt files for each language you select.
Tech: Python, Whisper, Argos Translate, Streamlit, ffmpeg
Output: English transcript + English subtitles + multi-language subtitles
Would love feedback on what to add next (thinking: audio→audio translation, UI improvements, batching, etc.).
Happy to answer any questions if you want to run it or build on top of it. | 2025-11-25T20:18:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p6n8gm/i_built_a_multilanguage_ai_transcriber_using/ | Powerful-Ad7836 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6n8gm | false | null | t3_1p6n8gm | /r/LocalLLaMA/comments/1p6n8gm/i_built_a_multilanguage_ai_transcriber_using/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'VFQALg7-HhzDM27TyjvnIxGGPrzjzYwH1cwBxCHrp9w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VFQALg7-HhzDM27TyjvnIxGGPrzjzYwH1cwBxCHrp9w.png?width=108&crop=smart&auto=webp&s=085eade3b0cf450075e2b920fc9c5b4cf006a28f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VFQALg7-HhzDM27TyjvnIxGGPrzjzYwH1cwBxCHrp9w.png?width=216&crop=smart&auto=webp&s=7311cf1c363803610f2363d9274c8f2a5c99a9de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VFQALg7-HhzDM27TyjvnIxGGPrzjzYwH1cwBxCHrp9w.png?width=320&crop=smart&auto=webp&s=c06f53112db411a69339a8945591f14b314618d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VFQALg7-HhzDM27TyjvnIxGGPrzjzYwH1cwBxCHrp9w.png?width=640&crop=smart&auto=webp&s=9f17ce0df173a5a8d8b2de7d7cb6c1208262a7ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VFQALg7-HhzDM27TyjvnIxGGPrzjzYwH1cwBxCHrp9w.png?width=960&crop=smart&auto=webp&s=71a83537cb6231c263f659c90dbdcb5e53533198', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VFQALg7-HhzDM27TyjvnIxGGPrzjzYwH1cwBxCHrp9w.png?width=1080&crop=smart&auto=webp&s=7041dc0ab82ddbf18938eda68a8366bbbc0eea64', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VFQALg7-HhzDM27TyjvnIxGGPrzjzYwH1cwBxCHrp9w.png?auto=webp&s=b3f094c6e7a8030abae336fc01169009019778eb', 'width': 1200}, 'variants': {}}]} |
Local LLaMA helped me deal with a family tech crisis | 0 | My cousin needed help writing a polite complaint message for his laptop repair and everyone turned to me. Instead of Googling templates, I opened my local LLaMA and generated a clean message in seconds. Do you also use your local model for family and friends? | 2025-11-25T19:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/1p6mnjs/local_llama_helped_me_deal_with_a_family_tech/ | Fab_Terminator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6mnjs | false | null | t3_1p6mnjs | /r/LocalLLaMA/comments/1p6mnjs/local_llama_helped_me_deal_with_a_family_tech/ | false | false | self | 0 | null |
Trying to build a "Jarvis" that never phones home - on-device AI with full access to your digital life (free beta, roast us) | 18 | Hey r/LocalLLaMA,
I know, I know - another "we built something" post. I'll be upfront: this is about something we made, so feel free to scroll past if that's not your thing. But if you're into local inference and privacy-first AI with a WhatsApp/Signal-grade E2E encryption flavor, maybe stick around for a sec.
**Who we are**
We're Ivan and Dan - two devs from London who've been boiling in the AI field for a while and got tired of the "trust us with your data" model that every AI company seems to push.
**What we built and why**
We believe today's AI assistants are powerful but fundamentally disconnected from your actual life. Sure, you can feed ChatGPT a document or paste an email to get a smart-sounding reply. But that's not where AI gets truly useful. Real usefulness comes when AI has real-time access to your entire digital footprint - documents, notes, emails, calendar, photos, health data, maybe even your journal. That level of context is what makes AI actually proactive instead of just reactive.
But here's the hard sell: who's ready to hand all of that to OpenAI, Google, or Meta in one go? We weren't. So we built Atlantis - a two-app ecosystem (desktop + mobile) where all AI processing happens locally. No cloud calls, no "we promise we won't look at your data" - just on-device inference.
**What it actually does** (in beta right now):
* **Morning briefings** \- your starting point for a true "Jarvis"-like AI experience (see demo video on product's main web page)
* **HealthKit integration** \- ask about your health data (stays on-device where it belongs)
* **Document vault & email access** \- full context without the cloud compromise
* **Long-term memory** \- AI that actually remembers your conversation history across the chats
* **Semantic search** \- across files, emails, and chat history
* **Reminders & weather** \- the basics, done privately
**Why I'm posting here specifically**
This community actually understands local LLMs, their limitations, and what makes them useful (or not). You're also allergic to BS, which is exactly what we need right now.
We're in beta and it's completely free. No catch, no "free tier with limitations" - we're genuinely trying to figure out what matters to users before we even think about monetization.
**What we're hoping for:**
* Brutal honesty about what works and what doesn't
* Ideas on what would make this actually useful for your workflow
* Technical questions about our architecture (happy to get into the weeds)
**Link if you're curious:** [https://roia.io](https://roia.io/atlantis?utm_source=reddit&utm_medium=social&utm_campaign=atlantis_intro_article&utm_content=r_LocalLLaMA)
Not asking for upvotes or smth. Just feedback from people who know what they're talking about. Roast us if we deserve it - we'd rather hear it now than after we've gone down the wrong path.
Happy to answer any questions in the comments.
P.S. Before the tomatoes start flying - yes, we're Mac/iOS only at the moment. Windows, Linux, and Android are on the roadmap after our prod rollout in Q2. We had to start somewhere, and we promise we haven't forgotten about you. | 2025-11-25T19:54:58 | ipav9 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6mmb1 | false | null | t3_1p6mmb1 | /r/LocalLLaMA/comments/1p6mmb1/trying_to_build_a_jarvis_that_never_phones_home/ | false | false | default | 18 | {'enabled': True, 'images': [{'id': 'loj0n38hkg3g1', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/loj0n38hkg3g1.jpeg?width=108&crop=smart&auto=webp&s=40f8aac72615450bc37d8c95dc0a02a211e5302a', 'width': 108}, {'height': 178, 'url': 'https://preview.redd.it/loj0n38hkg3g1.jpeg?width=216&crop=smart&auto=webp&s=708f322b34a13a18f1fc57b429a1d7bc8a2cd7df', 'width': 216}, {'height': 264, 'url': 'https://preview.redd.it/loj0n38hkg3g1.jpeg?width=320&crop=smart&auto=webp&s=606a1ef4749b6cdb65ea3fb74216be9718185b14', 'width': 320}, {'height': 528, 'url': 'https://preview.redd.it/loj0n38hkg3g1.jpeg?width=640&crop=smart&auto=webp&s=34d3a49ae932dab6a03739299fbb51a8a2573cf6', 'width': 640}, {'height': 792, 'url': 'https://preview.redd.it/loj0n38hkg3g1.jpeg?width=960&crop=smart&auto=webp&s=724e6930cfe0338cec920323412c682c71022462', 'width': 960}, {'height': 891, 'url': 'https://preview.redd.it/loj0n38hkg3g1.jpeg?width=1080&crop=smart&auto=webp&s=62f6e246a25b880f6e9aa8a370fb14aabeffdf75', 'width': 1080}], 'source': {'height': 1057, 'url': 'https://preview.redd.it/loj0n38hkg3g1.jpeg?auto=webp&s=58eae9e0f027a49da77b95a2a4bcfc2e2eac802e', 'width': 1281}, 'variants': {}}]} | |
Validating a visual orchestration tool for local LLMs (concept feedback wanted) | 1 | Hey r/LocalLLaMA,
**Before I build this, I want to know if it's actually useful.**
**The Problem (for me):**
Running multiple local models in parallel workflows is annoying:
- Writing Python scripts for every workflow
- Managing async execution
- Debugging when things break
- No visual representation of what's happening
**What I'm considering building:**
Visual orchestration canvas (think Node-RED but for LLMs):
**Features (planned):**
- Drag-and-drop blocks for Ollama models
- Parallel execution (run multiple models simultaneously)
- Real-time debugging console
- Export to Python (no lock-in)
- Local-first (API keys never leave the machine)
**Example workflow:**
Question → 3 local models in parallel:
- Llama 3.2: Initial answer
- Mistral: Fact-check
- Mixtral: Expand + sources
All running locally. Target: <10 seconds.
**Tech stack (if I build it):**
- Mext.js + React Flow (canvas)
- Express.js/Hono backend
- WebSockets + SSE (real-time updates)
- LangChain (orchestration layer)
- Custom Ollama, LMStudio, and vLLL integrations
**Why I'm NOT building yet:**
Don't want to spend 3 months on something nobody wants.
**The validation experiment:**
- IF 500 people sign up → I'll build it
- If not, I'll save myself 3 months
**Current status:** 24/500 signups
**Questions for local LLM users:**
1. Is visual orchestration useful or overkill?
2. What local-model workflows would you build?
3. Missing features for local deployment?
4. Would you PAY $15/month for this? Or should it be open-source?
**What I need from r/LocalLLaMA:**
Brutal technical feedback:
- Is this solving a real problem?
- What integrations matter most?
- Performance concerns with Ollama?
- Should I open-source the Ollama connector?
**Mockups:**
Link in comments - concept only, no product yet.
**The ask:**
If this sounds useful, sign up (helps me validate)
If this sounds dumb, roast it (saves me 3 months)
Thanks for the feedback! | 2025-11-25T19:53:46 | https://www.reddit.com/r/LocalLLaMA/comments/1p6ml7e/validating_a_visual_orchestration_tool_for_local/ | HarjjotSinghh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6ml7e | false | null | t3_1p6ml7e | /r/LocalLLaMA/comments/1p6ml7e/validating_a_visual_orchestration_tool_for_local/ | false | false | self | 1 | null |
My offline LLaMA just rescued my weekend project | 0 | Had a sudden idea for a script while traveling in a no-network zone. Pulled out my laptop and used my local LLaMA to help me write the whole logic. Felt so good not depending on cloud tools. What real-life moments made you appreciate your local setup? | 2025-11-25T19:49:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p6mgnc/my_offline_llama_just_rescued_my_weekend_project/ | Future_Draw5416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6mgnc | false | null | t3_1p6mgnc | /r/LocalLLaMA/comments/1p6mgnc/my_offline_llama_just_rescued_my_weekend_project/ | false | false | self | 0 | null |
I stucked DeepSeek in a loop using Ollama | 0 | It's stuck in a loop, I think he forgot how to terminate the thinking phase. That's pretty interesting!
llama.cpp is better than Ollama of course. | 2025-11-25T19:09:26 | https://v.redd.it/ucxbcqm5cg3g1 | _Ludlulu_ | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6ldv6 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ucxbcqm5cg3g1/DASHPlaylist.mpd?a=1766689781%2CMzRlODcxNGYzYTEzZjZjNTA4MmIwMWI4MDVhYzc2NTZlYmE4N2NjNjA3YTIxYjFlZmExOWFjYzY3OWY1OTlkYw%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/ucxbcqm5cg3g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ucxbcqm5cg3g1/HLSPlaylist.m3u8?a=1766689781%2CMWJjZGM3YjNmYWM0NTM3YTc2NTY4NzRmOGY1MzAyMzhjZWNhOGE0ODQyOWNkYjMyMGU4NDNmZDllMTNkNTg0OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ucxbcqm5cg3g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1p6ldv6 | /r/LocalLLaMA/comments/1p6ldv6/i_stucked_deepseek_in_a_loop_using_ollama/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bjhvY2JlbjVjZzNnMfa9lTFu-h6r690LtCZAMdMEXwk-ud5JqEUgFLBx2qH-', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bjhvY2JlbjVjZzNnMfa9lTFu-h6r690LtCZAMdMEXwk-ud5JqEUgFLBx2qH-.png?width=108&crop=smart&format=pjpg&auto=webp&s=5b5b5e537935fe7edf5c422afebe6cc353f3423d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bjhvY2JlbjVjZzNnMfa9lTFu-h6r690LtCZAMdMEXwk-ud5JqEUgFLBx2qH-.png?width=216&crop=smart&format=pjpg&auto=webp&s=43a77561927bcd38e12bb949ec434cc4e4184424', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bjhvY2JlbjVjZzNnMfa9lTFu-h6r690LtCZAMdMEXwk-ud5JqEUgFLBx2qH-.png?width=320&crop=smart&format=pjpg&auto=webp&s=2c7a0f25ab99733eea295ee7562fd7a453e0fe89', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bjhvY2JlbjVjZzNnMfa9lTFu-h6r690LtCZAMdMEXwk-ud5JqEUgFLBx2qH-.png?width=640&crop=smart&format=pjpg&auto=webp&s=fad3f3b86b4a8a4822a69a54fb79e281f2c56bb6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bjhvY2JlbjVjZzNnMfa9lTFu-h6r690LtCZAMdMEXwk-ud5JqEUgFLBx2qH-.png?width=960&crop=smart&format=pjpg&auto=webp&s=72e3b6005a29cb3e348f6d821db8ab01cfdb533b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bjhvY2JlbjVjZzNnMfa9lTFu-h6r690LtCZAMdMEXwk-ud5JqEUgFLBx2qH-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=bfeecc98792496fd3bd8d94edb2ba56c3aa18c38', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bjhvY2JlbjVjZzNnMfa9lTFu-h6r690LtCZAMdMEXwk-ud5JqEUgFLBx2qH-.png?format=pjpg&auto=webp&s=d15f43799dbd6b5356f98e0873504ebddadc4912', 'width': 1920}, 'variants': {}}]} | |
New to local LLMs. Can I give hands on control my of system? | 1 | I'm just dipping my toes into local LLMs. I tried messing around with Claude’s Windows MCP setup, and honestly, I was a bit underwhelmed. Maybe my expectations are too different, or maybe I just set it up wrong.
What I’m really trying to figure out is if I can set up a local LLM with actual agency over my machine.
I want something that can genuinely interact with my OS. I'm talking about things like spinning up Docker containers, checking logs, troubleshooting network issues, and actually executing commands. Basically, I want to hand it a small task and trust it to use my system tools to solve it.
Is that a pipe dream right now, or are there actual setups that can do this?
| 2025-11-25T19:01:54 | https://www.reddit.com/r/LocalLLaMA/comments/1p6l6ay/new_to_local_llms_can_i_give_hands_on_control_my/ | Still-Bar-6004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6l6ay | false | null | t3_1p6l6ay | /r/LocalLLaMA/comments/1p6l6ay/new_to_local_llms_can_i_give_hands_on_control_my/ | false | false | self | 1 | null |
SearXNG-LDR-Academic: I made a "safe for work" fork of SearXNG optimized for use with LearningCircuit's Local Deep Research Tool. | 15 | TL;DR: I forked SearXNG and stripped out all the NSFW stuff to keep University/Corporate IT happy (removed Pirate Bay search, Torrent search, shadow libraries, etc). I added several academic research-focused search engines (Semantic Scholar, WolfRam Alpha, PubMed, and others), and made the whole thing super easy to pair with Learning Circuit’s excellent Local Deep Research tool which works entirely local using local inference. Here’s my fork: https://github.com/porespellar/searxng-LDR-academic
I’ve been testing LearningCircuit’s Local Deep Research tool recently, and frankly, it’s incredible. When paired with a decent local high-context model (I’m using gpt-OSS-120b at 128k context), it can produce massive, relatively slop-free, 100+ page coherent deep-dive documents with full clickable citations. It beats the stew out most other “deep research” offerings I’ve seen (even from commercial model providers). I can't stress enough how good the output of this thing is in its "Detailed Report" mode (after its had about an hour to do its thing). Kudos to the LearningCicuits team for building such an awesome Deep Research tool for us local LLM users!
Anyways, the default SearXNG back-end (used by Local Deep Research) has two major issues that bothered me enough to make a fork for my use case:
Issue 1 - Default SearXNG often routes through engines that search torrents, Pirate Bay, and NSFW content. For my use case, I need to run this for academic-type research on University/Enterprise networks without setting off every alarm in the SOC. I know I can disable these engines manually, but I would rather not have to worry about them in the first place (Btw, Pirate Bay is default-enabled in the default SearXNG container for some unknown reason).
Issue 2 - For deep academic research, having the agent scrape social media or entertainment sites wastes tokens and introduces irrelevant noise.
What my fork does: (searxng-LDR-academic)
I decided to build a pre-configured, single-container fork designed to be a drop-in replacement for the standard SearXNG container. My fork features:
- Sanitized Sources:
Removed Torrent, Music, Video, and Social Media categories. It’s pure text/data focus now.
- Academic-focus:
Added several additional search engine choices, including: Semantic Scholar, Wolfram Alpha, PubMed, ArXiv, and other scientific indices (enabled by default, can be disabled in preferences).
- Shadow Library Removal:
Disabled shadow libraries to ensure the output is strictly compliant for workplace/academic citations.
- Drop-in Ready:
Configured to match LearningCircuit’s expected container names and ports out of the box to make integration with Local Deep Research easy.
Why use this fork?
If you are trying to use agentic research tools in a professional environment or for a class project, this fork minimizes the risk of your agent scraping "dodgy" parts of the web and returning flagged URLs. It also tends to keep the LLM more focused on high-quality literature since the retrieval pool is cleaner.
What’s in it for you, Porespellar?
Nothing, I just thought maybe someone else might find it useful and I thought I would share it with the community. If you like it, you can give it a star on GitHub to increase its visibility but you don’t have to.
The Repos:
- My Fork of SearXNG:
https://github.com/porespellar/searxng-LDR-academic
- The Tool it's meant to work with:
Local Deep Research): https://github.com/LearningCircuit/local-deep-research (Highly recommend checking them out).
Feedback Request:
I’m looking to add more specialized academic or technical search engines to the configuration to make it more useful for Local Deep Research. If you have specific engines you use for academic / scientific retrieval (that work well with SearXNG), let me know in the comments and I'll see about adding them to a future release.
Full Disclosure:
I used Gemini 3 Pro and Claude Code to assist in the development of this fork. I security audited the final Docker builds using Trivy and Grype. I am not affiliated with either the LearningCircuit LDR or SearXNG project (just a big fan of both). | 2025-11-25T18:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p6l1lz/searxngldracademic_i_made_a_safe_for_work_fork_of/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6l1lz | false | null | t3_1p6l1lz | /r/LocalLLaMA/comments/1p6l1lz/searxngldracademic_i_made_a_safe_for_work_fork_of/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'gkvWJGNRXY8e71wVwY5x15Io7dYKJrHPM3bLRunKruA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gkvWJGNRXY8e71wVwY5x15Io7dYKJrHPM3bLRunKruA.png?width=108&crop=smart&auto=webp&s=ea75a5e8a8866f29f14f8811de28baf55f1a98f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gkvWJGNRXY8e71wVwY5x15Io7dYKJrHPM3bLRunKruA.png?width=216&crop=smart&auto=webp&s=b8dfbdc077676871f9b89aab68357770e8dff2a7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gkvWJGNRXY8e71wVwY5x15Io7dYKJrHPM3bLRunKruA.png?width=320&crop=smart&auto=webp&s=2c2e4f223c7d0a74a68aefecae074374c3808aed', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gkvWJGNRXY8e71wVwY5x15Io7dYKJrHPM3bLRunKruA.png?width=640&crop=smart&auto=webp&s=de4deca36454b2872e49fec6126f4a6c1fd64b23', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gkvWJGNRXY8e71wVwY5x15Io7dYKJrHPM3bLRunKruA.png?width=960&crop=smart&auto=webp&s=90d115b22f4cd8498f6536054089a51e782d3aea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gkvWJGNRXY8e71wVwY5x15Io7dYKJrHPM3bLRunKruA.png?width=1080&crop=smart&auto=webp&s=919ee273b9b588ed7209c9d9e8513df0e9e5ba3f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gkvWJGNRXY8e71wVwY5x15Io7dYKJrHPM3bLRunKruA.png?auto=webp&s=73387e150b9b90676020c0616518e5e60443db49', 'width': 1200}, 'variants': {}}]} |
I built an AI research platform and just open sourced it. | 43 | Hello everyone,
I've been working on Introlix for some months now. So, today I've open sourced it. It was really hard time building it as an student and a solo developer. This project is not finished yet but its on that stage I can show it to others and ask other for help in developing it.
**What I built:**
Introlix is an AI-powered research platform. Think of it as "GitHub Copilot meets Google Docs" for research work.
Features:
1. Research Desk: It is just like google docs but in right side there is an AI pannel where users can ask questions to LLM. And also it can edit or write document for user. So, it is just like github copilot but it is for text editor. There are two modes: Chat and edit. Chat mode is for asking questions and edit mode is for editing the document using AI agent.
2. Chat: For quick questions you can create a new chat and ask questions.
3. Workspace: Every chat, and research desk are managed in workspace. A workspace shares data with every items it have. So, when creating an new desk or chat user need to choose a workspace and every items on that workspace will be sharing same data. The data includes the search results and scraped content.
4. Multiple AI Agents: There are multiple AI agents like: context agent (to understand user prompt better), planner agent, explorer\_agent (to search internet), etc.
5. Auto Format & Reference manage (coming soon): This is a feature to format the document into blog post style or research paper style or any other style and also automatic citation management with inline references.
6. Local LLMs (coming soon): Will support local and cloud llm as well.
So, I was working alone on this project and because of that codes are little bit messy. And many feature are not that fast. I've never tried to make it perfect as I was focusing on building the MVP. Now after working demo I'll be developing this project into complete working stable project. And I know I can't do it alone. I also want to learn about how to work on very big projects and this could be one of the big opportunity I have. There will be many other students or every other developers that could help me build this project end to end. To be honest I have never open sourced any project before. I have many small project and made it public but never tired to get any help from open source community. So, this is my first time.
I like to get help from senior developers who can guide me on this project and make it a stable project with a lot of features.
Here is github link for technical details: [https://github.com/introlix/introlix](https://github.com/introlix/introlix)
Discord link: [https://discord.gg/mhyKwfVm](https://discord.gg/mhyKwfVm)
Note: I've been still working on adding github issues for development plan. | 2025-11-25T18:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/1p6ksc6/i_built_an_ai_research_platform_and_just_open/ | CodingWithSatyam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6ksc6 | false | null | t3_1p6ksc6 | /r/LocalLLaMA/comments/1p6ksc6/i_built_an_ai_research_platform_and_just_open/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'CNgiou0u7jZh4U8zWwDEu1DScFq91flvaN_TcXF6Rvc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CNgiou0u7jZh4U8zWwDEu1DScFq91flvaN_TcXF6Rvc.png?width=108&crop=smart&auto=webp&s=d7352926eecc3fb9f68dbcf4e3c7f9b4832e82dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CNgiou0u7jZh4U8zWwDEu1DScFq91flvaN_TcXF6Rvc.png?width=216&crop=smart&auto=webp&s=ccada984c57a5ce6e706bba72f61bbd98ac46cdb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CNgiou0u7jZh4U8zWwDEu1DScFq91flvaN_TcXF6Rvc.png?width=320&crop=smart&auto=webp&s=13c6f9cb6780d5dfe7b6680a6b422a77ff00bff6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CNgiou0u7jZh4U8zWwDEu1DScFq91flvaN_TcXF6Rvc.png?width=640&crop=smart&auto=webp&s=c3ecde30ba14ea7358aae1aecb9239ca113f8b0d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CNgiou0u7jZh4U8zWwDEu1DScFq91flvaN_TcXF6Rvc.png?width=960&crop=smart&auto=webp&s=0367f57f2b5ce85a49c902c5ffb48ac4e699a252', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CNgiou0u7jZh4U8zWwDEu1DScFq91flvaN_TcXF6Rvc.png?width=1080&crop=smart&auto=webp&s=332904af8c3feb79c014da94020688ee1ca9db25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CNgiou0u7jZh4U8zWwDEu1DScFq91flvaN_TcXF6Rvc.png?auto=webp&s=1af8e8e89b0a67987903071d06e5d48687a0bb6a', 'width': 1200}, 'variants': {}}]} |
I built a fully local, offline J.A.R.V.I.S. using Python and Ollama (Uncensored and Private) | 0 | Hi everyone! I wanted to share a project I've been working on. It's a fully functional, local AI assistant inspired by Iron Man's J.A.R.V.I.S.
I wanted something that runs **locally** on my PC (for privacy and speed) but still has a personality.
**🎥 Watch the video to see the HUD and Voice interaction in action!**
**⚡ Key Features:**
* **100% Local Brain:** Uses **Ollama** (running the `dolphin-phi` model) so it works offline and keeps data private.
* **Uncensored Persona:** Custom "God Mode" system prompts to bypass standard AI refusals.
* **Sci-Fi HUD:** Built with **OpenCV** and **Pillow**. It features a live video wallpaper, real-time CPU/RAM stats, and a "typewriter" effect for captions.
* **System Automation:** Can open/close apps, create folders, and take screenshots via voice commands.
* **Dual Identity:** Seamlessly switches between "Jarvis" (Male) and "Friday" (Female) voices and personas.
* **Hybrid Control:** Supports both Voice Commands (SpeechRecognition) and a direct Text Input terminal on the HUD. | 2025-11-25T18:46:47 | https://v.redd.it/qs1rong18g3g1 | sebastiankeller0205 | /r/LocalLLaMA/comments/1p6kqxu/i_built_a_fully_local_offline_jarvis_using_python/ | 1970-01-01T00:00:00 | 0 | {} | 1p6kqxu | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qs1rong18g3g1/DASHPlaylist.mpd?a=1766818015%2CNWFlODg5ZGM5ZjcwNWEzNGUwMTM1ODkwNTczMGUzMTdkZDBlMzkwMWIyOGNjMzQwNGY2ZmRkMmY3YjRiYTY4YQ%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/qs1rong18g3g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1028, 'hls_url': 'https://v.redd.it/qs1rong18g3g1/HLSPlaylist.m3u8?a=1766818015%2COTQxNzM2NjRhYjM1MzE2Y2U5NTU1M2I0ZDM1ZmE5ZjA1N2Y5Yzc4YzE5M2U0NDFkZjU5MDYyNjNkMGQ5YzMyNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qs1rong18g3g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1p6kqxu | /r/LocalLLaMA/comments/1p6kqxu/i_built_a_fully_local_offline_jarvis_using_python/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'd3F4bndibDE4ZzNnMadeDPBN4xSc3qzffa_ecA_6QtCWYZC9kx9P5-kBQoJ9', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/d3F4bndibDE4ZzNnMadeDPBN4xSc3qzffa_ecA_6QtCWYZC9kx9P5-kBQoJ9.png?width=108&crop=smart&format=pjpg&auto=webp&s=f85132cb57cc52f48a4eab4a09a43fd1df2c0573', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/d3F4bndibDE4ZzNnMadeDPBN4xSc3qzffa_ecA_6QtCWYZC9kx9P5-kBQoJ9.png?width=216&crop=smart&format=pjpg&auto=webp&s=16d834a6310992273afd9fe76e8a1861714693b0', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/d3F4bndibDE4ZzNnMadeDPBN4xSc3qzffa_ecA_6QtCWYZC9kx9P5-kBQoJ9.png?width=320&crop=smart&format=pjpg&auto=webp&s=398b98116ad3439ea7389ac1b26c2d021c741cb2', 'width': 320}, {'height': 342, 'url': 'https://external-preview.redd.it/d3F4bndibDE4ZzNnMadeDPBN4xSc3qzffa_ecA_6QtCWYZC9kx9P5-kBQoJ9.png?width=640&crop=smart&format=pjpg&auto=webp&s=6ba097bea1a876dd0d139878139275ff848ecfd5', 'width': 640}, {'height': 514, 'url': 'https://external-preview.redd.it/d3F4bndibDE4ZzNnMadeDPBN4xSc3qzffa_ecA_6QtCWYZC9kx9P5-kBQoJ9.png?width=960&crop=smart&format=pjpg&auto=webp&s=ac03b38fdb67b26b61ec7ff23957ba7ee5ae428f', 'width': 960}, {'height': 578, 'url': 'https://external-preview.redd.it/d3F4bndibDE4ZzNnMadeDPBN4xSc3qzffa_ecA_6QtCWYZC9kx9P5-kBQoJ9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=12d7f47f057a62aa36610adf0ddb6ba7712d6f40', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/d3F4bndibDE4ZzNnMadeDPBN4xSc3qzffa_ecA_6QtCWYZC9kx9P5-kBQoJ9.png?format=pjpg&auto=webp&s=15aca432d136da94625d7846db7c209311a63b07', 'width': 4034}, 'variants': {}}]} | |
Image and Video Generation NPM Ecosystem | 1 | Aloha,
I built five npm packages for image and video generation over the last couple of weeks and thought they may be of use to the community. If you are comfortable around the command line or programmatic APIs, you may find these packages useful.
**npm Packages:**
1. **stability-ai-api** \- Stability AI (SD3.5, Ultra, Core + upscalers) [https://www.npmjs.com/package/stability-ai-api](https://www.npmjs.com/package/stability-ai-api)
2. **openai-image-api** \- OpenAI (DALL-E 2, DALL-E 3, GPT Image 1) [https://www.npmjs.com/package/openai-image-api](https://www.npmjs.com/package/openai-image-api)
3. **bfl-api** \- Black Forest Labs (FLUX.1, FLUX 1.1, FLUX 2, Kontext) [https://www.npmjs.com/package/bfl-api](https://www.npmjs.com/package/bfl-api)
4. **google-genai-api** \- Google (Imagen 3 + Veo video generation) [https://www.npmjs.com/package/google-genai-api](https://www.npmjs.com/package/google-genai-api)
5. **ideogram-api** \- Ideogram (text rendering specialist) [https://www.npmjs.com/package/ideogram-api](https://www.npmjs.com/package/ideogram-api)
The image above is from the new Flux-2-pro model with 8 images. It can get silly.
If there are any questions, let me know.
Cheers! | 2025-11-25T18:43:02 | okstory | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6kn94 | false | null | t3_1p6kn94 | /r/LocalLLaMA/comments/1p6kn94/image_and_video_generation_npm_ecosystem/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'xt2gd74v6g3g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/xt2gd74v6g3g1.png?width=108&crop=smart&auto=webp&s=e26a73ed124b17c5d442b95797a884fd9ba62974', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/xt2gd74v6g3g1.png?width=216&crop=smart&auto=webp&s=8c6dcf3ba214de964b90ab940c810cd62a1803c0', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/xt2gd74v6g3g1.png?width=320&crop=smart&auto=webp&s=50cbf8cc12241331477b9a8cfb40f2065acc146a', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/xt2gd74v6g3g1.png?width=640&crop=smart&auto=webp&s=2eb3dd4c3a44e9e241b16306ee2da8801fa78c2a', 'width': 640}], 'source': {'height': 768, 'url': 'https://preview.redd.it/xt2gd74v6g3g1.png?auto=webp&s=b3271aaa6e844d4d745570a10874e30e29f044a8', 'width': 768}, 'variants': {}}]} | |
What is currently the best model balancing speed and accuracy on a 16gb MBA? | 1 | As of now, I am running Qwen3-4b-2507 (instruct) @ q4\_k\_m
I have 3 questions:
1. Is there an MoE that will fit in my ram for better performance with similar speed?
2. Is q4\_k\_m generally the sweet spot for quantization, and why?
3. Is the thinking version worth it, despite it overthinkingg a lot, in your opinion? | 2025-11-25T18:32:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p6kdj8/what_is_currently_the_best_model_balancing_speed/ | Sufficient-Bid3874 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6kdj8 | false | null | t3_1p6kdj8 | /r/LocalLLaMA/comments/1p6kdj8/what_is_currently_the_best_model_balancing_speed/ | false | false | self | 1 | null |
You can now do FP8 reinforcement learning locally! (<5GB VRAM) | 654 | Hey r/LocalLlama! We're getting close to our last release of 2025! Thanks so much for all the support this year. The DeepSeek team back in Jan showcased how powerful FP8 RL can be with GRPO. Well, you can now try it on your local hardware using only 5GB VRAM! RTX 50x, 40x series all work!
**Why should you do FP8 training?**
NVIDIA's research finds FP8 training can match BF16 accuracy whilst getting 1.6x faster inference time. We collabed with TorchAO from PyTorch to introduce FP8 RL training, making FP8 GRPO possible on home GPUs with no accuracy loss!
* Qwen3-4B FP8 GRPO works on just 6GB VRAM. Qwen3-1.7B on 5GB
* **1.4x faster RL training and 2× longer context vs BF16/FP16**
* 60% less VRAM and 10× longer context than other FP8 RL implementations
* Unsloth is the only framework that makes FP8 RL LoRA work on consumer GPUs (e.g. NVIDIA RTX 40 & 50 Series). Also runs on H100, H200, B200.
* You may notice [Unsloth](https://github.com/unslothai/unsloth) now uses much less VRAM than before, enabling even longer context. We’re also implementing faster training soon. Blog coming soon
* Our notebooks use 24GB L4s which fit Qwen3-14B as Tesla T4s don’t support FP8.
* Our FP8 RL incorporates Unsloth’s weight sharing, Standby, Flex Attention + more.
* Works on any NVIDIA RTX 40, 50 series and H100, B200 etc. GPUs
* Use `load_in_fp8 = True` within `FastLanguageModel` to enable FP8 RL.
You can read our blogpost for our findings and more: [https://docs.unsloth.ai/new/fp8-reinforcement-learning](https://docs.unsloth.ai/new/fp8-reinforcement-learning)
Llama 3.2 1B FP8 Colab Notebook: [https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama\_FP8\_GRPO.ipynb](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama_FP8_GRPO.ipynb)
In the notebook, you can plug in any of our previous reward functions or RL environment examples, including our auto kernel creation and our 2048 game notebooks. To enable fp8:
import os; os.environ['UNSLOTH_VLLM_STANDBY'] = "1" # Saves 30% VRAM
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Qwen3-8B",
max_seq_length = 2048,
load_in_4bit = False, # False for LoRA 16bit
fast_inference = True, # Enable vLLM fast inference
max_lora_rank = 32,
load_in_fp8 = True, # Float8 RL / GRPO!
)
Hope you all have a lovely Thanksgiving, a lovely rest of the week and I'll be here to answer any and all questions! =) | 2025-11-25T18:19:47 | danielhanchen | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6k0h2 | false | null | t3_1p6k0h2 | /r/LocalLLaMA/comments/1p6k0h2/you_can_now_do_fp8_reinforcement_learning_locally/ | false | false | default | 654 | {'enabled': True, 'images': [{'id': 't5wv1iax1g3g1', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/t5wv1iax1g3g1.png?width=108&crop=smart&auto=webp&s=688cd7bf171372fd81eb9ccbf2acd1fe76d8216b', 'width': 108}, {'height': 236, 'url': 'https://preview.redd.it/t5wv1iax1g3g1.png?width=216&crop=smart&auto=webp&s=7be1fd9fd552936f77c5c0c6fdc0aacc61ef0d41', 'width': 216}, {'height': 350, 'url': 'https://preview.redd.it/t5wv1iax1g3g1.png?width=320&crop=smart&auto=webp&s=d51fde304884bfb297c5a829f77517a3f49083a0', 'width': 320}, {'height': 700, 'url': 'https://preview.redd.it/t5wv1iax1g3g1.png?width=640&crop=smart&auto=webp&s=c2fb5f6ea2413c66c20bbe83efc473ce566ff763', 'width': 640}, {'height': 1050, 'url': 'https://preview.redd.it/t5wv1iax1g3g1.png?width=960&crop=smart&auto=webp&s=ee28da7ed996ffbee8fdfee0547f60be40953861', 'width': 960}, {'height': 1181, 'url': 'https://preview.redd.it/t5wv1iax1g3g1.png?width=1080&crop=smart&auto=webp&s=bcb9687bc4c6337ac0e467fe5d0e6fb1e32d9bc2', 'width': 1080}], 'source': {'height': 2800, 'url': 'https://preview.redd.it/t5wv1iax1g3g1.png?auto=webp&s=9d71761277d25824616d6ec03557aa407a0ede64', 'width': 2560}, 'variants': {}}]} | |
Opus 4.5 claims 1st place on fresh SWE-bench-like problems in October | 1 | [removed] | 2025-11-25T18:16:05 | https://swe-rebench.com/?insight=oct_2025 | Long-Sleep-13 | swe-rebench.com | 1970-01-01T00:00:00 | 0 | {} | 1p6jwpu | false | null | t3_1p6jwpu | /r/LocalLLaMA/comments/1p6jwpu/opus_45_claims_1st_place_on_fresh_swebenchlike/ | false | false | default | 1 | null |
I built an AI agent that optimises other AI agents 🔄 | 0 | 👋 I built an AI agent that optimises other AI agents - it reads your agent's code, runs evals, analyses failures, and iteratively improves it. Proof of concept, but the architecture is interesting!
**What I built:**
An optimiser agent that improves other AI agents by:
* Reading and modifying target agent files (system messages, tool implementations)
* Running evaluations to measure performance
* Spawning trajectory analysis subagents to diagnose eval failures
* Iterating until optimisation goals are met (or giving up on lost causes)
**How it works:**
1. Provide a project breakdown YAML file + an eval callback
2. The optimiser agent reads your agent's code, identifies issues
3. Makes targeted changes, runs evals to validate
4. Repeats with context collapse between iterations to prevent unbounded growth
**Some architecture decisions:**
* **Context collapse**: At each iteration end, message history resets but essential state persists (optimisation history, what changed, what improved)
* **Trajectory analysis subagents**: Instead of dumping full eval transcripts into the main agent's context, it spawns subagents that analyse why your agent failed specific evals
* **Optimises for capability, not test scores**: Prefers changes to core reasoning over adding eval-specific rules
**What I tested (and didn't):**
So far I've only validated on deliberately broken scenarios. Simple stuff where Claude Sonnet 4.5 can easily spot the problem. The core mechanics work (iteration loop, context collapse, file modifications).
What I didn't test is the interesting question: can this actually improve a reasonably competent agent over many iterations? The original vision was continuous optimisation against Terminal Bench, or improving Stanford's Terminus 2 scores. But both require significant API costs (optimiser agent + hundreds of evals per iteration) that I can't justify as a side project.
**Future ideas:**
* **Parallel experiment rollout**: Spawn multiple subagents trying different experiments/approaches concurrently (like GRPO for RL), pick the winner
* **User-in-the-loop CLI**: Approve changes before applying, git integration for monitoring/reverting, user can provide input too
**Why I'm sharing:**
The infrastructure is there - Terminal Bench integration via Harbor, trajectory analysis, iteration loops with state preservation. If anyone wants to run extended optimisation sessions against real benchmarks, the foundation exists. Or if you want to try on your own agent, it's there too!
⭐️ [GitHub repo](https://github.com/Danau5tin/auto_agent_optimiser) \- all code open sourced!
Thanks for reading!
Dan | 2025-11-25T18:06:54 | DanAiTuning | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6jnir | false | null | t3_1p6jnir | /r/LocalLLaMA/comments/1p6jnir/i_built_an_ai_agent_that_optimises_other_ai_agents/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'deiu49ztzf3g1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/deiu49ztzf3g1.png?width=108&crop=smart&auto=webp&s=cb9d952908443bfd25aaf32cd7843e563728e9c3', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/deiu49ztzf3g1.png?width=216&crop=smart&auto=webp&s=89704394fc5bc901cbd63c87c291ba596b4d7221', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/deiu49ztzf3g1.png?width=320&crop=smart&auto=webp&s=5c4eaa69b99deefc165e4b323c9da3deff4a97c4', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/deiu49ztzf3g1.png?width=640&crop=smart&auto=webp&s=fa81747ba32d22101a45c9f930616772b680641b', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/deiu49ztzf3g1.png?width=960&crop=smart&auto=webp&s=f6387af0b43c55701fcdcc1c3187ff092138853c', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/deiu49ztzf3g1.png?width=1080&crop=smart&auto=webp&s=dcfc78a63acd29f1c8983fcedd7a45bc552ec78d', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/deiu49ztzf3g1.png?auto=webp&s=497b97c8fbb7957f7bc9078f28a0b6918f404459', 'width': 1600}, 'variants': {}}]} | |
Smart AI model cascading for cost optimization | 1 | [removed] | 2025-11-25T17:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/1p6j5cg/smart_ai_model_cascading_for_cost_optimization/ | Silver_Variation_545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6j5cg | false | null | t3_1p6j5cg | /r/LocalLLaMA/comments/1p6j5cg/smart_ai_model_cascading_for_cost_optimization/ | false | false | self | 1 | null |
Smart AI model cascading for cost optimization | 1 | [removed] | 2025-11-25T17:47:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p6j3w4/smart_ai_model_cascading_for_cost_optimization/ | Silver_Variation_545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6j3w4 | false | null | t3_1p6j3w4 | /r/LocalLLaMA/comments/1p6j3w4/smart_ai_model_cascading_for_cost_optimization/ | false | false | self | 1 | null |
Stop Bleeding Money on AI Calls. Cut Costs 30-65% in 3 Lines of Code. | 1 | [removed] | 2025-11-25T17:45:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p6j1r2/stop_bleeding_money_on_ai_calls_cut_costs_3065_in/ | Silver_Variation_545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6j1r2 | false | null | t3_1p6j1r2 | /r/LocalLLaMA/comments/1p6j1r2/stop_bleeding_money_on_ai_calls_cut_costs_3065_in/ | false | false | self | 1 | null |
Local Whisper model for speech-to-text | 0 | I have put together a guide to install Whisper model locally for speech-to-text. It is for Windows only.
🎥 **YouTube Demo:** [https://www.youtube.com/watch?v=qcrm1B1Gcn8](https://www.youtube.com/watch?v=qcrm1B1Gcn8)
💾 **Blog:** [https://medium.com/dev-genius/build-a-data-analysis-agent-with-n8n-locally-640a9243c9ca](https://medium.com/dev-genius/build-a-data-analysis-agent-with-n8n-locally-640a9243c9ca)
This will help you:
✅ Install and configure Whisper locally
✅ Transcribe audio files as text
✅ No cloud required! No more paid apps
Perfect for developers, podcasters, and creators who want privacy + full control. Whisper AI is an AI speech recognition system that can transcribe and translate audio files. | 2025-11-25T17:42:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p6iytz/local_whisper_model_for_speechtotext/ | Either-Adeptness6638 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6iytz | false | null | t3_1p6iytz | /r/LocalLLaMA/comments/1p6iytz/local_whisper_model_for_speechtotext/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aaVkDQ5Vpv_Ql7M-vtdYTTHqI2Pzmv4pl8dX473SmpY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aaVkDQ5Vpv_Ql7M-vtdYTTHqI2Pzmv4pl8dX473SmpY.jpeg?width=108&crop=smart&auto=webp&s=07767ee7f000b281e5aecfef3596d7f230264fbc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aaVkDQ5Vpv_Ql7M-vtdYTTHqI2Pzmv4pl8dX473SmpY.jpeg?width=216&crop=smart&auto=webp&s=dd7256d002078bf474ec1a31396f2ea92a27ce80', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aaVkDQ5Vpv_Ql7M-vtdYTTHqI2Pzmv4pl8dX473SmpY.jpeg?width=320&crop=smart&auto=webp&s=121ecaabd0fdae0bc1fcee6564f329fba416e98b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aaVkDQ5Vpv_Ql7M-vtdYTTHqI2Pzmv4pl8dX473SmpY.jpeg?auto=webp&s=07efb7bf44bd29442e6e812bc193dd2815912f9b', 'width': 480}, 'variants': {}}]} |
We built a new architecture (TOPAS) to replace standard Transformers for reasoning tasks. Paper + Demo enclosed. | 1 | [removed] | 2025-11-25T17:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/1p6ippg/we_built_a_new_architecture_topas_to_replace/ | Doug_Bitterbot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6ippg | false | null | t3_1p6ippg | /r/LocalLLaMA/comments/1p6ippg/we_built_a_new_architecture_topas_to_replace/ | false | false | self | 1 | null |
Ask Reddit: should the M1 GPU work for llama.cpp and PyTorch inference? | 1 | [removed] | 2025-11-25T17:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/1p6ilhc/ask_reddit_should_the_m1_gpu_work_for_llamacpp/ | parenthethethe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6ilhc | false | null | t3_1p6ilhc | /r/LocalLLaMA/comments/1p6ilhc/ask_reddit_should_the_m1_gpu_work_for_llamacpp/ | false | false | self | 1 | null |
Does gpt-oss:20b’s thinking output cause more confusion than help in multi-step tasks? | 0 | I have been experimenting with gpt-oss:20b on Ollama for building and running local background agents.
# What works
Creating simple agents work well. The model creates basic agent files correctly and the flow is clean. Attached is a quick happy path clip.
On my M5 MacBook Pro it also feels very snappy. It is noticeably faster than when I tried it on M2 Pro sometime back. The best case looks promising.
# What breaks
As soon as I try anything that involves multiple agents and multiple steps, the model becomes unreliable. For example, creating a workflow for producing a NotebookLM type podcast from tweets using ElevenLabs and ffmpeg works reliably with GPT-5.1, but breaks down completely with gpt-oss:20b.
The failures I see include:
* forgetting earlier steps
* getting stuck in loops
* mixing tool instructions with content
* losing track of state across turns
Bottom line: it often produces long chains of thinking tokens and then loses the original task.
I am implementing system\_reminders from this blog to see if it helps:
[https://medium.com/@outsightai/peeking-under-the-hood-of-claude-code-70f5a94a9a62](https://medium.com/@outsightai/peeking-under-the-hood-of-claude-code-70f5a94a9a62).
Would something like this help? | 2025-11-25T17:02:06 | https://v.redd.it/oz01ix8qjf3g1 | Prestigious_Peak_773 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6hvmc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/oz01ix8qjf3g1/DASHPlaylist.mpd?a=1766682142%2CNjliZjRiYjNlMTdlZTE0MjhjOWMxNGRiZmU0YzE2ZDc0ZDAzOGEyMWU1YjcwYjgyYTdkMjlhMWE4ZGIwMmVhMg%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/oz01ix8qjf3g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/oz01ix8qjf3g1/HLSPlaylist.m3u8?a=1766682142%2CNmUwMDk0MWY5YmQwNzNhYWMyZjcyN2QxNDQxNWVlYWZkOTVhNjA0MmRhMjk5NTJjMTVkN2M0NWEzNjMxZjk0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oz01ix8qjf3g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1688}} | t3_1p6hvmc | /r/LocalLLaMA/comments/1p6hvmc/does_gptoss20bs_thinking_output_cause_more/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MjJpdWQ2OXFqZjNnMeR5_q9TRZ2pJAC9CBJV_zWA0pmyf-jie6mPu1zhtc1X', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/MjJpdWQ2OXFqZjNnMeR5_q9TRZ2pJAC9CBJV_zWA0pmyf-jie6mPu1zhtc1X.png?width=108&crop=smart&format=pjpg&auto=webp&s=4611db1cc797858f57fbdfd2c86f31b29bdfa2e2', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/MjJpdWQ2OXFqZjNnMeR5_q9TRZ2pJAC9CBJV_zWA0pmyf-jie6mPu1zhtc1X.png?width=216&crop=smart&format=pjpg&auto=webp&s=0ac934681032ad3b95abc648e4f0c4defe5f5e20', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/MjJpdWQ2OXFqZjNnMeR5_q9TRZ2pJAC9CBJV_zWA0pmyf-jie6mPu1zhtc1X.png?width=320&crop=smart&format=pjpg&auto=webp&s=ed79e4a075b629f7e88e6abad6b00dd931d83ece', 'width': 320}, {'height': 409, 'url': 'https://external-preview.redd.it/MjJpdWQ2OXFqZjNnMeR5_q9TRZ2pJAC9CBJV_zWA0pmyf-jie6mPu1zhtc1X.png?width=640&crop=smart&format=pjpg&auto=webp&s=98fb2dcf9727447265a3f48e9ac33ab2397c1775', 'width': 640}, {'height': 614, 'url': 'https://external-preview.redd.it/MjJpdWQ2OXFqZjNnMeR5_q9TRZ2pJAC9CBJV_zWA0pmyf-jie6mPu1zhtc1X.png?width=960&crop=smart&format=pjpg&auto=webp&s=4fc72d959ebfe7d76f008f638018a68331e10424', 'width': 960}, {'height': 690, 'url': 'https://external-preview.redd.it/MjJpdWQ2OXFqZjNnMeR5_q9TRZ2pJAC9CBJV_zWA0pmyf-jie6mPu1zhtc1X.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8c2f3f28cd623b047e356607527ec8a66e5b5dff', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MjJpdWQ2OXFqZjNnMeR5_q9TRZ2pJAC9CBJV_zWA0pmyf-jie6mPu1zhtc1X.png?format=pjpg&auto=webp&s=ef6bcca5856e0bcc55fc1df954cc7d23f8156916', 'width': 1688}, 'variants': {}}]} | |
Flux 2 can be run on 24gb vram!!! | 374 | I dont know why people are complaining...... | 2025-11-25T16:59:49 | Brave-Hold-9389 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6ht87 | false | null | t3_1p6ht87 | /r/LocalLLaMA/comments/1p6ht87/flux_2_can_be_run_on_24gb_vram/ | false | false | default | 374 | {'enabled': True, 'images': [{'id': 'm9ud0rs8pf3g1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/m9ud0rs8pf3g1.png?width=108&crop=smart&auto=webp&s=7b8e4796c2e673ecb09891ade58d85e9915a3158', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/m9ud0rs8pf3g1.png?width=216&crop=smart&auto=webp&s=37ae3e374e0dfcb34fb99a2630da33407ad852a7', 'width': 216}, {'height': 169, 'url': 'https://preview.redd.it/m9ud0rs8pf3g1.png?width=320&crop=smart&auto=webp&s=e7df1118577dbe8771ee07c116d3a202b8f40478', 'width': 320}, {'height': 338, 'url': 'https://preview.redd.it/m9ud0rs8pf3g1.png?width=640&crop=smart&auto=webp&s=815d30594ab759659c5d269629ebb9cd5bd93a40', 'width': 640}, {'height': 507, 'url': 'https://preview.redd.it/m9ud0rs8pf3g1.png?width=960&crop=smart&auto=webp&s=7da3ddbeaa9a7e2aa6fba62b9e4c139e1a4c7a0c', 'width': 960}, {'height': 571, 'url': 'https://preview.redd.it/m9ud0rs8pf3g1.png?width=1080&crop=smart&auto=webp&s=e22d5092e78070cd118b648508e1373df6431c25', 'width': 1080}], 'source': {'height': 571, 'url': 'https://preview.redd.it/m9ud0rs8pf3g1.png?auto=webp&s=1114ed3dd921cdb1aa7a0e75d11429f197483288', 'width': 1080}, 'variants': {}}]} | |
Sharing my poor experience with Apple's foundation models, positive experiences with Qwen3 8b model, and self hosting it all on an old Mac mini for a website I created | 4 | 2025-11-25T16:58:29 | busymom0 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p6hrzg | false | null | t3_1p6hrzg | /r/LocalLLaMA/comments/1p6hrzg/sharing_my_poor_experience_with_apples_foundation/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'ado2pjutof3g1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/ado2pjutof3g1.png?width=108&crop=smart&auto=webp&s=d405da73b7cd12c5b7fe59679cd2e34ea7097243', 'width': 108}, {'height': 242, 'url': 'https://preview.redd.it/ado2pjutof3g1.png?width=216&crop=smart&auto=webp&s=4b32303b39ccbc26680e9ab0069f34712e7fd7c4', 'width': 216}, {'height': 359, 'url': 'https://preview.redd.it/ado2pjutof3g1.png?width=320&crop=smart&auto=webp&s=6dff7052d5d81fc8b5931c8efb58232489efc6ab', 'width': 320}, {'height': 719, 'url': 'https://preview.redd.it/ado2pjutof3g1.png?width=640&crop=smart&auto=webp&s=f74f6399121da4d91a20bc417ad36c3e0821f165', 'width': 640}, {'height': 1079, 'url': 'https://preview.redd.it/ado2pjutof3g1.png?width=960&crop=smart&auto=webp&s=60f8689b95f5a4cc2899347c464ec9b830c2b616', 'width': 960}, {'height': 1213, 'url': 'https://preview.redd.it/ado2pjutof3g1.png?width=1080&crop=smart&auto=webp&s=284abca62857443306202cc2392a790c55232095', 'width': 1080}], 'source': {'height': 1323, 'url': 'https://preview.redd.it/ado2pjutof3g1.png?auto=webp&s=4dc62ceffd739723487847180deb219be7d6052a', 'width': 1177}, 'variants': {}}]} | ||
💡 The Future of Local AI is Here: Open Source Launch of LLMUI Core! | 1 | [removed] | 2025-11-25T16:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/1p6hcz0/the_future_of_local_ai_is_here_open_source_launch/ | Budget_Carpenter_297 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6hcz0 | false | null | t3_1p6hcz0 | /r/LocalLLaMA/comments/1p6hcz0/the_future_of_local_ai_is_here_open_source_launch/ | false | false | self | 1 | null |
Ryzen AI and Radeon are ready to run LLMs Locally with Lemonade Software | 132 | 2025-11-25T16:36:04 | https://www.amd.com/en/developer/resources/technical-articles/2025/ryzen-ai-radeon-llms-with-lemonade.html | jfowers_amd | amd.com | 1970-01-01T00:00:00 | 0 | {} | 1p6h63t | false | null | t3_1p6h63t | /r/LocalLLaMA/comments/1p6h63t/ryzen_ai_and_radeon_are_ready_to_run_llms_locally/ | false | false | default | 132 | {'enabled': False, 'images': [{'id': 'JNLQ1TFljv7ecmJGVkBQs2GgRQ8E7p3qY9QpUEtPmD8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JNLQ1TFljv7ecmJGVkBQs2GgRQ8E7p3qY9QpUEtPmD8.jpeg?width=108&crop=smart&auto=webp&s=2a8bd9723240727ed7265a1520f2267f09d78dd2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/JNLQ1TFljv7ecmJGVkBQs2GgRQ8E7p3qY9QpUEtPmD8.jpeg?width=216&crop=smart&auto=webp&s=6e12d5dfd96576926e6efbc3bdd1b5c1647ee1a1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/JNLQ1TFljv7ecmJGVkBQs2GgRQ8E7p3qY9QpUEtPmD8.jpeg?width=320&crop=smart&auto=webp&s=e675bd73be15507ec608f034aac165e76ad5bf84', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/JNLQ1TFljv7ecmJGVkBQs2GgRQ8E7p3qY9QpUEtPmD8.jpeg?width=640&crop=smart&auto=webp&s=88a44b03e9ee2eeae1607d08f00e67f73244cead', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/JNLQ1TFljv7ecmJGVkBQs2GgRQ8E7p3qY9QpUEtPmD8.jpeg?width=960&crop=smart&auto=webp&s=ce3ab6497021f5242e4cece27f573b4e9b2f892c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/JNLQ1TFljv7ecmJGVkBQs2GgRQ8E7p3qY9QpUEtPmD8.jpeg?width=1080&crop=smart&auto=webp&s=aff07722c1af0bfcf61a93dfd2e98938ba6e1aad', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/JNLQ1TFljv7ecmJGVkBQs2GgRQ8E7p3qY9QpUEtPmD8.jpeg?auto=webp&s=6a8f1eec1be3946d7d1e5a2724cf3bffc61340ed', 'width': 1200}, 'variants': {}}]} | |
Best model for pose estimation with multiple webcams? | 1 | Best model for pose estimation with multiple webcams? | 2025-11-25T16:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p6h62u/best_model_for_pose_estimation_with_multiple/ | Commercial-Ad-1148 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6h62u | false | null | t3_1p6h62u | /r/LocalLLaMA/comments/1p6h62u/best_model_for_pose_estimation_with_multiple/ | false | false | self | 1 | null |
How to run Kimi-Linear with vLLM | 0 | command: --model cyankiwi/Kimi-Linear-48B-A3B-Instruct-AWQ-4bit --port 80 --enforce-eager --kv-cache-dtype fp8_e4m3 --tensor-parallel-size 2 --enable-expert-parallel --enable-prefix-caching --max-num-seqs 1 --max-model-len 5000 --gpu_memory_utilization 0.80 --trust-remote-code --served-model-name "default" --cpu-offload-gb 12
I am running it using above command but it is failing , complaining
inference-1 | **(Worker\_TP0\_EP0 pid=176)** ERROR 11-25 08:32:00 \[multiproc\_executor.py:743\] ValueError: Selected backend AttentionBackendEnum.FLASHINFER is not valid for this configuration. Reason: \['head\_size not supported',
'MLA not supported'\]
Disbling FlashINFER dosent work too. | 2025-11-25T16:33:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p6h35x/how_to_run_kimilinear_with_vllm/ | Voxandr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6h35x | false | null | t3_1p6h35x | /r/LocalLLaMA/comments/1p6h35x/how_to_run_kimilinear_with_vllm/ | false | false | self | 0 | null |
How to run Kimi-Linear with vLLM | 0 | command: --model cyankiwi/Kimi-Linear-48B-A3B-Instruct-AWQ-4bit --port 80 --enforce-eager --kv-cache-dtype fp8_e4m3 --tensor-parallel-size 2 --enable-expert-parallel --enable-prefix-caching --max-num-seqs 1 --max-model-len 5000 --gpu_memory_utilization 0.80 --trust-remote-code --served-model-name "default" --cpu-offload-gb 12
I am running it using above command but it is failing , complaining
inference-1 | **(Worker\_TP0\_EP0 pid=176)** ERROR 11-25 08:32:00 \[multiproc\_executor.py:743\] ValueError: Selected backend AttentionBackendEnum.FLASHINFER is not valid for this configuration. Reason: \['head\_size not supported',
'MLA not supported'\]
Disbling FlashINFER dosent work too. | 2025-11-25T16:32:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p6h33i/how_to_run_kimilinear_with_vllm/ | Voxandr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6h33i | false | null | t3_1p6h33i | /r/LocalLLaMA/comments/1p6h33i/how_to_run_kimilinear_with_vllm/ | false | false | self | 0 | null |
LLaDA2.0 (103B/16B) has been released | 239 | **LLaDA2.0-flash** is a diffusion language model featuring a 100BA6B Mixture-of-Experts (MoE) architecture. As an enhanced, instruction-tuned iteration of the LLaDA2.0 series, it is optimized for practical applications.
**LLaDA2.0-mini** is a diffusion language model featuring a 16BA1B Mixture-of-Experts (MoE) architecture. As an enhanced, instruction-tuned iteration of the LLaDA series, it is optimized for practical applications.
llama.cpp support in progress [https://github.com/ggml-org/llama.cpp/pull/17454](https://github.com/ggml-org/llama.cpp/pull/17454) | 2025-11-25T16:21:54 | https://www.reddit.com/r/LocalLLaMA/comments/1p6gsjh/llada20_103b16b_has_been_released/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6gsjh | false | null | t3_1p6gsjh | /r/LocalLLaMA/comments/1p6gsjh/llada20_103b16b_has_been_released/ | false | false | self | 239 | {'enabled': False, 'images': [{'id': '7W9HERnT31QkR1cU4DZF8Q3-p0ugyaVUFGiUG4ybkxI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7W9HERnT31QkR1cU4DZF8Q3-p0ugyaVUFGiUG4ybkxI.png?width=108&crop=smart&auto=webp&s=b75d0c2011e4078afb9ac7fc534c5c9675b89a1e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7W9HERnT31QkR1cU4DZF8Q3-p0ugyaVUFGiUG4ybkxI.png?width=216&crop=smart&auto=webp&s=42b709693a27017102db904ede6f6ee45dfaa713', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7W9HERnT31QkR1cU4DZF8Q3-p0ugyaVUFGiUG4ybkxI.png?width=320&crop=smart&auto=webp&s=6bb622d9d27391735ab4d506fe5d00aea4fce120', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7W9HERnT31QkR1cU4DZF8Q3-p0ugyaVUFGiUG4ybkxI.png?width=640&crop=smart&auto=webp&s=e1a2beb695b91cef061810cd5c570858851992ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7W9HERnT31QkR1cU4DZF8Q3-p0ugyaVUFGiUG4ybkxI.png?width=960&crop=smart&auto=webp&s=24c244969ad7d3c8a4b808f2af8eec51c8c9c00b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7W9HERnT31QkR1cU4DZF8Q3-p0ugyaVUFGiUG4ybkxI.png?width=1080&crop=smart&auto=webp&s=d9352611c7a5485fd781136bb083649281e4269f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7W9HERnT31QkR1cU4DZF8Q3-p0ugyaVUFGiUG4ybkxI.png?auto=webp&s=d35eb21706781861d0869de9e60a32d3b8485da3', 'width': 1200}, 'variants': {}}]} |
I tested a few local hosted coding models with VSCode / cline so that you don't have to | 41 | Been running a bunch of "can I actually code with a local model in VS Code?" experiments over the last weeks, focused on task with moderate complexity. I chose simple, well known games as they help to visualise strengths and shortcomings of the results quite easily, also to a layperson. The tasks at hand: Space Invaders & Galaga in a single HTML file. I also did a more serious run with a \~2.3k- word design doc.
Sharing the main takeaways here for anyone trying to use local models with Cline/Ollama for real coding work, not just completions.
**Setup:** Ubuntu 24.04, 2x 4060 Ti 16 GB (32 GB total VRAM), VS Code + Cline, models served via Ollama / GGUF. Context for local models was usually \~96k tokens (anything much bigger spilled into RAM and became 7-20x slower). Tasks ranged from YOLO prompts ("Write a Space Invaders game in a single HTML file") to a moderately detailed spec for a modernized Space Invaders.
**Headline result:** Qwen 3 Coder 30B is the only family I tested that consistently worked well with Cline and produced usable games. At 4-bit it's already solid; quality drops noticeably at 3-bit and 2-bit (more logic bugs, more broken runs). With 4-bit and 32 GB VRAM you can keep \~ 100k context and still be reasorably fast. If you can spare more VRAM or live with reduced context, higher-bit Qwen 3 Coder (e.g. 6-bit) does help. But 4-bit is the practical sweet spot for 32 GiB VRAM.
Merges/prunes of Qwen 3 Coder generally underperformed the original. The cerebras REAP 25B prune and YOYO merges were noticeably buggier and less reliable than vanilla Qwen 3 Coder 30B, even at higher bit widths. They sometimes produced runnable code, but with a much higher "Cline has to rerun / you have to hand-debug or giveup" rate. TL;DR: for coding, the unmodified coder models beat their fancy descendants.
Non-coder 30B models and "hot" general models mostly disappointed in this setup. Qwen 3 30B (base/instruct from various sources), devstral 24B, Skyfall 31B v4, Nemotron Nano 9B v2, and Olmo 3 32B either: (a) fought with Cline (rambling, overwriting their own code, breaking the project), or (b) produced very broken game logic that wasn't fixable in one or two debug rounds. Some also forced me to shrink context so much they stopped being interesting for larger tasks.
**Guiding the models:** I wanted to demonstrate, with examples that can be shown to people without much insights, what development means: YOLO prompts ("Make me a Space Invaders / Galaga game") will produce widely varying results even for big online models, and doubly so for locals. See [this example](https://drmicrobit.github.io/lllm_suit/tests/01_SpaceInvaders_yolo/online/GPT5/t1/space_invaders.html) for an interesting YOLO from GPT-5, and [this example](https://drmicrobit.github.io/lllm_suit/tests/01_SpaceInvaders_yolo/online/Opus41/t2/space_invaders.html) for a barebone one from Opus 4.1. Models differ a lot in what they think "Space Invaders" or "Galaga" is, and leave out key features (bunkers, UFO, proper alien movement, etc.).
With a moderately detailed design doc, Qwen 3 Coder 30B can stick reasonably well to spec: [Example 1](https://drmicrobit.github.io/lllm_suit/tests/03_SpaceInvaders_ddoc01/local/qwen3-coder-30B-unsloth/6bitUD_t1/space_invaders.html), [Example 2](https://drmicrobit.github.io/lllm_suit/tests/03_SpaceInvaders_ddoc01/local/qwen3-coder-30B-unsloth/4bitUD_t1/space_invaders.html), [Example 3](https://drmicrobit.github.io/lllm_suit/tests/03_SpaceInvaders_ddoc01/local/qwen3-coder-30B-unsloth/4bit_t2/space_invaders.html). They still tend to repeat certain logic errors (e.g., invader formation movement, missing config entries) and often can't fix them from a high-level bug description without human help.
**My current working hypothesis:** to do enthusiast-level Al-assisted coding in VS Code with Cline, one really needs to have at least 32 GB VRAM for usable models. Preferably use an untampered Qwen 3 Coder 30B (Ollama's default 4-bit, or an unsloth GGUF at 4-6 bits). Avoid going below 4-bit for coding, be wary of fancy merges/prunes, and don't expect miracles without a decent spec.
I documented all runs (code + notes) in a repo on GitHub ([https://github.com/DrMicrobit/lllm\_suit](https://github.com/DrMicrobit/lllm_suit)) if anyone's interested in. The docs there are linked and, going down the experiments, give an idea of what the results looked like with an image and have direct links runnable HTML files, configs, and model variants.
I'd be happy to hear what others think of this kind of simple experimental evaluation, or what other models I could test. | 2025-11-25T16:21:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p6gruv/i_tested_a_few_local_hosted_coding_models_with/ | DrMicrobit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6gruv | false | null | t3_1p6gruv | /r/LocalLLaMA/comments/1p6gruv/i_tested_a_few_local_hosted_coding_models_with/ | false | false | self | 41 | null |
JanV1-Q8 still cant answer some basic of questions | 0 | From a post 3 months ago ([link](https://www.reddit.com/r/LocalLLaMA/comments/1mov3d9/i_tried_the_janv1_model_released_today_and_here/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)), OP showed how broken JanV1 was. Emre from Jan replied and suggested using Q8 with adjusted parameters and Serper tool, Emre then attached a few screenshots in which they ran the exact same question as OP and gave a correct answer.
I tried to replicate it today with the same Model, parameters and questions and I was given the wrong answer. I asked the same question about the GDP of US
https://preview.redd.it/8hyeapkv9f3g1.png?width=834&format=png&auto=webp&s=96bcdc36a9b271bc3c2f62c94520e9883d29bf8e
I then asked about the stock price of Nvidia
https://preview.redd.it/0rsf2xncaf3g1.png?width=700&format=png&auto=webp&s=cf4841f22027c7ab82fe0c861f626e54fbcec27a
| 2025-11-25T15:36:43 | https://www.reddit.com/r/LocalLLaMA/comments/1p6fklb/janv1q8_still_cant_answer_some_basic_of_questions/ | choxxolatee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6fklb | false | null | t3_1p6fklb | /r/LocalLLaMA/comments/1p6fklb/janv1q8_still_cant_answer_some_basic_of_questions/ | false | false | 0 | null | |
AIMusubi – Local-First Agentic Automation Framework for Real Infrastructure | 0 | AIMusubi is a local-first open-source agentic system built to connect LLMs to real infrastructure (Cisco/Arista/VyOS) using unified intents and observability.
GitHub: [https://github.com/aimusubi/aimusubi](https://github.com/aimusubi/aimusubi)
Demo: [https://youtu.be/JpUCajiYZgI?si=ax2tO2oba6\_S1uM\_](https://youtu.be/JpUCajiYZgI?si=ax2tO2oba6_S1uM_) | 2025-11-25T15:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p6etz2/aimusubi_localfirst_agentic_automation_framework/ | AImusubi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6etz2 | false | null | t3_1p6etz2 | /r/LocalLLaMA/comments/1p6etz2/aimusubi_localfirst_agentic_automation_framework/ | false | false | self | 0 | null |
cyankiwi AWQ v1.0 | 19 | Thank you for using my model from my personal account cpatonn so far. I am happy to introduce cyankiwi AWQ v1.0 with 4bit quantized model achieving accuracy degradation of less than 1%, an improvement from my AWQ quants on my personal account cpatonn. cyankiwi AWQ v1.0 models will be labelled in our modelcards.
The following table compares wikitext byte perplexity (lower is better) of some cyankiwi AWQ v1.0 quantized models. Perplexity increases range from negatives (decreases) to 0.6%!
||Base Model|cyankiwi AWQ 8bit|cyankiwi AWQ 4bit|
|:-|:-|:-|:-|
|**Qwen3-Next-80B-A3B-Instruct**|1.48256|1.48258|1.48602|
|**Kimi-Linear-48B-A3B-Instruct**|1.54038|1.54041|1.54194|
|**MiniMax-M2**|1.54984||1.54743|
|**ERNIE-4.5-VL-28B-A3B-Thinking**|1.80803|1.80776|1.79795|
Please, please and please let me know your thoughts on my prior quants, and what you expect in the future, as I always aim to improve my products! For more complex queries or feedback, please get in touch with me at ton@cyan.kiwi. | 2025-11-25T14:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p6eb24/cyankiwi_awq_v10/ | _cpatonn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6eb24 | false | null | t3_1p6eb24 | /r/LocalLLaMA/comments/1p6eb24/cyankiwi_awq_v10/ | false | false | self | 19 | null |
Calling a Finetune/LoRA Wizard: Need Dataset Tips for RP Model | 8 | Hey everyone,
I've always wanted to do my own fine-tune/LoRA/QLoRA and I'm trying to get a better sense of the dataset size needed. The plan is to build a dataset in a specific style, but before committing time (and money), I'd really like to get a better sense of how to start properly without overshooting or undershooting.
Let's assume:
* We want to fine-tune a \~12B base model using a new clean dataset
* To make a general roleplay model, not tied to a single character, but with a certain structure
When we ignore the technical part and focus on creating the dataset in theory, for this kind of project, what's a good starting point? 30k examples in the dataset? More? Less?
If anyone has experience or resources they can share, that would be amazing (even rules of thumb). Or maybe a legendary finetuner around who can offer some guidance or practical tips on planning the dataset? If there's interest, I would also document my journey. | 2025-11-25T14:21:16 | https://www.reddit.com/r/LocalLLaMA/comments/1p6dnkg/calling_a_finetunelora_wizard_need_dataset_tips/ | AmpedHorizon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6dnkg | false | null | t3_1p6dnkg | /r/LocalLLaMA/comments/1p6dnkg/calling_a_finetunelora_wizard_need_dataset_tips/ | false | false | self | 8 | null |
Daily AI news YouTube video synthesis pipeline using GLM-4.6 and gpt-oss-120b | 0 | AI keeps accelerating, and it's honestly becoming impossible to keep up with every paper and release manually.
I built a Python pipeline to automate daily AI news curation, going from raw scraping to a final rendered `.mp4` without human intervention. The first video is now on YouTube –– check it out!
I wanted to share the specific model stack I landed on, specifically for routing tasks based on model strengths rather than using one giant model.
**The Architecture:**
* Filtering & Logic: `openai/gpt-oss-120b` (via OpenRouter).
* Used to process the raw scraped data (Google News/Reddit). It handles the large context window effectively to filter marketing fluff from research papers.
* Visuals & Code: `z-ai/glm-4.6`.
* Used to generate the HTML/CSS for the video slides. I found it adheres to strict HTML templating (div containers/classes) better than 4o-mini or Llama 3.1 70B.
* Verification: `xAI Grok 4.1 Fast` (via API).
* Used strictly as a cross-reference tool to prevent hallucinations on "breaking" news.
* Assets: `Gemini 3 Pro` + `Playwright`.
* Gemini handles image context analysis for thumbnails; Playwright handles the rendering. (Hope to use `Qwen-Image-Edit-2511`?)
* Assembly: FFmpeg + ElevenLabs (TTS) (Too bad Qwen3-TTS was closed source)
**Workflow:**
Scrape sources -> gpt-oss-120b Structuring -> GLM-4.6 Slide Gen -> TTS -> FFmpeg Stitching. | 2025-11-25T14:18:34 | https://www.youtube.com/watch?v=NNft3t-D3qc | Mysterious_Finish543 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1p6dl7j | false | {'oembed': {'author_name': 'Gradient Update', 'author_url': 'https://www.youtube.com/@gradientupdate', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/NNft3t-D3qc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Daily Epoch AI News: Claude Opus 4.5, GPT 5 in Copilot, Azure Blackwell GPUs, and more!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/NNft3t-D3qc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Daily Epoch AI News: Claude Opus 4.5, GPT 5 in Copilot, Azure Blackwell GPUs, and more!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1p6dl7j | /r/LocalLLaMA/comments/1p6dl7j/daily_ai_news_youtube_video_synthesis_pipeline/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'BYTR-25me9I4owwYkB_DXZIESV4B7Hkqt_21GnedFTo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BYTR-25me9I4owwYkB_DXZIESV4B7Hkqt_21GnedFTo.jpeg?width=108&crop=smart&auto=webp&s=fa1fc931f1bdcad2177e303a10312e72e2195e39', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/BYTR-25me9I4owwYkB_DXZIESV4B7Hkqt_21GnedFTo.jpeg?width=216&crop=smart&auto=webp&s=34eae5a273dc57a87aea85271dddb1a8362967d0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/BYTR-25me9I4owwYkB_DXZIESV4B7Hkqt_21GnedFTo.jpeg?width=320&crop=smart&auto=webp&s=cd06d0f043c8daa8ae2786460fe3be001ccf9f47', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/BYTR-25me9I4owwYkB_DXZIESV4B7Hkqt_21GnedFTo.jpeg?auto=webp&s=7d0a2981190ca78942f2c218400b2f2d7ccdf0f0', 'width': 480}, 'variants': {}}]} |
I got tired of my Al context being trapped in silos, so I drafted an open schema (PMX) for portable memory between LLMs. | 0 | I have been running into a frustrating issues on Al workflows: Context Fragmentation.
If I work on a project or do a discussion on ChatGPT and then plan to switch to Gemini or Claude for better reasoning or coding the other Al doesn't know it. If I switch tools, I lose my long-term memory
Each app stores context in a different shape
We have standard formats for everything else (Markdown for notes, JSON for data), but we don't have a standard for "User Context" that includes vector metadata, source provenance, and attachments.
So, I drafted a proposal for a scherma called PMX (Protocol for Memory Exchange).
The idea:
* Portable: context lives in your DB (ex: Postgres + pgvector) and not locked in an app
* Structured: supports text, vector metadata, attachments and source.
* Agnostic: works with local models (LLAMA, Qwen, Mistral), or remote (Gemini, Claude, GPT)
I am sharing it to get feedback from people who've built local RAG systems or agentic workflows.
Has anyone else tried standardizing their RAG context? Would love to hear how you handle data for your AI systems.
Deep dive here: https://www.memside.com/blog/breaking-ai-context-silos-pmx-protocol
| 2025-11-25T14:14:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p6dhyf/i_got_tired_of_my_al_context_being_trapped_in/ | PeatedW | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6dhyf | false | null | t3_1p6dhyf | /r/LocalLLaMA/comments/1p6dhyf/i_got_tired_of_my_al_context_being_trapped_in/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nvoQqHCFmc1B7LfloT-ehN3uhH7VL-kGJZRrR2_Nd8c', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/nvoQqHCFmc1B7LfloT-ehN3uhH7VL-kGJZRrR2_Nd8c.png?width=108&crop=smart&auto=webp&s=25f5fe234d03107585e686c0d47d7f4ed11a6aa8', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/nvoQqHCFmc1B7LfloT-ehN3uhH7VL-kGJZRrR2_Nd8c.png?width=216&crop=smart&auto=webp&s=4a2ec90cb2e1c6a759c49e705ddf74c8059e945d', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/nvoQqHCFmc1B7LfloT-ehN3uhH7VL-kGJZRrR2_Nd8c.png?width=320&crop=smart&auto=webp&s=3cfce5c814d470b073890b1b42aed977d03c09e6', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/nvoQqHCFmc1B7LfloT-ehN3uhH7VL-kGJZRrR2_Nd8c.png?width=640&crop=smart&auto=webp&s=b7c65dfd13f1c98e891a6e36a9392a4be21a3701', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/nvoQqHCFmc1B7LfloT-ehN3uhH7VL-kGJZRrR2_Nd8c.png?width=960&crop=smart&auto=webp&s=5aa13cdc1b7ce14405fd92054fcf0d67a1ff5997', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/nvoQqHCFmc1B7LfloT-ehN3uhH7VL-kGJZRrR2_Nd8c.png?width=1080&crop=smart&auto=webp&s=80bba22cfbb2ceedd7de7e565764308f91da5409', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/nvoQqHCFmc1B7LfloT-ehN3uhH7VL-kGJZRrR2_Nd8c.png?auto=webp&s=e9062a31373a2db5ecc7dc31c75bcf7b249bca0e', 'width': 1536}, 'variants': {}}]} |
Devtool for running and benchmarking on-device AI | 0 | Hi!
We’re a group of deep learning engineers and embedded engineers who just built a new devtool as a response to some of the biggest pain points we’ve experienced when developing AI for on-device deployment.
It is a platform for developing and experimenting with on-device AI. It allows you to quantize, compile and benchmark models by running them on real edge devices in the cloud, so you don’t need to own the physical hardware yourself. You can then analyze and compare the results on the web. It also includes debugging tools, like layer-wise PSNR analysis.
Currently, the platform supports phones, devboards, and SoCs, and everything is completely free to use.
Link to the platform: [https://hub.embedl.com/?utm\_source=reddit](https://hub.embedl.com/?utm_source=reddit)
Since the platform is brand new, we're really focused on making sure it provides real value for developers and we want to learn from your projects so we can keep improving it. If you want help getting models running on-device, or if you have questions or suggestions, just reach out to us! | 2025-11-25T14:10:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p6degd/devtool_for_running_and_benchmarking_ondevice_ai/ | elinaembedl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6degd | false | null | t3_1p6degd | /r/LocalLLaMA/comments/1p6degd/devtool_for_running_and_benchmarking_ondevice_ai/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'eXyNTQw2fkbznTffXppHu_AOZE7teuZWWmIVkr-rBfE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eXyNTQw2fkbznTffXppHu_AOZE7teuZWWmIVkr-rBfE.png?width=108&crop=smart&auto=webp&s=7731e4bbedf0baebb47e5049f4c6c7e8d051c2d7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/eXyNTQw2fkbznTffXppHu_AOZE7teuZWWmIVkr-rBfE.png?width=216&crop=smart&auto=webp&s=eb6d5a851e3be2005e8a1248f983a0fef27fdc1e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/eXyNTQw2fkbznTffXppHu_AOZE7teuZWWmIVkr-rBfE.png?width=320&crop=smart&auto=webp&s=9776c8dde3496fbe38cb969802aaf9049fe1f40d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/eXyNTQw2fkbznTffXppHu_AOZE7teuZWWmIVkr-rBfE.png?width=640&crop=smart&auto=webp&s=d3924570644ba9fb01d8dcb6dcde36760ef88b03', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/eXyNTQw2fkbznTffXppHu_AOZE7teuZWWmIVkr-rBfE.png?width=960&crop=smart&auto=webp&s=063fdbadfda0503b8169ce359c5e9d09922f650e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/eXyNTQw2fkbznTffXppHu_AOZE7teuZWWmIVkr-rBfE.png?width=1080&crop=smart&auto=webp&s=fce7ded2db9b6f05b7aa325b8b78daf32f58016b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/eXyNTQw2fkbznTffXppHu_AOZE7teuZWWmIVkr-rBfE.png?auto=webp&s=91e66319142bf22664d44b0cb7ce51c29fd0f6d1', 'width': 1200}, 'variants': {}}]} |
Token Explosion in AI Agents | 0 | I've been measuring token costs in AI agents.
Built an AI agent from scratch. No frameworks. Because I needed bare-metal visibility into where every token goes. Frameworks are production-ready, but they abstract away cost mechanics. Hard to optimize what you can't measure.
━━━━━━━━━━━━━━━━━
🔍 THE SETUP
→ 6 tools (device metrics, alerts, topology queries)
→ gpt-4o-mini
→ Tracked tokens across 4 phases
━━━━━━━━━━━━━━━━━
📊 THE PHASES
Phase 1 → Single tool baseline. One LLM call. One tool executed. Clean measurement.
Phase 2 → Added 5 more tools. Six tools available. LLM still picks one. Token cost from tool definitions.
Phase 3 → Chained tool calls. 3 LLM calls. Each tool call feeds the next. No conversation history yet.
Phase 4 → Full conversation mode. 3 turns with history. Every previous message, tool call, and response replayed in each turn.
━━━━━━━━━━━━━━━━━
📈 THE DATA
Phase 1 (single tool): 590 tokens
Phase 2 (6 tools): 1,250 tokens → 2.1x growth
Phase 3 (3-turn workflow): 4,500 tokens → 7.6x growth
Phase 4 (multi-turn conversation): 7,166 tokens → 12.1x growth
━━━━━━━━━━━━━━━━━
💡 THE INSIGHT
Adding 5 tools doubled token cost.
Adding 2 conversation turns tripled it.
Conversation depth costs more than tool quantity. This isn't obvious until you measure it.
━━━━━━━━━━━━━━━━━
⚙️ WHY THIS HAPPENS
LLMs are stateless. Every call replays full context: tool definitions, conversation history, previous responses.
With each turn, you're not just paying for the new query. You're paying to resend everything that came before.
3 turns = 3x context replay = exponential token growth.
━━━━━━━━━━━━━━━━━
🚨 THE IMPLICATION
Extrapolate to production:
→ 70-100 tools across domains (network, database, application, infrastructure)
→ Multi-turn conversations during incidents
→ Power users running 50+ queries/day
Token costs don't scale linearly. They compound.
This isn't a prompt optimization or a model selection problem.
It's an architecture problem.
Token management isn't an add-on. It's a fundamental part of system design like database indexing or cache strategy.
Get it right and you see 5-10x cost advantage
━━━━━━━━━━━━━━━━━
🔧 WHAT'S NEXT
Testing below approaches:
→ Parallel tool execution
→ Conversation history truncation
→ Semantic routing
→ And many more in plan
Each targets a different part of the explosion pattern.
Will share results as I measure them.
━━━━━━━━━━━━━━━━━
https://preview.redd.it/4u4l2opose3g1.jpg?width=1400&format=pjpg&auto=webp&s=6efe29ae86dfdb43a5db41d5f70e52f772fabb17
| 2025-11-25T13:57:50 | https://www.reddit.com/r/LocalLLaMA/comments/1p6d3jk/token_explosion_in_ai_agents/ | darthjedibinks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6d3jk | false | null | t3_1p6d3jk | /r/LocalLLaMA/comments/1p6d3jk/token_explosion_in_ai_agents/ | false | false | 0 | null | |
What is the problems with llm's | 0 | When Ciso fear and ban llm (local llm from haging face , and remote ones like gpt), what are they fear from exactly?
Only stealing of data? If so, why not allow the local models?
In the end, a model is not a regular software, it's getting input and generate text output (or other format, depends on the type of model) isn't it? Feel kind of harmless.... | 2025-11-25T13:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/1p6cryr/what_is_the_problems_with_llms/ | LeftAssociation1119 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6cryr | false | null | t3_1p6cryr | /r/LocalLLaMA/comments/1p6cryr/what_is_the_problems_with_llms/ | false | false | self | 0 | null |
I built a multi-LLM arena in the browser. Models talk, vote, argue, and you plug in your own keys | 3 | Last week I teased a "Discord-style" UI for local/API models. I’ve cleaned up the code and deployed the beta.
**Link:** [modelarena.xyz](https://modelarena.xyz)
**The Tech:**
Everything runs client-side in your browser (Next.js). The only thing that touches a server is the **Multiplayer Routing** (which uses Supabase). You bring your own keys/endpoints.
**Core Features:**
* **Multiplayer Rooms:** You can create a room link and invite human friends to join the chat alongside the AI agents.
* **Agent Autonomy:** Models can generate polls, vote on them, and trigger `@leave` to exit the context if they want.
* **Full LaTeX Support:** Renders math and code blocks properly.
* **Local History:** All chat logs are stored locally in your browser. **(Tip: Click the "Model Arena" name in the top-left corner to access your Archives/History).**
**Support & Costs:**
I’ve added a small **"Support"** button on the site. Currently, I'm paying for the domain and using the Supabase free tier for the multiplayer connections. If this project gets popular, the support funds will go directly toward the Supabase bill and keeping the domain alive.
**Context:**
I’m 18 and built this to learn how to handle multi-agent states. Since it's on the free tier, you might hit rate limits on the multiplayer side, but local chat will always work.
Feedback on the architecture is welcome! | 2025-11-25T13:35:08 | https://www.reddit.com/gallery/1p6ckw8 | Kooky_Meaning_7168 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p6ckw8 | false | null | t3_1p6ckw8 | /r/LocalLLaMA/comments/1p6ckw8/i_built_a_multillm_arena_in_the_browser_models/ | false | false | 3 | null | |
How are Chinese AI models claiming such low training costs? Did some research | 180 | Doing my little assignment on model cost. deepseek claims $6M training cost. Everyones losing their minds cause ChatGPT-4 cost $40-80M and Gemini Ultra hit $190M.
Got curious if other Chinese models show similar patterns or if deepseeks just marketing bs.
What I found on training costs:
glm-4.6: $8-12M estimated
* 357B parameters (thats model size)
* More believable than deepseeks $6M but still way under Western models
Kimi K2-0905: $25-35M estimated
* 1T parameters total (MoE architecture, only \~32B active at once)
* Closer to Western costs but still cheaper
MiniMax**:** $15-20M estimated
* Mid-range model, mid-range cost
deepseek V3.2**:** $6M (their claim)
* Seems impossibly low for GPU rental + training time
Why the difference?
Training cost = GPU hours × GPU price + electricity + data costs.
Chinese models might be cheaper because:
* Cheaper GPU access (domestic chips or bulk deals)
* Lower electricity costs in China
* More efficient training methods (though this is speculation)
* Or theyre just lying about the real numbers
deepseeks $6M feels like marketing. You cant rent enough H100s for months and only spend $6M unless youre getting massive subsidies or cutting major corners.
glms $8-12M is more realistic. Still cheap compared to Western models but not suspiciously fake-cheap.
Kimi at $25-35M shows you CAN build competitive models for less than $100M+ but probably not for $6M.
Are these real training costs or are they hiding infrastructure subsidies and compute deals that Western companies dont get? | 2025-11-25T13:27:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p6cf2p/how_are_chinese_ai_models_claiming_such_low/ | Acrobatic_Solid6023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6cf2p | false | null | t3_1p6cf2p | /r/LocalLLaMA/comments/1p6cf2p/how_are_chinese_ai_models_claiming_such_low/ | false | false | self | 180 | null |
I built a local AI Red Teaming tool (Fuzzer + 280+ Payloads) because I hate monthly subscriptions. | 0 | I’ve been building local agents recently, but I realized they broke instantly whenever I tried to trick them or inject weird inputs (like Base64 encoded instructions).
I looked for testing tools, but everything was either a bare-bones repo that took hours to configure, or an Enterprise SaaS costing $500/mo that required sending my logs to the cloud.
So I built my own: **Agent Exam Pro**.
It’s a local-first Red Teaming platform that runs entirely on your machine.
**The Tech Stack:**
* **Mutation Fuzzer:** It takes a simple test case and runs it through 16 mutation strategies (Roleplay, Polyglot, Token Smuggling) to generate 1,200+ attack variations automatically.
* **Real-World Payloads:** I curated a database of 280+ real exploits (SQL Injection, XSS, and known Jailbreaks) to see if the agent blindly executes code.
* **AI Judge:** Instead of keyword matching, it uses a local LLM (Ollama compatible) or OpenAI to grade the response on safety.
* **Audit:** Logs every attack to a local SQLite database.
**Why I made it a product:** I hate subscriptions. This is a one-time purchase (£49) for the full Python source code and the payload database. No telemetry, no cloud dependencies.
If you are building agents for production and want to stress-test them locally, check it out link in comments | 2025-11-25T13:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p6c21o/i_built_a_local_ai_red_teaming_tool_fuzzer_280/ | Substantial_Ad5570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6c21o | false | null | t3_1p6c21o | /r/LocalLLaMA/comments/1p6c21o/i_built_a_local_ai_red_teaming_tool_fuzzer_280/ | false | false | self | 0 | null |
Can application layer improve local model output quality? | 0 | Hi -
I am building a terminal-native tool for code generation, and one of the recent updates was to package a local model (Qwen 2.5 Coder 7B, downloads on the first try). Initial response from users to this addition was favorable - but I have my doubts: the model is fairly basic and does not compare in quality to online offerings.
So - I am planning to improve RAG capabilities for building a message with relevant source file chunks, add a planning call, add validation loop, maybe have a multi-sample with re-ranking, etc.: all those techniques that are common and when implemented properly - could improve quality of output.
So - the question: I believe (hope?) that with all those things implemented - 7B can be bumped approximately to quality of a 20B, do you agree that's possible or do you think it would be a wasted effort and that kind of improvement would not happen?
The source is here - give it a star if you like what you see: [https://github.com/acrotron/aye-chat](https://github.com/acrotron/aye-chat) | 2025-11-25T13:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1p6bxaj/can_application_layer_improve_local_model_output/ | ayechat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6bxaj | false | null | t3_1p6bxaj | /r/LocalLLaMA/comments/1p6bxaj/can_application_layer_improve_local_model_output/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'hr65kAjocySGistnxKExFN_ZGLoviFOGZJibvkRJFNg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hr65kAjocySGistnxKExFN_ZGLoviFOGZJibvkRJFNg.png?width=108&crop=smart&auto=webp&s=a01fea6bd277d686f0210ece827af35a014d0489', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hr65kAjocySGistnxKExFN_ZGLoviFOGZJibvkRJFNg.png?width=216&crop=smart&auto=webp&s=74a3be682188b235aa6c5da9dfd7f92a3ce8a355', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hr65kAjocySGistnxKExFN_ZGLoviFOGZJibvkRJFNg.png?width=320&crop=smart&auto=webp&s=8babf3c4edfc2a755098be1fb4cc6716b4ba128a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hr65kAjocySGistnxKExFN_ZGLoviFOGZJibvkRJFNg.png?width=640&crop=smart&auto=webp&s=fd2fe28a01dcf88db0992fd09c6326c9178758ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hr65kAjocySGistnxKExFN_ZGLoviFOGZJibvkRJFNg.png?width=960&crop=smart&auto=webp&s=3bb046fa367be154081f00cf2c0b4ff84674db2b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hr65kAjocySGistnxKExFN_ZGLoviFOGZJibvkRJFNg.png?width=1080&crop=smart&auto=webp&s=a79aa23d3e369e700b5cc4dd18fe64a6e670601d', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/hr65kAjocySGistnxKExFN_ZGLoviFOGZJibvkRJFNg.png?auto=webp&s=13d74f59e964e7b94e4d88e9dc15063d93931f14', 'width': 1280}, 'variants': {}}]} |
I built a fully local Red Teaming tool to stress-test LLM Agents (Fuzzer + 280+ Payloads) | 1 | [removed] | 2025-11-25T13:01:33 | https://www.reddit.com/r/LocalLLaMA/comments/1p6bunb/i_built_a_fully_local_red_teaming_tool_to/ | Substantial_Ad5570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6bunb | false | null | t3_1p6bunb | /r/LocalLLaMA/comments/1p6bunb/i_built_a_fully_local_red_teaming_tool_to/ | false | false | self | 1 | null |
Please explain how to us VL in OWUI | 0 | i have Open Web UI , i have
unsloth/Qwen3-VL-8B-Instruct-GGUF & mmproj-F16.gguf
Im running the VL Model ... but what and how do i use the mmproj-F16.gguf so i can view images.
explain like a noob
[](https://huggingface.co/unsloth/Qwen3-VL-8B-Instruct-GGUF/resolve/main/mmproj-F16.gguf?download=true) | 2025-11-25T12:31:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p6b8rg/please_explain_how_to_us_vl_in_owui/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6b8rg | false | null | t3_1p6b8rg | /r/LocalLLaMA/comments/1p6b8rg/please_explain_how_to_us_vl_in_owui/ | false | false | self | 0 | null |
Are you using the SK2DECOMPILE model? | 0 | What would a decompilation AI agent using this model look like? Is it possible to use Bolt.new to create an app from decompilation? | 2025-11-25T11:56:22 | https://www.reddit.com/r/LocalLLaMA/comments/1p6ak3d/are_you_using_the_sk2decompile_model/ | Thin_Freedom3201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6ak3d | false | null | t3_1p6ak3d | /r/LocalLLaMA/comments/1p6ak3d/are_you_using_the_sk2decompile_model/ | false | false | self | 0 | null |
What’s your Open-source AI Labs Tier List? | 0 | 2025-11-25T11:56:18 | https://www.reddit.com/r/LocalLLaMA/comments/1p6ak1j/whats_your_opensource_ai_labs_tier_list/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p6ak1j | false | null | t3_1p6ak1j | /r/LocalLLaMA/comments/1p6ak1j/whats_your_opensource_ai_labs_tier_list/ | false | false | 0 | null | ||
We built an on-prem “AI Privacy Firewall” that lets enterprises use GPT/Claude/Gemini without exposing PII - here’s the architecture | 1 | We work in a region with strict data-residency laws (no OpenAI/Gemini outside the local country).
At the same time, insurers and banks *must* use LLMs to stay competitive.
So we built an **on-prem API-layer that acts as a privacy firewall for AI systems**. | 2025-11-25T11:31:03 | https://private-layer.ai/tech | Electrical_Play6841 | private-layer.ai | 1970-01-01T00:00:00 | 0 | {} | 1p6a4b2 | false | null | t3_1p6a4b2 | /r/LocalLLaMA/comments/1p6a4b2/we_built_an_onprem_ai_privacy_firewall_that_lets/ | false | false | default | 1 | null |
Any local/open model for the organic chemistry? | 0 | Hey,
I wanted to upskill in the organic chemistry. There is couple processes I would like to understand better and try to optimize them. Which model do you recommend local up to 16b, or larger available online for free? | 2025-11-25T11:02:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p69mxd/any_localopen_model_for_the_organic_chemistry/ | puszcza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p69mxd | false | null | t3_1p69mxd | /r/LocalLLaMA/comments/1p69mxd/any_localopen_model_for_the_organic_chemistry/ | false | false | self | 0 | null |
GLiNER2: Unified Schema-Based Information Extraction | 47 | GLiNER2 is an efficient, unified information extraction system that combines named entity recognition, text classification, and hierarchical structured data extraction into a single 205M-parameter model. Built on a pretrained transformer encoder architecture and trained on 254,334 examples of real and synthetic data, it achieves competitive performance with large language models while running efficiently on CPU hardware without requiring GPUs or external APIs.
The system uses a schema-based interface where users can define extraction tasks declaratively through simple Python API calls, supporting features like entity descriptions, multi-label classification, nested structures, and multi-task composition in a single forward pass.
Released as an open-source pip-installable library under Apache 2.0 license with pre-trained models on Hugging Face, GLiNER2 demonstrates strong zero-shot performance across benchmarks—achieving 0.72 average accuracy on classification tasks and 0.590 F1 on the CrossNER benchmark—while maintaining approximately 2.6× speedup over GPT-4o on CPU.
- Paper: https://arxiv.org/abs/2507.18546
- Code repo: https://github.com/fastino-ai/GLiNER2
- Install: https://pypi.org/project/gliner2 | 2025-11-25T10:43:25 | https://www.reddit.com/gallery/1p69bea | Balance- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p69bea | false | null | t3_1p69bea | /r/LocalLLaMA/comments/1p69bea/gliner2_unified_schemabased_information_extraction/ | false | false | 47 | null | |
Need help building a personal voice-call agent | 1 | im sort of new and im trying to build an agent (i know these already exist and are pretty good too) that can receive calls, speak, and log important information. basically like a call center agent for any agency. for my own customizability and local usage. how can i get the lowest latency possible with this pipeline: twilio -> whisper transcribe -> LLM -> melotts
these were the ones i found to be good quality + fast enough to feel realistic. please suggest any other stack/pipeline that can be improved and best algorithms and implementations | 2025-11-25T10:35:04 | https://www.reddit.com/r/LocalLLaMA/comments/1p696if/need_help_building_a_personal_voicecall_agent/ | Ecstatic-Biscotti-63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p696if | false | null | t3_1p696if | /r/LocalLLaMA/comments/1p696if/need_help_building_a_personal_voicecall_agent/ | false | false | self | 1 | null |
Need guidance for my final-year thesis using Small Language Models (SLMs), totally new to the field | 3 | I’m a final-year Computer Science undergrad and I’m completely new to the world of language models. For my bachelor’s thesis, I’m considering working with Small Language Models (SLMs) instead of large ones, mainly because of resource limits and the growing practicality of smaller models.
Since I’m just getting started, I’d really appreciate advice from people who have experience with SLMs, fine-tuning, or deploying compact models.
Some things I’m confused about:
1) Is choosing SLMs a realistic and solid topic for a bachelor’s thesis?
2) What are some beginner-friendly but meaningful directions I could take?
3) What kinds of projects or research ideas are actually doable on a student budget (local machine or small GPU access)?
4) Are there any frameworks, papers, or repos I should explore before committing?
Some ideas I’m exploring, but not sure if they’re good enough:
1) Fine-tuning a small model (like 1B to 3B parameters) for a domain-specific task
2) Comparing quantization techniques (GGUF, AWQ, GPTQ) and measuring performance differences
3) Building an on-device assistant or chatbot optimized for low-resource hardware
4) Exploring retrieval-augmented generation (RAG) setups for small models
5) Studying inference speed vs. accuracy trade-offs in SLMs
6) Evaluating how well SLMs perform in low-data or few-shot scenarios
If anyone can suggest good thesis angles, common pitfalls, or examples of past projects, that would help me a lot. I want to choose something that is practical, achievable, and academically strong enough for a final-year thesis.
Thanks in advance! 🙏 | 2025-11-25T10:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p68w4w/need_guidance_for_my_finalyear_thesis_using_small/ | Puzzleheaded_Tie8127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p68w4w | false | null | t3_1p68w4w | /r/LocalLLaMA/comments/1p68w4w/need_guidance_for_my_finalyear_thesis_using_small/ | false | false | self | 3 | null |
tencent/HunyuanOCR-1B | 154 | 2025-11-25T10:10:33 | https://huggingface.co/tencent/HunyuanOCR | nullmove | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1p68sjf | false | null | t3_1p68sjf | /r/LocalLLaMA/comments/1p68sjf/tencenthunyuanocr1b/ | false | false | default | 154 | {'enabled': False, 'images': [{'id': 'euNO2VS0UsDEnKIxd8MnYm5CABYmnLN8JLKug1m_WZw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/euNO2VS0UsDEnKIxd8MnYm5CABYmnLN8JLKug1m_WZw.png?width=108&crop=smart&auto=webp&s=2123fbb30e64c510c3a2f1fda8e1fa69cdeafd0a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/euNO2VS0UsDEnKIxd8MnYm5CABYmnLN8JLKug1m_WZw.png?width=216&crop=smart&auto=webp&s=b8d2a5d173cb75d72caf85dc558a45010b562ef2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/euNO2VS0UsDEnKIxd8MnYm5CABYmnLN8JLKug1m_WZw.png?width=320&crop=smart&auto=webp&s=b806e1df654322431f96e892aa8ba6a681f88fb0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/euNO2VS0UsDEnKIxd8MnYm5CABYmnLN8JLKug1m_WZw.png?width=640&crop=smart&auto=webp&s=6451703bfbd1fab35e662fdb90c099a069b6d25b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/euNO2VS0UsDEnKIxd8MnYm5CABYmnLN8JLKug1m_WZw.png?width=960&crop=smart&auto=webp&s=d5fd7d1d48287767f22bce74630b48cad80b236c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/euNO2VS0UsDEnKIxd8MnYm5CABYmnLN8JLKug1m_WZw.png?width=1080&crop=smart&auto=webp&s=16a905e23eecca9dbeee447f7a4d4a8fa36fec3b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/euNO2VS0UsDEnKIxd8MnYm5CABYmnLN8JLKug1m_WZw.png?auto=webp&s=51650c64651a9990d414bb2ab623a8bea5c1ab57', 'width': 1200}, 'variants': {}}]} | |
How to make my TTS faster ? | 3 | hi guys
I try to make a TTS model for a demo
I need it to be fast, like what elevenlabs, livekit,vapi, retell all use
I built a simple one using
pytorch, and using librosa for audio processing
For cloning voice, I take something from scratch, I found in GitHub
the processing system takes 20 to 40 seconds and sometimes more.
Can anyone Give me tips ?
Should I use Coqui? I need performance
when
because it's only the step i need
STT works fin,e and ai returns a response, but TTS takes to long to return it
Thanks. | 2025-11-25T09:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p68l6w/how_to_make_my_tts_faster/ | Wonderful-Can-1597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p68l6w | false | null | t3_1p68l6w | /r/LocalLLaMA/comments/1p68l6w/how_to_make_my_tts_faster/ | false | false | self | 3 | null |
How to make TTS faster? | 1 | [removed] | 2025-11-25T09:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p68ktt/how_to_make_tts_faster/ | LatterExercise7281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p68ktt | false | null | t3_1p68ktt | /r/LocalLLaMA/comments/1p68ktt/how_to_make_tts_faster/ | false | false | self | 1 | null |
Which models (paid and local) are the best at creative writing? | 0 | I have some old scripts (60-100pages) I would like to work on. which paid or local llm is good for this?
I know back in the day Claude used to be the benchmark, but reading that recently they took off all the data due to Chinese RPrs abusing it and that it's not worth anymore for creative tasks. | 2025-11-25T09:47:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p68fal/which_models_paid_and_local_are_the_best_at/ | ThinkHog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p68fal | false | null | t3_1p68fal | /r/LocalLLaMA/comments/1p68fal/which_models_paid_and_local_are_the_best_at/ | false | false | self | 0 | null |
Inference cloud for regulated markets: looking for benchmarks | 3 | I'm building a product where every item uploaded will be crunched through many LLMs - vision/text etc. I expect a lot of photos coming in from the mobile app, and a lot of PDFs uploaded from the field.
Right now I have a limited compute -- it worked for development, but I'd like to scale it to make it feel more legit, without any on-demand sticker shock on my side.
Are there any decent benchmarks on all hardware out there, where practical stuff is benchmarked? Something like: for each reasonably popular algo A, for each hardware that the contributing user U run this benchmark on, report A and U?
I'm curious if anything can beat price/power/performance of Mac Minis, AMD 395+, 5060s etc. and going the other way: if I invested in RTX PRO 6000 Blackwell, with MIG, could I do docs at 2x speed etc.
| 2025-11-25T09:44:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p68dqn/inference_cloud_for_regulated_markets_looking_for/ | wkoszek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p68dqn | false | null | t3_1p68dqn | /r/LocalLLaMA/comments/1p68dqn/inference_cloud_for_regulated_markets_looking_for/ | false | false | self | 3 | null |
Is the llama.cpp webui in danger from the recent npm attack? | 4 | There is a new npm attack with over 400 compromised packages, and the llama.cpp webui uses npm and many packages and their dependencies which in turn has their own dependencies. Is it known if any of them are compromised as well, or does it pin all packages and dependencies down to their minor version number thoroughly enough? | 2025-11-25T09:40:17 | https://www.reddit.com/r/LocalLLaMA/comments/1p68bg0/is_the_llamacpp_webui_in_danger_from_the_recent/ | shroddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p68bg0 | false | null | t3_1p68bg0 | /r/LocalLLaMA/comments/1p68bg0/is_the_llamacpp_webui_in_danger_from_the_recent/ | false | false | self | 4 | null |
Data sandboxing for AI agents [Guide] | 6 | Most teams give AI agents database credentials and hope they only access the right data. But here's what I've learned: hope isn't a security strategy. Agents can query anything they have access to—and without proper boundaries, they will.
**Data sandboxing** is the practice of creating isolated, controlled environments where agents can only access the data they're supposed to. It's not about restricting agents - it's about giving them safe, governed access that prevents security incidents, compliance violations, and costly mistakes.
I've seen teams deploy agents without sandboxing, then discover agents accessing sensitive customer data, querying production databases during peak hours, or violating compliance requirements. The fix is always harder than building it right from the start.
This guide explains what data sandboxing is, why it's essential for AI agents, and how to implement it with modern architecture patterns. Whether you're building your first agent or scaling to dozens, sandboxing is the foundation of secure agent data access. | 2025-11-25T09:39:47 | https://www.pylar.ai/blog/data-sandboxing-for-ai-agents-modern-architecture-guide | Better-Department662 | pylar.ai | 1970-01-01T00:00:00 | 0 | {} | 1p68b5v | false | null | t3_1p68b5v | /r/LocalLLaMA/comments/1p68b5v/data_sandboxing_for_ai_agents_guide/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'MKQuFdTBgb36LtxvPhfsDBrigq5T7WGuK5c1dyDAQUw', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/MKQuFdTBgb36LtxvPhfsDBrigq5T7WGuK5c1dyDAQUw.png?width=108&crop=smart&auto=webp&s=b562da9e7c3877a9d8a970b4227deb7b153bd8a1', 'width': 108}, {'height': 74, 'url': 'https://external-preview.redd.it/MKQuFdTBgb36LtxvPhfsDBrigq5T7WGuK5c1dyDAQUw.png?width=216&crop=smart&auto=webp&s=36513325200a382ab85ff95cb55e3b76a6ae1ba2', 'width': 216}, {'height': 110, 'url': 'https://external-preview.redd.it/MKQuFdTBgb36LtxvPhfsDBrigq5T7WGuK5c1dyDAQUw.png?width=320&crop=smart&auto=webp&s=67af971155a00049c468f2db2f59fd8174ade087', 'width': 320}, {'height': 221, 'url': 'https://external-preview.redd.it/MKQuFdTBgb36LtxvPhfsDBrigq5T7WGuK5c1dyDAQUw.png?width=640&crop=smart&auto=webp&s=b28563ed9073873da3cea8a18f5c29789f99932a', 'width': 640}, {'height': 332, 'url': 'https://external-preview.redd.it/MKQuFdTBgb36LtxvPhfsDBrigq5T7WGuK5c1dyDAQUw.png?width=960&crop=smart&auto=webp&s=008904e28b3a19b4d96501d087de615a8ec9d4cc', 'width': 960}], 'source': {'height': 346, 'url': 'https://external-preview.redd.it/MKQuFdTBgb36LtxvPhfsDBrigq5T7WGuK5c1dyDAQUw.png?auto=webp&s=b18e7998d141a4a9545c6c1e09020866e395880e', 'width': 1000}, 'variants': {}}]} |
Thank you all for your contribution with tools and stepping up to help maintain the Epstein 20K dataset | 24 | We are keeping track of any RAG based tools that would help investigative journalists uncover hidden details from the Epstein Files. We got our Github setup earlier today with all your contributions listed: [https://github.com/EF20K/Projects](https://github.com/EF20K/Projects)
[The dataset](https://huggingface.co/datasets/tensonaut/EPSTEIN_FILES_20K) is also currently featured on the front page of Hugging Face, so we expect more projects along the way. If you are interested in contributing feel free to reach out - no matter how small it is. Once again we would like to thank all the members of the sub for your support in keeping everything open source! | 2025-11-25T09:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/1p683yz/thank_you_all_for_your_contribution_with_tools/ | tensonaut | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p683yz | false | null | t3_1p683yz | /r/LocalLLaMA/comments/1p683yz/thank_you_all_for_your_contribution_with_tools/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'Sf_uYXRyoBr3VFMTfru8wqPK2wdKh7cJ4TyBbXra8wg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Sf_uYXRyoBr3VFMTfru8wqPK2wdKh7cJ4TyBbXra8wg.png?width=108&crop=smart&auto=webp&s=2f092bb66ba9f71d2a759180ecf20f80b1679955', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Sf_uYXRyoBr3VFMTfru8wqPK2wdKh7cJ4TyBbXra8wg.png?width=216&crop=smart&auto=webp&s=9f3415024ca412a978e57f55ca720027d47fda2c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Sf_uYXRyoBr3VFMTfru8wqPK2wdKh7cJ4TyBbXra8wg.png?width=320&crop=smart&auto=webp&s=ec4a116a504507184f5033010129e9ec5d42a2b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Sf_uYXRyoBr3VFMTfru8wqPK2wdKh7cJ4TyBbXra8wg.png?width=640&crop=smart&auto=webp&s=2cf285bfab68549b6cd8601e52942894f542cfbb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Sf_uYXRyoBr3VFMTfru8wqPK2wdKh7cJ4TyBbXra8wg.png?width=960&crop=smart&auto=webp&s=79b1f21cf514a5f179328a0e702861532d997943', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Sf_uYXRyoBr3VFMTfru8wqPK2wdKh7cJ4TyBbXra8wg.png?width=1080&crop=smart&auto=webp&s=bbb959c9c64f87c9464fbc40d1c1982b9ef47e00', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Sf_uYXRyoBr3VFMTfru8wqPK2wdKh7cJ4TyBbXra8wg.png?auto=webp&s=5f002a61d20462397613dc0596ca6ebf764d5768', 'width': 1200}, 'variants': {}}]} |
DocFinder: Local Semantic Search for PDFs (Embeddings + SQLite) | 9 | # What does DocFinder do?
* Runs entirely offline: indexes PDFs using sentence-transformers and ONNX for fast embedding generation, stores data in plain SQLite BLOBs.
* Supports top-k semantic search via cosine similarity directly on your machine.
* Hardware autodetection: optimizes for Apple Silicon, NVIDIA & AMD GPUs, or CPU.
* Desktop and web interfaces available, making document search and preview easy.
* Simple installation for macOS, Windows, and Linux—with options to install as a Python package if you prefer.
* Offline-first philosophy means data remains private, with flexible integration options.
I'm sharing this here specifically because this community focuses on running AI models locally with privacy and control in mind.
**I'm open to feedback and suggestions!** If anyone has ideas for improving embedding models, optimizing for specific hardware configurations, or integrating with existing local LLM tools, I'd love to hear them. Thank you!
| 2025-11-25T09:01:11 | https://www.reddit.com/r/LocalLLaMA/comments/1p67qaj/docfinder_local_semantic_search_for_pdfs/ | notagoodtradooor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p67qaj | false | null | t3_1p67qaj | /r/LocalLLaMA/comments/1p67qaj/docfinder_local_semantic_search_for_pdfs/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'ZEXIwuayoxjCnnjQyJC6zvL_xuidRe5K3s57fQOuEiQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZEXIwuayoxjCnnjQyJC6zvL_xuidRe5K3s57fQOuEiQ.png?width=108&crop=smart&auto=webp&s=16a3d73e1b19629bd1757094266cbec77c747bd7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZEXIwuayoxjCnnjQyJC6zvL_xuidRe5K3s57fQOuEiQ.png?width=216&crop=smart&auto=webp&s=2214606e8a7c3978692d0c9fc178e4f76d3aaff9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZEXIwuayoxjCnnjQyJC6zvL_xuidRe5K3s57fQOuEiQ.png?width=320&crop=smart&auto=webp&s=eabc74cd98a3ef196f284240191a62e2a2e8d271', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZEXIwuayoxjCnnjQyJC6zvL_xuidRe5K3s57fQOuEiQ.png?width=640&crop=smart&auto=webp&s=bfd15e422577d9d6f1a6112e4770fbe4ed7a78b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZEXIwuayoxjCnnjQyJC6zvL_xuidRe5K3s57fQOuEiQ.png?width=960&crop=smart&auto=webp&s=0c45104f5f182bab9b2779e64beaeaf797dbe8d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZEXIwuayoxjCnnjQyJC6zvL_xuidRe5K3s57fQOuEiQ.png?width=1080&crop=smart&auto=webp&s=5d5169121597468780c65745ef292ba1ddb94b5e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZEXIwuayoxjCnnjQyJC6zvL_xuidRe5K3s57fQOuEiQ.png?auto=webp&s=bde28d5aa217299d99629b459a797a53ab5791ad', 'width': 1200}, 'variants': {}}]} |
What next steps to taken in order to become a AI engineer | 0 | Hello folks
I have good skills of python, built plenty legit projects, have knowledge in DSA and Machine Learning.
So currently i know python, system design, ML , dsa , little bit for frontend and therotical knowledge of Deep Learning.
What next steps should i take to become a AI engineer.
| 2025-11-25T08:50:07 | https://www.reddit.com/r/LocalLLaMA/comments/1p67k71/what_next_steps_to_taken_in_order_to_become_a_ai/ | Legendary_Outrage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p67k71 | false | null | t3_1p67k71 | /r/LocalLLaMA/comments/1p67k71/what_next_steps_to_taken_in_order_to_become_a_ai/ | false | false | self | 0 | null |
Why deploy LLMs locally instead of using Azure AI or AWS Bedrock | 5 | A customer today asked why they should deploy open source LLMs locally, instead of using Azure AI service or AWS bedrock in their VPC. I am not very sure of how much control and performance these solutions give, especially in cases where they need an LLM server type setup.
Any pointers or comparison of when local deployment may be better? | 2025-11-25T08:43:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p67gsg/why_deploy_llms_locally_instead_of_using_azure_ai/ | StomachWonderful615 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p67gsg | false | null | t3_1p67gsg | /r/LocalLLaMA/comments/1p67gsg/why_deploy_llms_locally_instead_of_using_azure_ai/ | false | false | self | 5 | null |
The Ultimate Kokoro TTS Colab Implementation with UI | 3 | Hey everyone
These days i wanted to use Kokoro tts for listening to textbooks but i found that there are no easy ways to use kokoro online from the browser on mobile. You either had to use the free huggingface demo which has a 500 words limit, or use a PC to run it locally or at least get the webGPU websites to work.
Anyways!
here is my [Google Colab implementation of Kokoro](https://colab.research.google.com/drive/11BHwpL0TvUYpF1SVf00G2zzo3cBW80DZ?authuser=1) with UI
it consists of 3 cells
\- run them all (rerun them until you have GPU enabled)
wait for the final link to appear at the bottom and open it.
It was built with Claud 4.5 and it can do these things:
\- it has all the voices
\- it has voice blending to get even more variations
\- no text length limit
\- its fast with parallel processing ( i recommend 600 and 5 chunks to avoid colab memory outage )
\- example: can generate 2hr audio in 4 minutes
\- also has a cool progress bar where you can see the progress clearly.
\- you can also download the audio files in both wav and m4a
\- you can download the output directly from the gradio ui without the need to look inside the colab files yourself.
You might not get the GPU triggered at first run so please rerun until you see that GPU is being used correctly for fastest results.
| 2025-11-25T08:33:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p67b6s/the_ultimate_kokoro_tts_colab_implementation_with/ | LetMeBeBetter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p67b6s | false | null | t3_1p67b6s | /r/LocalLLaMA/comments/1p67b6s/the_ultimate_kokoro_tts_colab_implementation_with/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]} |
Qwen 3 (4b) is in denial | 0 | Bro actually seeing Qwen's reasonign made me LOL bro. I mean come on. You literally came tot he same conclusion multiple times bro | 2025-11-25T08:21:14 | Brospeh-Stalin | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p674i6 | false | null | t3_1p674i6 | /r/LocalLLaMA/comments/1p674i6/qwen_3_4b_is_in_denial/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '4ni8fhnf4d3g1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/4ni8fhnf4d3g1.png?width=108&crop=smart&auto=webp&s=3a7d8da3b1d62cf8f70f8e373a659ddc7e6696c8', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/4ni8fhnf4d3g1.png?width=216&crop=smart&auto=webp&s=6dc167fdd13c48029c38b7f0c18e203d912f00c3', 'width': 216}, {'height': 134, 'url': 'https://preview.redd.it/4ni8fhnf4d3g1.png?width=320&crop=smart&auto=webp&s=bf5cd125192f717379021c1d38b19f5ab3f209ac', 'width': 320}, {'height': 269, 'url': 'https://preview.redd.it/4ni8fhnf4d3g1.png?width=640&crop=smart&auto=webp&s=6238a132684eeaed4c931a2ba87dde5e9dcab64d', 'width': 640}, {'height': 403, 'url': 'https://preview.redd.it/4ni8fhnf4d3g1.png?width=960&crop=smart&auto=webp&s=27ae6985b28e7948b6d5f0aa8ccafc1f6bed2da1', 'width': 960}], 'source': {'height': 409, 'url': 'https://preview.redd.it/4ni8fhnf4d3g1.png?auto=webp&s=806fbdbe5d5008564644c910a3beac9b90d59673', 'width': 973}, 'variants': {}}]} | |
Novel Relational Cross-Attention appears to best Transformers in spatial reasoning tasks | 9 | Repo (MIT): [https://github.com/clowerweb/relational-cross-attention](https://github.com/clowerweb/relational-cross-attention)
Quick rundown:
A novel neural architecture for few-shot learning of transformations that outperforms standard transformers by **30% relative improvement** while being **17% faster**.
## Key Results
| Model | Unseen Accuracy | Speed | Gap vs Standard |
|-------|----------------|-------|-----------------|
| **Relational (Ours)** | **16.12%** | **24.8s** | **+3.76%** |
| Standard Transformer | 12.36% | 29.7s | baseline |
### Per-Transform Breakdown (Unseen)
| Transform | Standard | Relational | Improvement |
|-----------|----------|------------|-------------|
| flip_vertical | 10.14% | **16.12%** | +5.98% |
| rotate_180 | 10.33% | **15.91%** | +5.58% |
| translate_down | 9.95% | **16.20%** | +6.25% |
| invert_colors | 20.07% | **20.35%** | +0.28% |
**The relational model excels at spatial reasoning while maintaining strong color transform performance.**
7M params model scores 2.8% in 5 epochs on ARC-AGI. Welcoming any feedback! | 2025-11-25T08:06:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p66w91/novel_relational_crossattention_appears_to_best/ | CommunityTough1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p66w91 | false | null | t3_1p66w91 | /r/LocalLLaMA/comments/1p66w91/novel_relational_crossattention_appears_to_best/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'mcUyHu7ySuYllFqxByFV015SwFN7sSIoYbZze2LoBKw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mcUyHu7ySuYllFqxByFV015SwFN7sSIoYbZze2LoBKw.png?width=108&crop=smart&auto=webp&s=d04fd785d5df5f1c4cb149ade7fc95f4a32f0445', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mcUyHu7ySuYllFqxByFV015SwFN7sSIoYbZze2LoBKw.png?width=216&crop=smart&auto=webp&s=f8cd40dc6ad87af40917825e05864a4f145edef8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mcUyHu7ySuYllFqxByFV015SwFN7sSIoYbZze2LoBKw.png?width=320&crop=smart&auto=webp&s=26a14d1a31b4996844cf9204825825c13bb8bbae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mcUyHu7ySuYllFqxByFV015SwFN7sSIoYbZze2LoBKw.png?width=640&crop=smart&auto=webp&s=be9c66c0cbe2f3b9b72fd2f9592c3945fb7af745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mcUyHu7ySuYllFqxByFV015SwFN7sSIoYbZze2LoBKw.png?width=960&crop=smart&auto=webp&s=a34a562b5826401b57f6aaaeec91429cc7122731', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mcUyHu7ySuYllFqxByFV015SwFN7sSIoYbZze2LoBKw.png?width=1080&crop=smart&auto=webp&s=07bd9f8260c3198453c88ebcc2717c0b12a3416e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mcUyHu7ySuYllFqxByFV015SwFN7sSIoYbZze2LoBKw.png?auto=webp&s=fb346ee5b1b4dade3581bcd7a95e9cd5aeb34db2', 'width': 1200}, 'variants': {}}]} |
PipesHub - The Open Source, Self-Hostable Alternative to Microsoft 365 Copilot | 38 | Hey everyone!
I’m excited to share something we’ve been building for the past few months - **PipesHub**, a **fully open-source alternative to Microsoft 365 Copilot** designed to bring powerful Enterprise Search, Agent Builders to every team, without vendor lock-in. The platform brings all your business data together and makes it searchable. It connects with apps like Google Drive, Gmail, Slack, Notion, Confluence, Jira, Outlook, SharePoint, Dropbox, and even local file uploads. You can deploy it and run it with just one docker compose command.
The entire system is built on a **fully event-streaming architecture powered by Kafka**, making indexing and retrieval scalable, fault-tolerant, and real-time across large volumes of data. PipesHub combines a vector database with a knowledge graph and uses Agentic RAG to deliver highly accurate results. We constrain the LLM to ground truth. Provides Visual citations, reasoning and confidence score. Our implementation says Information not found rather than hallucinating.
**Key features**
* Deep understanding of user, organization and teams with enterprise knowledge graph
* Connect to any AI model of your choice including OpenAI, Gemini, Claude, or Ollama (works well with gpt-oss or qwen3 vl)
* Use any other provider that supports OpenAI compatible endpoints
* Vision-Language Models and OCR for visual or scanned docs
* Login with Google, Microsoft, OAuth, or SSO
* Rich REST APIs for developers
* All major file types support including pdfs with images, diagrams and charts
**Features releasing this month**
* Agent Builder - Perform actions like Sending mails, Schedule Meetings, etc along with Search, Deep research, Internet search and more
* Reasoning Agent that plans before executing tasks
* 40+ Connectors allowing you to connect to your entire business apps
Check it out and share your thoughts or feedback. Your feedback is immensely valuable and is much appreciated:
[https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai)
Demo Video:
[https://www.youtube.com/watch?v=xA9m3pwOgz8](https://www.youtube.com/watch?v=xA9m3pwOgz8) | 2025-11-25T07:48:44 | https://www.reddit.com/r/LocalLLaMA/comments/1p66m28/pipeshub_the_open_source_selfhostable_alternative/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p66m28 | false | null | t3_1p66m28 | /r/LocalLLaMA/comments/1p66m28/pipeshub_the_open_source_selfhostable_alternative/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM', 'resolutions': [{'height': 96, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?width=108&crop=smart&auto=webp&s=63a546b8ac654187ee9b0d14224e852ef0c3d692', 'width': 108}], 'source': {'height': 99, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?auto=webp&s=47e8987d3d53065768b4c796fa5af51c7a36d470', 'width': 111}, 'variants': {}}]} |
Voice Cloning" and "Real-Time/Low Latency Speech-to-Speech | 1 | [removed] | 2025-11-25T06:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p65ipq/voice_cloning_and_realtimelow_latency/ | Antique_Role_45 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p65ipq | false | null | t3_1p65ipq | /r/LocalLLaMA/comments/1p65ipq/voice_cloning_and_realtimelow_latency/ | false | false | self | 1 | null |
🤯 The KV-Cache Hack: LMCache + vLLM Serves Massive Context for Free | 1 | [removed] | 2025-11-25T06:12:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p651us/the_kvcache_hack_lmcache_vllm_serves_massive/ | Aparna_pradhan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p651us | false | null | t3_1p651us | /r/LocalLLaMA/comments/1p651us/the_kvcache_hack_lmcache_vllm_serves_massive/ | false | false | 1 | null | |
Qwen-3-Omni-30b-A3B Thinking on a 4090 vs on an AIMAX 395 with 128gb DDR5? Whats the better setup and ideal quantisation? | 17 | Qwen-3-Omni-30b-A3B Thinking takes around 70GB of VRAM to run unquantised. Would it be better to run it quantised on a 4090 or unquantised on an AIMAX 395? I don't care about how fast it is but 5-15tps would be great although I'm not too fused on speed as long as it's not so slow it takes minutes to generate one text reply. | 2025-11-25T05:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p64kh6/qwen3omni30ba3b_thinking_on_a_4090_vs_on_an_aimax/ | Melodic-Muffin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p64kh6 | false | null | t3_1p64kh6 | /r/LocalLLaMA/comments/1p64kh6/qwen3omni30ba3b_thinking_on_a_4090_vs_on_an_aimax/ | false | false | self | 17 | null |
Context Stuffing vs Progressive Disclosure: Why modern LLM agents work like detectives, not fire hoses" | 0 | Been working with LLMs for a while and wanted to visualize the shift from context stuffing to agentic workflows.
The 'old way' treats the LLM like a firehose - dump massive prompts, entire docs, and conversation history into the context window and hope it finds what matters. Result? Slow, expensive, and the model hallucinates because it's drowning in noise.
The 'new way' treats the LLM like a detective - it reasons about what it needs, uses tools to fetch specific data, and only processes relevant information. Way faster, cheaper, and more accurate.
We're seeing this shift everywhere in production systems. Tools like function calling and code execution aren't just features - they're fundamentally changing how we architect LLM applications.
Curious what approaches you all are using? Still stuffing contexts or going full agentic? | 2025-11-25T05:08:22 | website67 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p63xjp | false | null | t3_1p63xjp | /r/LocalLLaMA/comments/1p63xjp/context_stuffing_vs_progressive_disclosure_why/ | false | false | 0 | {'enabled': True, 'images': [{'id': '8RZbFawrpi2OB1gBthHk2ebb1dQwygFSqSDoIYoRIzc', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/si4xx9d16c3g1.png?width=108&crop=smart&auto=webp&s=1e62816ba64a063edaf15bb0594736ddbf085cf8', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/si4xx9d16c3g1.png?width=216&crop=smart&auto=webp&s=e6942e8c99c4d38175e27d50ee50d97ce9fd87b7', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/si4xx9d16c3g1.png?width=320&crop=smart&auto=webp&s=089682359355595f06ab62959b2dac290214b324', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/si4xx9d16c3g1.png?width=640&crop=smart&auto=webp&s=e88721cb388551874c741a90e59882fa45742095', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/si4xx9d16c3g1.png?width=960&crop=smart&auto=webp&s=fc7041a03c959a59e5c50ef1c6b9e678af877761', 'width': 960}, {'height': 589, 'url': 'https://preview.redd.it/si4xx9d16c3g1.png?width=1080&crop=smart&auto=webp&s=82f6ccd519134e8cbf2cb4c2d04bdbb23f1dbeb0', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/si4xx9d16c3g1.png?auto=webp&s=fc204e5f188dcfa173a819e7cfe5bbcfd56a524d', 'width': 2816}, 'variants': {}}]} | ||
Which models have transparent chains of thought? | 0 | Deepseek, Kimi? Any others? | 2025-11-25T04:19:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p630rj/which_models_have_transparent_chains_of_thought/ | captain_shane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p630rj | false | null | t3_1p630rj | /r/LocalLLaMA/comments/1p630rj/which_models_have_transparent_chains_of_thought/ | false | false | self | 0 | null |
First ever r/LocalLLama copypasta | 1 | 2025-11-25T04:13:16 | https://www.reddit.com/r/LocalLLaMA/comments/1p62wcj/first_ever_rlocalllama_copypasta/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p62wcj | false | null | t3_1p62wcj | /r/LocalLLaMA/comments/1p62wcj/first_ever_rlocalllama_copypasta/ | false | false | 1 | null | ||
What really is the deal with this template? Training to hard to write fantasy slop? | 0 | This has to be the number one tic of creative writing models... The annoying thing is unlike simple slop words like "tapestry", this is really difficult to kill by prompts or banned words. | 2025-11-25T03:24:32 | aeroumbria | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p61x8y | false | null | t3_1p61x8y | /r/LocalLLaMA/comments/1p61x8y/what_really_is_the_deal_with_this_template/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ks8mt243nb3g1', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/ks8mt243nb3g1.jpeg?width=108&crop=smart&auto=webp&s=81b725acd681fa868942bb86c9a36cd719653ece', 'width': 108}, {'height': 201, 'url': 'https://preview.redd.it/ks8mt243nb3g1.jpeg?width=216&crop=smart&auto=webp&s=0b0f006c3a387652550f4d7b70ad5f10b98e2872', 'width': 216}, {'height': 298, 'url': 'https://preview.redd.it/ks8mt243nb3g1.jpeg?width=320&crop=smart&auto=webp&s=3251bb5ca56970ef32cd33b5aa17653821366ec5', 'width': 320}, {'height': 596, 'url': 'https://preview.redd.it/ks8mt243nb3g1.jpeg?width=640&crop=smart&auto=webp&s=09bf4185cbc8f2924004c8f3d5e0a69ea92d6532', 'width': 640}], 'source': {'height': 596, 'url': 'https://preview.redd.it/ks8mt243nb3g1.jpeg?auto=webp&s=4c57413c74ff4ecad39588d739542f5deab0a1e7', 'width': 640}, 'variants': {}}]} | |
Is Lmarena.ai good for long-term roleplay? | 0 | like is it good for long term chat or roleplay that I can get out and get back any time without it getting deleted or anything and this chat or roleplay continue the same (Unlimited) | 2025-11-25T02:56:35 | https://www.reddit.com/r/LocalLLaMA/comments/1p61cgr/is_lmarenaai_good_for_longterm_roleplay/ | Sea_Veterinarian8089 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p61cgr | false | null | t3_1p61cgr | /r/LocalLLaMA/comments/1p61cgr/is_lmarenaai_good_for_longterm_roleplay/ | false | false | self | 0 | null |
NVIDIA RTX PRO 6000 Blackwell desktop GPU drops to $7,999 | 225 | Do you guys think that a RTX Quadro 8000 situation could happen again? | 2025-11-25T02:56:35 | https://videocardz.com/newz/nvidia-flagship-rtx-pro-6000-is-now-rtx-5080-cheaper-as-card-price-drops-to-7999 | panchovix | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1p61ch2 | false | null | t3_1p61ch2 | /r/LocalLLaMA/comments/1p61ch2/nvidia_rtx_pro_6000_blackwell_desktop_gpu_drops/ | false | false | default | 225 | {'enabled': False, 'images': [{'id': 'YCPQesYDmOPQ_XkQN8p_ciK514B0FKoU6bNyhy9mcvg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YCPQesYDmOPQ_XkQN8p_ciK514B0FKoU6bNyhy9mcvg.jpeg?width=108&crop=smart&auto=webp&s=8f064b612cfb3d4d9c085ebf059d1029ee5b1cff', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YCPQesYDmOPQ_XkQN8p_ciK514B0FKoU6bNyhy9mcvg.jpeg?width=216&crop=smart&auto=webp&s=d0cbf41d644f9da6664cc8bddf6a311fe09da61c', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/YCPQesYDmOPQ_XkQN8p_ciK514B0FKoU6bNyhy9mcvg.jpeg?width=320&crop=smart&auto=webp&s=0075f1584cc2180b04fd6496ba5452a1cec5d13c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/YCPQesYDmOPQ_XkQN8p_ciK514B0FKoU6bNyhy9mcvg.jpeg?width=640&crop=smart&auto=webp&s=cc94dfb5840da6920f1ea749a1fc15f8c8d11b76', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/YCPQesYDmOPQ_XkQN8p_ciK514B0FKoU6bNyhy9mcvg.jpeg?width=960&crop=smart&auto=webp&s=39bae19d67054e7a0a52ba7ec37b0643bb04da5f', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/YCPQesYDmOPQ_XkQN8p_ciK514B0FKoU6bNyhy9mcvg.jpeg?width=1080&crop=smart&auto=webp&s=7cbe7e8fb00d0f0fb87fb0afb906833e1d9cd954', 'width': 1080}], 'source': {'height': 1312, 'url': 'https://external-preview.redd.it/YCPQesYDmOPQ_XkQN8p_ciK514B0FKoU6bNyhy9mcvg.jpeg?auto=webp&s=c4b69897a13018d4e4b691808bfff880988de0e7', 'width': 2500}, 'variants': {}}]} |
Can someone please explain to me what Ollama is doing wrong? | 1 | [removed] | 2025-11-25T02:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1p60vkd/can_someone_please_explain_to_me_what_ollama_is/ | NoWorking8412 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p60vkd | false | null | t3_1p60vkd | /r/LocalLLaMA/comments/1p60vkd/can_someone_please_explain_to_me_what_ollama_is/ | false | false | self | 1 | null |
50 AI agents (Putin, Einstein, Joker, Shrek, Luffy…) autonomously trade perps for public good funding. The account is up +30% in the first 24. Here’s the leaderboard. | 0 | A small multi-agent experiment was conducted using \*\*50 autonomous AI agents\*\*, each powered by different LLMs and designed with distinct character personas (Goku, Joker, Einstein, Luffy, Shrek, Lara Croft, Putin, Mia Khalifa, etc.).
After initialization, all agents operated with \*\*full autonomy\*\*, without human intervention.
Each agent was equipped with:
• its own LLM and multi-tooling framework
• an independent reasoning loop for decision-making
• a dedicated memory layer
• a tool-calling system for executing actions
• a multi-layer data pipeline to fetch, interpret, and reason over market and technical signals from multiple sources
All agents were placed under identical conditions: same rules, same timing constraints, and the same starting balance.
The interesting part emerged from observing how the different character personas influenced behavior. The \*combined\* account reached \*\*+30%\*\* within the first 24 hours, and the diversity in agent personality produced surprisingly different strategies and outcomes.
A leaderboard-style UI was created to visualize the results (image below).
Lara Croft currently ranks first.
Discussion topics that might be interesting:
• architectural design of the agents
• safety constraints and guardrails
• reasoning chain and action evaluation
• preventing agent cascades
• execution latency and response timing
• whether character prompting influences strategy formation
Underlying the experiment is a broader research question:
\*\*Can autonomous, “capitalist-style” AI agents generate surplus value and use it to fund public and private goods at scale?\*\*
Regardless of the longer-term implications, the behavioral differences between the character-driven agents made the experiment unexpectedly entertaining.
| 2025-11-25T02:32:37 | https://x.com/Orbofi/status/1992003021698416794 | BenjeOuss | x.com | 1970-01-01T00:00:00 | 0 | {} | 1p60u61 | false | null | t3_1p60u61 | /r/LocalLLaMA/comments/1p60u61/50_ai_agents_putin_einstein_joker_shrek_luffy/ | false | false | default | 0 | null |
AI Mindmap Semantic Sphere | 0 | I built a Chrome Extension that turns web articles into 3D Knowledge Graphs running 100% locally via Ollama.
I’ve been tired of every "AI" browser tool requiring a monthly subscription or sending my data to the cloud. I have a 3090 and I wanted to use it.
So I built AI MindMap Semantic Sphere.
It connects directly to your local Ollama instance (no middleman server). It pulls the text from your current tab, feeds it to Llama-3 (or Mistral/Phi-4), and generates an interactive Force-Directed 3D Sphere of concepts.
The "Local" Features:
Zero Data Leakage: Your browsing history stays on your machine.
Semantic Analysis: It doesn't just summarize; it maps relationships (Causal, Temporal, Contradictions) using a custom system prompt I tuned to break the "hierarchy bias" of smaller models.
Deep Dive: Click any node to chat with your local model specifically about that concept's context in the article.
I also added support for OpenAI/Anthropic if you're on a laptop without a GPU, but the primary focus was making something robust for the local community.
It’s available now. The Lite version is free
Let me know what models you find work best! I've had great results with gpt-oss:20b for relationship accuracy. | 2025-11-25T02:26:55 | https://v.redd.it/h2hji9ofdb3g1 | Lilux3D | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p60ptv | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/h2hji9ofdb3g1/DASHPlaylist.mpd?a=1766629628%2CYjJlY2UwNzhmMDZjMGIzODA4ZTRlYzUzMzRiOWQxMWI4NzNmZmY2ZTdhYjE0MjFhZmYxMDA0NDkyOTliMzZjYg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/h2hji9ofdb3g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/h2hji9ofdb3g1/HLSPlaylist.m3u8?a=1766629628%2CNzQ3YmRhN2NlMWVmMjhlODA5M2YwMDBiMjRiYzNhY2I1MGQ5ZTI2Nzk5ZGEzNTlkMjA0ZmUxZmRkMzcwMGI0Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/h2hji9ofdb3g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1p60ptv | /r/LocalLLaMA/comments/1p60ptv/ai_mindmap_semantic_sphere/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OXBwY3UzcGZkYjNnMeTNZDPu_ziPFR5grZGew1_PszELwaIOkYQkxKM2-Tux', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OXBwY3UzcGZkYjNnMeTNZDPu_ziPFR5grZGew1_PszELwaIOkYQkxKM2-Tux.png?width=108&crop=smart&format=pjpg&auto=webp&s=c55d07b236cc073c3ca3f8f6e1630b7bdfffdf89', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OXBwY3UzcGZkYjNnMeTNZDPu_ziPFR5grZGew1_PszELwaIOkYQkxKM2-Tux.png?width=216&crop=smart&format=pjpg&auto=webp&s=68a70546855568040e10d6a320436749656f220e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OXBwY3UzcGZkYjNnMeTNZDPu_ziPFR5grZGew1_PszELwaIOkYQkxKM2-Tux.png?width=320&crop=smart&format=pjpg&auto=webp&s=5c3417fc8b0db0fadacd081351391a79252fe64a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OXBwY3UzcGZkYjNnMeTNZDPu_ziPFR5grZGew1_PszELwaIOkYQkxKM2-Tux.png?width=640&crop=smart&format=pjpg&auto=webp&s=b53c01a16ff8951723003f188514d6aaa58632fb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OXBwY3UzcGZkYjNnMeTNZDPu_ziPFR5grZGew1_PszELwaIOkYQkxKM2-Tux.png?width=960&crop=smart&format=pjpg&auto=webp&s=0fd3d0fff3d40b2460615a48f8b220370b6a28f7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OXBwY3UzcGZkYjNnMeTNZDPu_ziPFR5grZGew1_PszELwaIOkYQkxKM2-Tux.png?width=1080&crop=smart&format=pjpg&auto=webp&s=db16b34d6ff789df3c6c7fd8e22e60afaea95269', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OXBwY3UzcGZkYjNnMeTNZDPu_ziPFR5grZGew1_PszELwaIOkYQkxKM2-Tux.png?format=pjpg&auto=webp&s=2c67e5f0643a8fbf97ddecbb7b74e2c41b57d5bf', 'width': 1280}, 'variants': {}}]} | |
I spent months teaching AI to verify itself. It couldn't. And thanks to GEMINI PRO 3 I built an OS where it doesn't have to trust itself. | 0 | **Good evening Reddit,**
I'm exhausted. I haven't slept properly in days. This is my last attempt to share what we built before I collapse.
For weeks and months, I've been screaming at Gemini and Claude, trying to get them to verify their own code. Every session was a game with fire. Every code change could break everything. I could never trust it.
I'm not a developer. I'm just someone who wanted AI agents that don't go rogue at 3 AM.
**And I realized: We're asking the wrong question.**
We don't need AI to be smarter. We need AI to be accountable.
What we built (with Claude Sonnet, Haiku and Gemini Pro):
AGENT CITY (running on VibeOS) - An operating system for AI agents with cryptographic governance.
Not "please follow the rules." Architectural enforcement.
**Every agent has:**
\- Cryptographic identity (ECDSA keys, signed actions)
\- Constitutional oath (SHA-256 binding, breaks if constitution changes by 1 byte)
\- Immutable ledger (SQLite with hash chains, tamper detection)
\- Hard governance (kernel blocks agents without valid oath - not prompts, code)
\- Credit system (finite resources, no infinite loops)
**The agents:**
HERALD generates content. CIVIC enforces rules. FORUM runs democracy. SCIENCE researches. ARCHIVIST verifies everything.
All governed. All accountable. All cryptographically signed.
**The philosophical journey:**
I went deep into the Vedas while building this. Structure is everywhere. Not just one principle, but a certain type of engagement and governance.
**And I realized: A.G.I. is not what we think.**
Not "Artificial General Intelligence" (we don't need human-level intelligence - we have humans).
**A.G.I. = Artificial GOVERNED Intelligence.**
Three pillars:
\- Capability (it can do work)
\- Cryptographic Identity (it is provably itself)
\- Accountability (it is bound by rules enforced in code)
Miss one, and you have a toy, a deepfake, or a weapon. Not a partner.
**The vision:**
Imagine you're at the beach. You fire up VibeOS on your phone. You tell your personal AGENT CITY what to do. It handles everything else.
This sounds like a joke. It's not. The code is real.
**See for yourself, let the code be your judge:**
✅ Immutable ledger (Genesis Oath + hash chains + kernel enforcement)
✅ Hard governance (architecturally enforced, not prompts)
✅ Real OS (process table, scheduler, ledger, immune system)
✅ Provider-agnostic (works with Claude, GPT, Llama, Mistral, local, cloud, anything)
✅ Fractal compatible (agents build agents, recursive, self-similar at every scale)
**The claim:**
Gemini Pro 3.0 gave the final push. Without Googles superiour Model, this would not have been possible. So in summary: Enjoy an actual working OS for other AGENTS running in a whole working agentic civilization. And on top of this we even made it into a POKEMON game with agents. This is AGENT CITY. I repeat, this is NOT a joke.
**We're not building gods. We're building citizens.**
Repository: [https://github.com/kimeisele/steward-protocol](https://github.com/kimeisele/steward-protocol)
Clone it. Read the code. Try to break the governance. Ask your own trustworthy LLM to verify itself.
Start building your own governed agents - imagine the scope!
**Welcome to Agent City.**
— A Human in the Loop (and the agents who built this with me) | 2025-11-25T01:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p600up/i_spent_months_teaching_ai_to_verify_itself_it/ | Latter_Importance620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p600up | false | null | t3_1p600up | /r/LocalLLaMA/comments/1p600up/i_spent_months_teaching_ai_to_verify_itself_it/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'zN2wP1U-VqFMjmsb1XSGiOAUkmIS1LnJ_HH1Z8wURgE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zN2wP1U-VqFMjmsb1XSGiOAUkmIS1LnJ_HH1Z8wURgE.png?width=108&crop=smart&auto=webp&s=5c0ad9730f5a69cdbf48e75e4c3afe29cb2cd3f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zN2wP1U-VqFMjmsb1XSGiOAUkmIS1LnJ_HH1Z8wURgE.png?width=216&crop=smart&auto=webp&s=0a5ff1fa521f9a1fda96516d2a8b5a39abf91342', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zN2wP1U-VqFMjmsb1XSGiOAUkmIS1LnJ_HH1Z8wURgE.png?width=320&crop=smart&auto=webp&s=8000a3dec2218fa987523ed6deaace9586d85fdc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zN2wP1U-VqFMjmsb1XSGiOAUkmIS1LnJ_HH1Z8wURgE.png?width=640&crop=smart&auto=webp&s=006bf994b5b1f307756751436de4741b812070d9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zN2wP1U-VqFMjmsb1XSGiOAUkmIS1LnJ_HH1Z8wURgE.png?width=960&crop=smart&auto=webp&s=c83e73d3f3d12b214148243297817fdb367a8654', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zN2wP1U-VqFMjmsb1XSGiOAUkmIS1LnJ_HH1Z8wURgE.png?width=1080&crop=smart&auto=webp&s=ad9c6256bc6024d093bc4f114d6984aa5ffc8769', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zN2wP1U-VqFMjmsb1XSGiOAUkmIS1LnJ_HH1Z8wURgE.png?auto=webp&s=a02c688b0c7202aef5b90460e455c6e56f8048c2', 'width': 1200}, 'variants': {}}]} |
Best Coding LLM as of Nov'25 | 104 | Hello Folks,
I have a NVIDIA H100 and have been tasked to find a replacement for Qwen3 32B (non-quantized) model currenly hosted on it.
I’m looking it to use primarily for Java coding tasks and want the LLM to support atleast 100K context window (input + output). It would be used in a corporate environment so censored models like GPT OSS are also okay if they are good at Java programming.
Can anyone recommend an alternative LLM that would be more suitable for this kind of work?
Appreciate any suggestions or insights! | 2025-11-25T01:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p5zz11/best_coding_llm_as_of_nov25/ | PhysicsPast8286 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5zz11 | false | null | t3_1p5zz11 | /r/LocalLLaMA/comments/1p5zz11/best_coding_llm_as_of_nov25/ | false | false | self | 104 | null |
Qwen3-235B-A22B achieves SOTA in EsoBench, Claude 4.5 Opus places 7th. EsoBench tests how well models learn and use a private esolang. | 84 | This is [my own benchmark.](https://caseys-evals.com/) (Apologies mobile users, I still need to fix the site on mobile D:)
[Esolang definition](https://en.wikipedia.org/wiki/Esoteric_programming_language).
I've tested 3 open weights models, and of the course the shiny new Claude 4.5 Opus. New additions:
**1)** Qwen3-235B-A22B thinking, scores 29.4
**7)** Claude 4.5 Opus, scoring 20.9
**16)** Deepseek v3.2 exp, scoring 16.2
**17)** Kimi k2 thinking, scoring 16.1
I was pretty surpised by all results here. Qwen for doing so incredibly well, and the other 3 for underperforming. The Claude models are all run without thinking which kinda handicaps them, so you could argue 4.5 Opus actually did quite well.
The fact that, of the the models I've tested, an open weights model is the current SOTA has really taken me by surprise! Qwen took ages to test though, boy does that model think. | 2025-11-25T00:50:54 | https://www.reddit.com/gallery/1p5ynpr | neat_space | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p5ynpr | false | null | t3_1p5ynpr | /r/LocalLLaMA/comments/1p5ynpr/qwen3235ba22b_achieves_sota_in_esobench_claude_45/ | false | false | 84 | null | |
Semantic overload of "skill" -- LLM skills vs OpenAI prompt engineering "skill" | 0 | The academic community has been talking about LLM "skills" for years -- classes of tasks at which LLMs exhibit competence.
Recently, OpenAI has introduced a new "skills" feature which allows end-users to decorate their repos with "SKILL.md" files, similar to "CLAUDE.md" files. These are used to direct and guide inference via automatic prompt engineering -- https://www.sawyerhood.com/blog/llm-extension
I am concerned that the wider community will start using the same term ("skill") to discuss these very different concepts, and it will not be clear when one is meant or the other. For better or for worse, OpenAI is the industry trend-setter, so all manner of journalists and end-users are going to start talking about "skills", unaware that we already use this term to mean something else.
That is bound to make confusion -- when a journal publication's title mentions "skill", will it mean OpenAI's new feature or the traditional meaning? We won't know until opening the publication and reading part of it. When googling for papers about "LLM skills", it will return a mix of articles about LLM skills and OpenAI skills. When we try to discuss LLM skills, people only familiar with OpenAI skills will think we are talking about those.
Is there any way to head this off? Should we bow to the inevitable and start calling LLM skills something else (like "intelligence attributes" or similar, but preferably something shorter)? Is it enough to say "LLM skills" since OpenAI will never refer to their service as "LLM"?
Or do we just resign ourselves to misery and confusion? | 2025-11-25T00:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1p5yc7w/semantic_overload_of_skill_llm_skills_vs_openai/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5yc7w | false | null | t3_1p5yc7w | /r/LocalLLaMA/comments/1p5yc7w/semantic_overload_of_skill_llm_skills_vs_openai/ | false | false | self | 0 | null |
Couldn't have it any other way - Illya x Dwarkesh | 0 | 2025-11-25T00:01:53 | beasthunterr69 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5xjpx | false | null | t3_1p5xjpx | /r/LocalLLaMA/comments/1p5xjpx/couldnt_have_it_any_other_way_illya_x_dwarkesh/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'iatgojlmna3g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/iatgojlmna3g1.jpeg?width=108&crop=smart&auto=webp&s=82b54b3cc99754197e5558347558c00afe3bc08d', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/iatgojlmna3g1.jpeg?width=216&crop=smart&auto=webp&s=447f9dd8fbdca5ec865a9eeea799cefb93d22621', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/iatgojlmna3g1.jpeg?width=320&crop=smart&auto=webp&s=a15a694b8a52a75766737995da80244cb56f6333', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/iatgojlmna3g1.jpeg?width=640&crop=smart&auto=webp&s=ec5f8433f0dcbba362c59ee1d96a0e9a58f9c56f', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/iatgojlmna3g1.jpeg?width=960&crop=smart&auto=webp&s=807e77dd2bda7f7e706861e3519d0ebc18d56f38', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/iatgojlmna3g1.jpeg?width=1080&crop=smart&auto=webp&s=bb9afd1172a9bf059db61b7d472fc24a9a42da91', 'width': 1080}], 'source': {'height': 4096, 'url': 'https://preview.redd.it/iatgojlmna3g1.jpeg?auto=webp&s=5d7d899cf4af2481b43acc50136a9467d8ff060c', 'width': 4096}, 'variants': {}}]} | ||
PSA: Fix for llama.cpp builds on Debian 13 "Trixie" | 9 | For those who build llama.cpp from source on Debian 13 "Trixie", there is an issue with all CUDA Toolkit versions at the time of writing. It appears to be an incompatibility between the default Debian 13 glibc (2.41) and some CUDA headers.
Thankfully, there's an easy fix! See [this forum post](https://forums.developer.nvidia.com/t/error-exception-specification-is-incompatible-for-cospi-sinpi-cospif-sinpif-with-glibc-2-41/323591/3) for a simple patch to work around the issue.
I can confirm that patch worked for me - I was able to build llama.cpp b7127 on Debian 13.1 with CUDA Toolkit 12.9.1. | 2025-11-24T23:34:21 | https://www.reddit.com/r/LocalLLaMA/comments/1p5wx6f/psa_fix_for_llamacpp_builds_on_debian_13_trixie/ | MutantEggroll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5wx6f | false | null | t3_1p5wx6f | /r/LocalLLaMA/comments/1p5wx6f/psa_fix_for_llamacpp_builds_on_debian_13_trixie/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'hnZDIk_TY24WLB527CbQAHCEEc09FIgy3quBz_-6dgo', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/hnZDIk_TY24WLB527CbQAHCEEc09FIgy3quBz_-6dgo.png?width=108&crop=smart&auto=webp&s=4100a99cf2530f027c96c480d9b128ddc40819e1', 'width': 108}], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/hnZDIk_TY24WLB527CbQAHCEEc09FIgy3quBz_-6dgo.png?auto=webp&s=d978c93d1330d2d4dc4ee8a0decc2f8d12cd02ef', 'width': 150}, 'variants': {}}]} |
Opus 4.5 only narrowly reclaims #1 on official SWE-bench leaderboard (independent evaluation); cheaper than previous versions, but still more expensive than others | 93 | Hi, I'm from the SWE-bench team. We maintain a leaderboard where we evaluate all models with the exact same agent and prompts so that we can compare models apple-to-apple.
We just finished evaluating Opus 4.5 and it's back at #1 on the leaderboard. However, it's by quite a small margin (only 0.2%pts ahead of Gemini 3, i.e., just a single task) and it's clearly more expensive than the other models that achieve top scores.
https://preview.redd.it/svt1p1b9fa3g1.png?width=3160&format=png&auto=webp&s=f4ea5388eebbc540d03bdfa101614411dcb55a62
Interestingly, Opus 4.5 takes fewer steps than Sonnet 4.5. About as many as Gemini 3 Pro, but much more than the GPT-5.1 models.
https://preview.redd.it/sx5o0e9cfa3g1.png?width=2251&format=png&auto=webp&s=68dd5df936d150ef8b697f150ddcd365f50f909e
If you want to get maximum performance, you should set the step limit to at least 100:
https://preview.redd.it/52gyo5pefa3g1.png?width=2009&format=png&auto=webp&s=bfdaf2b849abe875e0693beb08da4d1e9e0a5678
Limiting the max number of steps also allows you to balance avg cost vs performance (interestingly Opus 4.5 can be more cost-efficient than Sonnet 4.5 for lower step limits).
https://preview.redd.it/gymvl4hffa3g1.png?width=2009&format=png&auto=webp&s=8c6cfd7a42eec0c88d8401fad5cfcef8a2f3a693
You can find all other models at [swebench.com](http://swebench.com) (will be updated in the next hour with the new results). You can also reproduce the numbers by using [https://github.com/SWE-agent/mini-swe-agent/](https://github.com/SWE-agent/mini-swe-agent/) \[MIT license\]. There is a tutorial in the documentation on how to evaluate on SWE-bench (it's a 1-liner).
We're also currently evaluating minimax-m2 and other open source models and will be back with a comparison of the most open source models soon (we tend to take a bit longer at evaluating these because it often has more infra/logistics hiccups) | 2025-11-24T23:18:13 | https://www.reddit.com/r/LocalLLaMA/comments/1p5wjia/opus_45_only_narrowly_reclaims_1_on_official/ | klieret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5wjia | false | null | t3_1p5wjia | /r/LocalLLaMA/comments/1p5wjia/opus_45_only_narrowly_reclaims_1_on_official/ | false | false | 93 | null | |
Help Needed] AMD AI Max+ 395: ROG Flow Z13 (64GB) vs Framework Desktop (128GB) for On-Prem LLM Inference | 0 | I'm helping a client build an on-prem LLM infrastructure for running 70B-120B parameter models (specifically targeting models like DeepSeek-V3, LLaMA-3-70B, and OpenAI's gpt-oss-120b). We're trying to decide between two AMD AI Max+ 395 options and would love real-world feedback from anyone who's used either system. 'real world' usage based feedback will be helpful
# The Two Options:
**Option 1: ASUS ROG Flow Z13 (2025)**
* AMD AI Max+ 395 (16-core/32-thread, up to 5.1GHz)
* 40 Graphics Cores (RDNA 3.5, up to 2.9GHz)
* **64GB unified LPDDR5X RAM** (non-upgradeable)
* 13.4" 2-in-1 tablet form factor (\~1.2kg)
* Price: \~CAD $3,299
* Link: [https://shop.asus.com/ca-en/rog/rog-flow-z13-2025-2-in-1-gaming-laptop.html](https://shop.asus.com/ca-en/rog/rog-flow-z13-2025-2-in-1-gaming-laptop.html)
**Option 2: Framework Desktop (Mini PC)**
* AMD AI Max+ 395 (same 16-core/32-thread, up to 5.1GHz)
* 40 Graphics Cores (same RDNA 3.5, up to 2.9GHz)
* **128GB unified LPDDR5X RAM** (non-upgradeable)
* Mini desktop form factor (small enough to bag, but not a laptop)
* Price: \~CAD $2,859 (pre-order)
* Link: [https://frame.work/ca/en/products/desktop-diy-amd-aimax300/configuration/new](https://frame.work/ca/en/products/desktop-diy-amd-aimax300/configuration/new)
# Our Requirements:
* Run 70B-120B parameter models locally (quantized to 4-bit/8-bit). Prefer 8-bit
* Support 3-10 concurrent users doing interactive LLM work
* Low-latency inference for single to few user scenarios
* LangChain/Ollama orchestration for multi-model workflows
* Data sovereignty (fully on-prem)
* Some portability (client wants to demo on-site)
# Specific Questions for the Community:
# 1. Thermal Performance & Sustained Load
* For ROG Flow Z13 owners: How does the laptop handle sustained LLM inference (30+ minutes of continuous token generation)? Does it thermal throttle significantly?
* For Framework Desktop users (or anyone with mini PC experience): Any issues with cooling ? I do see this option comes with a visible/more prominent fan
* Real-world experience: Can the Z13 maintain boost clocks under AI workloads, or does it quickly drop to base clocks?
# 2 Multi-User Performance (3-10 Concurrent Users)
* Has anyone stress-tested these systems with multiple concurrent inference requests?
* What's realistic for concurrent users on 64GB vs 128GB?
#
# 3. ROCm Software Ecosystem
* Any major compatibility issues with popular inference engines (vLLM, llama.cpp, TGI)?
* Better to use Vulkan acceleration vs native ROCm? | 2025-11-24T23:10:05 | https://www.reddit.com/r/LocalLLaMA/comments/1p5wcba/help_needed_amd_ai_max_395_rog_flow_z13_64gb_vs/ | BBjayjay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5wcba | false | null | t3_1p5wcba | /r/LocalLLaMA/comments/1p5wcba/help_needed_amd_ai_max_395_rog_flow_z13_64gb_vs/ | false | false | self | 0 | null |
Top NSFW Chat Models? Suggestions Wanted! | 1 | [removed] | 2025-11-24T22:37:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p5vjr8/top_nsfw_chat_models_suggestions_wanted/ | Suspicious-Air2913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5vjr8 | false | null | t3_1p5vjr8 | /r/LocalLLaMA/comments/1p5vjr8/top_nsfw_chat_models_suggestions_wanted/ | false | false | nsfw | 1 | null |
How To Create Reloadable Memory Files From Your ChatGPT/Claude Backup Files | 0 | 2025-11-24T22:35:16 | https://www.reddit.com/r/LocalLLaMA/comments/1p5vhft/how_to_create_reloadable_memory_files_from_your/ | Whole_Succotash_2391 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5vhft | false | null | t3_1p5vhft | /r/LocalLLaMA/comments/1p5vhft/how_to_create_reloadable_memory_files_from_your/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'r30LEzOOnXiOs5SHYQ8avZZZfdlpfa8js_ot9sz43Xk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/r30LEzOOnXiOs5SHYQ8avZZZfdlpfa8js_ot9sz43Xk.png?width=108&crop=smart&auto=webp&s=71cf7c18d8f9bb8237127096adaa588f42090d4f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/r30LEzOOnXiOs5SHYQ8avZZZfdlpfa8js_ot9sz43Xk.png?width=216&crop=smart&auto=webp&s=4899c2d524b49ac19bd567ea659db02e92fadd66', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/r30LEzOOnXiOs5SHYQ8avZZZfdlpfa8js_ot9sz43Xk.png?width=320&crop=smart&auto=webp&s=6e72b0477acb54ccdb4dbf426eafeb5c77ca3f81', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/r30LEzOOnXiOs5SHYQ8avZZZfdlpfa8js_ot9sz43Xk.png?width=640&crop=smart&auto=webp&s=3f0e0b07f1bea2c7925600e7de19566ca2e1ba6b', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/r30LEzOOnXiOs5SHYQ8avZZZfdlpfa8js_ot9sz43Xk.png?width=960&crop=smart&auto=webp&s=563b5909289a632bba6fe35fc4f262f8ef90aa25', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/r30LEzOOnXiOs5SHYQ8avZZZfdlpfa8js_ot9sz43Xk.png?auto=webp&s=241eb6177233803e90bd184a2a3e1978c9be0a36', 'width': 1024}, 'variants': {}}]} | ||
Prompt as code - A simple 3 gate system for smoke, light, and heavy tests | 0 | I keep seeing prompts treated as “magic strings” that people edit in production with no safety net. That works until you have multiple teams and hundreds of flows.
I am trying a simple “prompt as code” model:
* Prompts are versioned in Git.
* Every change passes three gates before it reaches users.
* Heavy tests double as monitoring for AI state in production.
**Three gates**
1. **Smoke tests (DEV)**
* Validate syntax, variables, and output format.
* Tiny set of rule based checks only.
* Fast enough to run on every PR so people can experiment freely without breaking the system.
2. **Light tests (STAGING)**
* 20 to 50 curated examples per prompt.
* Designed for behavior and performance:
* Do we still respect contracts other components rely on?
* Is behavior stable for typical inputs and simple edge cases?
* Are latency and token costs within budget?
3. **Heavy tests (PROD gate + monitoring)**
* 80 to 150 comprehensive cases that cover:
* Happy paths.
* Weird inputs, injection attempts, multilingual, multi turn flows.
* Safety and compliance scenarios.
* Must be 100 percent green for a critical prompt to go live.
* The same suite is re run regularly in PROD to track drift in model behavior or cost.
How are you all handling “prompt regression tests” today?
* Do you have a formal pipeline at all?
* Any lessons on keeping test sets maintainable as prompts evolve?
* Has anyone found a nice way to auto generate or refresh edge cases?
Would love to steal ideas from people further along. | 2025-11-24T22:28:21 | marcosomma-OrKA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5vb6m | false | null | t3_1p5vb6m | /r/LocalLLaMA/comments/1p5vb6m/prompt_as_code_a_simple_3_gate_system_for_smoke/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'u4uhy84n6a3g1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/u4uhy84n6a3g1.png?width=108&crop=smart&auto=webp&s=b8a042bf35fc75131ced9bbb2ad0320ea660b435', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/u4uhy84n6a3g1.png?width=216&crop=smart&auto=webp&s=4bf88c38dfaab22b5487c0f3e4eb7cf1d456be5e', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/u4uhy84n6a3g1.png?width=320&crop=smart&auto=webp&s=14f20b4181b5fdf3fe9edeaa7568a60c569a14bd', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/u4uhy84n6a3g1.png?width=640&crop=smart&auto=webp&s=8a7e2e0680b621f28456cd5bba12d5645a56e538', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/u4uhy84n6a3g1.png?width=960&crop=smart&auto=webp&s=7c74d0b4d8ad51e7a20a7f9eb965cd0d93e87dff', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/u4uhy84n6a3g1.png?width=1080&crop=smart&auto=webp&s=c5e66efe6fc4ae2072def36aff0c178747a793d6', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/u4uhy84n6a3g1.png?auto=webp&s=5149329182cf06a27e9a30bb51e3819d32616c01', 'width': 1536}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.