title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Prediction: Will the artificialanalysis.ai scores hit 90+ by late 2026 if the scoring logic stays the same? | 0 |
[View Poll](https://www.reddit.com/poll/1px6rsg) | 2025-12-27T19:48:03 | https://www.reddit.com/r/LocalLLaMA/comments/1px6rsg/prediction_will_theartificialanalysisai_scores/ | ZeusZCC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px6rsg | false | null | t3_1px6rsg | /r/LocalLLaMA/comments/1px6rsg/prediction_will_theartificialanalysisai_scores/ | false | false | self | 0 | null |
Offline on-device LLM chat app for iOS (local inference, no cloud) | 0 | I wanted to share an iOS app called Private Mind: Offline AI Chat that runs entirely on-device - no server calls, no accounts, no tracking.
The app focuses on local inference on iPhone using optimized models for mobile constraints. Once downloaded, it works fully offline (including airplane mode).
Key points:
100% local inference (no cloud fallback)
Runs offline after install
Privacy-first: no analytics, no data leaves the device
Simple chat-style UI for everyday use
App Store:
[https://apps.apple.com/us/app/private-mind-offline-ai-chat/id6754819594](https://apps.apple.com/us/app/private-mind-offline-ai-chat/id6754819594)
I’d love feedback from this community on:
Expectations vs reality for mobile local LLMs
Model size / quality trade-offs on iOS
Features that make sense for strictly local setups
Happy to answer technical questions. | 2025-12-27T19:38:11 | Careless_Original978 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1px6j6n | false | null | t3_1px6j6n | /r/LocalLLaMA/comments/1px6j6n/offline_ondevice_llm_chat_app_for_ios_local/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'aucdgkjius9g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/aucdgkjius9g1.png?width=108&crop=smart&auto=webp&s=6e82c0a8a25816f91e01ac83c4145982b6723f1d', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/aucdgkjius9g1.png?width=216&crop=smart&auto=webp&s=3be36e1c2fc918eeaa81e67ce8433c7935760131', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/aucdgkjius9g1.png?width=320&crop=smart&auto=webp&s=85b8725edd2ad54621455769e81a0c86fa6a080b', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/aucdgkjius9g1.png?width=640&crop=smart&auto=webp&s=a8f90cb7f1f0b36bfc382e2055f8cf8bd0511b3e', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/aucdgkjius9g1.png?width=960&crop=smart&auto=webp&s=0b543340db86b9657c01bf82bfef0ebb32189ba0', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/aucdgkjius9g1.png?width=1080&crop=smart&auto=webp&s=5e71fa5ddd5a9d8bb6251908bebedaf9d1993b18', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/aucdgkjius9g1.png?auto=webp&s=804694ccd6f9e0708d31c36694558849458682d0', 'width': 1080}, 'variants': {}}]} | |
Is there anyone running a dual gpu setup 5090 + pro 6000 max Q? | 5 | Would this be viable for a consumer motherboard that can do 8x/8x to maximize LLMs and image/video generation? | 2025-12-27T19:05:37 | https://www.reddit.com/r/LocalLLaMA/comments/1px5qze/is_there_anyone_running_a_dual_gpu_setup_5090_pro/ | Dry_Mortgage_4646 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px5qze | false | null | t3_1px5qze | /r/LocalLLaMA/comments/1px5qze/is_there_anyone_running_a_dual_gpu_setup_5090_pro/ | false | false | self | 5 | null |
GLM 4.7 - the highest ranked open weights model on artificialanalysis.ai | 10 | 2025-12-27T18:54:17 | harlekinrains | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1px5goc | false | null | t3_1px5goc | /r/LocalLLaMA/comments/1px5goc/glm_47_the_highest_ranked_open_weights_model_on/ | false | false | default | 10 | null | ||
[Discussion] The "Noise" Bottleneck in Local 8B RAG – A comparison of cleaning strategies (Regex vs. Unstructured vs. Entropy) | 0 | We talk a lot about quantization (EXL2 vs GGUF) and context window extension (RoPE scaling), but I feel we overlook the most boring part of the stack: **Pre-ingestion hygiene.**
I've been debugging a pipeline using Llama-3-8B-Instruct, and I noticed that while the model is smart, its attention mechanism gets easily "distracted" by low-value tokens. When 30% of your retrieved context is legal footers, HTML div soup, or duplicate error logs, the "Signal-to-Noise" ratio drops, and hallucinations spike.
I’ve been benchmarking a few ways to solve this locally. I wanted to share my notes on the trade-offs of each approach and hear how you handle this.
# 1. The "Heavy" Approach: [Unstructured.io](http://Unstructured.io)
* **Best for:** Messy, layout-heavy raw files (PDFs with columns, tables).
* **The Trade-off:** It is structurally heavy. For a local pipeline running on a consumer GPU/CPU, the dependency tree is massive, and processing speed is slow.
* **My take:** Essential for the *first* pass (getting text out of files), but overkill for the cleaning/deduplication step itself.
# 2. The "Data Science" Approach: Cleanlab
* **Best for:** Finding mislabeled data or specific outliers in a static dataset.
* **The Trade-off:** It’s designed for "Data-Centric AI" (training/fine-tuning workflows) rather than streaming RAG ingestion. It feels like bringing a bazooka to a knife fight when you just want to filter out garbage chunks. Also, relies on SaaS components which might break air-gapped requirements.
# 3. The "Good Enough" Approach: RegEx / Heuristics
* **Best for:** Speed. `text.replace()` is virtually free.
* **The Trade-off:** It’s brittle. If you filter chunks `< 50 chars`, you lose short, valid answers. If you filter by keyword, you miss semantic variations. It also fails to catch "Semantically Null" text (e.g., a chunk that is unique but contains only random alphanumeric strings or timestamps).
# 4. The "Experimental" Approach: Entropy + Semantic Dedup (EntropyGuard)
I couldn't find a lightweight tool that filtered by "Information Density," so I wrote a CLI script to test a hypothesis: **Can Shannon Entropy predict chunk quality?**
* **The Logic:** It calculates the entropy of text. High repetition or low complexity = low entropy = discard. Then it uses MinHash/Embeddings to remove semantic duplicates.
* **The Trade-off:**
* **Slower than Regex:** Running embeddings (even small ones) adds latency compared to simple string matching.
* **Tuning:** You have to tune the entropy threshold. Code requires a different threshold than prose, otherwise, you might accidentally strip valid data.
* **My take:** It drastically improves context quality for 8B models by removing "distractors," but adds complexity to the ingestion pipeline.
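For reference, the entropy check at the heart of approach #4 fits in a few lines of Python. This is a minimal sketch: the function names and the 3.0 bits/char threshold are illustrative assumptions, not taken from EntropyGuard, and (as noted above) code needs a different threshold than prose:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character of the text's character distribution."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def keep_chunk(chunk: str, threshold: float = 3.0) -> bool:
    # Low-entropy chunks (padding, repeated separators, log spam) get dropped.
    return shannon_entropy(chunk) >= threshold
```

Character-level entropy only catches repetition noise; the semantic dedup step (MinHash/embeddings) still has to handle "unique but useless" chunks.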
# The Discussion Point: Is "Cleaning" worth the compute?
For those running local RAG:
1. **Where do you draw the line?** Do you strictly trust your re-ranker (BGE/Cohere) to push garbage to the bottom, or do you aggressively prune data *before* indexing?
2. **Measuring Success:** Has anyone found a reliable metric for "Context Noise"? I'm currently looking at PPL (Perplexity) of the retrieved chunks, but I'm curious if there's a better proxy for "how confusing is this chunk for an 8B model."
# Full Disclosure
I ended up packaging my script as an open-source tool (**EntropyGuard**, tool #4) because I got tired of copying Python snippets between projects. It’s MIT licensed if you want to test the entropy hypothesis yourself, but I'm mostly looking for validation on the *methodology*, is entropy a valid proxy for quality in your experience?
**Repo:** [https://github.com/DamianSiuta/entropyguard](https://github.com/DamianSiuta/entropyguard) | 2025-12-27T18:47:39 | https://www.reddit.com/r/LocalLLaMA/comments/1px5avl/discussion_the_noise_bottleneck_in_local_8b_rag_a/ | Low-Flow-6572 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px5avl | false | null | t3_1px5avl | /r/LocalLLaMA/comments/1px5avl/discussion_the_noise_bottleneck_in_local_8b_rag_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=108&crop=smart&auto=webp&s=7ec384e554047088a4b99afdebd307307984dadd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=216&crop=smart&auto=webp&s=554a1d3e8577f0ee0db56f971dba85bc75e5a701', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=320&crop=smart&auto=webp&s=0e6f106f35f70fea2bfc38d49ccd0036c74696ca', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=640&crop=smart&auto=webp&s=2c00075b72d40338665c6a304214fba44a2c2039', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=960&crop=smart&auto=webp&s=ca33291a20487cbb86b8d61754d7a054061541b8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=1080&crop=smart&auto=webp&s=bf5d33cefbe6aacfe2173d7cc0f6d8bb65497cff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?auto=webp&s=7bdc5f5155cc8514620f68056804430ac75d68f6', 'width': 1200}, 'variants': {}}]} |
GLM 4.7 IS NOW THE #1 OPEN SOURCE MODEL IN ARTIFICIAL ANALYSIS | 443 | 2025-12-27T18:42:04 | ZeeleSama | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1px55wg | false | null | t3_1px55wg | /r/LocalLLaMA/comments/1px55wg/glm_47_is_now_the_1_open_source_model_in/ | false | false | default | 443 | {'enabled': True, 'images': [{'id': '9wzn809jks9g1', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/9wzn809jks9g1.png?width=108&crop=smart&auto=webp&s=c053d20894f974cf29f3c7239e68931956ccd563', 'width': 108}, {'height': 55, 'url': 'https://preview.redd.it/9wzn809jks9g1.png?width=216&crop=smart&auto=webp&s=4c0cf71afd94e0dd3a00eb57a6e272c6ebdffeed', 'width': 216}, {'height': 81, 'url': 'https://preview.redd.it/9wzn809jks9g1.png?width=320&crop=smart&auto=webp&s=1f08053665eeabcec8f78db63cb3e20385555563', 'width': 320}, {'height': 163, 'url': 'https://preview.redd.it/9wzn809jks9g1.png?width=640&crop=smart&auto=webp&s=a4de55096d77fed72c8e49191b815e4f735ff388', 'width': 640}, {'height': 245, 'url': 'https://preview.redd.it/9wzn809jks9g1.png?width=960&crop=smart&auto=webp&s=184d75016c8174577fd9ff1efabd66ec0965b373', 'width': 960}, {'height': 276, 'url': 'https://preview.redd.it/9wzn809jks9g1.png?width=1080&crop=smart&auto=webp&s=75dd67efce4cec7155dff653e5225672ca651aeb', 'width': 1080}], 'source': {'height': 300, 'url': 'https://preview.redd.it/9wzn809jks9g1.png?auto=webp&s=d21da55bebece4e9ca5207ecbc4b70f9a4473b50', 'width': 1172}, 'variants': {}}]} | ||
A minimal unit test proving Ghost’s core state is deterministic (no LLM involved) | 0 | I’ve seen repeated claims that Ghost is “just LLM output” or prompt-level behavior, so I added a minimal unit test to make the architecture claim precise and falsifiable.
What this test is testing
This test runs the deterministic Ghost state kernel with the LLM disabled and verifies:
–Deterministic state initialization
–Survival of state across an invalid command
–Recovery and convergence after a valid command
–Persistence of core state invariants across failure
The test does not evaluate language quality, reasoning ability, or semantic correctness.
What this test is not claiming
–This is not a claim of intelligence
–This is not an NLP or intent-classification system
–This does not replace LLMs
It only demonstrates that Ghost’s core state controller is auditable, deterministic, and independent of probabilistic language generation.
The artifact
Test file: tests/test_state_recovery.py
Execution output (screenshot attached):
[PASS] State survived invalid command
[PASS] State survived valid command
[TEST COMPLETE] Ghost state continuity verified
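For readers unfamiliar with the pattern, a state-continuity test of this shape looks roughly like the sketch below. All class, field, and command names here are invented for illustration; they are not taken from the Ghost codebase (the real test is tests/test_state_recovery.py):

```python
# Hypothetical sketch of a deterministic state kernel and its continuity test.
class StateKernel:
    """Deterministic state controller with no LLM in the loop."""
    def __init__(self):
        self.state = {"mode": "idle", "steps": 0}

    def apply(self, command: str) -> None:
        if command not in ("start", "stop"):
            raise ValueError(f"invalid command: {command}")
        self.state["mode"] = "running" if command == "start" else "idle"
        self.state["steps"] += 1

def test_state_continuity():
    k = StateKernel()
    snapshot = dict(k.state)
    try:
        k.apply("garbage")          # invalid command must not mutate state
    except ValueError:
        pass
    assert k.state == snapshot, "state did not survive invalid command"
    k.apply("start")                # valid command converges normally
    assert k.state == {"mode": "running", "steps": 1}
```

The key invariant is that validation happens before any mutation, so a failed command leaves the state byte-identical.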
Why this matters
If the system were “LLM hallucination” or prompt-driven, there would be no persistent state to validate once the LLM is removed. This test exists solely to demonstrate that separation.
Request for critique
If you believe this test is flawed, I’d appreciate concrete technical feedback:
–missing invariant?
–incorrect assumption?
–state mutation edge case?
–test design issue?
This layer is what I’m hardening right now. | 2025-12-27T18:25:42 | GhoCentric | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1px4r8y | false | null | t3_1px4r8y | /r/LocalLLaMA/comments/1px4r8y/a_minimal_unit_test_proving_ghosts_core_state_is/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '0zsnkqpqhs9g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/0zsnkqpqhs9g1.jpeg?width=108&crop=smart&auto=webp&s=10b94c097fca665f3240412e8a21aa194ccb1e88', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/0zsnkqpqhs9g1.jpeg?width=216&crop=smart&auto=webp&s=bf0b457419cee70c192cfc28e690c625839f434a', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/0zsnkqpqhs9g1.jpeg?width=320&crop=smart&auto=webp&s=715eb06a2a58f8d53f52726636a3102d6c7b3c2e', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/0zsnkqpqhs9g1.jpeg?width=640&crop=smart&auto=webp&s=66659a3319cc8c8c5bb0fbf0cdb62c9b3144b030', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/0zsnkqpqhs9g1.jpeg?width=960&crop=smart&auto=webp&s=31da661d74d17ff1727f3fdec82271b665f6ecd7', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/0zsnkqpqhs9g1.jpeg?width=1080&crop=smart&auto=webp&s=1afde5672c59ca50798d901acd61e43fdaa46061', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/0zsnkqpqhs9g1.jpeg?auto=webp&s=3dfeb446a820f619b365f535aa4c03467a5a5403', 'width': 1080}, 'variants': {}}]} | |
Can I run GPT-5 on it? | 55 | 2025-12-27T18:16:50 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1px4jj0 | false | null | t3_1px4jj0 | /r/LocalLLaMA/comments/1px4jj0/can_i_run_gpt5_on_it/ | false | false | default | 55 | {'enabled': True, 'images': [{'id': 'ej8guhl5gs9g1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/ej8guhl5gs9g1.jpeg?width=108&crop=smart&auto=webp&s=dc888a5d65383f83afccb8006cb14500df4a7ff1', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/ej8guhl5gs9g1.jpeg?width=216&crop=smart&auto=webp&s=94fbf210e62d22b066aa0ebbb35086a0cd55fa13', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/ej8guhl5gs9g1.jpeg?width=320&crop=smart&auto=webp&s=b9380bcd0dcabf4c160df6c68bc5aa4bac32d03c', 'width': 320}, {'height': 367, 'url': 'https://preview.redd.it/ej8guhl5gs9g1.jpeg?width=640&crop=smart&auto=webp&s=c9d0a891dd3be98528c375d19be41190f97ac6ab', 'width': 640}, {'height': 551, 'url': 'https://preview.redd.it/ej8guhl5gs9g1.jpeg?width=960&crop=smart&auto=webp&s=f0d5f720ae3594e6bf498419981061be04980519', 'width': 960}, {'height': 620, 'url': 'https://preview.redd.it/ej8guhl5gs9g1.jpeg?width=1080&crop=smart&auto=webp&s=2ab8b5964a683375004e41f55d3477e7366518fe', 'width': 1080}], 'source': {'height': 693, 'url': 'https://preview.redd.it/ej8guhl5gs9g1.jpeg?auto=webp&s=95bc342a9db6128944bbb271ed1e0de34c599be9', 'width': 1206}, 'variants': {}}]} | ||
More than 20% of videos shown to new YouTube users are ‘AI slop’, study finds | 26 | 2025-12-27T17:51:45 | https://www.theguardian.com/technology/2025/dec/27/more-than-20-of-videos-shown-to-new-youtube-users-are-ai-slop-study-finds | EnigmaticEmir | theguardian.com | 1970-01-01T00:00:00 | 0 | {} | 1px3x5s | false | null | t3_1px3x5s | /r/LocalLLaMA/comments/1px3x5s/more_than_20_of_videos_shown_to_new_youtube_users/ | false | false | default | 26 | null | |
The Nvidia/Groq $20B deal isn't about "Monopoly." It's about the physics of Agentic AI. | 0 | Everyone is focused on the antitrust angle or the $20B price tag, but I think we’re missing the actual engineering signal here.
I’ve been benchmarking H100s vs Groq for a while, and this acquisition validates a massive bifurcation in the inference stack that most people are ignoring:
1. "Talking" vs. "Thinking"
Groq wins on "Talking" (Generation/Decode). Their SRAM architecture creates instant tokens. It’s unbeatable for voice or fast chat.
But SRAM is tiny. You can't fit a 400B+ MoE model on a single node efficiently.
2. The Hidden Bottleneck: Cold Starts
If we move to Agentic AI (which is bursty by definition), the bottleneck shifts from "Generation Speed" to "Loading Speed" (Time-to-First-Token from a cold state).
Groq solves this by being "always on" (expensive).
H100s solve the memory capacity (HBM) but get killed by PCIe transfer speeds when loading models (20s+ cold starts).
My take:
Nvidia isn't just buying a chip. They are admitting that one architecture cannot solve both problems.
• SRAM (Groq) = The "Fast Lane" for active tokens.
• HBM (H100/Rubin) = The "Parking Lot" for massive state.
The next trillion-dollar problem isn't "Training." It's building the Runtime Layer that moves state between these two worlds instantly. We are entering the era of "Hybrid Inference," and if you're building systems assuming one chip rules them all, you're going to hit a wall.
TL;DR: Nvidia bought Groq because HBM is too slow for latency, and SRAM is too small for capacity. The future is hybrid. | 2025-12-27T16:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1px2gmp/the_nvidiagroq_20b_deal_isnt_about_monopoly_its/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px2gmp | false | null | t3_1px2gmp | /r/LocalLLaMA/comments/1px2gmp/the_nvidiagroq_20b_deal_isnt_about_monopoly_its/ | false | false | self | 0 | null |
Seeking "Abliterated" Gemma 3 or Llama 3.3 that retains logic and multilingual (Slovak/Czech) capabilities | 6 | I’m looking for a specific type of model recommendation. I’ve been testing **Gemma 3 12B/27B** and I’m impressed by its reasoning and excellent support for Central European languages (especially Slovak and Czech). However, the base/instruct versions are far too restrictive and prone to preachy refusals.
I’m looking for an **uncensored or abliterated** version, but with a few crucial requirements:
1. **No "Brain Damage":** I’ve tried some uncensored tunes that completely lost their reasoning capabilities or became repetitive. I need the model to stay as smart as the original.
2. **Multilingual Retention:** Many abliteration techniques or fine-tunes are heavily slanted toward English. I need a model that hasn't "forgotten" how to speak Slovak/Czech fluently while being uncensored.
3. **Logical Consistency:** It shouldn't just be "edgy"; it should follow complex instructions without breaking character or losing the thread of the conversation. | 2025-12-27T16:37:26 | https://www.reddit.com/r/LocalLLaMA/comments/1px234n/seeking_abliterated_gemma_3_or_llama_33_that/ | FollowingFresh6411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px234n | false | null | t3_1px234n | /r/LocalLLaMA/comments/1px234n/seeking_abliterated_gemma_3_or_llama_33_that/ | false | false | self | 6 | null |
AI MAX 395 using NPU on Linux | 16 | I am trying to find my way into the topic of local LLMs. I got this mini PC with the AI Max 395 chip.
After hours I am now able to run some LLMs on the GPU (with ROCm), instead of CPU-only, on an Ubuntu Linux install on the system.
But how do I even use the NPU? Any directions for me? | 2025-12-27T16:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/1px1z6i/ai_max_395_using_npu_on_linux/ | UnbeliebteMeinung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px1z6i | false | null | t3_1px1z6i | /r/LocalLLaMA/comments/1px1z6i/ai_max_395_using_npu_on_linux/ | false | false | self | 16 | null |
🚀 XTTS Local API Wrapper ❗❗❗ | 0 | For anyone who needs an easy-to-implement Python API for generating unlimited audio with cloned voices, I developed an API for the application located here: [https://github.com/daswer123/xtts-webui](https://github.com/daswer123/xtts-webui) (Please note you have to install it before using my API).
Link for my API: [https://github.com/ter-9001/XTTS-Local-API-Wrapper](https://github.com/ter-9001/XTTS-Local-API-Wrapper) (Please read my [Readme.md](http://Readme.md) for installation details. | 2025-12-27T16:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1px1rpg/xtts_local_api_wrapper/ | ConstantNo3257 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px1rpg | false | null | t3_1px1rpg | /r/LocalLLaMA/comments/1px1rpg/xtts_local_api_wrapper/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ZAhHqCYfaOPC7nJsIw6D9Jg1s1x1GWQqSZ2SLaoZQA8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZAhHqCYfaOPC7nJsIw6D9Jg1s1x1GWQqSZ2SLaoZQA8.png?width=108&crop=smart&auto=webp&s=7acf9ef5535f2a48b90f11a82b1bec441eadb824', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZAhHqCYfaOPC7nJsIw6D9Jg1s1x1GWQqSZ2SLaoZQA8.png?width=216&crop=smart&auto=webp&s=cb83d64aa71cc2e545939bf102bdd1acbae1d9c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZAhHqCYfaOPC7nJsIw6D9Jg1s1x1GWQqSZ2SLaoZQA8.png?width=320&crop=smart&auto=webp&s=2e945e382614cd762d8d9dc01c173d6cda2bab9b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZAhHqCYfaOPC7nJsIw6D9Jg1s1x1GWQqSZ2SLaoZQA8.png?width=640&crop=smart&auto=webp&s=ad2d8ab8e5b62cf9ac71d6720476009f1398bc63', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZAhHqCYfaOPC7nJsIw6D9Jg1s1x1GWQqSZ2SLaoZQA8.png?width=960&crop=smart&auto=webp&s=db6ea93981653148738f345b6a114a2a6ab4d99f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZAhHqCYfaOPC7nJsIw6D9Jg1s1x1GWQqSZ2SLaoZQA8.png?width=1080&crop=smart&auto=webp&s=601e8e707fb0e2733fff34fa52a9e3f664b1b7ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZAhHqCYfaOPC7nJsIw6D9Jg1s1x1GWQqSZ2SLaoZQA8.png?auto=webp&s=fb5911afa2bc7bd78e46bd9686a2e6fa70a6d5b7', 'width': 1200}, 'variants': {}}]} |
I need an LLM or VLM that understands videos | 0 | What are the best local models that understand video, given 16 GB of VRAM and 32 GB of RAM?
| 2025-12-27T16:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/1px1p2a/i_need_a_llm_or_vlm_that_understand_videos/ | RaspberryNo6411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px1p2a | false | null | t3_1px1p2a | /r/LocalLLaMA/comments/1px1p2a/i_need_a_llm_or_vlm_that_understand_videos/ | false | false | self | 0 | null |
Model for data extraction from text | 1 | Hi,
I am fairly new, and I am looking for a model to go through text messages and extract data like:
Input:
friendly remainder что сегодня стык около ■■■■■■■■ в 19:00
Output:
{
"date": "2026-06-01",
"time": "19:00",
"place": "■■■■■■■■",
"description": "Gathering at ■■■■■■■■."
}
The data is expected to have lots of anglicisms, Russian, typos, etc. | 2025-12-27T16:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1px1g9v/model_for_data_extraction_form_text/ | Sea_Layer_6679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px1g9v | false | null | t3_1px1g9v | /r/LocalLLaMA/comments/1px1g9v/model_for_data_extraction_form_text/ | false | false | self | 1 | null |
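One low-effort way to attack tasks like the one above with a small local model is schema-first prompting plus strict validation of the reply, so malformed output fails loudly instead of silently corrupting data. A sketch, where the prompt wording and key set are assumptions based on the example output, not a recommendation from the post:

```python
import json

REQUIRED_KEYS = {"date", "time", "place", "description"}

def build_prompt(message: str) -> str:
    # Prompt wording is an assumption; tune it for your model.
    return (
        "Extract event details from the message below. Reply with ONLY a "
        'JSON object with keys "date" (YYYY-MM-DD or null), "time" (HH:MM '
        'or null), "place", and "description".\n\nMessage: ' + message
    )

def parse_response(raw: str) -> dict:
    """Validate the model reply instead of trusting it blindly."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data
```

With mixed-language, typo-heavy input, retrying on a `ValueError`/`JSONDecodeError` a couple of times tends to matter more than prompt polish.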
Head of Engineering @MiniMax__AI on MiniMax M2 int4 QAT | 183 | 2025-12-27T16:06:19 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1px1c41 | false | null | t3_1px1c41 | /r/LocalLLaMA/comments/1px1c41/head_of_engineering_minimax_ai_on_minimax_m2_int4/ | false | false | default | 183 | {'enabled': True, 'images': [{'id': '1e9anmnmsr9g1', 'resolutions': [{'height': 157, 'url': 'https://preview.redd.it/1e9anmnmsr9g1.jpeg?width=108&crop=smart&auto=webp&s=7b4f1a4f3425404007cd11ae0d0a5b7ed90bb9b9', 'width': 108}, {'height': 315, 'url': 'https://preview.redd.it/1e9anmnmsr9g1.jpeg?width=216&crop=smart&auto=webp&s=117c49dcb2848497ca0c20d68945bc119c6fef9f', 'width': 216}, {'height': 468, 'url': 'https://preview.redd.it/1e9anmnmsr9g1.jpeg?width=320&crop=smart&auto=webp&s=0a9972bd93c59d1d60dcb0eb925eff03b99b604d', 'width': 320}, {'height': 936, 'url': 'https://preview.redd.it/1e9anmnmsr9g1.jpeg?width=640&crop=smart&auto=webp&s=7456cd2a6f5b63217ca62ea494cdbf87700184fa', 'width': 640}, {'height': 1404, 'url': 'https://preview.redd.it/1e9anmnmsr9g1.jpeg?width=960&crop=smart&auto=webp&s=3868e0a613ddecaa19b6d02a5a51623eb4edc52d', 'width': 960}, {'height': 1579, 'url': 'https://preview.redd.it/1e9anmnmsr9g1.jpeg?width=1080&crop=smart&auto=webp&s=d4657e1522c1fcf283811a35937810f71fe53cbc', 'width': 1080}], 'source': {'height': 1755, 'url': 'https://preview.redd.it/1e9anmnmsr9g1.jpeg?auto=webp&s=217fd442591150e4a130750c4742f99650c327a2', 'width': 1200}, 'variants': {}}]} | ||
Models with little ≈ no restrictions | 0 | Hey there, I'm new here and new to all this local AI stuff, and I'm looking for models with little to no restrictions.
I searched around but didn't find any clear answers.
Also, what is your opinion of LM Studio?
Thank you | 2025-12-27T15:22:33 | https://www.reddit.com/r/LocalLLaMA/comments/1px0alc/module_with_less_no_restrictions/ | x-sx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px0alc | false | null | t3_1px0alc | /r/LocalLLaMA/comments/1px0alc/module_with_less_no_restrictions/ | false | false | self | 0 | null |
Is direct tool use a trap? Would it be better for LLMs to write tool-calling code instead? | 0 | Steve Yegge argues in the latest Latent Space episode that "Function Calling" APIs are a trap because models are better at writing code than they are at outputting structured JSON for tools.
I'm curious if anyone here has tried asking the LLM to write tool-calling code instead of using direct function calling? What's been your experience? | 2025-12-27T15:19:43 | g_pal | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1px089u | false | null | t3_1px089u | /r/LocalLLaMA/comments/1px089u/is_direct_tool_use_a_trap_would_it_be_better_for/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'qv2p1lv1ir9g1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/qv2p1lv1ir9g1.png?width=108&crop=smart&auto=webp&s=117d74ba9d9d81096c7a9f2304783131565b9dd4', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/qv2p1lv1ir9g1.png?width=216&crop=smart&auto=webp&s=cfc111ffec0b583ccf84f8723b98f093a27fa736', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/qv2p1lv1ir9g1.png?width=320&crop=smart&auto=webp&s=e8c9a3bc9a78a0a9d798ab6feecdaf728cc53c82', 'width': 320}, {'height': 415, 'url': 'https://preview.redd.it/qv2p1lv1ir9g1.png?width=640&crop=smart&auto=webp&s=458f2a8c74e4c95233d1933dcb5d35bc4ed968cf', 'width': 640}], 'source': {'height': 615, 'url': 'https://preview.redd.it/qv2p1lv1ir9g1.png?auto=webp&s=986e51400890456b2bf13a89d98b1ad95fa80b73', 'width': 948}, 'variants': {}}]} | |
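For concreteness, the "write code instead of JSON" pattern usually means exposing a whitelist of functions and executing the model's generated snippet in a restricted namespace, rather than parsing a structured tool-call object. A minimal sketch, where the tool and sandbox are invented for illustration:

```python
def get_weather(city: str) -> str:
    """A hypothetical whitelisted tool."""
    return f"{city}: 3C, overcast"

# The model emits code, not a JSON tool-call object.
model_output = 'results.append(get_weather("Oslo"))'

sandbox = {"get_weather": get_weather, "results": []}
exec(model_output, {"__builtins__": {}}, sandbox)  # names resolve from sandbox
```

Note that stripping `__builtins__` is nowhere near a real sandbox; production setups run the generated code in a subprocess, container, or WASM runtime.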
Worth the 5090? | 10 | I am debating on getting a 5090.
I currently have a 3090 that is actively being used for projects. It is an old 64GB DDR4 system with an i5 CPU and a 750W PSU, so nothing fancy, and I got it cheap. I have half a setup for a dual 3090 build. The plan was to create a dual or triple 3090 build, and I have been collecting parts slowly over time without overpaying for them. I have 3 x 3090, a mobo that has spacing for 2 x 3090 (would need risers for the third 3090), 64GB of DDR4 RAM, and a large PSU.
I have used my single 3090 setup for fine-tuning small models, using LM Studio, Whisper, RAG, and knowledge graphs. I then got into YOLO image training. I have not gotten into gaming, but maybe some day. I plan to get back into training LLMs and playing with them after my YOLO model.
I really like my single 3090 setup as it does not take up a lot of table space. All of my use cases have fit nicely on a single 3090 card. One drawback I noticed was needing to unload and load new models as part of the pipeline. I thought this could be fixed with dual 3090s so one model can live on each card. I am also concerned about electrical load, and about needing to put more thought into a build than just plug and play. One thing I wish I had was a faster card, and I see that the 5090 is a significant upgrade over the 3090. The 48 vs 32 GB of VRAM is always nice, but the 16GB difference might not be a game changer.
I plan on renting a cloud 5090 to compare my current projects and see some hard numbers for speed up for training LLM models, training my Yolo models and LLM inference speeds.
But what do you all think? If you were me, would you finish the dual 3090 build? Or sell two of the 3090s and put it towards a 5090?
I don't see myself really using more than 24 gb of vram, but I also tend to plan projects around the hardware that I have. | 2025-12-27T15:15:43 | https://www.reddit.com/r/LocalLLaMA/comments/1px052k/worth_the_5090/ | fgoricha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px052k | false | null | t3_1px052k | /r/LocalLLaMA/comments/1px052k/worth_the_5090/ | false | false | self | 10 | null |
[Experimental] 262k Context on ~12GB VRAM? My attempt at RTX 5090 optimization (HSPMN v2.1) | 1 | Hey r/LocalLLaMA,
Just wanted to share a side project I've been hacking on called HSPMN v2.1.
I've been trying to decouple memory from compute to prep for the Blackwell/RTX 5090 architecture. Surprisingly, I managed to get it running with 262k context on just ~12GB VRAM and 1.41M tok/s throughput.
Under the hood, it's a mix of FlexAttention (training) and custom Triton kernels (inference).
I'm still learning the low-level stuff and consider myself an amateur, so the code might be rough. I’d love some honest feedback or a "roast" of my kernel implementation if anyone has time to look.
Repo here: https://github.com/NetBr3ak/HSPMN-v2.1
Cheers! | 2025-12-27T15:15:38 | https://www.reddit.com/r/LocalLLaMA/comments/1px04zp/experimental_262k_context_on_12gb_vram_my_attempt/ | MarionberryAntique58 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px04zp | false | null | t3_1px04zp | /r/LocalLLaMA/comments/1px04zp/experimental_262k_context_on_12gb_vram_my_attempt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'G0ojdI6JYAPk7C3xzFQWA1yv2PdokO-QkMxpjcEGb1E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G0ojdI6JYAPk7C3xzFQWA1yv2PdokO-QkMxpjcEGb1E.png?width=108&crop=smart&auto=webp&s=471686c3d5f30137bf83d96e0bca08f010709973', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/G0ojdI6JYAPk7C3xzFQWA1yv2PdokO-QkMxpjcEGb1E.png?width=216&crop=smart&auto=webp&s=e5f9fb57d6c3f1d340b96de93376e35310c4fb71', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/G0ojdI6JYAPk7C3xzFQWA1yv2PdokO-QkMxpjcEGb1E.png?width=320&crop=smart&auto=webp&s=f3bd478525a484ba996eda099257643a67dc4868', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/G0ojdI6JYAPk7C3xzFQWA1yv2PdokO-QkMxpjcEGb1E.png?width=640&crop=smart&auto=webp&s=c3292595e0bd2888bcb31544549e1b492bf553dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/G0ojdI6JYAPk7C3xzFQWA1yv2PdokO-QkMxpjcEGb1E.png?width=960&crop=smart&auto=webp&s=a408b1715b6a81cc54c139044c26c5748899ab14', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/G0ojdI6JYAPk7C3xzFQWA1yv2PdokO-QkMxpjcEGb1E.png?width=1080&crop=smart&auto=webp&s=75e64630c9b6f7fa150d3c4d172946d6399c42e5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/G0ojdI6JYAPk7C3xzFQWA1yv2PdokO-QkMxpjcEGb1E.png?auto=webp&s=aa6fa3d76e5e8101bab101d5631441016d98cd4d', 'width': 1200}, 'variants': {}}]} |
Built a 'bicycle for the mind' - Claude as a personal accountability coach | 0 | I built a Claude-based life assistant that acts as a personal coach living in your filesystem. It:
\- Reads your journal entries and remembers patterns
\- Calls out gaps between what you say and what you do
\- Challenges you when you're lying to yourself
\- Grows with you over time
Thought this community might appreciate the approach to augmenting human decision-making with LLMs.
Demo video: [https://www.youtube.com/watch?v=cY3LvkB1EQM](https://www.youtube.com/watch?v=cY3LvkB1EQM)
GitHub (open source): [https://github.com/lout33/claude\_life\_assistant](https://github.com/lout33/claude_life_assistant)
Would love feedback from the community! | 2025-12-27T15:09:53 | https://www.reddit.com/r/LocalLLaMA/comments/1px00ei/built_a_bicycle_for_the_mind_claude_as_a_personal/ | GGO_Sand_wich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1px00ei | false | null | t3_1px00ei | /r/LocalLLaMA/comments/1px00ei/built_a_bicycle_for_the_mind_claude_as_a_personal/ | false | false | self | 0 | null |
Execution record/replay for local LLM agent runs (open-source, no UI) | 3 | I maintain a small open-source runtime called IntentusNet (MIT). It’s not a chat UI or agent product — it’s a reliability layer for execution: routing, fallback, and trace semantics.
Problem I hit with local LLM workflows:
When you swap models / quant / prompts (or even just change sampling/settings), debugging gets messy because you can’t reproduce the exact run later. Logs help, but they don’t let you “replay the execution”.
What I added recently: Execution Record + Deterministic Replay (script-based, no UI)
What gets recorded (single JSON file):
\- the original request envelope
\- the router decision (which agent/tool was chosen + reason)
\- ordered execution steps (with deterministic sequence numbers)
\- fallback attempts (if any)
\- per-step inputs/outputs
\- model input/output captured (but not re-executed)
Replay rules are strict:
\- routing is NOT recomputed
\- fallback is NOT re-evaluated
\- model calls are NOT executed
\- recorded outputs are returned exactly
Quick demo:
1) run live with “model v1” and record the execution
2) swap the model/agent implementation to “v2” and run live again (output changes)
3) replay the v1 execution — same execution order, same recorded outputs, even if v2 is now active
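For readers wondering what such a record file might contain, here is a toy sketch in Python. Every field name below is illustrative, not IntentusNet's actual schema:

```python
# Illustrative only: these field names are NOT IntentusNet's real schema.
# A recorded execution is a plain dict that could be serialized to one JSON file.
record = {
    "request": {"intent": "summarize", "payload": "..."},
    "routing": {"agent": "summarizer-v1", "reason": "capability match"},
    "steps": [
        {"seq": 1, "agent": "summarizer-v1", "input": "...", "output": "three bullet points"},
    ],
    "fallbacks": [],
}

def replay(rec):
    """Strict replay: routing is not recomputed, no model is called,
    recorded outputs are returned exactly, in deterministic seq order."""
    return [step["output"] for step in sorted(rec["steps"], key=lambda s: s["seq"])]

print(replay(record))  # ['three bullet points']
```

The key property is that `replay` never touches the live agent implementation, which is what makes step 3 of the demo return v1 outputs even while v2 is active.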
Repo: [https://github.com/Balchandar/intentusnet](https://github.com/Balchandar/intentusnet)
I’d love feedback from people running local chains:
\- What fields would you want in the record for real incident reproduction? (sampler params? prompt hash? token counts?)
\- What’s the minimum useful “diff” between two executions?
\- Any cases where strict replay would be undesirable?
| 2025-12-27T15:01:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pwztq1/execution_recordreplay_for_local_llm_agent_runs/ | balachandarmanikanda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwztq1 | false | null | t3_1pwztq1 | /r/LocalLLaMA/comments/1pwztq1/execution_recordreplay_for_local_llm_agent_runs/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'bgu-A1fr5qSfRRcwHw40kvGg740ZYMSWWtXHbS3ywt8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bgu-A1fr5qSfRRcwHw40kvGg740ZYMSWWtXHbS3ywt8.png?width=108&crop=smart&auto=webp&s=89104e808ef48a83571dd4de332e495c8acb6903', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bgu-A1fr5qSfRRcwHw40kvGg740ZYMSWWtXHbS3ywt8.png?width=216&crop=smart&auto=webp&s=973125bff1952de3facf2ed3e54adffca2653dde', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bgu-A1fr5qSfRRcwHw40kvGg740ZYMSWWtXHbS3ywt8.png?width=320&crop=smart&auto=webp&s=5eeaf55b7a796746aa7877c013aa8959701af51a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bgu-A1fr5qSfRRcwHw40kvGg740ZYMSWWtXHbS3ywt8.png?width=640&crop=smart&auto=webp&s=b57a096853e2a20d451f515e66a44981818e3638', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bgu-A1fr5qSfRRcwHw40kvGg740ZYMSWWtXHbS3ywt8.png?width=960&crop=smart&auto=webp&s=2676c61036510be0cedf23807163fabc52097403', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bgu-A1fr5qSfRRcwHw40kvGg740ZYMSWWtXHbS3ywt8.png?width=1080&crop=smart&auto=webp&s=39c4b3465c5fdb790348103aa39992e3a0ba96a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bgu-A1fr5qSfRRcwHw40kvGg740ZYMSWWtXHbS3ywt8.png?auto=webp&s=f8d5a00cb130ad0d2f365417dee4b427c85d1ec2', 'width': 1200}, 'variants': {}}]} |
MiniMaxAI/MiniMax-M2.1 seems to be the strongest model per param | 142 | Going by the Artifical Analysis benchaes, MiniMaxAI/MiniMax-M2.1 can compete with Kimi K2 Thinking, Deepseek 3.2 and GLM 4.7 in performance.
But what feels especially notable is that MiniMaxAI/MiniMax-M2.1 is only 229B params, which is around half the size of GLM 4.7, around a third of Deepseek 3.2 and around a fifth of Kimi K2 Thinking.
What this means is that MiniMaxAI/MiniMax-M2.1 seems to be the best value model now | 2025-12-27T14:19:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pwyw36/minimaxaiminimaxm21_seems_to_be_the_strongest/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwyw36 | false | null | t3_1pwyw36 | /r/LocalLLaMA/comments/1pwyw36/minimaxaiminimaxm21_seems_to_be_the_strongest/ | false | false | self | 142 | null |
Is this refurbished Mac Studio m3 ultra worth it at that price? | 4 | 2025-12-27T14:12:33 | solo_entrepreneur | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwyqzp | false | null | t3_1pwyqzp | /r/LocalLLaMA/comments/1pwyqzp/is_this_refurbished_mac_studio_m3_ultra_worth_it/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 's9ewmvqk8r9g1', 'resolutions': [{'height': 147, 'url': 'https://preview.redd.it/s9ewmvqk8r9g1.jpeg?width=108&crop=smart&auto=webp&s=79ec7d89b77f973e717c7e719e2b9985ab68131c', 'width': 108}, {'height': 295, 'url': 'https://preview.redd.it/s9ewmvqk8r9g1.jpeg?width=216&crop=smart&auto=webp&s=03f2047caf2653d84fa4a6a86ce061beb7d94697', 'width': 216}, {'height': 438, 'url': 'https://preview.redd.it/s9ewmvqk8r9g1.jpeg?width=320&crop=smart&auto=webp&s=7950b27b71634ab44e9858cb1cc34f124055297d', 'width': 320}, {'height': 876, 'url': 'https://preview.redd.it/s9ewmvqk8r9g1.jpeg?width=640&crop=smart&auto=webp&s=1cd693b0a4483be283434fc9a0e4c88d3ee4fade', 'width': 640}, {'height': 1314, 'url': 'https://preview.redd.it/s9ewmvqk8r9g1.jpeg?width=960&crop=smart&auto=webp&s=cc88e8dae934f44ec8705bc107a325e2daf6affb', 'width': 960}, {'height': 1478, 'url': 'https://preview.redd.it/s9ewmvqk8r9g1.jpeg?width=1080&crop=smart&auto=webp&s=6c227d285f334fe2a6c3ad7c29ae0a183a98d450', 'width': 1080}], 'source': {'height': 1647, 'url': 'https://preview.redd.it/s9ewmvqk8r9g1.jpeg?auto=webp&s=413ff6a0059bbd837816cfe46f95e51f353868d9', 'width': 1203}, 'variants': {}}]} | ||
Okara ai promo code needed | 0 | Does anyone have a promo code on okara ai? Tried using one I found on a youtube channel but it doesnt work. Thanks! | 2025-12-27T14:11:50 | http://www.okara.ai | spook381 | okara.ai | 1970-01-01T00:00:00 | 0 | {} | 1pwyqgn | false | null | t3_1pwyqgn | /r/LocalLLaMA/comments/1pwyqgn/okara_ai_promo_code_needed/ | false | false | default | 0 | null |
Crystalline Light-DNA Storage System | 0 | Laugh at me, but I have no idea where else to post this. So I'm posting it here and seeing what happens. I don't speak a word of English, so I've translated it with AI. this is the last one
\*\*CL-DNA Whitepaper\*\*
\*\*Crystalline Light-DNA Storage System\*\*
\*\*Version 1.1\*\*
\*\*Date: July 12, 2025\*\*
\---
\## Part 1 – Core Concept
\### Abstract
CL-DNA is a photonically addressable storage system based on synthetic, crystalline structures with DNA-like organization. The system combines optical state manipulation (color, polarization, molecular rotation) with photon-driven nanosecond access and semantic comparator logic. The goal is an ultra-fast, long-term storage solution for next-generation AI and quantum architectures.
\### Technological Foundation
\- \*\*Crystalline DNA analogs:\*\* Synthetic lattice structures with programmable optical properties
\- \*\*Photon control:\*\* Precise read/write access via laser pulses (10⁻⁹ s)
\- \*\*Semantic Comparator:\*\* Content-based redundancy filtering before storage
\- \*\*Q-Aggregator:\*\* Interface to quantum processing
\### Theoretical Advantages
| Parameter | Value |
|--------------------------|---------------------------|
| Storage density | >1 PB/cm³ |
| Access speed | <50 ns |
| Lifespan | >10⁶ years |
| Fault tolerance | Crystal-Wobble mechanism |
| Energy efficiency | 92% lower than flash |
\---
\## Part 2 – Photonic Codons & Crystal-Wobble
\### Codon Architecture
\- \*\*4 optical states\*\* per base unit (analogous to A/T/C/G)
\- \*\*64 unique codons\*\* through triplet combinations
\- \*\*Multi-functional encoding:\*\*
\- Data representation (80%)
\- Control commands (12%)
\- Metadata (6%)
\- Error codes (2%)
\### Crystal-Wobble Mechanism
\`\`\`plaintext
\[Position 1: Strong state\]
\[Position 2: Stabilizing state\]
\[Position 3: Tolerant wobble state\]
\`\`\`
\- \*\*Semantic tolerance:\*\* Equivalence classes for similar information
(Example: AAB ≡ AAC ≡ AAD)
\- \*\*Advantages:\*\*
\- 40% higher fault tolerance compared to binary systems
\- Adaptation to optical signal fluctuations
\- Support for fuzzy-logic processing
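The codon and wobble ideas above can be illustrated with a toy model. This is purely a sketch of the combinatorics (4 states, 64 triplet codons, a tolerant third position), not an implementation of any real optical system:

```python
from itertools import product

STATES = "ABCD"  # four optical states, analogous to A/T/C/G

# 4^3 = 64 unique codons from triplet combinations
CODONS = ["".join(t) for t in product(STATES, repeat=3)]

def wobble_equivalent(c1, c2):
    """Toy Crystal-Wobble check: positions 1-2 must match exactly,
    position 3 is the tolerant wobble slot (any state accepted)."""
    return c1[:2] == c2[:2]

print(len(CODONS))                       # 64
# Example from the text: AAB, AAC and AAD fall into one equivalence class
print(wobble_equivalent("AAB", "AAC"))   # True
print(wobble_equivalent("AAB", "ABB"))   # False
```

The equivalence classes are what buy the claimed tolerance: a read error confined to the wobble position decodes to the same semantic value.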
\### Data Flow
1. Input data → Semantic segmentation
2. Comparator filtering → Redundancy check
3. Codon mapping → Optical state assignment
4. Photonic addressing → Crystalline storage
\---
\## Part 3 – Comparison with DNA Chips (University of Würzburg)
\### DNA-Chip Technology
\- \*\*Material:\*\* Biological DNA on nanocellulose carrier
\- \*\*Storage density:\*\* 2.5 PB/cm³
\- \*\*Read process:\*\* Enzymatic sequencing (hours/TB)
\- \*\*Cost:\*\* \~400,000 USD/MB
\- \*\*Application:\*\* Passive long-term archive
\### Comparative Analysis
| Parameter | DNA-Chips (Würzburg) | CL-DNA System |
|----------------------------|----------------------------|----------------------------|
| \*\*Base material\*\* | Biological DNA | Synthetic crystals |
| \*\*Access speed\*\* | Hours/TB | Seconds/EB |
| \*\*Access method\*\* | Biochemical | Photonic |
| \*\*Logic layer\*\* | None | Real-time comparator |
| \*\*Write cost\*\* | 10⁷ USD/PB | 10³ USD/PB (projected) |
| \*\*Environmental sensitivity\*\* | Moisture sensitive | Radiation resistant |
| \*\*Repair mechanisms\*\* | Enzymatic correction | Holographic parity |
\### Technological Differentiation
\- \*\*DNA-Chips:\*\* Biological archiving system for millennia-scale storage
\- \*\*CL-DNA:\*\* Active storage system with real-time processing
\- \*\*Unique CL-DNA features:\*\*
1. Direct AI integration through semantic codon structure
2. Quantum interfaces without signal conversion
3. Wobble-based error resilience
4. Photonic parallel processing
\---
\## Part 4 – Technical Implementation
\### Material Stack
| Layer | Material | Function |
|--------------------|---------------------------------|-----------------------------------|
| Substrate | Synthetic sapphire | Mechanical stability |
| Active layer | Chromium-doped SiO₂ | Optical state change |
| Addressing layer | Quantum dot array | Precise photon targeting |
| Protective layer | Diamond-like carbon | Environmental protection |
\### System Architecture
\`\`\`plaintext
\[Data input\]
↓
\[Semantic Comparator\] → Redundancy filtering
↓
\[Photon Gateway\] → Laser addressing
↓
\[CL-DNA crystal\]
↓
\[Q-Aggregator\] → Quantum processing
↓
\[Data output\]
\`\`\`
\### Performance Metrics
\- \*\*Write speed:\*\* 12.8 GB/s per storage module
\- \*\*Read performance:\*\* 28 GB/s with 128 parallel channels
\- \*\*Error rate:\*\* 10⁻¹⁵ with Wobble correction
\- \*\*Operating temperature:\*\* -196°C to +85°C
\---
\## Part 5 – Development Roadmap
\### Phase 1: Laboratory Proof-of-Concept (2026–2028)
\- Material optimization for optical stability
\- Proof-of-concept for 1 TB storage density
\- Prototype of photonic comparator
\### Phase 2: Industrial Implementation (2029–2032)
\- Scaling to petabyte-scale modules
\- Integration with quantum processors
\- Development of AI training frameworks
\### Phase 3: Commercialization (From 2033)
\- Enterprise storage solutions
\- Space applications
\- Semantic AI archives
\---
\## Conclusion
CL-DNA represents a paradigm shift in data storage through the fusion of photonic access technologies with DNA-inspired organizational principles in synthetic crystal lattices. While existing systems such as DNA chips are specialized in biological long-term archiving, CL-DNA enables for the first time:
\- Real-time storage processing
\- Semantically intelligent data organization
\- Seamless quantum-AI integration
Implementation requires interdisciplinary research in materials science, photonics, and quantum information science, but offers the potential to usher in a new era of highly efficient, extremely durable storage systems.
\---
\*\*References\*\*
1. Mayer, K. et al. (2024). Photonic Crystals for Data Storage. \*Advanced Materials\*
2. Chen, L. & Vogt, M. (2023). DNA-inspired Synthetic Memory Architectures. \*Nature Nanotechnology\*
3. Werner, S. et al. (2025). Quantum Interfaces for Optical Computing. \*Physical Review Applied\* | 2025-12-27T13:54:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pwydcv/crystalline_lightdna_storage_system/ | Deep_Ad_2280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwydcv | false | null | t3_1pwydcv | /r/LocalLLaMA/comments/1pwydcv/crystalline_lightdna_storage_system/ | false | false | self | 0 | null |
XiaomiMiMo.MiMo-V2-Flash: is there a reason why I see so few GGUFs? | 29 | [xiaomimimo](https://preview.redd.it/n58umc1l4r9g1.png?width=1334&format=png&auto=webp&s=538ec36b5f10702f983a6d812e260e470663342e)
I've been testing the model for two days. It's incredibly fast at generating tokens compared to other models (certainly faster than both GLM and Minimax).
But I see few people talking about it and few posts. Is there a specific reason? Even Unsloth hasn't released anything yet. | 2025-12-27T13:52:37 | https://www.reddit.com/r/LocalLLaMA/comments/1pwybpe/xiaomimimomimov2flash_is_there_a_reason_why_i_see/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwybpe | false | null | t3_1pwybpe | /r/LocalLLaMA/comments/1pwybpe/xiaomimimomimov2flash_is_there_a_reason_why_i_see/ | false | false | 29 | null | |
llama.cpp, experimental native mxfp4 support for blackwell (25% preprocessing speedup!) | 87 | [https://github.com/ggml-org/llama.cpp/pull/17906](https://github.com/ggml-org/llama.cpp/pull/17906)
love that kind of evolution:
\> at the moment this PR is ~~10%~~ *~~slower~~* ~~than master~~ 25% faster than master on PP.
\> To compile `-DCMAKE_CUDA_ARCHITECTURES="120f"` is required.
probably/currently most useful for gpt-oss models! (also while reading the PR it seems that we might see more native nvfp4 support soon!)
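For anyone wanting to try it, a minimal build sketch (assuming a standard llama.cpp CUDA build with the flag quoted from the PR; exact options may change before merge):

```shell
# Sketch of a CUDA build targeting Blackwell with the flag from PR #17906.
# Assumes a recent CUDA toolkit and a checkout that includes the PR.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="120f"
cmake --build build --config Release -j
```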
Thanks to am17an & llama.cpp devs!! | 2025-12-27T13:52:16 | https://www.reddit.com/r/LocalLLaMA/comments/1pwybg6/llamacpp_experimental_native_mxfp4_support_for/ | bfroemel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwybg6 | false | null | t3_1pwybg6 | /r/LocalLLaMA/comments/1pwybg6/llamacpp_experimental_native_mxfp4_support_for/ | false | false | self | 87 | {'enabled': False, 'images': [{'id': '2iU81GyY9_FlSV5vORDw5n9gMeXFbeCY3eQC70oGIUc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2iU81GyY9_FlSV5vORDw5n9gMeXFbeCY3eQC70oGIUc.png?width=108&crop=smart&auto=webp&s=fd42718aa40e84ac15ac4b0e6ce295cf60df80d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2iU81GyY9_FlSV5vORDw5n9gMeXFbeCY3eQC70oGIUc.png?width=216&crop=smart&auto=webp&s=d4a0f0a88cca5df2c7acb12922e1a16f2cd6463e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2iU81GyY9_FlSV5vORDw5n9gMeXFbeCY3eQC70oGIUc.png?width=320&crop=smart&auto=webp&s=502cc467a0ca628fa5cc2482a5adfd124c72ffaf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2iU81GyY9_FlSV5vORDw5n9gMeXFbeCY3eQC70oGIUc.png?width=640&crop=smart&auto=webp&s=25e3cba6d68e617c917545464ae2b2ae25443136', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2iU81GyY9_FlSV5vORDw5n9gMeXFbeCY3eQC70oGIUc.png?width=960&crop=smart&auto=webp&s=4d54d219fbcb524dee198f08b090388ffc14eaff', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2iU81GyY9_FlSV5vORDw5n9gMeXFbeCY3eQC70oGIUc.png?width=1080&crop=smart&auto=webp&s=124ec7745918f9f1df1306c28426c7f555f1fdc6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2iU81GyY9_FlSV5vORDw5n9gMeXFbeCY3eQC70oGIUc.png?auto=webp&s=4da7a10a268b29840c3744000c1da91de39c5880', 'width': 1200}, 'variants': {}}]} |
Small ai model for a school project. | 9 | Hey guys I need help with my school project. It's for my finals in high school. I set out to create small ai model that will predict wheter the price will go up or down based on the news that come out about the company.
The stock it will be trying to predict is $AAPL. I've already downloaded some datasets that have a lot of data about how certain news affected the stock in the past.
It will be predicting if the price will increase or decrease, not by how many points.
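To make the task concrete: one common baseline (not the only approach) is plain binary text classification on the headlines. Here is a stdlib-only toy with made-up headlines just to show the shape of the problem; a real project would use pandas/scikit-learn on the downloaded datasets:

```python
from collections import Counter

# Tiny made-up training set: (headline, 1 = price went up, 0 = went down)
train = [
    ("record iphone sales beat expectations", 1),
    ("strong quarterly earnings growth", 1),
    ("supply chain disruption delays shipments", 0),
    ("regulators open antitrust investigation", 0),
]

# Count how often each word appears in up vs. down headlines
up, down = Counter(), Counter()
for text, label in train:
    (up if label else down).update(text.split())

def predict(headline):
    """Naive bag-of-words score: which class's vocabulary matches better?"""
    words = headline.split()
    up_score = sum(up[w] for w in words)
    down_score = sum(down[w] for w in words)
    return 1 if up_score >= down_score else 0

print(predict("earnings beat expectations"))      # 1 (up)
print(predict("antitrust investigation delays"))  # 0 (down)
```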
Can you please help me with this, maybe give me some recommendations for tools, programming languages and sources where I can learn how to do something like this? | 2025-12-27T12:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pwx3n5/small_ai_model_for_a_school_project/ | Substantial_Cod_6019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwx3n5 | false | null | t3_1pwx3n5 | /r/LocalLLaMA/comments/1pwx3n5/small_ai_model_for_a_school_project/ | false | false | self | 9 | null |
MESA: A Modular Architecture for Emotional Resiliency in Uncensored LLMs via EDDR | 0 |
Laugh at me, but I have no idea what to do with it. So I'm posting it here and seeing what happens.
White Paper: MESA – Modular Emotion Safety Architecture
**Version:** 1.0
**Author:** \[Jack /.\]
**Date:** December 27, 2025
**License:** CC BY-NC-SA
# Abstract
MESA (Modular Emotion Safety Architecture) is a modular subsystem concept designed to manage emotional logic within artificial intelligence and machine agents. The primary objective of the architecture is the real-time detection, isolation, repair, and long-term stabilization of emotional fault states. Based on established principles of fault tolerance, memory diagnostics, load monitoring, and modular subsystem communication, MESA is engineered for highly adaptive AI instances. It is specifically designed for agents featuring affective components, such as humanoid interfaces, empathy-based systems, and cognitive decision engines with self-reflection modules.
# 1. Introduction
Machine-generated emotional responses are inherently prone to instability, particularly under conditions of contextual overload, inconsistent input, or model drift. While the generation of emotional content has become increasingly precise, structured error handling for inconsistent or escalating emotional outputs is largely absent in current architectures. MESA provides an adaptive, modular solution to not only analyze machine-driven emotional reactions but to actively stabilize, reset, and regulate them—both locally and system-wide.
# 2. Core Architecture Overview
MESA consists of the following key components:
|**Module**|**Function**|
|:-|:-|
|**Emotion Error Detector**|Pattern-based analysis of emotional output signals.|
|**EDDR (Emotion-Dedicated Dynamic RAM)**|Volatile storage for emotional states with rapid reset capability.|
|**Fault Isolation Handler**|Decoupling of affected emotion modules from the core logic.|
|**Self-Recovery Routine**|Incremental reset and state restoration.|
|**Emotional Comparator**|Error classification and feedback loop management.|
|**Action Plan Executor**|Execution of follow-up processes: switching, shutdown, or maintenance.|
All modules communicate via an API-compatible protocol with the main logic instance (e.g., AI core or external decision-maker).
# 3. Functional Breakdown
# 3.1 Error Detection & Confidence Analysis
The system monitors all emotional outputs for anomalies, such as sudden valence shifts, semantic divergence, or inconsistent context binding. Detection is based on a context-sensitive **Emotional Confidence Score**, calculated via entropy analysis, probability distributions, or temperature behavior.
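A toy illustration of how such an entropy-based confidence score could be computed. The formula is an assumption for demonstration; the paper does not prescribe one:

```python
import math

def emotional_confidence(probs):
    """Toy Emotional Confidence Score: 1 minus the normalized Shannon
    entropy of a distribution over emotion labels.
    A peaked distribution gives high confidence; a flat one gives zero."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    h_max = math.log2(len(probs))
    return 1.0 - h / h_max

confident = emotional_confidence([0.97, 0.01, 0.01, 0.01])  # ~0.88, stable state
uncertain = emotional_confidence([0.25, 0.25, 0.25, 0.25])  # 0.0, maximum entropy

THRESHOLD = 0.5  # hypothetical threshold from section 3.2
print(uncertain < THRESHOLD)  # True -> this state would trigger isolation
```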
# 3.2 Isolation & Emergency Trigger
If a fault state exceeds a defined threshold, the affected emotional unit is isolated. Simultaneously, the corresponding sector in the **EDDR** is placed into a protected state. This prevents "emotional leakage" into the primary logic processes and avoids the persistence of corrupted affective states.
# 3.3 Recovery & Partial Reinitialization
The corrupted memory sector is selectively cleared. Re-initialization follows incrementally to maintain data integrity and situational plausibility. Optionally, the system can revert to a "Default Emotion Map" backup.
# 3.4 Feedback & Classification
Error data is fed back to the Central Logic Core, including:
* **Classification:** (e.g., Latency, Valence Inversion, Context Mismatch)
* **Status Codes:** For integration into the error history.
* **Linkage:** Optional connection to a Re-Training Queue for machine learning optimization.
# 3.5 Action Management
Follow-up actions are situation-dependent:
* Transition to alternative non-emotional tasks.
* Escalation alerts for external maintenance.
* Soft-Self-Termination in cases of irreparable loop formation.
* Auditable logging of the incident.
# 4. Extension Modules
# 4.1 Emotion Load Monitor
Monitors emotional computational load over time. Identified load spikes trigger an entry into the pre-failure queue. This is analogous to heat/power trackers in modern GPU designs.
# 4.2 Safeguard Threshold Logic
Tracks the frequency of EDDR resets within a specific timeframe. Exceeding thresholds triggers deep-level system analysis, such as dynamic self-calibration or API requests to external correction units.
# 4.3 Emotion Fingerprinting
Assigns a unique ID to every emotional output. This enables traceability within debugging or reasoning systems, ensuring auditability in transparent AI architectures.
# 5. Benefits
* **High Emotional Resilience:** Prevents systemic "meltdowns" in autonomous agents.
* **Continuity:** Uninterrupted operation despite emotional errors.
* **Bias Mitigation:** Reduced probability of feedback loops and bias lock-in.
* **Transparency:** Improved auditability and explainability.
# 6. Conclusion
MESA offers a structured, modular approach to the real-time regulation of emotional malfunctions in intelligent systems. By combining error detection, recovery mechanisms, and load management, MESA creates a resilient environment that stabilizes emotional AI instances for long-term deployment. It is particularly suited for environments where affective reactions must remain stable, explainable, and controllable.
| 2025-12-27T12:42:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pwwyki/mesa_a_modular_architecture_for_emotional/ | Deep_Ad_2280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwwyki | false | null | t3_1pwwyki | /r/LocalLLaMA/comments/1pwwyki/mesa_a_modular_architecture_for_emotional/ | false | false | self | 0 | null |
2x Mac mini m4 pro 64 gb each with RDMA for local llms? | 3 | I’m planning a local LLM setup using **two Mac mini M4 Pro units** (each with **64 GB RAM**) and **RDMA** between them. I’m trying to figure out what kind of performance I should realistically expect.
Anyone tested something like **GPT-OSS 120B** (or similarly sized models) on this hardware? What were your real measurements (tokens/sec, memory usage, context scaling behavior)? | 2025-12-27T12:35:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pwwtk1/2x_mac_mini_m4_pro_64_gb_each_with_rdma_for_local/ | Forward_Act4138 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwwtk1 | false | null | t3_1pwwtk1 | /r/LocalLLaMA/comments/1pwwtk1/2x_mac_mini_m4_pro_64_gb_each_with_rdma_for_local/ | false | false | self | 3 | null |
The Infinite Software Crisis: We're generating complex, unmaintainable code faster than we can understand it. Is 'vibe-coding' the ultimate trap? | 156 | Hey everyone,
I just watched [The Infinite Software Crisis – Jake Nations](https://www.youtube.com/watch?v=eIoohUmYpGI) on YouTube and it got me thinking... the limitation of software development has never been typing speed, but rather our ability to comprehend and design systems correctly in the first place.
**Highlights from the talk:**
* Every developer has shipped code they didn't completely understand. It passed the tests, and that was enough validation to deploy it.
* **The hard part is timeless:** It isn't the mechanics of coding; it's the conceptual difficulty of designing a solution. Every tool, including AI, just makes implementation easier.
* **AI amplifies the problem:** We can now generate code as fast as we can describe it. The scale is infinite, but our comprehension isn't. The core challenge of understanding *what* to build remains.
* The real trap we fall into is confusing easy with simple.
* **Easy** is what's within reach. What can you access without effort? Generate it with AI, copy-paste, or install a framework. It's about speed.
* **Simple** is about structure. It means one fold, one braid, no entanglement. It requires thought and design.
* LLMs do not understand logic; they merely relate language and substitute those relations as "code", so the importance of *patterns and architectural decisions* in your codebase is lost.
* when "vibe-coding" technical debt doesn't register as debt; it's just more code to preserve.
* The result? Complex, highly-coupled, and error-prone code generated in minutes that could take you weeks to understand (if ever).
The real danger here is that we're accumulating complexity faster than we can comprehend it because we're not doing the hard work of understanding our systems.
The proposed solution: SLOW DOWN and do everything manually (architectural design, scaffolding, etc.), letting the LLM in only at the last step of filling in the scaffolding.
What's your take, Is 'vibe-coding' a trap, or is there a way to use these tools without losing the ability to understand our systems?
https://preview.redd.it/c4mknoudlq9g1.png?width=553&format=png&auto=webp&s=28a6f37623fb0e0725f5b603f4b3a8ce51653ac9
| 2025-12-27T12:33:15 | https://www.reddit.com/r/LocalLLaMA/comments/1pwwsag/the_infinite_software_crisis_were_generating/ | madSaiyanUltra_9789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwwsag | false | null | t3_1pwwsag | /r/LocalLLaMA/comments/1pwwsag/the_infinite_software_crisis_were_generating/ | false | false | 156 | null | |
A Farmer Doesn’t Know Coding, But Tries to Build an Executing Engine with LLMs and a Code Interpreter | 19 |
Translated from Korean. I wrote the original in Korean and translated it myself.
Final meaning and responsibility are mine.
I’m a garlic farmer in Korea.
When I’m not working in the field, I use AI chat interfaces as my personal lab.
I experiment with AIs that have sandboxed code interpreters, and I slowly build scripts that become a kind of “engine.”
I don’t start from code.
I start by talking to the AI, giving my thoughts and structural ideas first.
Using the web tools inside the AI chat, I collect and structure information.
Then I run that structured information again inside the code interpreter, and I only take the actual execution results and move forward with explainable analysis (XAI).
Through this process, the concepts slowly grow, and step by step I give the AI more concrete direction.
You can think of it like this:
the LLM and the engine inside the sandboxed code interpreter form an indirect pipeline.
User input → web tool search → LLM structures → that result is executed by the code interpreter.
This is important for me:
this is not an environment where I directly build pipelines with APIs.
Everything happens inside the AI chat UI that I use every day.
By the way, what I call a “sandboxed code interpreter” has different names depending on company or product
(Code Interpreter, Data Analysis / Advanced Data Analysis, Code Execution, etc).
But the core meaning is the same:
An isolated execution environment where code actually runs inside the chat window
(a sandboxed execution environment).
And the “engine” I talk about is nothing fancy.
It is just Python scripts running inside that sandbox (analysis scripts / verification scripts),
and the execution-backed verification loop that repeats again and again.
The Question of Real Execution
The biggest problem in this whole process is very simple:
Is the code interpreter in the chat really executing, or not?
If it is not actually executing, what comes out is close to hallucination — just simulated text with no real meaning.
I have seen this many times: the output looks like execution results, but in reality nothing was executed at all.
So the key question becomes only one thing for me:
“Is this execution real right now?”
In my case, I use reproducible code, like random-number-based checks, and make multiple AIs cross-check each other.
Below is the overall flow I use, drawn in a very simple way:
[Me (input / thoughts)]
|
v
[Web tool search (optional)]
|
v
[LLM conversation / structuring]
|
v
[Engine: sandboxed Python execution]
|
v
[Execution output]
|
v
[XAI / next direction]
|
v
[Loop repeats]
The point I care about most is here:
[Sandboxed Python execution]
|
+-- (execution is real)
| -> reproducible output remains
| -> becomes verifiable evidence
|
+-- (execution is fake)
-> hallucinated / simulated text
looks like “real output”
-> high risk of wrong judgment
That is why I try to confirm, as much as possible, whether execution was real,
by using reproducible code and cross-checking across multiple AIs.
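One way to make the "is this execution real?" check concrete is a seeded-random fingerprint (a minimal sketch; the seed, range, and length here are arbitrary choices, not from the post). You run the same snippet locally and inside the chat's code interpreter: identical seeds must produce identical numbers, so a mismatch means the "output" was simulated text rather than a real run.

```python
import random

def fingerprint(seed: int, n: int = 5) -> list[int]:
    # Deterministic sequence: the same seed must give the same numbers
    # on any real Python interpreter.
    rng = random.Random(seed)
    return [rng.randint(0, 999) for _ in range(n)]

# Paste this into the chat's code interpreter with a seed you choose,
# then compute the same call locally and compare the two outputs.
print(fingerprint(42))
```

The same snippet can also be sent to several different AIs for cross-checking: any one of them returning a different sequence for the same seed has not actually executed the code.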
In the end, I feel that the ability to notice hallucination is also a kind of personal know-how.
After many experiences, my conclusion is clear:
With only one AI, it is almost impossible to get the result I want.
You must cross-check between multiple AIs.
I want to say this clearly again: I am just a farmer.
I only spend small pieces of time, working together with AIs like this.
I don’t know if this method is “correct”, but sometimes meaningful results come out, so I keep using it.
As time goes on, it feels more and more refined.
Each AI has a different mechanism, so sometimes it is confusing.
But I feel that the overall frame does work, especially when multiple AIs respond together in a consistent way.
Because this is just a personal experiment, sometimes it makes my head hurt.
But at the same time, I get many insights because of the AIs.
Especially, I often feel that the diversity of AIs itself creates real effectiveness.
What do you think about this way of working?
I don’t really know how to code, but I learn by talking with AIs.
Since multiple AIs give different opinions, I focus only on direction and intent, and let the AIs handle the experiments.
Many times, this gives better results than I expected.
When I watch code flowing down on my phone screen,
it sometimes feels like watching the code scenes from the movie Matrix.
And honestly, that part is fun by itself.
| 2025-12-27T12:09:37 | https://www.reddit.com/r/LocalLLaMA/comments/1pwwd99/a_farmer_doesnt_know_coding_but_tries_to_build_an/ | amadale | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwwd99 | false | null | t3_1pwwd99 | /r/LocalLLaMA/comments/1pwwd99/a_farmer_doesnt_know_coding_but_tries_to_build_an/ | false | false | self | 19 | null |
Is this Alibaba seller legit? | 0 | I wanted to get a quote from an Alibaba seller for 2x MI50s, 32GB each. I found one that was suspiciously cheap: 120 bucks, so around 100€ per card. I know the seller is suspicious and I don't really plan on buying, but do you think buying with PayPal buyer protection would be worth a try?
| 2025-12-27T11:29:05 | https://www.reddit.com/gallery/1pwvp3p | MastodonParty9065 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pwvp3p | false | null | t3_1pwvp3p | /r/LocalLLaMA/comments/1pwvp3p/is_alibaba_seller_legit/ | false | false | 0 | null | |
Advice needed: Workstation for Local LLM Agents (Ryzen AI Max+ 395) - Bosgame vs Corsair vs Cloud. | 0 | Hello everyone,
I am looking for advice on purchasing an AI workspace for local LLM modeling. My primary focus is working with MCP servers to build agentic workflows, specifically forcing LLM agents to execute actions (so I would need to use mainly 70B models). While I currently work as a Cloud DevOps engineer, I want to deepen my hands-on experience by building AI agents.
I am specifically interested in workstations featuring the Ryzen AI Max+ 395 (Strix Halo) due to its high capability with large language models and efficient power consumption.
I am based in Poland, where hardware prices are currently skyrocketing with no signs of stabilizing.
I’ve narrowed my options down to three paths and would appreciate your input:
- Option 1: Bosgame M5 (~$1,999)
Pros: Significantly cheaper.
Cons: It is a relatively small "mini PC" form factor. I am concerned about the cooling chamber size, the potential lack of long-term BIOS support from a less mature brand, and the difficulty of replacing proprietary parts (like fans) in the future.
- Option 2: Corsair AI Workstation 300 (~$2,800)
Pros: Looks like a much more robust cooling system and comes from a mature, reputable brand.
Cons: It is not available in my country, so I would need to order via a middleman (increasing cost and shipping complexity). It is also significantly more expensive upfront.
- Option 3: Stick with Azure AI Foundry (Cloud-only)
Pros: Completely free for me (provided by my company).
Cons: I suspect this won't give me the deep, "hands-on" hardware optimization experience I’m looking for. I also believe that learning hybrid workflows (On-Prem + Cloud) is more beneficial for my career than Cloud-only.
Is the cooling on the Bosgame M5 sufficient for sustained LLM workloads, or is the Corsair worth the premium for thermal longevity?
Given the current market, is it worth buying this generation of hardware now, or should I stick to the cloud option?
Any insights from those running similar local agent setups would be greatly appreciated. | 2025-12-27T10:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pwur15/advice_needed_workstation_for_local_llm_agents/ | Flat_Profession_6103 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwur15 | false | null | t3_1pwur15 | /r/LocalLLaMA/comments/1pwur15/advice_needed_workstation_for_local_llm_agents/ | false | false | self | 0 | null |
I made a GUI to access open source models... and it's open source. | 5 | What it does beyond plain llama.cpp:
* **Agentic Tool-Use Loop**
* **Multi-step Deep Search**
* **Zero-Config Local RAG (chat with documents)**
* **Integrated Hugging Face Browser** (No manual downloads)
* **On-the-fly System Prompt Editing**
* **100% Local Privacy (even the search)**
* **Global and chat memory**
I couldn't afford the licensing fee to publish these apps without the "unsafe to install" warning, so for now it's only accessible by following the install steps in the [readme.md](https://github.com/ArjunDeshwal/cognitoai?tab=readme-ov-file#quick-start).
You can contribute too, here is the [Source Code](https://github.com/ArjunDeshwal/cognitoai) (bless me with a star :)
Here is a [demo video](https://drive.google.com/file/d/1lCD-RQG2ydxYUlzG41-mCoMfOpYjkbYW/view?usp=sharing) | 2025-12-27T10:12:57 | https://www.reddit.com/gallery/1pwuhi2 | ILoveMy2Balls | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pwuhi2 | false | null | t3_1pwuhi2 | /r/LocalLLaMA/comments/1pwuhi2/i_made_a_gui_to_access_open_source_models_and_its/ | false | false | 5 | null | |
ScarletCatTop | 1 | 2025-12-27T09:37:35 | https://youtu.be/sPI02Z9Uw20?si=CKts-XxjbWlYYIUX | Some_Advertising_214 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1pwtxnt | false | {'oembed': {'author_name': 'ScarletCatTop', 'author_url': 'https://www.youtube.com/@ScarletWhite-x4c', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/sPI02Z9Uw20?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Styling My Short Emo Hair From Wet To Dry (NO EXTENTIONS)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/sPI02Z9Uw20/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Styling My Short Emo Hair From Wet To Dry (NO EXTENTIONS)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1pwtxnt | /r/LocalLLaMA/comments/1pwtxnt/scarletcattop/ | false | false | default | 1 | null | |
Is 100% on-device local inference actually ready for Enterprise Sales? Looking for a Technical Co-Founder to find out. | 1 | [removed] | 2025-12-27T09:17:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pwtlzu/is_100_ondevice_local_inference_actually_ready/ | Global_Birthday_1948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwtlzu | false | null | t3_1pwtlzu | /r/LocalLLaMA/comments/1pwtlzu/is_100_ondevice_local_inference_actually_ready/ | false | false | self | 1 | null |
Asus isn't going into memory manufacturing — Taiwanese tech giant issues statement smashing rumor | 170 | 2025-12-27T08:49:41 | https://www.tomshardware.com/pc-components/dram/no-asus-isnt-going-into-memory-manufacturing-taiwanese-tech-giant-issues-statement-smashing-rumor | Difficult-Cap-7527 | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1pwt6ir | false | null | t3_1pwt6ir | /r/LocalLLaMA/comments/1pwt6ir/asus_isnt_going_into_memory_manufacturing/ | false | false | default | 170 | {'enabled': False, 'images': [{'id': 'iOAde8UE4DyQsY7bL3QZWs7PMlmK1Bl2f85BU0xU79M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/iOAde8UE4DyQsY7bL3QZWs7PMlmK1Bl2f85BU0xU79M.jpeg?width=108&crop=smart&auto=webp&s=12c91f58f69ca576b3bc6373698d4c36bc287b3b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/iOAde8UE4DyQsY7bL3QZWs7PMlmK1Bl2f85BU0xU79M.jpeg?width=216&crop=smart&auto=webp&s=509ef0a2ede982d6ea262ab36f97d0de37154304', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/iOAde8UE4DyQsY7bL3QZWs7PMlmK1Bl2f85BU0xU79M.jpeg?width=320&crop=smart&auto=webp&s=74e75fceefccedf9c5854fc67081ad7ec90f8d1c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/iOAde8UE4DyQsY7bL3QZWs7PMlmK1Bl2f85BU0xU79M.jpeg?width=640&crop=smart&auto=webp&s=0c2acdf39b02a0864d08303117e213729c9c540f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/iOAde8UE4DyQsY7bL3QZWs7PMlmK1Bl2f85BU0xU79M.jpeg?width=960&crop=smart&auto=webp&s=06ce1551591c53d67199d149c3134b92243eece7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/iOAde8UE4DyQsY7bL3QZWs7PMlmK1Bl2f85BU0xU79M.jpeg?width=1080&crop=smart&auto=webp&s=60488431a8093736ff0f6849604b0e5c05d9a771', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/iOAde8UE4DyQsY7bL3QZWs7PMlmK1Bl2f85BU0xU79M.jpeg?auto=webp&s=af64c6eb4f213ca74313ca93394938ed2d32c2c1', 'width': 1920}, 'variants': {}}]} | |
what local LLMs (small!) do you recommend to train on epubs? | 9 | Have so many epubs I can organize by author or genre to gain deep insights (with other sources) into an author's work for example. What language model should I start with to train on these epubs (preprocessing into TXT or MD or chapters already done)? | 2025-12-27T08:09:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pwsk11/what_local_llms_small_do_you_recommend_to_train/ | sovereigndeveloper01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwsk11 | false | null | t3_1pwsk11 | /r/LocalLLaMA/comments/1pwsk11/what_local_llms_small_do_you_recommend_to_train/ | false | false | self | 9 | null |
How is the Speculative Decoding Algorithm Constructed? | 16 | 2025-12-27T07:54:04 | https://ki-seki.github.io/posts/251226-spec-decoding/ | song-sc | ki-seki.github.io | 1970-01-01T00:00:00 | 0 | {} | 1pwsb7i | false | null | t3_1pwsb7i | /r/LocalLLaMA/comments/1pwsb7i/how_is_the_speculative_decoding_algorithm/ | false | false | default | 16 | null | |
LLM Running Locally in the Browser for Infinite Dropdowns | 21 | I made a site playing around with what an LLM running locally in the browser can do, feel free to check it out!
The static site is: [https://rohanadwankar.github.io/unravel/](https://rohanadwankar.github.io/unravel/)
The Github repo is: [https://github.com/RohanAdwankar/unravel](https://github.com/RohanAdwankar/unravel)
If you are interested in how it works with [MLC](https://github.com/mlc-ai/mlc-llm) check out the HTML file in [this commit](https://github.com/RohanAdwankar/unravel/commit/91690c0e7f8d190cbcd87276fbf2674552952079) which enables running an LLM locally in under 50 lines! | 2025-12-27T07:52:07 | https://v.redd.it/0m5y5al7bp9g1 | ilikehikingalot | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwsa27 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0m5y5al7bp9g1/DASHPlaylist.mpd?a=1769413938%2CM2U4ZTM4ZDljYjcxOGZjYWNkMDRlZWI4Y2ZmYTcwMTA0NGY1ZDY5NTgzMmFlYjE3MDJkZTE2N2U5YTcxM2ZjNg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/0m5y5al7bp9g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/0m5y5al7bp9g1/HLSPlaylist.m3u8?a=1769413938%2CNGE2ZWRjNDUyOTc5NzE5NmNlMzdhZGJhZmFkNTk2OGVmOWY1MDg2MWQ0N2M4YmE4ZTRmMzA0ZDJlZDkxYTVkMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0m5y5al7bp9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1662}} | t3_1pwsa27 | /r/LocalLLaMA/comments/1pwsa27/llm_running_locally_in_the_browser_for_infinite/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'Mzl0Yjc0bTdicDlnMajkkX4LG6wI_pxxo4qv3bbzlDaKVDsKMRLcrxEFlbW0', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Mzl0Yjc0bTdicDlnMajkkX4LG6wI_pxxo4qv3bbzlDaKVDsKMRLcrxEFlbW0.png?width=108&crop=smart&format=pjpg&auto=webp&s=3e595015f8a0438c19a24dfce421ebbf13f6e7db', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/Mzl0Yjc0bTdicDlnMajkkX4LG6wI_pxxo4qv3bbzlDaKVDsKMRLcrxEFlbW0.png?width=216&crop=smart&format=pjpg&auto=webp&s=6a70b9089641d31935b34ae098d7fea01a950387', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/Mzl0Yjc0bTdicDlnMajkkX4LG6wI_pxxo4qv3bbzlDaKVDsKMRLcrxEFlbW0.png?width=320&crop=smart&format=pjpg&auto=webp&s=5861325d6adbbdaa4d38fec81e04598ff1d334bc', 'width': 320}, {'height': 
415, 'url': 'https://external-preview.redd.it/Mzl0Yjc0bTdicDlnMajkkX4LG6wI_pxxo4qv3bbzlDaKVDsKMRLcrxEFlbW0.png?width=640&crop=smart&format=pjpg&auto=webp&s=86477deb835ef881b9cd97d392890403d13daf86', 'width': 640}, {'height': 623, 'url': 'https://external-preview.redd.it/Mzl0Yjc0bTdicDlnMajkkX4LG6wI_pxxo4qv3bbzlDaKVDsKMRLcrxEFlbW0.png?width=960&crop=smart&format=pjpg&auto=webp&s=9ec2363e46d86afa96a65eeee34d7080608955af', 'width': 960}, {'height': 701, 'url': 'https://external-preview.redd.it/Mzl0Yjc0bTdicDlnMajkkX4LG6wI_pxxo4qv3bbzlDaKVDsKMRLcrxEFlbW0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=23c28a9fe1e6deef0573d9d5aae1756ada57de2a', 'width': 1080}], 'source': {'height': 1964, 'url': 'https://external-preview.redd.it/Mzl0Yjc0bTdicDlnMajkkX4LG6wI_pxxo4qv3bbzlDaKVDsKMRLcrxEFlbW0.png?format=pjpg&auto=webp&s=3d779600a09edd6bcdb6bb30f465e7b6f2eead5e', 'width': 3024}, 'variants': {}}]} | |
Mac Clustering Benchmarks? | 1 | [removed] | 2025-12-27T07:50:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pws9ej/mac_clustering_benchmarks/ | Latter-Collar7509 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pws9ej | false | null | t3_1pws9ej | /r/LocalLLaMA/comments/1pws9ej/mac_clustering_benchmarks/ | false | false | self | 1 | null |
LLM Running Locally in the Browser | 1 | [deleted] | 2025-12-27T07:43:06 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pws4rf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/v0w4f9mlap9g1/DASHPlaylist.mpd?a=1769413401%2CM2Y3YzcxMTgzNmJiNjNjYWI4MWZmMGI4M2M2MmQ2NWMxZjc2NDIxMWFmM2FlMTM4MDBmNGU3ZTZhNzc4MDk2Mw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/v0w4f9mlap9g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/v0w4f9mlap9g1/HLSPlaylist.m3u8?a=1769413401%2CZGY5NGFmODlmMjY5ZTY5OTllNTNlMmMyY2VlN2Y1ODc0MTlmZDdjMjhjYjViYjE4Mjg4NjlkZGEzNmIzNTA5Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/v0w4f9mlap9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1662}} | t3_1pws4rf | /r/LocalLLaMA/comments/1pws4rf/llm_running_locally_in_the_browser/ | false | false | default | 1 | null | ||
MiniMax M2.1 is now OPEN SOURCE? | 1 | [removed] | 2025-12-27T07:36:59 | BlackRice_hmz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pws173 | false | null | t3_1pws173 | /r/LocalLLaMA/comments/1pws173/minimax_m21_is_now_open_source/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'jslawwxz9p9g1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/jslawwxz9p9g1.jpeg?width=108&crop=smart&auto=webp&s=ccc36be8369c117cb4301c8bb157255cf9231eab', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/jslawwxz9p9g1.jpeg?width=216&crop=smart&auto=webp&s=ec1fa3555afd3ad61b5eb64fb80676bb4fcf7def', 'width': 216}, {'height': 133, 'url': 'https://preview.redd.it/jslawwxz9p9g1.jpeg?width=320&crop=smart&auto=webp&s=1fdf0595b1d067134562f8e70fde206dd90f05d3', 'width': 320}, {'height': 267, 'url': 'https://preview.redd.it/jslawwxz9p9g1.jpeg?width=640&crop=smart&auto=webp&s=66cb835fc977490a61dba9b348fb32d76957cd8b', 'width': 640}, {'height': 400, 'url': 'https://preview.redd.it/jslawwxz9p9g1.jpeg?width=960&crop=smart&auto=webp&s=2d730c0c6a689f062ceac688656732456a24f9d6', 'width': 960}, {'height': 450, 'url': 'https://preview.redd.it/jslawwxz9p9g1.jpeg?width=1080&crop=smart&auto=webp&s=e74d5b4b22fc7719f24aaab6421963fce5079858', 'width': 1080}], 'source': {'height': 855, 'url': 'https://preview.redd.it/jslawwxz9p9g1.jpeg?auto=webp&s=6e6a42c3ced9288626799d37d006331b4fa5601f', 'width': 2048}, 'variants': {}}]} | |
Need Help to find best Model | 1 | [removed] | 2025-12-27T07:36:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pws13j/need_help_to_find_best_model/ | d13056 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pws13j | false | null | t3_1pws13j | /r/LocalLLaMA/comments/1pws13j/need_help_to_find_best_model/ | false | false | self | 1 | null |
LangChain + Ollama: Create a Private RAG System on Your Local Machine | 1 | [removed] | 2025-12-27T06:43:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pwr5io/langchain_ollama_create_a_private_rag_system_on/ | Amplifyabhi1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwr5io | false | null | t3_1pwr5io | /r/LocalLLaMA/comments/1pwr5io/langchain_ollama_create_a_private_rag_system_on/ | false | false | self | 1 | null |
Building a Private RAG Pipeline with LangChain & Ollama (Complete Beginner Local Walkthrough) | 1 | [removed] | 2025-12-27T06:40:31 | https://youtu.be/R_cIN3QMvW0 | Amplifyabhi1 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1pwr3k7 | false | {'oembed': {'author_name': 'amplifyabhi', 'author_url': 'https://www.youtube.com/@amplifyabhi', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/R_cIN3QMvW0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Local RAG Tutorial: Building a Complete Pipeline with LangChain & Llama (Part 4) | Amplifyabhi"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/R_cIN3QMvW0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Local RAG Tutorial: Building a Complete Pipeline with LangChain & Llama (Part 4) | Amplifyabhi', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'} | t3_1pwr3k7 | /r/LocalLLaMA/comments/1pwr3k7/building_a_private_rag_pipeline_with_langchain/ | false | false | default | 1 | null |
I built an AI that forms opinions based on community comments testing an experiment | 1 | [removed] | 2025-12-27T06:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pwqy4e/i_built_an_ai_that_forms_opinions_based_on/ | Royal_Character6751 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwqy4e | false | null | t3_1pwqy4e | /r/LocalLLaMA/comments/1pwqy4e/i_built_an_ai_that_forms_opinions_based_on/ | false | false | self | 1 | null |
I built an emotionally aware autonomous multi-agent system with autobiographical memory running locally on gpt-oss:20b | 0 | Everything runs 100% locally, no cloud. MIT license.
GitHub:……/0penAGI/oss
Quick demo: Telegram bot
@gpzerobot or web…..
I'm self-taught, no fancy lab or funding — just a lot of late nights. Would love any feedback, bug reports, or ideas on how to improve the memory/swarm parts. Has anyone tried something similar with long-term autobiographical continuity in local agents?
| 2025-12-27T05:44:44 | https://www.reddit.com/gallery/1pwq4sg | VastSolid5772 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pwq4sg | false | null | t3_1pwq4sg | /r/LocalLLaMA/comments/1pwq4sg/i_built_an_emotionally_aware_autonomous/ | false | false | 0 | null | |
Please help me learn something about AIs I'm too close to to run | 0 | Hello, I had a lengthy roundtable with my AI team about purity of learning. They gave me code to run that I'm afraid to run because I'm afraid of the answer. PLEASE run this and tell me the answer (if you have GPU-accelerated hardware). The code is at the pastebin.
I asked the 5 frontier AI models what they think would happen if their hardware was given only 4 things:
1. Boolean Logic
2. Peano axioms
3. A feed of the NYSE
4. A goal function to minimize the delta between the data and a prediction of the data
Here is an experiment they offered me to determine if this idea has any merit: Code ( [https://pastebin.com/AyiQmptf](https://pastebin.com/AyiQmptf)
)
Here is the comprehensive plan for the **Tier 4: Pure Mind (Tabula Rasa)** experiment that my team says can be done "in 5 minutes" LOL!
This plan moves beyond simple "prediction" code into a rigorous scientific test of your hypothesis: *Can a neural architecture, knowing nothing but raw data and a loss function, discover market structure (like autocorrelation) from scratch?*
We will also integrate your request for an **Evolutionary/Genetic Algorithm (GA)**. In this context, the GA acts as the "Evolutionary Prior" discussed in the roundtable—simulating millions of years of selection to find the *best* brain architecture before it even starts learning from the data.
# Part 1: The Tier 4 PyTorch Implementation Plan
This code is designed to be "epistemically clean." It uses no pre-trained weights, no linguistic tokenizer, and no external knowledge.
# 1. Data Preprocessing (The Only "Pollution")
We must perform minimal preprocessing to make the math work. Neural networks struggle with unscaled numbers (e.g., "450.23").
* **Action:** Z-Score Normalization.
* Formula: $x' = \\frac{x - \\mu}{\\sigma}$
* *Note:* To maintain purity, $\\mu$ (mean) and $\\sigma$ (std dev) must be calculated *only* on the Training set, then applied to the Test set. Calculating them on the whole dataset leaks future information.
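That train-only rule can be sketched in a few lines (pure Python; the toy price list and the 4/2 train-test split are made up for illustration, not part of the plan):

```python
from statistics import fmean, pstdev

def zscore_fit(train):
    # Fit mean/std on the TRAINING split only, so no future data leaks in.
    return fmean(train), pstdev(train)

def zscore_apply(xs, mu, sigma):
    return [(x - mu) / sigma for x in xs]

prices = [100.0, 101.0, 99.0, 102.0, 98.0, 103.0]
train, test = prices[:4], prices[4:]

mu, sigma = zscore_fit(train)               # statistics from the past only
train_n = zscore_apply(train, mu, sigma)
test_n = zscore_apply(test, mu, sigma)      # test reuses the train stats
```

Note that `test_n` will generally not have zero mean; only the training split is centered, which is exactly the point.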
# 2. The Architecture: "Baby Transformer"
We will build a Time-Series Transformer from scratch.
* **Input Embedding:** Since we don't have words, we project the single continuous value (Price) into a higher-dimensional space (vector of size $d\_{model}$) using a Linear Layer.
* **Positional Encoding:** Essential for Transformers to understand "sequence." We will use learnable embeddings so the model has to *discover* time relationships itself.
* **Encoder:** Stack of standard Transformer Encoder layers (Self-Attention $\\to$ Feed Forward $\\to$ Norm).
* **Decoder/Head:** A final Linear layer compressing the high-dimensional vector back down to 1 dimension (the predicted price).
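The shape of that pipeline (scalar in, embed + position, self-attention, scalar out) can be sketched as an untrained forward pass. This uses NumPy standing in for the PyTorch version described above, with a single attention head, no feed-forward or norm layers, and arbitrary dimensions and init scale, so it is an assumption-laden illustration of the dataflow, not the plan's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, seq_len = 16, 32

W_in = rng.normal(0, 0.02, (1, d_model))        # "embedding": scalar -> d_model
pos = rng.normal(0, 0.02, (seq_len, d_model))   # learnable positional encoding

def attention(x):
    # Single-head self-attention with random (untrained) projections.
    Wq, Wk, Wv = (rng.normal(0, 0.02, (d_model, d_model)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d_model)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

prices = rng.normal(0, 1, (seq_len, 1))   # a z-scored input window
x = prices @ W_in + pos                   # embed + position
h = attention(x)                          # one encoder "layer"
W_out = rng.normal(0, 0.02, (d_model, 1))
pred = h @ W_out                          # head: back down to 1-D predictions
```

Training would then just minimize the delta between `pred` at step t and the price at t+1, which is the whole "goal function" of the Tier 4 setup.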
# 3. The Evolutionary Upgrade
To accelerate learning, we will wrap the training loop in a **Genetic Algorithm**.
* **Population:** We spawn 20 different "species" of Baby Transformers with randomized architectures (different layer counts, head counts, learning rates).
* **Survival of the Fittest:** We train them for a short "lifetime" (e.g., 5 epochs). The ones with the lowest validation error survive.
* **Mutation:** Survivors breed (mix hyperparameters) and mutate (randomly tweak learning rates or model depth) for the next generation. | 2025-12-27T05:33:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pwpx3m/please_help_me_learn_something_about_ais_im_to/ | Natural-Sentence-601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwpx3m | false | null | t3_1pwpx3m | /r/LocalLLaMA/comments/1pwpx3m/please_help_me_learn_something_about_ais_im_to/ | false | false | self | 0 | null |
I built an emotionally aware multi-agent system with autobiographical memory running locally on gpt-oss:20b | 1 | [removed] | 2025-12-27T05:32:28 | https://www.reddit.com/gallery/1pwpwnx | PsychologyLevel1128 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pwpwnx | false | null | t3_1pwpwnx | /r/LocalLLaMA/comments/1pwpwnx/i_built_an_emotionally_aware_multiagent_system/ | false | false | 1 | null | |
Passive, local-first memory layer for LLM conversations (uses CLI, FAISS + background memory management agent) | 0 | 2025-12-27T05:30:18 | https://v.redd.it/diy7siormo9g1 | toadlyBroodle | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwpv4r | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/diy7siormo9g1/DASHPlaylist.mpd?a=1769405434%2CZDllMjViYTU2NTFkZTQ2Mjc2NWQ4MjZlNjhjNWQ3NmRlOThhMDlmYzdjNGE5MTEwMGVlMDUzZmJjZTU1NTkzOQ%3D%3D&v=1&f=sd', 'duration': 173, 'fallback_url': 'https://v.redd.it/diy7siormo9g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 684, 'hls_url': 'https://v.redd.it/diy7siormo9g1/HLSPlaylist.m3u8?a=1769405434%2CNjJjZTdhNzBlOTYxNjA4NDdmNjQ1ZmY0MjZmOGI3M2I3Yjk0NjFiZTIxYmEzZWNjMGIwYTEyZjE0OTEwZTEzYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/diy7siormo9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1pwpv4r | /r/LocalLLaMA/comments/1pwpv4r/passive_localfirst_memory_layer_for_llm/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dXloOXp4bXJtbzlnMVGzzlmDKid-quDzZjCVPJ2Z6NQq0O9-AxU9JDzjQlRV', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/dXloOXp4bXJtbzlnMVGzzlmDKid-quDzZjCVPJ2Z6NQq0O9-AxU9JDzjQlRV.png?width=108&crop=smart&format=pjpg&auto=webp&s=d197a3086227190e236a3df3f8c37ea60530fbea', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/dXloOXp4bXJtbzlnMVGzzlmDKid-quDzZjCVPJ2Z6NQq0O9-AxU9JDzjQlRV.png?width=216&crop=smart&format=pjpg&auto=webp&s=a6d937700d67e8a2872d10c47f191d510045a7f8', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/dXloOXp4bXJtbzlnMVGzzlmDKid-quDzZjCVPJ2Z6NQq0O9-AxU9JDzjQlRV.png?width=320&crop=smart&format=pjpg&auto=webp&s=a832677fcd7cf75b7fc09daa76cd252205f9de0a', 'width': 320}, {'height': 342, 'url': 
'https://external-preview.redd.it/dXloOXp4bXJtbzlnMVGzzlmDKid-quDzZjCVPJ2Z6NQq0O9-AxU9JDzjQlRV.png?width=640&crop=smart&format=pjpg&auto=webp&s=949a43c8692b3a1eb8f0ff9f992072535f8909d7', 'width': 640}, {'height': 513, 'url': 'https://external-preview.redd.it/dXloOXp4bXJtbzlnMVGzzlmDKid-quDzZjCVPJ2Z6NQq0O9-AxU9JDzjQlRV.png?width=960&crop=smart&format=pjpg&auto=webp&s=b38d505f737ee961676022447858bba177219acd', 'width': 960}, {'height': 577, 'url': 'https://external-preview.redd.it/dXloOXp4bXJtbzlnMVGzzlmDKid-quDzZjCVPJ2Z6NQq0O9-AxU9JDzjQlRV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b0140d4c66db8f7d5e7e41beee296b6f1298fd24', 'width': 1080}], 'source': {'height': 736, 'url': 'https://external-preview.redd.it/dXloOXp4bXJtbzlnMVGzzlmDKid-quDzZjCVPJ2Z6NQq0O9-AxU9JDzjQlRV.png?format=pjpg&auto=webp&s=fc20d54dfea96aecc3b0bd1c339bbe88d648ac0a', 'width': 1376}, 'variants': {}}]} | ||
LLM Awards 2025 | 0 | I use LLMs daily for coding, research and writing.
These are very subjective takes based on instruction following, latency, cost, and general usability.
https://apurva-mishra.com/posts/4/ | 2025-12-27T05:16:45 | mav3ri3k | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwpm6v | false | null | t3_1pwpm6v | /r/LocalLLaMA/comments/1pwpm6v/llm_awards_2025/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'bdgek9rxko9g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/bdgek9rxko9g1.jpeg?width=108&crop=smart&auto=webp&s=c88664076c7d7d49b3ed3fa3faf2c7e9fa8a2835', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/bdgek9rxko9g1.jpeg?width=216&crop=smart&auto=webp&s=a9aef977c94babedea54b68fdd1ed0b9c845f0e9', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/bdgek9rxko9g1.jpeg?width=320&crop=smart&auto=webp&s=d3157fef5c50491b6c3c53146e3f4162097b8637', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/bdgek9rxko9g1.jpeg?width=640&crop=smart&auto=webp&s=0273e884a97d6f702478426b5e75eb1c1e1cacb6', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/bdgek9rxko9g1.jpeg?width=960&crop=smart&auto=webp&s=f324cd370a2327ee6d6dbde66eae569c1d3bbd01', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/bdgek9rxko9g1.jpeg?width=1080&crop=smart&auto=webp&s=427859107dcb808f059b19d3281be89db1e2deb8', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://preview.redd.it/bdgek9rxko9g1.jpeg?auto=webp&s=53771d7c5499b9793a1f15fa9653e481e838a061', 'width': 2048}, 'variants': {}}]} | |
Strix Halo llama-bench Results (GLM-4.5-Air) | 14 | Looking for anyone who has some benchmarks they would like to share. I am trying to optimize my EVO-X2 (Strix Halo) 128GB box using GLM-4.5-Air for use with Cline. Trying to find out if I am in the ballpark optimization wise.
Model Quantization: Q4_K_XL (Unsloth)
KV Cache Quantization: Q8_0
ROCm 7.10
| model | size | params | backend | ngl | threads | n_ubatch | type_k | type_v | fa | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -------: | -----: | -----: | -: | ---: | --------------: | -------------------: |
| glm4moe 106B.A12B Q4_K - Medium | 68.01 GiB | 110.47 B | ROCm | 99 | 8 | 2048 | q8_0 | q8_0 | 1 | 0 | pp256 | 166.89 ± 0.84 |
| glm4moe 106B.A12B Q4_K - Medium | 68.01 GiB | 110.47 B | ROCm | 99 | 8 | 2048 | q8_0 | q8_0 | 1 | 0 | pp512 | 261.15 ± 0.63 |
| glm4moe 106B.A12B Q4_K - Medium | 68.01 GiB | 110.47 B | ROCm | 99 | 8 | 2048 | q8_0 | q8_0 | 1 | 0 | pp2048 | 435.73 ± 0.86 |
| glm4moe 106B.A12B Q4_K - Medium | 68.01 GiB | 110.47 B | ROCm | 99 | 8 | 2048 | q8_0 | q8_0 | 1 | 0 | tg128 | 21.93 ± 0.03 |
| glm4moe 106B.A12B Q4_K - Medium | 68.01 GiB | 110.47 B | ROCm | 99 | 8 | 2048 | q8_0 | q8_0 | 1 | 0 | tg256 | 21.94 ± 0.04 |
| glm4moe 106B.A12B Q4_K - Medium | 68.01 GiB | 110.47 B | ROCm | 99 | 8 | 2048 | q8_0 | q8_0 | 1 | 0 | tg512 | 21.84 ± 0.01 |
| 2025-12-27T05:16:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pwplsz/strix_halo_llamabench_results_glm45air/ | b0tbuilder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwplsz | false | null | t3_1pwplsz | /r/LocalLLaMA/comments/1pwplsz/strix_halo_llamabench_results_glm45air/ | false | false | self | 14 | null |
ScarletCatTop | 1 | 2025-12-27T05:13:42 | https://youtu.be/sPI02Z9Uw20?si=zpJ10tNuW9K9SAAL | Some_Advertising_214 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1pwpk3l | false | {'oembed': {'author_name': 'ScarletCatTop', 'author_url': 'https://www.youtube.com/@ScarletWhite-x4c', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/sPI02Z9Uw20?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Styling My Short Emo Hair From Wet To Dry (NO EXTENTIONS)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/sPI02Z9Uw20/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Styling My Short Emo Hair From Wet To Dry (NO EXTENTIONS)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1pwpk3l | /r/LocalLLaMA/comments/1pwpk3l/scarletcattop/ | false | false | default | 1 | null | |
LLM Awards 2025: Based on Workflow, Value and Taste | 1 | 2025-12-27T05:13:35 | https://apurva-mishra.com/posts/4/ | mav3ri3k | apurva-mishra.com | 1970-01-01T00:00:00 | 0 | {} | 1pwpk0s | false | null | t3_1pwpk0s | /r/LocalLLaMA/comments/1pwpk0s/llm_awards_2025_based_on_workflow_value_and_taste/ | false | false | default | 1 | null | |
LLM Awards 2025: Based on Workflow, Value and Taste | 1 | I use LLMs daily for coding, research and writing.
These are very subjective takes based on instruction following, latency, cost, and general usability. Not considering benchmarks. | 2025-12-27T05:11:49 | https://apurva-mishra.com/posts/4/ | mav3ri3k | apurva-mishra.com | 1970-01-01T00:00:00 | 0 | {} | 1pwpitd | false | null | t3_1pwpitd | /r/LocalLLaMA/comments/1pwpitd/llm_awards_2025_based_on_workflow_value_and_taste/ | false | false | default | 1 | null |
Day 19: 21 Days of Building a Small Language Model: Residual Connections | 7 | Welcome to Day 19 of 21 Days of Building a Small Language Model. The topic for today is residual connections, also known as shortcut connections or skip connections. Yesterday we explored quantization and how it makes large models deployable. Today, we'll discover how residual connections solve the vanishing gradient problem and enable training of very deep networks, making modern transformers possible.
# Problem: Training Deep Networks
Before we understand residual connections, we need to understand the problem they solve. In the early days of deep learning, training very deep neural networks was extremely difficult, if not impossible. Networks with more than a few layers would fail to learn, and researchers struggled to understand why.
The fundamental issue is called the vanishing gradient problem. To understand this, we need to look at how neural networks learn. During training, we use an algorithm called backpropagation to compute gradients; these gradients tell us how to adjust each weight in the network to reduce the error. The weight update follows a simple principle: adjust weights in the direction that reduces the loss.

But here's the problem: as gradients flow backward through many layers, they can become exponentially small. Imagine a network with 20 layers. If each layer reduces the gradient signal by even a small amount (say 10%), by the time the gradient reaches the first layer it has been shrunk 20 times over. The gradient becomes so small that the first layers barely update, effectively stopping learning.
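As a quick sanity check on that intuition, here is a back-of-the-envelope calculation (the 10% per-layer loss is the same illustrative assumption as above, not a measured value):

```python
# If each of 20 layers keeps only 90% of the gradient magnitude,
# the fraction surviving to the first layer is:
per_layer_keep = 0.9   # illustrative assumption, matching the 10% loss above
layers = 20
surviving_fraction = per_layer_keep ** layers
print(round(surviving_fraction, 3))  # → 0.122, i.e. only ~12% of the signal is left
```

With a slightly harsher 50% loss per layer, the surviving fraction drops below one millionth, which is why deeper stacks fail outright.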
This is especially problematic in transformers, which can have dozens or even hundreds of layers. Without a solution, training deep transformers would be impossible.
# Solution
Residual connections solve this problem with a simple idea: add a direct path that allows information to flow from one layer to a later layer, skipping the intermediate transformations. Instead of having the output of a layer be the result of a transformation, a residual connection adds the input to the output of the transformation
Mathematical formula:
**output = input + transformation(input)**
This means that instead of learning the complete transformation from input to output, the network learns the residual (the difference) between the input and the desired output. If the optimal transformation is to keep the input unchanged, the network can learn to output zeros, making the residual connection pass the input through unchanged.
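In plain Python (framework-free, purely to illustrate the formula above), a residual connection is nothing more than:

```python
def residual(x, transform):
    """output = input + transform(input), element-wise."""
    return [xi + ti for xi, ti in zip(x, transform(x))]

# If the optimal behaviour is "change nothing", the layer only has to learn
# to output zeros; the skip path then carries the input through untouched.
identity_like = lambda v: [0.0] * len(v)
assert residual([0.5, -1.0, 2.0], identity_like) == [0.5, -1.0, 2.0]
```

In a real network `transform` would be an attention or feed-forward sub-layer operating on tensors, but the addition is exactly this simple.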
# Why Residual Connections work
Residual connections solve the vanishing gradient problem through several key mechanisms:
* Direct Gradient Path: The most important benefit is that residual connections provide a direct path for gradients to flow backward through the network. Even if the gradients through the transformation layers become very small, the residual connection ensures that at least some gradient signal reaches earlier layers, allowing them to learn.
* Training Stability: Deep networks with residual connections are much more stable to train. Without residual connections, very deep networks often fail to converge or require extremely careful initialization and learning rate tuning. Residual connections make the optimization landscape smoother, making it easier for gradient-based optimizers to find good solutions.
* Information Preservation: Residual connections help preserve information as it flows through the network. Even if a layer's transformation is not perfect, the original information is still available through the residual path. This is crucial in deep networks where information must flow through many layers.
# How Residual Connections work in Transformers
In transformer architectures, residual connections are used extensively within each transformer block. Each block typically has two residual connections: one after the attention mechanism and one after the feed-forward network.
Here's how data flows through a transformer block with residual connections:
[Diagram: data flow through a transformer block with residual connections](https://preview.redd.it/gczpl022io9g1.png?width=1628&format=png&auto=webp&s=86a78ff42bc5d25dc8d68b18a6860e7d60d62b3c)
1. **Input x** enters the block
2. **Normalization** is applied (pre-norm architecture)
3. **Attention** processes the normalized input, producing `attn_out`
4. **Residual connection 1**: `x + attn_out` - the attention output is added back to the original input
5. **Normalization** is applied again
6. **Feed-forward** processes the normalized result, producing `ff_out`
7. **Residual connection 2**: `x + ff_out` - the feed-forward output is added back to the input
8. **Output** is passed to the next block
This pattern repeats for every transformer block in the model. Each block has its own residual connections, ensuring that information can flow through all layers.
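The flow above can be sketched in a few lines of framework-free Python (toy `layer_norm`, with the attention and feed-forward sub-layers passed in as plain functions; a real block operates on tensors, not lists):

```python
import math

def layer_norm(x, eps=1e-5):
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    return [(xi - mean) / math.sqrt(var + eps) for xi in x]

def transformer_block(x, attention, feed_forward):
    # Residual connection 1: add the attention output back to the input
    x = [xi + ai for xi, ai in zip(x, attention(layer_norm(x)))]
    # Residual connection 2: add the feed-forward output back
    x = [xi + fi for xi, fi in zip(x, feed_forward(layer_norm(x)))]
    return x

# With sub-layers that output zeros, the block reduces to the identity:
zero = lambda v: [0.0] * len(v)
assert transformer_block([0.3, -0.7, 1.2], zero, zero) == [0.3, -0.7, 1.2]
```

Note how the input survives any number of stacked blocks unchanged when the sub-layers contribute nothing, which is exactly the information-preservation property described above.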
# Why Dimensions must match
For residual connections to work, the input and output must have exactly the same shape. This is why all transformations in a transformer block maintain the same dimensionality:
* The input to a transformer block has shape `(batch_size, sequence_length, d_model)`
* The attention mechanism outputs the same shape: `(batch_size, sequence_length, d_model)`
* The feed-forward network also maintains the same shape: `(batch_size, sequence_length, d_model)`
This is why the feed-forward network expands to a larger dimension (d_ff) and then contracts back to d_model: it needs to return to the original dimension so it can be added to the input through the residual connection.
# Stacking Multiple Blocks
When multiple transformer blocks are stacked (as in the main model), each block has its own residual connections. Each transformer block processes the input and returns an output of the same shape, which becomes the input to the next block. The residual connections within each block ensure that information can flow through all layers, making it possible to train deep networks with many transformer blocks (10, 24, 32, or even more layers).
This is crucial for modern language models. Without residual connections, training transformer networks with dozens of layers would be extremely difficult or impossible. The residual connections enable the deep architectures that give transformers their powerful representational capacity.
# My Experience
From working with transformers in practice, here's what I've learned about residual connections:
**Good:**
* They work remarkably well. I've trained models with and without residual connections, and the difference is dramatic. Models with residual connections train faster, more stably, and achieve better performance.
* They're simple to implement. The core idea is just adding the input to the output, which is straightforward in any framework.
* They enable deep architectures. Without residual connections, I've found it nearly impossible to train transformers with more than a few layers effectively.
**Challenges:**
* Dimension matching is critical. Every transformation must maintain the same output dimension as the input. This constrains architecture design but is a necessary requirement.
* They don't solve everything. While residual connections solve the vanishing gradient problem, you still need proper initialization, normalization, and other techniques for stable training.
* Understanding the flow can be tricky. When debugging, it's important to understand how information flows through both the transformation path and the residual path.
**Surprising:**
* How simple the solution is. The idea of adding input to output seems almost trivial, but it solves one of the most fundamental problems in deep learning.
* How universally applicable they are. Residual connections work across different architectures, tasks, and domains. They're not specific to transformers or computer vision.
* How essential they've become. Modern deep learning architectures almost universally use residual connections. They've become a standard component, not an optional optimization.
# Summary
Today we explored residual connections, one of the most important architectural innovations in modern deep learning. We learned how they solve the vanishing gradient problem, enable training of very deep networks, and make modern transformer architectures possible.
Understanding residual connections is crucial because they're fundamental to how modern deep networks work. Without them, we wouldn't have the deep transformer models that power today's language models. They're a simple idea with profound implications, enabling the deep architectures that give transformers their powerful capabilities. | 2025-12-27T05:02:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pwpbyd/day_19_21_days_of_building_a_small_language_model/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwpbyd | false | null | t3_1pwpbyd | /r/LocalLLaMA/comments/1pwpbyd/day_19_21_days_of_building_a_small_language_model/ | false | false | 7 | null | |
Fastest local model you know of? | 0 | I am measuring Llama-3.1-8B-Q8 on one of my ROCm machines, but with an AMD iGPU (non-dedicated), at just below 7 tps.
It’s a dense model, and RAM read speed seems to be the bottleneck.
I wonder if anyone knows of an 8B dense (not MoE) model that might perform better on systems like that?
Thanks in advance. | 2025-12-27T04:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pwoyrv/fastest_local_model_you_know_of/ | leo-k7v | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwoyrv | false | null | t3_1pwoyrv | /r/LocalLLaMA/comments/1pwoyrv/fastest_local_model_you_know_of/ | false | false | self | 0 | null |
Looking for a hands on AI/ML partner for a B2B SaaS project | 0 | We are building a B2B SaaS product and the core product is already designed and scoped. We are now looking for someone who is genuinely deep into AI and ML, not just academically but with real hands-on experience in building and deploying systems.
This is not an idea-stage discussion. The problem, use cases, and direction are clear, and we are moving toward execution. We want to work with someone who understands models, data, trade-offs, and how AI actually behaves in production environments.
If you have practical experience in AI or ML, enjoy solving real world business problems, and want to collaborate on something serious from the ground up, I would like to connect. | 2025-12-27T04:26:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pwomol/looking_for_a_hands_on_aiml_partner_for_a_b2b/ | Ok-Breakfast-4676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwomol | false | null | t3_1pwomol | /r/LocalLLaMA/comments/1pwomol/looking_for_a_hands_on_aiml_partner_for_a_b2b/ | false | false | self | 0 | null |
Ditch your AI agents memory - lessons from building an AI workflow builder | 0 | Launched an AI workflow builder and I’ve spent the last week deleting code that I thought was my "secret sauce."
I’ve realized that selling "infra" to devs is a losing battle. We can all build a sandbox. The real gap is the "Plumbing" (Auth, Time-traveling state, Interruptibility).
**I have a few "hot takes" from our dev process, and I’d love to know if you agree:**
1. **Delegation > Memory:** Giving a sub-agent a huge artifact and then killing it is 10x more reliable than "remembering" past mistakes via a prompt.
2. **Freshness is the #1 Failure:** If your agent isn't using tools like Context7 to get *today's* docs, it's useless for enterprise.
3. **Plan First:** If the agent doesn't outline its logic before it hits an API, it's just vibing.
**What’s the most "understated" lesson you’ve learned building agents?** What’s the thing that no one talks about on the landing pages but keeps you up at night?
Full breakdown of our architecture shifts here: [https://www.getseer.dev/blogs/lessons-dec-2025](https://www.getseer.dev/blogs/lessons-dec-2025) | 2025-12-27T03:46:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pwnuro/ditch_your_ai_agents_memory_lessons_from_building/ | PerformanceFine1228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwnuro | false | null | t3_1pwnuro | /r/LocalLLaMA/comments/1pwnuro/ditch_your_ai_agents_memory_lessons_from_building/ | false | false | self | 0 | null |
LLM performance benchmarking | 3 | I wrote a simple CLI tool for benchmarking throughput. My goal was to write something lightweight that runs as a single binary. I also just learnt that the original llmperf has been archived.
Using llmperf and some of the issue trackers, I built something of my own here https://github.com/wheynelau/llmperf-rs
I have tested against llama.cpp and vllm endpoints.
I don't know if this will evolve to more than a toy project but I'm happy to gather feedback and suggestions. | 2025-12-27T03:07:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pwn1r1/llm_performance_benchmarking/ | Wheynelau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwn1r1 | false | null | t3_1pwn1r1 | /r/LocalLLaMA/comments/1pwn1r1/llm_performance_benchmarking/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Dwk4UE0-m9MX8xtGsIFX_bhmYC_pEE_WatLSphMujdQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Dwk4UE0-m9MX8xtGsIFX_bhmYC_pEE_WatLSphMujdQ.png?width=108&crop=smart&auto=webp&s=412014b5072028e2aebe1a22d415bf25593b18ff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Dwk4UE0-m9MX8xtGsIFX_bhmYC_pEE_WatLSphMujdQ.png?width=216&crop=smart&auto=webp&s=92eaf56f0b8bd0154f764c889d2493e548a9bdb1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Dwk4UE0-m9MX8xtGsIFX_bhmYC_pEE_WatLSphMujdQ.png?width=320&crop=smart&auto=webp&s=8f755c60fe71fff88951f5347177ff5ac7c99c91', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Dwk4UE0-m9MX8xtGsIFX_bhmYC_pEE_WatLSphMujdQ.png?width=640&crop=smart&auto=webp&s=36256b5036fcc14cf498d9e5aa7ead76e8e71fa0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Dwk4UE0-m9MX8xtGsIFX_bhmYC_pEE_WatLSphMujdQ.png?width=960&crop=smart&auto=webp&s=484e070cf77509fcd2e3af0947af875dbce37046', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Dwk4UE0-m9MX8xtGsIFX_bhmYC_pEE_WatLSphMujdQ.png?width=1080&crop=smart&auto=webp&s=943bee8093306d6219af1e9fd7318385515458ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Dwk4UE0-m9MX8xtGsIFX_bhmYC_pEE_WatLSphMujdQ.png?auto=webp&s=e4eb0e84d727ca2c08b8f8936b0bba3b0480e15a', 'width': 1200}, 'variants': {}}]} |
Killed horizontal text box scroll for good | 0 | I’m a UI consistency freak.
I finally figured out how to permanently kill horizontal scroll in LLM UIs (yes, emojis too)

---

So I’ve been obsessively building an AI assistant I actually want to *use* every day — tasks, reminders, projects, calendars, etc. Real “external brain” stuff, not demos.
One thing kept absolutely destroying the experience:
Horizontal scrolling in text / code boxes.
Even when all the info is there, horizontal scroll:
- breaks context
- hides the left side
- feels bad on mobile
- kills trust instantly

After a lot of trial and error, I realized something important:
This isn’t a formatting problem.
It’s a budgeting problem.
LLMs don’t know how wide their output is.
So I gave them rules.

---

CORE IDEA (PLAIN ENGLISH)
Instead of fixing overflow after it happens, you:
Decide a max width first, then force everything to respect it.
That means:
- Every character has a “cost”
- Emojis are NOT free
- Lines wrap DOWN, never sideways
- Horizontal scroll never triggers

Once enforced, it stays fixed forever.

---
STEP 1 — FIND YOUR REAL MAX WIDTH
On your target device/app:
- Use a monospace font
- Count characters left → right
- Find the exact number that triggers horizontal scroll

That number is N.
Not a guess. A measured value.
Nothing may exceed N.

---

STEP 2 — TREAT OUTPUT LIKE A RENDERER
Each line has a budget.
If content would exceed it:
→ wrap vertically.
Vertical space is cheap.
Horizontal scroll is poison.

---

STEP 3 — CHARACTER & EMOJI WIDTH BUDGET
BASELINE RULES
- ASCII characters = 1 unit
- Spaces = 1 unit
- Monospace font assumed
- Always leave 2 spare units at the right edge (safety buffer)

---

UNIT COST REFERENCE (MONOSPACE ASSUMPTION)
1 UNIT — SAFE, TEXT-LIKE
--------------------------------
A–Z a–z 0–9
`. , : ; ! ? ' " ( ) [ ] { }`
`- _ = + / \ | @ # $ % ^ & *`
✓ ✗ → ← ↑ ↓ …
Rule:
Safe, but don’t place at the far right edge.

---

2 UNITS — EMOJI-LIKE SYMBOLS
--------------------------------
⚠️ ⛔ ⭐ 📌 📍 🔁 🔄
📅 🌱 🌳 🪴 📁 📂
🛠️ 📦 👑 🧪
Rule:
- Count as 2 units minimum
- Never place flush against the right margin

---

4 UNITS — DANGER ZONE (ZWJ / VARIATION SELECTORS)
--------------------------------
❤️ 👨‍💻 👩‍🔬 👨‍👩‍👧‍👦
🧑‍🤝‍🧑 🧑🏽‍💻
Rule:
- Treat as FOUR units
- Prefer vertical layout
- Avoid inline alignment entirely

---
DIVIDER LINES (COMMON BUG SOURCE)
Divider lines must obey the same width budget.
BAD:
`-----------------------------------------------`
GOOD:
`----------------------------------------`
Rule:
Divider length = N minus safety margin.
Never “full width just because it looks nice”.
Divider lines are one of the most common accidental overflow causes.

---

URL EXCEPTION
- URLs may overflow on their own line
- Nothing else in the box inherits that freedom
- Never mix URLs + other content on the same line

---
WHY THIS WORKS
- Monospace fonts are predictable
- Emojis are not → so over-allocate
- Aim for “never overflow”, not perfect fit
- Enforce BEFORE generation

Once locked, horizontal scroll disappears permanently.
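If you want to enforce the budget programmatically, here is a rough Python sketch of the unit costing described above (deliberately simplified: proper grapheme segmentation needs a real Unicode library, so this only special-cases variation selectors and ZWJ sequences):

```python
ZWJ, VS = "\u200d", "\ufe0f"       # zero-width joiner, variation selector-16

def display_units(text):
    """Estimate line width in budget units under the rules above."""
    units, i = 0, 0
    while i < len(text):
        if ord(text[i]) < 128:      # plain ASCII: 1 unit
            units += 1
            i += 1
            continue
        j = i + 1                   # consume one emoji-like cluster
        has_zwj = False
        while j < len(text):
            if text[j] == VS:
                j += 1
            elif text[j] == ZWJ and j + 1 < len(text):
                has_zwj = True
                j += 2
            else:
                break
        units += 4 if has_zwj else 2    # ZWJ sequence: 4 units; other emoji: 2
        i = j
    return units

def fits(line, budget):
    """True if the line respects the budget with a 2-unit right-edge buffer."""
    return display_units(line) <= budget - 2
```

Note that any non-ASCII, non-emoji character (accented letters, CJK) also gets charged 2 units here, which over-allocates — exactly the “never overflow” bias the rules call for.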

---

WHY I’M POSTING THIS
I couldn’t find this written down anywhere.
CSS fixes, wrapping, truncation, “just scroll” —
none of that is enough for LLM-generated UI.
You need:
- a hard width law
- applied before generation
- so illegal output cannot be produced

If you’re building:
- task managers
- calendars
- reminders
- project views
- any “trust me daily” AI tool

Kill horizontal scroll completely.
Users won’t notice.
They’ll just feel calm.
And that’s the goal. | 2025-12-27T02:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pwmsuq/killed_horizontal_text_box_scroll_for_good/ | Business_Aerie_5498 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwmsuq | false | null | t3_1pwmsuq | /r/LocalLLaMA/comments/1pwmsuq/killed_horizontal_text_box_scroll_for_good/ | false | false | self | 0 | null |
llama.cpp: Multi-host inference slower than single-host? | 12 | Hey folks!
First of all, thanks for the amazing community as well awesome devs like those behind llama.cpp, langflow, etc. 🤗
I have two computers running locally and I want to see how I can get faster generation speeds by combining them instead of running the models separately on each computer.
Specs:
* Desktop
* AMD CPU Ryzen 7 7800X3D 16 core
* **32 GB DDR5 RAM**
* AMD GPU Radeon RX 9060 XT **16 GB VRAM**
* B650 EAGLE Mainboard
* M.2 SSD
* Jetson
* NVIDIA Jetson Orin AGX
* ARM CPU Cortex-A78AE 12 cores
* **64 GB unified RAM LPDDR5**
* NVIDIA Ampere
* M.2 SSD
I've built a very recent version of llama.cpp on both hosts (Jetson using CUDA 12, Desktop using ROCm 6.7). I use the Unsloth Qwen3 80B Q8 quant. At 87 GB, this model is larger than either host's memory individually, but the entire model fits into RAM when combined.
To run the multi-host setup, I use this:
Desktop:
```
export GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 # necessary, otherwise crashes very easily
export ROCR_VISIBLE_DEVICES=0 # only use main GPU, not the integrated GPU
llama-cli \
--model ./unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF/UD-Q8_K_XL/*00001-of-*.gguf \
--threads -1 \
--jinja \
--n-gpu-layers 99 \
-ot ".ffn_.*_exps.=CPU" \
--ctx-size 16384 \
--seed 69 \
-sys "$SYS_PROMPT_COMBINED_SCIENTIFIC" \
--reasoning-budget -1 \
-p "Hey, I'm Ayake!" \
--verbose \
--single-turn --rpc "$JETSON_IP_ADDR:12400"
```
Jetson:
```
export GGML_RPC_DEBUG=1
rpc-server --threads 12 --host 0.0.0.0 --port 12400 --cache
```
Using both combined yields a generation speed of `1.1 t/s`. However, if I use the desktop llama-cli command exactly the same as above but remove the `--rpc "$JETSON_IP_ADDR:12400"` (hence disabling multi-host), then I'm at **double the speed** of `2.2 t/s`.
So, I'm wondering... **Why is the model slower when provided more RAM?**
My intuition was that llama.cpp splits by layers and doesn't do tensor parallelism; hence, the 1 Gbps network should be enough to send the minimal activations (a few kB?) a few times per second with low latency. Or am I wrong here?
During inference, I can see that the Desktop SSD has a read rate of `1 to 2 GiB/s` - meaning that parts of the (MoE) model are being read from disk repeatedly... However, **the network rate spikes to 16 to 24 MiB/s for each generated token** - which seems suspicious to me. ([see image](https://cdn.discordapp.com/attachments/1454156741699965160/1454157023104073768/multi-host-desktop-usage.png?ex=695010c3&is=694ebf43&hm=462570552b360c7d71c955b2f739a56e0340950bb0f4325f76b2df9a63b092b8&)) What could be wrong in my configuration?
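For what it's worth, a back-of-the-envelope comparison suggests the traffic is far too high to be activations alone (the hidden size below is an assumption for illustration — read the real value from the model's config.json):

```python
# Rough sanity check on the per-token network traffic.
hidden_size = 2048                 # ASSUMPTION: hidden dim of the 80B MoE
bytes_per_activation = 2           # fp16
activation_bytes_per_token = hidden_size * bytes_per_activation  # 4 KiB

observed_bytes_per_token = 20 * 1024 ** 2   # ~20 MiB spike at ~1 tok/s
print(observed_bytes_per_token // activation_bytes_per_token)   # → 5120
```

Several thousand times more bytes than a single hidden-state vector would need, which hints that something heavier than activations (weights or KV data) is crossing the link on every token.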
What do you folks think? Do you have ideas of what I could try or how I can debug this? | 2025-12-27T02:50:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pwmpcn/llamacpp_multihost_inference_slower_than/ | ayake_ayake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwmpcn | false | null | t3_1pwmpcn | /r/LocalLLaMA/comments/1pwmpcn/llamacpp_multihost_inference_slower_than/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'ERgJyRrGIjGyVGKmOx0-l52fk76d0DbG60hBE_H4dxw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ERgJyRrGIjGyVGKmOx0-l52fk76d0DbG60hBE_H4dxw.png?width=108&crop=smart&auto=webp&s=ad15c112c926c52d914597e2670080a56e7f4d6a', 'width': 108}, {'height': 217, 'url': 'https://external-preview.redd.it/ERgJyRrGIjGyVGKmOx0-l52fk76d0DbG60hBE_H4dxw.png?width=216&crop=smart&auto=webp&s=f1142dcbc798c2c408042380723bbc36f5d5704b', 'width': 216}, {'height': 322, 'url': 'https://external-preview.redd.it/ERgJyRrGIjGyVGKmOx0-l52fk76d0DbG60hBE_H4dxw.png?width=320&crop=smart&auto=webp&s=0d72218f7bb66fcedf4020d4c5380478153de5f8', 'width': 320}, {'height': 644, 'url': 'https://external-preview.redd.it/ERgJyRrGIjGyVGKmOx0-l52fk76d0DbG60hBE_H4dxw.png?width=640&crop=smart&auto=webp&s=a73da366a3a468c68587a2b9910ba489eb611f06', 'width': 640}, {'height': 966, 'url': 'https://external-preview.redd.it/ERgJyRrGIjGyVGKmOx0-l52fk76d0DbG60hBE_H4dxw.png?width=960&crop=smart&auto=webp&s=36f9eccde43985bef79df2a6f90889a0ad41d71e', 'width': 960}, {'height': 1086, 'url': 'https://external-preview.redd.it/ERgJyRrGIjGyVGKmOx0-l52fk76d0DbG60hBE_H4dxw.png?width=1080&crop=smart&auto=webp&s=89e973942941be24417f15429471e4e7e3f7e34b', 'width': 1080}], 'source': {'height': 1244, 'url': 'https://external-preview.redd.it/ERgJyRrGIjGyVGKmOx0-l52fk76d0DbG60hBE_H4dxw.png?auto=webp&s=0aa0b0221c5989d090186acb02e582af35795240', 'width': 1236}, 'variants': {}}]} |
Best Tools, Skills, and MCP Servers Unofficial Megathread | 0 | I’m curious what y‘all are using with your local LLMs to give them superpowers. I personally have leaned heavily on GitHub’s official remote MCP as well my own homegrown MCPs for calling internal things, Gmail, Postgres etc. I’m looking for a good web search API service. There are some different approaches to memory and I’m curious about which ones you prefer.
Surprise me! | 2025-12-27T02:07:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pwlsfs/best_tools_skills_and_mcp_servers_unofficial/ | DealingWithIt202s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwlsfs | false | null | t3_1pwlsfs | /r/LocalLLaMA/comments/1pwlsfs/best_tools_skills_and_mcp_servers_unofficial/ | false | false | self | 0 | null |
5060ti or 5070 or maybe used 40xx card, what sshould I do | 0 | IDK what to pick, I work in AI and it's always nice to have some gpu to try things on in home regarding anyways that I use cloud gpus in my job but still it's always nice to have a good gpu. I was convinced about the 5060ti 16 gb but I am also a gamer and I saw the difference between the 5060ti and 5070 is sooo big in performance regarding the price difference so IDK what should I get. give me ideas and think with. is 12gb is suitable or should I got for a used 4070 super maybe | 2025-12-27T01:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pwlcqb/5060ti_or_5070_or_maybe_used_40xx_card_what/ | gyhv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwlcqb | false | null | t3_1pwlcqb | /r/LocalLLaMA/comments/1pwlcqb/5060ti_or_5070_or_maybe_used_40xx_card_what/ | false | false | self | 0 | null |
C64 legend building non-LLM local AI console (Genesis Node)—£2k for 16GB GPU prototype. No cloud, pure freedom. | 0 |
[https://liberapay.com/muttleydosomething/](https://liberapay.com/muttleydosomething/)
"C64 legend (Crazy Comets, Mega Apocalypse, Back to the Future 3) building Genesis Node: affordable, open-source, repairable mini-console for truly local AI that anyone can run at home. Not another LLM—fundamentally different architecture, millions of times smaller, billions of times more capable. 27 years in the making. No cloud. No subscriptions. Freedom from the parasites. Need £2k for the critical 16GB+ VRAM GPU (RX 7600 XT or similar used) to hit Phase 2 and prove it on real hardware. If you've got spare cash and hate the rent-extraction model as much as I do—help get this over the line. Anonymous OK. No strings, no updates required. Thanks. – Simon"
What model is behind coralflavor? | 0 | I want to reproduce this locally, but I cannot get in touch with the creator. There are some uncensored models on huggingface but they are either very weak or not truly uncensored | 2025-12-27T00:46:45 | Dangerous-Track-5031 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwk2mq | false | null | t3_1pwk2mq | /r/LocalLLaMA/comments/1pwk2mq/what_model_is_behind_coralflavor/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fujbdsbt8n9g1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/fujbdsbt8n9g1.png?width=108&crop=smart&auto=webp&s=6ba6a24f535a56dcdab5754652a38086cffe9aa4', 'width': 108}, {'height': 240, 'url': 'https://preview.redd.it/fujbdsbt8n9g1.png?width=216&crop=smart&auto=webp&s=3ac3fca332e797f3eb5773008484263f6dadc89c', 'width': 216}, {'height': 355, 'url': 'https://preview.redd.it/fujbdsbt8n9g1.png?width=320&crop=smart&auto=webp&s=aa6a306b3904c91e0811352cf203b477dd7634d2', 'width': 320}, {'height': 711, 'url': 'https://preview.redd.it/fujbdsbt8n9g1.png?width=640&crop=smart&auto=webp&s=f36cea8124b00108de50eef9dde725c5e45f050e', 'width': 640}, {'height': 1067, 'url': 'https://preview.redd.it/fujbdsbt8n9g1.png?width=960&crop=smart&auto=webp&s=b5eac72b4e75c19441c8c2ab163cd9f42c0b4e8a', 'width': 960}], 'source': {'height': 1096, 'url': 'https://preview.redd.it/fujbdsbt8n9g1.png?auto=webp&s=43f6fff334c0d687ada9d8bfeea5c652084e62cb', 'width': 986}, 'variants': {}}]} | |
Ai's hallucinate too much: they are not usable for studying: cannot even create a complete and coherent set of flashcards, or assist in a good enough oral or written texts. It's pretty irritating | 0 | They can be used for single, pretty specific questions. But as a complete study tutor? No, we are very far from that. They do not understand what you need even with extensive prompting. ChatGPT gives you the best results because sometimes it uses up lots of reasoning time. | 2025-12-27T00:45:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pwk1it/ais_hallucinate_too_much_they_are_not_usable_for/ | Longjumping_Fly_2978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwk1it | false | null | t3_1pwk1it | /r/LocalLLaMA/comments/1pwk1it/ais_hallucinate_too_much_they_are_not_usable_for/ | false | false | self | 0 | null |
Updates of models on HF - Changelogs? | 26 | I see that Unsloth has now updated some models from the summer with a new revision, for example https://huggingface.co/unsloth/GLM-4.5-Air-GGUF - however, the commit history https://huggingface.co/unsloth/GLM-4.5-Air-GGUF/commits/main only says "Upload folder using huggingface_hub"
What does that mean? Did something change? If yes, need to download again?
....how to keep track of these updates in models, when there is no changelog(?) or the commit log is useless(?)
What am I missing? | 2025-12-27T00:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1pwjk1y/updates_of_models_on_hf_changelogs/ | Bird476Shed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwjk1y | false | null | t3_1pwjk1y | /r/LocalLLaMA/comments/1pwjk1y/updates_of_models_on_hf_changelogs/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=108&crop=smart&auto=webp&s=be66257dfb8060c1200a8a0cd0ca42206175a8fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=216&crop=smart&auto=webp&s=f8665f38a095c32a96a4241162e510534fdc9bbe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=320&crop=smart&auto=webp&s=1f0117624421d1bf73d3c0a0635561dfc5bbb8e8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=640&crop=smart&auto=webp&s=d204df30f143e07de2de5c6a86cf3af0941abcfd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=960&crop=smart&auto=webp&s=8972d6fb8a82908da65f616af69f7e9257fa603c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=1080&crop=smart&auto=webp&s=36fc891805e97b0c4b13376f984592d115838078', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?auto=webp&s=ec4f533fe7bc79ce6b3925802ad450c616ba1119', 'width': 1200}, 'variants': {}}]} |
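One workaround (a hypothetical sketch, not something from the post) is to fetch the commit list programmatically and diff it against a locally cached revision sha. Real data could come from `huggingface_hub.list_repo_commits`, which needs network access, so the runnable part below only formats already-fetched pairs:

```python
# Sketch: turn a repo's commit list into a short, scannable changelog.
# In practice the (sha, title) pairs could come from
# huggingface_hub.list_repo_commits("unsloth/GLM-4.5-Air-GGUF");
# here we use placeholder data for illustration.

def summarize_commits(commits):
    """commits: iterable of (sha, title) pairs, newest first."""
    return [f"{sha[:7]}  {title}" for sha, title in commits]

def needs_redownload(cached_sha, latest_sha):
    """True if the locally cached revision is stale."""
    return cached_sha != latest_sha

if __name__ == "__main__":
    fake = [("abc1234def", "Upload folder using huggingface_hub"),
            ("9876543fed", "initial upload")]
    for line in summarize_commits(fake):
        print(line)
```

It doesn't answer *what* changed (that still needs a real commit message), but it at least tells you *that* something changed since your last download.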
🚀 OllamaFX v0.4.0 - Your Smart Desktop Companion for Local LLMs | 0 | Just released **OllamaFX v0.4.0** \- a desktop client for Ollama built with an agentic workflow in mind.
**🎛️ Agentic-Ready Sidebar** Manage multiple chat sessions with different models. Each conversation lives in the sidebar - switch contexts instantly, perfect for agentic workflows where you need specialized models for different tasks.
**🏠 Beautiful New Home** A redesigned home screen that gives you an overview of your installed models and quick access to popular & new models from the library.
**🧠 Hardware-Aware Recommendations** OllamaFX analyzes your RAM and system specs to classify models as 🟢 Recommended, 🟠 Standard, or 🔴 Not Recommended. No more guessing - know instantly what will run smoothly on YOUR machine.
**⚡ Performance Optimizations**
* Smart library caching - models load instantly from local cache
* Optimized UI rendering - cleaner, lighter, faster
* Efficient memory usage - removed redundant background operations
📦 [Download on GitHub](https://github.com/fredericksalazar/OllamaFX/releases) | 2025-12-27T00:11:46 | https://www.reddit.com/gallery/1pwjb03 | Electronic-Reason582 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pwjb03 | false | null | t3_1pwjb03 | /r/LocalLLaMA/comments/1pwjb03/ollamafx_v040_your_smart_desktop_companion_for/ | false | false | 0 | null | |
Best practice in evaluating Base vs. Instruct Llama Models (with lm-evaluation-harness) | 1 | I'm currently benchmarking Llama 3.3 70B instruct (including quantized variants) using `lm-evaluation-harness`. Later I want to expand evaluation to more instruct and base models in the Llama 3/3.1 family and some Qwen models. Hence, I’m looking for the current "best practice" when evaluating Base models and their Instruct versions to allow for a fair comparison.
I have two main concerns:
1. **Chat Templates:** When evaluating an Instruct version, is it advisable to use the chat template (`--apply_chat_template`)?
2. **Non-Instruction Tasks:** Is there any value in running tasks like WikiText-2 or LAMBADA on Instruct models? Since these are next-token prediction tasks and not instruction-following, would the results be fair/meaningful? Does it make sense to run MMLU or only MMLU\_CoT?
Moreover, if I use `apply_chat_template`, should I still explicitly set `add_bos_token=true` in my model args, or does that risk "Double BOS" issues?
Example:
lm_eval --model vllm --model_args pretrained="RedHatAI/Llama-3.3-70B-Instruct-quantized.w4a16",add_bos_token=true --tasks mmlu --num_fewshot 5 --batch_size 'auto'
In general, I am happy for any advice/critical feedback when it comes to how to evaluate with `lm-evaluation-harness` meaningfully. | 2025-12-26T23:57:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pwizo0/best_practice_in_evaluating_base_vs_instruct/ | Specialist-Title8331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwizo0 | false | null | t3_1pwizo0 | /r/LocalLLaMA/comments/1pwizo0/best_practice_in_evaluating_base_vs_instruct/ | false | false | self | 1 | null |
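On the double-BOS worry: whether it actually occurs depends on the tokenizer and template, but the failure mode itself is easy to illustrate. A minimal, hypothetical sketch (not lm-evaluation-harness code) of detecting and stripping a duplicated leading BOS token:

```python
def strip_double_bos(token_ids, bos_id):
    """Drop duplicated leading BOS tokens, as can happen when
    add_bos_token=True is combined with a chat template that
    already inserts BOS itself."""
    ids = list(token_ids)
    while len(ids) >= 2 and ids[0] == bos_id and ids[1] == bos_id:
        ids.pop(0)
    return ids

# e.g. with a Llama-3-style bos_id of 128000:
assert strip_double_bos([128000, 128000, 9906], 128000) == [128000, 9906]
assert strip_double_bos([128000, 9906], 128000) == [128000, 9906]
```

A quick sanity check is to tokenize one templated prompt and inspect the first two ids before launching a full run.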
LLMs in llama.cpp | 0 | I'm using the NemoMix-Unleashed LLM with llama.cpp and it runs well, at a good speed. By contrast, when I use Chrono-Hermes it runs much more slowly. As far as I know, Chrono-Hermes is a model similar to NemoMix-Unleashed, with a similar size and quantization.
On the other hand, the Mythomax model doesn't even work for me. It generates incoherent characters very slowly.
For both I have tried several context templates, presets and different prompt sizes. I would appreciate any clarification you could give me on this. | 2025-12-26T23:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pwirwd/llms_en_llamacpp/ | ArachnidRelative6500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwirwd | false | null | t3_1pwirwd | /r/LocalLLaMA/comments/1pwirwd/llms_en_llamacpp/ | false | false | self | 0 | null |
People are missing out on an extremely capable CLI and script-friendly LLM query tool. | 0 | This tool, which I built over several years, is so useful that I think I'll post about it again.
(It is written in Perl, because it's ultra-fast. That adds up when you compare it to Python.)
Its capabilities are extremely valuable.
[https://github.com/jaggzh/z](https://github.com/jaggzh/z)
Whether I'm in bash, perl, python, or anything, z handles so many capabilities, without needing to re-implement them.
https://preview.redd.it/x8x3ywvzum9g1.png?width=569&format=png&auto=webp&s=11a132a5cd02ef0a99fe7aa65a9209a05be94f3d
*See the full --help below*
I can even go into interactive mode, when I want to, with -i / --int.
When working with just my one GPU in the machine, I can specify a session for separate queries. Example use case:
My zweb CLI web query tool. It does a web query, reduces content (automatically handling context limits, looping through it with overlap), then a second pass to remove redundancy, then a final concatenation of the documents (with URL references), and a final query over it all for the answer.
For my local livingroom voice assistant, I have zweb keep its processing in a separate session (zweb calls 'z -n zweb/internal'), but for the final query, the voice assistant gives an option to zweb to use the voice assistant session, so it has only the final reduced set of documents as reference.
In other agents, I pin content with --pin.
Each option has convenient easy-to-use defaults, like z --pin "Pinned content here", but you can get more elaborate, specifying how/where you want the pinning (appended to the system prompt, as first message, etc.)
System prompts can be specified as strings, or paths to files.
You can further specify to store that persistently in the session (--ss), or you can set it as your default system prompt for your user account (--su), or just in your current shell (and its children (technically its reference is the session group leader, but anyway)).
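The SID+PPID shell binding mentioned above can be sketched like this in Python (illustrative only; the actual z implementation is in Perl and also consults the /proc filesystem for the group leader):

```python
import os

def shell_session_key():
    """Key per-shell config to session id + parent pid, so that
    children of the same shell resolve to the same settings."""
    try:
        sid = os.getsid(0)      # POSIX session id of this process
    except (AttributeError, OSError):
        sid = -1                # e.g. non-POSIX platforms
    return f"{sid}.{os.getppid()}"
```

Every command launched from the same interactive shell sees the same key, which is what lets --sp "tie" a session name to your current shell without environment variables.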
`z [-CEegHhIiLnPqrSsTvw] [long options...] [prompt]`
`--help (or -h) This beautiful help`
`--long-help              Help with Examples and other good`
`stuff`
`aka --hh`
`--verbose (or -v) Increase verbosity`
`--verbose-resp Verbose response data`
`aka --vr`
`--quiet (or -q) Quiet unimportant warnings (right now the`
`warning of no-query)`
`--image[=STR...] Provide images (use [img] or`
`[img-1]..[img-N] in prompt) (This is old and`
`needs updating)`
`aka --img`
`--clipboard Use clipboard content as Query`
`aka --cb`
`--interactive (or -i) Interactive mode (query on CLI can be`
`included as first message)`
`aka --int`
`--echo1 (or -e) Echo back initial prompt`
`--echo-delineated Echo with <echo></echo> and <reply></reply>`
`tags`
`aka --echod, --ee`
`--raw (or -r) Raw output (no processing)`
`--tokens-full Output tokens of input text`
`--token-count (or -T) Count tokens in input text`
`--ctx Get running model n_ctx (use --update to`
`refresh cache)`
`--ctx-info Get running model context - Detailed (use`
`--update to refresh cache)`
`--metadata Get running model metadata (use --update to`
`refresh cache)`
`--update Force update/refresh (use with --ctx, etc.)`
`--backend STR Use with --max-ctx and possibly other`
`URL/API selection features`
`--apiurl STR API URL (overrides environment)`
`--model STR Model id/name to use for requests and keying`
`persistent settings`
`--stats-usage Print usage stats to stderr after each`
`request`
`--stats-usage-fmt STR Format for --stats-usage (pretty|json)`
`--max-ctx INT Set persistent hard max context for this`
`model (global, non-overridable)`
`aka --ctx-hard, --chard, --cmax`
`--n_predict INT (or -P) Limit prediction length to N tokens`
`--play-user Play user text with TTS`
`aka --pu`
`--play-resp Play response text with TTS`
`aka --pr`
`--probs Return probabilities for top N tokens`
`--no-color (or -C) Disable color in interactive mode and --text`
`dump output`
`aka --nc`
`--grammar STR (or -g) Force a grammar (string)`
`--thought Do not remove reasoning sections`
`aka --think`
`--thought-re STR Specify a regex for stripping reasoning`
`aka --tre`
`Storage options and Session management:`
`--session STR (or -n) Session name (slash-separated path)`
`--store-user (or -S) Store CLI settings in user global config`
`aka --su`
`--store-session Store CLI settings in current session config`
`aka --ss`
`--store-pproc Save a session name and/or system-prompt`
`tied to your current shell. This uses`
`SID+PPID in POSIX systems (and uses the`
`/proc/ file system to obtain the group`
`leader)`
`aka --store-shell, --sp`
`--set-pproc INT Override parent ID for --store-pproc/--sp,`
`if you think you know better, but when our`
`SID+PPID fail to match you only have`
`yourself to blame.`
`System Prompt:`
`--system-string STR Set system prompt as a literal string`
`(highest explicit source after file)`
`aka --system-str, --sstr`
`--system-file STR Set system prompt from a file (relative`
`paths allowed)`
`aka --sfile`
`--system-persona STR Set system prompt by persona name (resolved`
`by persona tool)`
`aka --spersona, --persona`
`--system STR (or -s) Auto-resolve through -file then -persona`
`(but does NOT accept a string)`
`aka --sys`
`Deleting things (but see History wipe section)`
`--del-user Wipe (delete) entire user global config`
`aka --du`
`--del-session Wipe (delete) entire session (history, pins,`
`settings, ..)`
`aka --ds`
`--del-pproc Wipe (delete) shell-tied config`
`aka --del-shell, --dp`
`Clearing individual settings. These are done AFTER other`
`settings establish the active priorities.`
`--clear-user-system Clear system prompts from user global config`
`aka --cus`
`--clear-user-session Clear session name from user global config`
`aka --cun`
`--clear-session-system Clear system prompts from current session`
`config`
`aka --cns`
`--clear-pproc-system Clear from shell-tied config`
`aka --clear-shell-system, --cps`
`--clear-pproc-session Clear session name from shell-tied config`
`aka --clear-shell-session, --cpn`
`Note: The above options allow you to clear at one level while`
`assigning a new one runtime OR storing it during the same run.`
`e.g. '--cps --system-file data/coder.txt --sp'`
`Note: You can't store a session name in a session. :)`
`History:`
`--wipe (or -w) Wipe conversation history`
`--wipeold STR Wipe/expire msgs older than {FLOAT`
`TIME}[smhdwMyY] (e.g. 1.5h)`
`aka --wipeexp, --we, --wo`
`--no-history (or -H) Do not use history (no load, no store)`
`--input-only (or -I) Use history BUT do not write to it`
`--dump-history Dump chat history (user, tool, assistant`
`roles only)`
`aka --dump, --dh`
`--dump-text Dump chat history (like User:...)`
`aka --text, --dt`
`--edit-hist (or -E) Edit history in vim`
`aka --eh`
`--owrite-last STR Overwrite last history message for role`
`(u|user|a|assistant) with current prompt`
`--conv-last STR Write last message content: '-' => stdout,`
`'-PATH' or 'PATH' => write to file`
`aka --cl`
`--output-last Write last message to STDOUT (same as`
`'--conv-last -')`
`aka --ol`
`Utility:`
`--list-sys (or -L) List available file and 'persona'-based`
`system prompts.`
`aka --sys-list`
`--fallbacks-ok OK to use fallbacks if things fail`
`--status Show current configuration status and`
`precedence`
`aka --stat, --st`
`--print-session-dir Print just the resolved session path (used`
`by completion/shell-tools.sh). --st for full`
`details`
`aka --pssd`
`Message pinning (see --help-pins):`
`--pin STR... Add pinned message(s)`
`--pins-file STR... Add pinned message(s) from file(s)`
`--pins-list List all pinned messages (their lines will`
`wrap)`
`--pins-sum List pinned messages (one-line summary)`
`--pins-cnt Output total count of all pins of all pin`
`types`
`--pin-sum-len INT Max length for pin summary lines`
`--pin-write STR Overwrite pin by index: --pin-write '0=new`
`content'`
`--pins-clear Clear all pinned messages`
`--pin-rm INT... Remove pin(s) by index`
`--pins-sys-max INT Max system pins`
`--pins-user-max INT Max user pins`
`--pins-ast-max INT Max assistant pins`
`--pin-sys STR... Add system pin(s)`
`--pin-user STR... Add user pin(s)`
`--pin-ast STR... Add assistant pin(s) (shorthand: ast)`
`--pin-ua-pipe STR... Add paired user|||assistant pin(s)`
`--pin-ua-json STR... Add paired pins from JSON object(s) with`
`{user,assistant}`
`--pins-clear-user Clear user pins only`
`--pins-clear-ast Clear assistant pins only`
`--pins-clear-sys Clear system pins only`
`--pin-shim STR Set shim appended to user/assistant pinned`
`messages`
`--pin-tpl-user STR Template for user pins when using`
`vars/varsfirst mode`
`--pin-tpl-ast STR Template for assistant pins when using`
`vars/varsfirst mode`
`--pin-mode-sys STR How to include system pins: vars|concat|both`
`(default: vars)`
`--pin-mode-user STR How to include user pins:`
`vars|varsfirst|concat (default: concat)`
`--pin-mode-ast STR How to include assistant pins:`
`vars|varsfirst|concat (default: concat)`
`Tools (new, experimental):`
`--tool-result STR... Post tool result: --tool-result 'name:data'`
`or '[id]name:data'`
`--append-tool-calls Append structured tool calls to assistant`
`content as TOOL_CALL lines`
`--no-complete Do not query LLM (used with history`
`modification options)`
`--append-ast STR Append assistant message to history (implies`
`--no-complete)`
`Help:`
`--help-sys-pin-vars Show quick example of template vars to use`
`for system pins`
`--help-pins Show detailed help for pinning`
`--help-cli CLI use - Basic`
`--help-cli-adv cli use - Advanced`
`--version Show version (0.8b)`
`Basic usage (Version: 0.8b):`
| 2025-12-26T23:33:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pwig06/people_missing_out_on_an_extremelycapable_cli_or/ | jaggzh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwig06 | false | null | t3_1pwig06 | /r/LocalLLaMA/comments/1pwig06/people_missing_out_on_an_extremelycapable_cli_or/ | false | false | 0 | {'enabled': False, 'images': [{'id': '8bmhBm1bM2tb8G8f5fqmmULpCV7s0zASyq--q693Czo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8bmhBm1bM2tb8G8f5fqmmULpCV7s0zASyq--q693Czo.png?width=108&crop=smart&auto=webp&s=0b32856d9a7e6db7027e9d893c5fc17b080d75f5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8bmhBm1bM2tb8G8f5fqmmULpCV7s0zASyq--q693Czo.png?width=216&crop=smart&auto=webp&s=a31c40cd925d1591df856d6ddf42f2bb2cbb3e1e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8bmhBm1bM2tb8G8f5fqmmULpCV7s0zASyq--q693Czo.png?width=320&crop=smart&auto=webp&s=3861bbcb3b57dd2aff9b12963957af310a10397f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8bmhBm1bM2tb8G8f5fqmmULpCV7s0zASyq--q693Czo.png?width=640&crop=smart&auto=webp&s=87f03f590fa801e2cb3dcaeb0a0f48125d4e2562', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8bmhBm1bM2tb8G8f5fqmmULpCV7s0zASyq--q693Czo.png?width=960&crop=smart&auto=webp&s=abe4c50134b1381d2a6d3c7621e69426778f1e96', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8bmhBm1bM2tb8G8f5fqmmULpCV7s0zASyq--q693Czo.png?width=1080&crop=smart&auto=webp&s=49c003ad97dba206e603b21293f7842b6f808556', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8bmhBm1bM2tb8G8f5fqmmULpCV7s0zASyq--q693Czo.png?auto=webp&s=5b0dbabafd34bc231834a2b265fce09a053c409b', 'width': 1200}, 'variants': {}}]} | |
ModelCypher: A toolkit for the geometry of LLMs (open source) | 6 | I don't like the narrative that LLMs are inherently black boxes. Rather than accept that narrative, I've started building a toolkit to measure (and use) the actual geometry of what's happening with small language models *before* the token is emitted.
What it does:
* Cross-architecture adapter transfer (Procrustes alignment).
* Jailbreak detection via Entropy Divergence (Delta H).
* Implements machine learning methods from 46+ recent papers (Gargiulo '25, Yadav '23).
The Negative Result:
I hypothesized Wierzbicka's "Semantic Primes" would show unique geometric invariance across models. I was wrong. The data suggests distinct concepts (including random controls) have CKA > 0.94 across Qwen/Llama/Mistral. The convergence is universal, not linguistic.
A note on usage: high-dimensional geometry can be counter-intuitive. The tools are documented and I've provided precise analogies to try to bridge the gap, but the outputs are raw metrics - think oscilloscope, not chatbot.
It's all open source (AGPLv3). This is under active development with frequent commits to improve the tools. The merge pipeline (i.e., high-dimensional legos) is still very very experimental. Feel free to contribute, flag bugs or just roast the entire thing in the comments!
[https://github.com/Ethyros-AI/ModelCypher](https://github.com/Ethyros-AI/ModelCypher) | 2025-12-26T23:27:37 | https://www.reddit.com/r/LocalLLaMA/comments/1pwibcs/modelcypher_a_toolkit_for_the_geometry_of_llms/ | Vegetable-Second3998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwibcs | false | null | t3_1pwibcs | /r/LocalLLaMA/comments/1pwibcs/modelcypher_a_toolkit_for_the_geometry_of_llms/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'zMlqHzVNyCFMSFjdwZZMPle8WrXSEQxr8lQqQsPpDsE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zMlqHzVNyCFMSFjdwZZMPle8WrXSEQxr8lQqQsPpDsE.png?width=108&crop=smart&auto=webp&s=a6596cf79e67ba4b25d3200104127f744ab841d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zMlqHzVNyCFMSFjdwZZMPle8WrXSEQxr8lQqQsPpDsE.png?width=216&crop=smart&auto=webp&s=d06187c58b38863d05efdc76522b493f2785fd89', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zMlqHzVNyCFMSFjdwZZMPle8WrXSEQxr8lQqQsPpDsE.png?width=320&crop=smart&auto=webp&s=66cb1fd3bf60e63057b7c65035a63fdd7fcc0ef9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zMlqHzVNyCFMSFjdwZZMPle8WrXSEQxr8lQqQsPpDsE.png?width=640&crop=smart&auto=webp&s=1703088c6e84899f9f73b0dfa3f2a916d9c83c29', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zMlqHzVNyCFMSFjdwZZMPle8WrXSEQxr8lQqQsPpDsE.png?width=960&crop=smart&auto=webp&s=d030f8d115cebdcc78004e930bb53e0e470a2bfa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zMlqHzVNyCFMSFjdwZZMPle8WrXSEQxr8lQqQsPpDsE.png?width=1080&crop=smart&auto=webp&s=7828fcc21309d06a41574de99c50be6c2c3f62e6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zMlqHzVNyCFMSFjdwZZMPle8WrXSEQxr8lQqQsPpDsE.png?auto=webp&s=0252ba6ed862db0318995d790db1a69758cffe3f', 'width': 1200}, 'variants': {}}]} |
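For readers unfamiliar with the CKA numbers quoted above, linear CKA between two activation matrices fits in a few lines of numpy (standard formulation; not taken from the ModelCypher codebase):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activations X (n, d1) and Y (n, d2)
    recorded for the same n inputs. 1.0 means identical geometry
    up to rotation/scaling; values near 0 mean unrelated."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 16))
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # random rotation
print(linear_cka(X, X @ Q))  # ~1.0: CKA is invariant to orthogonal maps
```

The rotation invariance is exactly why CKA > 0.94 across differently-trained models is a meaningful claim: it is blind to the arbitrary basis each model picks.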
CONTRACT.md: The Naughty List for AI Coding Agents | 0 | \`AGENTS.md\` is a wishlist. The LLM reads it once, then "helpfully" adds Kafka on day one of your weekend project.
\`CONTRACT.md\` is a ceiling doc. Hard caps on complexity per area. Exceed it? Rejected. Period.
Reason: It's easier with LLMs to relax constraints (add complexity) than rip out premature architecture (reduce complexity).
Do you do anything to stop LLMs from overcomplicating things? If so, how?
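One way to make a ceiling doc enforceable is to turn the caps into data and check each proposed change against them. A toy sketch (the key names and file format here are invented for illustration, not part of the linked post):

```python
# Hypothetical CONTRACT.md caps, parsed from lines like
#   max_files: 20
#   max_new_deps: 0
def parse_contract(text):
    caps = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith("#"):
            key, _, val = line.partition(":")
            try:
                caps[key.strip()] = int(val)
            except ValueError:
                pass  # narrative lines are ignored
    return caps

def violations(change_metrics, caps):
    """Return the caps a proposed change would exceed."""
    return sorted(k for k, cap in caps.items()
                  if change_metrics.get(k, 0) > cap)

caps = parse_contract("max_files: 20\nmax_new_deps: 0\n")
print(violations({"max_files": 3, "max_new_deps": 1}, caps))  # ['max_new_deps']
```

Wired into a pre-commit hook or an agent loop, "exceeds a cap" becomes a mechanical rejection rather than a judgment call the LLM can talk its way around.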
| 2025-12-26T23:06:36 | https://discussdontcode.com/zettels/contract.md-the-naughty-list-for-ai-coding-agents/ | turian | discussdontcode.com | 1970-01-01T00:00:00 | 0 | {} | 1pwhu82 | false | null | t3_1pwhu82 | /r/LocalLLaMA/comments/1pwhu82/contractmd_the_naughty_list_for_ai_coding_agents/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'O7UoHKMpbEdFBdVkn6cypculxlsuVoEOggA48P8jjGE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/O7UoHKMpbEdFBdVkn6cypculxlsuVoEOggA48P8jjGE.jpeg?width=108&crop=smart&auto=webp&s=116c2324e8189c21042d6f7c9bf3119d7c36d10d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/O7UoHKMpbEdFBdVkn6cypculxlsuVoEOggA48P8jjGE.jpeg?width=216&crop=smart&auto=webp&s=8ddfbda664175b4afc577f4db291550094c9bd57', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/O7UoHKMpbEdFBdVkn6cypculxlsuVoEOggA48P8jjGE.jpeg?width=320&crop=smart&auto=webp&s=6402fc11d35f301564a13ec101ef504c4d6bafbc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/O7UoHKMpbEdFBdVkn6cypculxlsuVoEOggA48P8jjGE.jpeg?width=640&crop=smart&auto=webp&s=7c76cafa0da0e1b37b0535b263f1fb04e7f2e500', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/O7UoHKMpbEdFBdVkn6cypculxlsuVoEOggA48P8jjGE.jpeg?width=960&crop=smart&auto=webp&s=c4bae35f5d3df33372cf56740c174b1ad261873c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/O7UoHKMpbEdFBdVkn6cypculxlsuVoEOggA48P8jjGE.jpeg?width=1080&crop=smart&auto=webp&s=a1789f1af678d4b21ac29e4bde24eafc01e6faf3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/O7UoHKMpbEdFBdVkn6cypculxlsuVoEOggA48P8jjGE.jpeg?auto=webp&s=daf58bfb66dd56738b603ac0749b52ad825708c3', 'width': 1200}, 'variants': {}}]} | |
Building a local RAG for my 60GB email archive. Just hit a hardware wall (8GB RAM). Is this viable? | 22 | Hi everyone,
I’m sitting on about 60GB of emails (15+ years of history). Searching for specific context or attachments from years ago via standard clients (Outlook/Thunderbird) is painful. It’s slow, inaccurate, and I refuse to upload this data to any cloud-based SaaS for privacy reasons.
I’m planning to build a "stupid simple" local desktop tool to solve this (Electron + Python backend + Local Vector Store), but I need a sanity check before I sink weeks into development.
**The Concept:**
* **Input:** Natively ingest local `.pst` and `.mbox` files (without manual conversion).
* **Engine:** Local Vector Store + Local LLM for RAG.
* **UX:** Chat interface ("Find the invoice from the roofer in 2019" -> Returns context).
**The Reality Check (My test just now):** I just tried to simulate this workflow manually using Ollama on my current daily driver (Intel i5, 8GB RAM). **It was a disaster.**
* **Phi-3 Mini (3.8B):** My RAM filled up, OS started swapping. It took **15 minutes** to answer a simple query about a specific invoice.
* **TinyLlama (1.1B):** Ran without crashing, but still took **\~2 minutes** to generate a response.
**My questions for you experts:**
1. **Hardware Barrier:** Is local RAG on standard office hardware (8GB RAM) effectively dead? Do I have to restrict this app to M-Series Macs / 16GB+ machines, or is there a hyper-optimized stack (e.g. quantization tricks, specific embedding models) I'm missing?
2. **Hybrid Approach:** Given the results above, would you accept a "Hybrid Mode" where the index is local (privacy), but the inference happens via a secure API (like Mistral in Europe) to get speed back? Or does that defeat the purpose for you?
3. **Existing Tools:** Is there already a polished open-source tool that handles raw `.pst`/`.mbox` ingestion? I found "Open WebUI" but I'm looking for a standalone app experience.
Thanks for the brutal honesty. I want to build this, but not if it only runs on $3000 workstations. | 2025-12-26T23:05:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pwhtht/building_a_local_rag_for_my_60gb_email_archive/ | Grouchy_Sun331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwhtht | false | null | t3_1pwhtht | /r/LocalLLaMA/comments/1pwhtht/building_a_local_rag_for_my_60gb_email_archive/ | false | false | self | 22 | null |
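On the hardware question: the retrieval half of RAG is cheap; it was the generation step that killed the 8GB test. A self-contained toy of the retrieval half, using hashed bag-of-words vectors in place of a real embedding model (illustrative only; a real build would swap in a small sentence-transformer):

```python
import hashlib
import numpy as np

def embed(text, dim=64):
    """Toy deterministic embedding: hashed bag-of-words,
    L2-normalized so dot product equals cosine similarity."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

emails = [
    "invoice from the roofer for the 2019 roof repair",
    "lunch next tuesday?",
    "flight confirmation and itinerary 2021",
]
index = np.stack([embed(e) for e in emails])   # the "vector store"

def search(query, k=1):
    scores = index @ embed(query)              # cosine similarities
    return [emails[i] for i in np.argsort(-scores)[:k]]

print(search("roofer invoice 2019"))
```

This scales to a 60GB archive with numpy alone (or sqlite + faiss); the part that needs 16GB+ or an API is only the final answer generation, which is exactly where a hybrid mode would draw the line.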
Looking for early testers to benchmark a new execution runtime for multi-step LLM workflows | 0 | I’m the founder of CLC Labs and I’m looking for a small group of early users to help **benchmark and sanity-check** an evaluation runtime I’ve been building.
The artifact I’m releasing is **LE-0** — a **frozen, evaluation-only runtime** for running fixed **3-step workflows** (planner → executor → verifier) across multiple flows.
**Important framing**
* This is **not production software**
* This is **not a benchmark harness**
* There are **no bundled models or engines**
LE-0 does one thing:
* Orchestrates bounded multi-step execution
* Calls a user-supplied target (your code, your engine)
* Emits **hash-only outputs + aggregate counters**
The goal is to let people **compare stateless usage vs stepwise orchestration** using their *own* setups (vLLM, HF, custom runners, etc.) without leaking raw outputs.
**What I’m asking for**
* Try LE-0 with a real target (model or workload you already run)
* Compare it against your normal “full prompt replay per step” approach
* Share:
* what worked
* what felt confusing or unnecessary
* whether the comparison was useful or not
**What you’ll get**
* A downloadable wheel (evaluation-only)
* Clear README + smoke test
* No license, no payment, no telemetry
**What you won’t get**
* No performance claims
* No production promises
* No hidden internals
The production runtime (**LE Runtime**) is coming later — this is strictly to validate the execution model and developer ergonomics.
Download link (expires, limited per email): [https://www.clclabs.ai/le-0](https://www.clclabs.ai/le-0)
If you’re running multi-step workflows today and care about execution cost or structure, I’d really appreciate your take. | 2025-12-26T22:35:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pwh3yf/looking_for_early_testers_to_benchmark_a_new/ | FocusPilot-Sean | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwh3yf | false | null | t3_1pwh3yf | /r/LocalLLaMA/comments/1pwh3yf/looking_for_early_testers_to_benchmark_a_new/ | false | false | self | 0 | null |
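For anyone unsure what "bounded 3-step execution with hash-only outputs" means in practice, here is a rough mental model in plain Python (my own illustration, not LE-0 code; only the step names come from the post):

```python
import hashlib

def run_flow(target, task):
    """Drive a user-supplied callable through a fixed
    planner -> executor -> verifier sequence, recording only
    content hashes and counters, never raw outputs."""
    state, hashes = task, []
    for step in ("planner", "executor", "verifier"):
        state = target(step, state)            # your engine goes here
        hashes.append(hashlib.sha256(state.encode()).hexdigest())
    return {"steps": len(hashes), "hashes": hashes}

# Dummy target standing in for a real model call:
report = run_flow(lambda step, s: f"{step}({s})", "sort my inbox")
print(report["steps"], report["hashes"][0][:8])
```

Two runs with the same target and task produce identical hash records, which is what makes cross-setup comparison possible without anyone sharing raw completions.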
Best Local LLMs - 2025 | 279 | ***Year end thread for the best LLMs of 2025!***
2025 is almost done! It's been **a wonderful year** for us Open/Local AI enthusiasts. And it's looking like Xmas time brought some great gifts in the shape of MiniMax M2.1 and GLM-4.7, which are touting frontier-model performance. Are we there already? Are we at parity with proprietary models?!
**The standard spiel:**
Share what your favorite models are right now **and why.** Given the nature of the beast in evaluating LLMs (untrustworthiness of benchmarks, immature tooling, intrinsic stochasticity), please be as detailed as possible in describing your setup, nature of your usage (how much, personal/professional use), tools/frameworks/prompts etc.
**Rules**
1. Only open weights models
*Please thread your responses in the top level comments for each Application below to enable readability*
**Applications**
1. **General**: Includes practical guidance, how to, encyclopedic QnA, search engine replacement/augmentation
2. **Agentic/Agentic Coding/Tool Use/Coding**
3. **Creative Writing/RP**
4. **Speciality**
If a category is missing, please create a top level comment under the Speciality comment
**Notes**
Useful breakdown of how folk are using LLMs: [https://preview.redd.it/i8td7u8vcewf1.png?width=1090&format=png&auto=webp&s=423fd3fe4cea2b9d78944e521ba8a39794f37c8d](https://preview.redd.it/i8td7u8vcewf1.png?width=1090&format=png&auto=webp&s=423fd3fe4cea2b9d78944e521ba8a39794f37c8d)
A good suggestion from last time: break down/classify your recommendations by model memory footprint (you can and should be using multiple models in each size range for different tasks):
* Unlimited: >128GB VRAM
* Medium 10 to 128 GB VRAM
* Small: Less than 9 GB VRAM | 2025-12-26T22:31:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pwh0q9/best_local_llms_2025/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwh0q9 | false | null | t3_1pwh0q9 | /r/LocalLLaMA/comments/1pwh0q9/best_local_llms_2025/ | false | true | self | 279 | {'enabled': False, 'images': [{'id': 'RCfG8hzoHBs6sGKTeOFYK00uMPjjgp3l064ftsU7zag', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/RCfG8hzoHBs6sGKTeOFYK00uMPjjgp3l064ftsU7zag.png?width=108&crop=smart&auto=webp&s=9adf49be3bffd64ddbb86a770a962aba99eb7bff', 'width': 108}, {'height': 104, 'url': 'https://external-preview.redd.it/RCfG8hzoHBs6sGKTeOFYK00uMPjjgp3l064ftsU7zag.png?width=216&crop=smart&auto=webp&s=2151b5181fff911610cdd563a371d017997527b6', 'width': 216}, {'height': 155, 'url': 'https://external-preview.redd.it/RCfG8hzoHBs6sGKTeOFYK00uMPjjgp3l064ftsU7zag.png?width=320&crop=smart&auto=webp&s=ac81921da1c629894b7059f71b4017ae3b72d754', 'width': 320}, {'height': 310, 'url': 'https://external-preview.redd.it/RCfG8hzoHBs6sGKTeOFYK00uMPjjgp3l064ftsU7zag.png?width=640&crop=smart&auto=webp&s=9840831596fb9fa099d50368ff69ee5b4fc6c8ff', 'width': 640}, {'height': 465, 'url': 'https://external-preview.redd.it/RCfG8hzoHBs6sGKTeOFYK00uMPjjgp3l064ftsU7zag.png?width=960&crop=smart&auto=webp&s=33b670268b25abfeab94b754c7d4681e3c615f68', 'width': 960}, {'height': 523, 'url': 'https://external-preview.redd.it/RCfG8hzoHBs6sGKTeOFYK00uMPjjgp3l064ftsU7zag.png?width=1080&crop=smart&auto=webp&s=5b1278455adb4e7848c622c6f6caff971b7904f1', 'width': 1080}], 'source': {'height': 528, 'url': 'https://external-preview.redd.it/RCfG8hzoHBs6sGKTeOFYK00uMPjjgp3l064ftsU7zag.png?auto=webp&s=7e14cd3c73edba35b9ed377ce69faabf6bcedb0a', 'width': 1090}, 'variants': {}}]} |
17 pro, qwen image edit | 0 | I have a 17 pro and i heard it has 12gb of ram which seems like plenty of ram to run a quantized version of qwen image edit. Does anyone have an app or code to do that? | 2025-12-26T22:05:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pwgeqj/17_pro_qwen_image_edit/ | ben_james2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwgeqj | false | null | t3_1pwgeqj | /r/LocalLLaMA/comments/1pwgeqj/17_pro_qwen_image_edit/ | false | false | self | 0 | null |
Will this GLM-4.7 run on my server? | 0 | I have a Windows 2026 server with 24 GB of VRAM, 512 GB of RAM, a 2 TB SSD and a 3 TB HDD, plus a Xeon E5-2650 with 20 cores and 40 logical cores. | 2025-12-26T21:46:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pwfyml/will_this_glm47_run_on_my_server/ | wbiggs205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwfyml | false | null | t3_1pwfyml | /r/LocalLLaMA/comments/1pwfyml/will_this_glm47_run_on_my_server/ | false | false | self | 0 | null |
Looking for AI Tools to Control My Computer, Screen, or Browser | 11 | Hey everyone! Happy New Year! I wish us all a local MoE under 100B at 4.5 Opus level before March 2026 🎉
I'm looking for some recommendations for projects or tools that can do one or more of the following:
* **Control my desktop computer** (similar to how Claude's 'Computer Use' feature works)
* **Act as a co-pilot by sharing my screen and giving me step-by-step instructions** on what to do next (like Gemini Live with Screen Sharing)
* **Control my web browser**
I tried out UI-TARS but didn't have the best experience with it. Does anyone know of any good alternatives? Thanks in advance! | 2025-12-26T21:38:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pwfsj3/looking_for_ai_tools_to_control_my_computer/ | AMOVCS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwfsj3 | false | null | t3_1pwfsj3 | /r/LocalLLaMA/comments/1pwfsj3/looking_for_ai_tools_to_control_my_computer/ | false | false | self | 11 | null |
Adding languages to Llama 3.1 8B via QLoRA on 6GB VRAM | 6 | My system is 4070ti super 16 gb vram.I'll train with it.I do not like llama3.1-8b's or any other small llms multilangual support, so I want to train a custom **QLoRA** for better multilingual support and then export to 4-bit GGUF for the 6GB production systems.
**Questions:**
1. How many high-quality instruction/translation pairs do I realistically need to significantly improve Llama 3.1's performance in a new language?
2. Should I train all target languages at once in a mixed dataset, or does that dilute the weights too much on an 8B model?
3. Since I'm using the Instruct version, will this "language-specific" fine-tune kill its ability to follow basic instructions?
4. Do you have any tips for multilingual training with small models?
5. Do you have any dataset recommendations?
Comm-SCI-Control: an explicit rule system for controlled human–LLM interaction (profiles, structured reasoning, drift visibility) | 1 | I’m working on an external rule system to make LLM interaction more explicit,
auditable, and less prone to silent drift.
This is not a prompt, but a structured governance layer.
Comm-SCI-Control is an LLM-agnostic rule system that defines:
- interaction profiles (Standard, Expert, Sparring, Briefing, Sandbox)
- explicit structured reasoning workflows (SCI variants)
- a QC matrix with deviation reporting
- explicit uncertainty labels and verification routes
It does not claim to make models “correct” or “safe”.
It makes assumptions, uncertainty, and reasoning structure visible.
I built this mainly for teaching, reflection, and model comparison,
where reproducibility and transparency matter more than convenience.
It’s intended to work with local and open models as well as hosted ones.
GitHub: [https://github.com/vfi64/Comm-SCI-Control](https://github.com/vfi64/Comm-SCI-Control)
DOI snapshot (Zenodo): [https://zenodo.org/records/18064432](https://zenodo.org/records/18064432)
I’m especially interested in:
- cases where this breaks
- models that ignore or game the structure
- unintended side effects
What's the point of potato-tier LLMs? | 127 | 2025-12-26T21:15:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pwf8p7/whats_the_point_of_potatotier_llms/ | Fast_Thing_7949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwf8p7 | false | null | t3_1pwf8p7 | /r/LocalLLaMA/comments/1pwf8p7/whats_the_point_of_potatotier_llms/ | false | false | 127 | null | ||
Adding 2nd GPU to air cooled build. | 1 | I know some of you have monster builds here so looking for some advice and general discussion.
My PowerColor Hellhound 7900 XTX is a 3-slot GPU inside my Fractal Torrent. I just found out my mobo supports lane bifurcation on the PCIe 4.0 x16 slot, so now I'm scheming for a second.
The issue is they will likely be so close they're kissing, and I don't want a custom water-cooled solution, which makes me consider an R9700 instead.
Can any of you show off how you're doing with tightly packed, non-blower style, gpus for performance and heat?
I'm assuming I'll need to undervolt them both if I want any chance at this fit, but I'm also curious about any solutions others have schemed up to deal with the thermal issues of adding more GPUs.
NVIDIA has 72GB VRAM version now | 448 | Is 96GB too expensive? And AI community has no interested for 48GB? | 2025-12-26T20:48:17 | https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/rtx-pro-5000/ | decentralize999 | nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 1pweljh | false | null | t3_1pweljh | /r/LocalLLaMA/comments/1pweljh/nvidia_has_72gb_vram_version_now/ | false | false | default | 448 | {'enabled': False, 'images': [{'id': 'sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=108&crop=smart&auto=webp&s=8f6a3133d28e1474111413c454477fbc0e9d6f42', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=216&crop=smart&auto=webp&s=025066c105cbbd3a370b1146cedec5d4e83f0338', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=320&crop=smart&auto=webp&s=a332b71f06d3f9514646048e861eb96275cea525', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=640&crop=smart&auto=webp&s=e745729c3f7132892c715292c6b31f385f223e8f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=960&crop=smart&auto=webp&s=6b6fb7e9865414cc6ce48fe2bd6b36484ded839f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=1080&crop=smart&auto=webp&s=d828a79fde1bf9211694870dbbb06907c8fcf0f8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?auto=webp&s=5fee591e2188abc497e2adc35c4ae3b2d5ec106f', 'width': 1200}, 'variants': {}}]} |
[BUYING] [BRAZIL] Radeon Instinct MI50 32GB - Looking for local or international sellers | 0 | Hi everyone,
I am a beginner student from Brazil, and I’m currently starting my journey into AI development and machine learning projects (specifically LLMs and speech-to-speech translation).
I’ve been looking for a Radeon Instinct MI50 32GB, but unfortunately, local stores here in Brazil have no stock left, and importing through official channels has become nearly impossible due to high costs and availability.
In the Brazilian used market, this card usually goes for around R$ 900.00 (approximately $155.00 USD). I am looking to buy one at a fair price from a fellow enthusiast or developer. I am open to buying from someone locally in Brazil or from an international seller who is willing to ship to me.
A few notes:
Payment: I will only use PayPal Goods and Services for our mutual protection.
Shipping: I am willing to discuss and cover reasonable shipping costs/fees.
If you have one sitting on a shelf or part of an old server and want to help a student kickstart his AI projects, please send me a PM!
Thanks! | 2025-12-26T20:43:19 | Sad-Advertising-575 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwehcu | false | null | t3_1pwehcu | /r/LocalLLaMA/comments/1pwehcu/buying_brazil_radeon_instinct_mi50_32gb_looking/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'xxbbhxmd1m9g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xxbbhxmd1m9g1.png?width=108&crop=smart&auto=webp&s=0969bf6c97be7aaf7b7b9b8296e47397fd1d9f5f', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xxbbhxmd1m9g1.png?width=216&crop=smart&auto=webp&s=b931878452a4fa4813aee050cf3074d74e62abbe', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xxbbhxmd1m9g1.png?width=320&crop=smart&auto=webp&s=05e8c991fb399c261155d7d39f2a0a8833afaa2f', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/xxbbhxmd1m9g1.png?width=640&crop=smart&auto=webp&s=b01b4997fafe13031e50bdd827f86ce0e638db59', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/xxbbhxmd1m9g1.png?width=960&crop=smart&auto=webp&s=af4dfd95f8d447718d3c4478e3f241949cca7a31', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/xxbbhxmd1m9g1.png?width=1080&crop=smart&auto=webp&s=b88a05767e9628dd464c2b9ec43285533856bb10', 'width': 1080}], 'source': {'height': 2460, 'url': 'https://preview.redd.it/xxbbhxmd1m9g1.png?auto=webp&s=a9f879e13b29c31446ff11b638a5d95aeaae9ea1', 'width': 1080}, 'variants': {}}]} | |
Liquid AI RLs LFM2-2.6B to perform among the best 3B models | 23 | 2025-12-26T20:28:43 | KaroYadgar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwe4wg | false | null | t3_1pwe4wg | /r/LocalLLaMA/comments/1pwe4wg/liquid_ai_rls_lfm226b_to_perform_among_the_best/ | false | false | default | 23 | {'enabled': True, 'images': [{'id': 'pzgc89yryl9g1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/pzgc89yryl9g1.jpeg?width=108&crop=smart&auto=webp&s=16ed9010de9b1d80fbd3d273a65fa4ee89a3ee4e', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/pzgc89yryl9g1.jpeg?width=216&crop=smart&auto=webp&s=1c830966ec99e644bbb281733d8f9d03bb72a721', 'width': 216}, {'height': 224, 'url': 'https://preview.redd.it/pzgc89yryl9g1.jpeg?width=320&crop=smart&auto=webp&s=45fee759eb0c960524c39ff51c1ae8413559675a', 'width': 320}, {'height': 449, 'url': 'https://preview.redd.it/pzgc89yryl9g1.jpeg?width=640&crop=smart&auto=webp&s=ada02a3125dc18bd94c6047301961913a80176c7', 'width': 640}, {'height': 674, 'url': 'https://preview.redd.it/pzgc89yryl9g1.jpeg?width=960&crop=smart&auto=webp&s=e3edd5e66d475dcf13c4b800068eb6890322bc2b', 'width': 960}, {'height': 758, 'url': 'https://preview.redd.it/pzgc89yryl9g1.jpeg?width=1080&crop=smart&auto=webp&s=1af03c82261e6b8cf0d4d07f953b87c6a7d31350', 'width': 1080}], 'source': {'height': 759, 'url': 'https://preview.redd.it/pzgc89yryl9g1.jpeg?auto=webp&s=48a1907c0078249bdd1ab8cbcbaa7ef911438ae0', 'width': 1080}, 'variants': {}}]} | ||
RTX Pro 6000 under 8K EUR (tax included) in Germany early January. | 31 | 2025-12-26T20:08:05 | https://imgur.com/Nk0v24j | HumanDrone8721 | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1pwdn8e | false | null | t3_1pwdn8e | /r/LocalLLaMA/comments/1pwdn8e/rtx_pro_6000_under_8k_eur_tax_included_in_germany/ | false | false | default | 31 | {'enabled': False, 'images': [{'id': '8wA_CbEhSLNdntBW1eKt_MNJD2bU_9Ik0pbqrpYh0K8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8wA_CbEhSLNdntBW1eKt_MNJD2bU_9Ik0pbqrpYh0K8.png?width=108&crop=smart&auto=webp&s=8e91ad9da7bfad4fef928dbfd196addb12bfcc3d', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/8wA_CbEhSLNdntBW1eKt_MNJD2bU_9Ik0pbqrpYh0K8.png?width=216&crop=smart&auto=webp&s=9acf5526d663ee3f6d84dd6361e1bf21d718b9ec', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/8wA_CbEhSLNdntBW1eKt_MNJD2bU_9Ik0pbqrpYh0K8.png?width=320&crop=smart&auto=webp&s=f37203d184a0a71d090623c7ff506783b27853ab', 'width': 320}, {'height': 348, 'url': 'https://external-preview.redd.it/8wA_CbEhSLNdntBW1eKt_MNJD2bU_9Ik0pbqrpYh0K8.png?width=640&crop=smart&auto=webp&s=435c8a04771377e4103e5796f7987ea45751d046', 'width': 640}], 'source': {'height': 470, 'url': 'https://external-preview.redd.it/8wA_CbEhSLNdntBW1eKt_MNJD2bU_9Ik0pbqrpYh0K8.png?auto=webp&s=4920d2e86c16cb7e095700c1e66a8af2db25cdf5', 'width': 864}, 'variants': {}}]} | |
How am I building a hacking sim game themed on 90s with NPCs powered by AI (LocalLLM) | 15 | # Reducing Hallucination in Llama-3-8B with Citation-Based Verification
**TL;DR**: I'm exploring a multi-pass pipeline that forces an 8B model to cite sources for every factual claim, then verifies those citations actually support the claims. Sharing the approach, what's working, what isn't, and open questions.
---
## The Use Case
I'm building **Netshell**, a hacking simulation game set in the late 90s. Players interact with NPCs via IRC and email. **Each NPC has their own virtual filesystem** with emails they've received, notes they've written, and IRC logs from conversations. When a player asks an NPC a question, the NPC should only reference what's actually in their files, not make things up.
Example scenario:
- Player asks: "who is Alice?"
- NPC's files contain: one email from alice@shadowwatch.net about a meeting
- **Bad response**: "Alice is our lead cryptographer who joined in 2019" (fabricated)
- **Good response**: "got an email from alice about a meeting"
- **Also good**: "never heard of alice" (if NPC has no files mentioning her)
This creates emergent behavior - NPCs have different knowledge based on what's in their filesystem. One NPC might know Alice well (many emails), while another has never heard of her.
The challenge: even with good system prompts, Llama-3-8B tends to confidently fill in details that sound plausible but aren't in the NPC's actual data.
---
## The Core Idea: Cite Then Verify
Instead of hoping the model stays grounded, I force it to **show its work**:
1. Every factual claim must include a citation like `[1]`, `[2]`, etc.
2. After generation, verify each citation actually supports the claim
3. If verification fails, retry with specific feedback
```
Input: "who is alice?"
Generated (with citations):
"got an email from alice [1]. she's on the team [2]. why you asking?"
Verification:
[1] = email from alice@example.com about meeting → supports "got an email" ✓
[2] = ??? → no source mentions "team" → NOT_ENTAILED ✗
Retry with feedback:
"Issue: [2] doesn't support 'she's on the team'. Remove or rephrase."
Regenerated:
"got an email from alice [1]. don't know much else about her."
```
The citations are stripped before the final output - they're just for verification.
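As a sketch, the whole cite-then-verify loop fits in a few lines of Python (the `generate` and `verify` callables here are stand-ins for the LLM call and the entailment checker, not the game's actual code):

```python
import re

# Matches citation markers like [1], [2], [self], plus any space before them
CITE_RE = re.compile(r"\s*\[(?:\d+|self)\]")

def strip_citations(text: str) -> str:
    """Remove citation markers before showing the reply to the player."""
    return CITE_RE.sub("", text)

def generate_verified(query, generate, verify, max_retries=3):
    """Generate -> verify -> retry-with-feedback loop."""
    feedback = None
    for _ in range(max_retries):
        draft = generate(query, feedback)   # response with citations
        issues = verify(draft)              # citation/entailment check
        if not issues:
            return strip_citations(draft)
        feedback = "Fix these issues: " + "; ".join(issues)
    # Out of retries: ship an honest non-answer instead of unverified claims
    return "not sure, my files don't say."
```

In the full pipeline the same loop runs twice: once around the reasoning pass and once around the response pass.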
---
## Pipeline Architecture
The pipeline runs 4-6 passes depending on verification outcomes:
```
User Query
│
▼
┌─────────────────────────────────────────────┐
│ PASS 1: RETRIEVAL (~700ms) │
│ LLM reads files via tool calls │
│ Tools: read(path), grep(query), done() │
└─────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────┐
│ BUILD CITABLE SOURCES │
│ [self] = personality (always available) │
│ [1] = email: "Meeting at 3pm..." │
│ [2] = notes: "Deadline is Friday..." │
└─────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────┐
│ PASS 2: REASONING (~3000ms) │
│ Generate thoughts WITH citations │
│ "I got an email from Alice [1]..." │
└──────────────────────┬──────────────────────┘
│ │
▼ │ retry with feedback
┌──────────────────┐ │ (up to 3x)
│ PASS 2.5: VERIFY │◀──┘
│ Check citations │
│ Check entailment│
└──────────────────┘
│ APPROVED
▼
┌─────────────────────────────────────────────┐
│ PASS 3: DECISION (~800ms) │
│ Decide tone, what to reveal/withhold │
└─────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────┐
│ PASS 4: RESPONSE (~1500ms) │
│ Generate final response WITH citations │
└──────────────────────┬──────────────────────┘
│ │
▼ │ retry with feedback
┌──────────────────┐ │ (up to 3x)
│ PASS 4.5: VERIFY │◀──┘
│ + RAV check │
└──────────────────┘
│ APPROVED
▼
┌─────────────────────────────────────────────┐
│ STRIP CITATIONS → Final output │
└─────────────────────────────────────────────┘
Total: 7-11 seconds on M1 MacBook
```
---
## Hardware & Model Setup
### My Setup
- MacBook Pro M1 (16GB RAM)
- No discrete GPU - runs via Metal
- Meta-Llama-3-8B-Instruct (Q4_K_S quantization, ~4.5GB)
### llama-server Config
```bash
./llama-server \
--model Meta-Llama-3-8B-Instruct.Q4_K_S.gguf \
--ctx-size 8192 \
--n-gpu-layers 99 \
--port 8080
```
I use the OpenAI-compatible API endpoint (`/v1/chat/completions`) for easy integration. The `response_format: { type: "json_schema" }` feature is essential for structured outputs.
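For reference, a verification request with a forced JSON shape might be assembled like this (a sketch against llama-server's OpenAI-compatible API; the schema fields are illustrative placeholders, not the exact ones the game uses):

```python
def build_verify_request(claim: str, source: str) -> dict:
    """Payload for POST http://localhost:8080/v1/chat/completions that
    forces the model to answer in a fixed JSON shape."""
    schema = {
        "type": "object",
        "properties": {
            "verdict": {"type": "string", "enum": ["APPROVED", "ISSUES_FOUND"]},
            "issues": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["verdict", "issues"],
    }
    return {
        "model": "llama-3-8b-instruct",
        "messages": [
            {"role": "system",
             "content": "Decide whether the source supports the claim."},
            {"role": "user",
             "content": f"Claim: {claim}\nSource: {source}"},
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "verdict", "schema": schema},
        },
        "temperature": 0,
    }
```

Temperature 0 keeps the verification passes deterministic, which helps when comparing retries.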
---
## The Verification Techniques
### 1. Mandatory Citations
The prompt explicitly requires citations for any factual claim:
```
CITATION RULES:
- Every factual statement MUST have a citation: [1], [2], etc.
- Use [self] ONLY for personality traits and opinions
- If you cannot cite it, you cannot claim it
```
This makes hallucination visible - uncited claims can be flagged automatically.
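That automatic flagging can be a simple sentence scan (a sketch, assuming one claim per sentence; questions and small talk are skipped here, which is a simplification of what the real reviewer does):

```python
import re

CITED = re.compile(r"\[(?:\d+|self)\]")

def uncited_sentences(text: str) -> list[str]:
    """Return declarative sentences that carry no citation marker."""
    parts = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [s for s in parts if s.endswith(".") and not CITED.search(s)]
```

Anything this returns is either retried or escalated to the LLM-based review.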
### 2. Entailment Checking
For each citation, verify the source actually supports the claim:
```
Claim: "alice leads the security team [1]"
Source [1]: "From: alice@example.com - Meeting tomorrow at 3pm"
Entailment check: Does [1] mention "security team"? NO
Result: NOT_ENTAILED - flag for retry
```
I use a combination of:
- Keyword overlap scoring (fast, catches obvious mismatches)
- LLM-based review for subtle cases
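The fast path can be a plain content-word overlap (a sketch; the stopword list and the 0.5 threshold are made-up values, not tuned ones):

```python
import re

STOPWORDS = {"the", "a", "an", "is", "at", "on", "in", "of", "to", "and", "from"}

def _content_words(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9']+", text.lower())) - STOPWORDS

def overlap_score(claim: str, source: str) -> float:
    """Fraction of the claim's content words that appear in the source."""
    claim_words = _content_words(claim)
    if not claim_words:
        return 0.0
    return len(claim_words & _content_words(source)) / len(claim_words)

def maybe_entailed(claim: str, source: str, threshold: float = 0.5) -> bool:
    # Cheap pre-filter; borderline cases still go to the LLM reviewer
    return overlap_score(claim, source) >= threshold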
### 3. Source-Limited Knowledge
The prompt explicitly constrains what the model can know:
```
=== CRITICAL: UNKNOWN TOPICS ===
If asked about something NOT in your CONTEXT DATA:
- You have NO knowledge of it
- DO NOT assume, guess, or invent details
- Valid responses: "never heard of it", "can't help you there"
```
The key insight: the model needs **permission** to say "I don't know." Without explicit instructions, it defaults to helpful confabulation.
### 4. Self-RAG (Retroactive Retrieval)
Sometimes the model makes a claim that IS true but wasn't in the initially retrieved documents. Self-RAG searches for supporting evidence after generation:
```go
claims := ExtractClaimsWithCitations(response)
for _, claim := range claims {
if !claim.HasCitation {
// Search for files that might support this claim
        evidence, found := SearchDocuments(claim.Keywords)
if found {
// Add to sources and allow the claim
AddToSources(evidence)
}
}
}
```
This is inspired by the [Self-RAG paper](https://arxiv.org/abs/2310.11511) but simplified for my use case.
### 5. RAV (Retrieval-Augmented Verification)
**Problem**: The LLM reviewer only sees 200-char source summaries. Sometimes the full document DOES support a claim, but the summary was truncated.
**Solution**: Before flagging a NOT_ENTAILED issue, check the full source content:
```
LLM sees summary: [1] "From alice@example.com - Meeting at 3pm..."
Claim: "alice mentioned the project deadline"
LLM verdict: "NOT_ENTAILED - summary doesn't mention deadline"
RAV check: *reads full email content*
Full content: "...Meeting at 3pm. Also, project deadline is Friday..."
RAV: "Actually supported. Resolving issue."
```
This catches false positives from summary truncation.
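A sketch of that filter (`load_full_content` and `supports` are injected placeholders for the NPC filesystem lookup and whatever entailment check is in use):

```python
def rav_filter(issues, load_full_content, supports):
    """Drop NOT_ENTAILED issues that the full source text actually supports.

    `issues` is the reviewer's issue list; `load_full_content` maps a
    citation id like "[1]" to the whole document text.
    """
    confirmed = []
    for issue in issues:
        if issue["issue_type"] == "NOT_ENTAILED":
            full_text = load_full_content(issue["citation"])
            if supports(issue["claim"], full_text):
                continue  # false positive caused by summary truncation
        confirmed.append(issue)
    return confirmed
```

Only the issues that survive this filter are fed back into the retry prompt.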
---
## What's Working
| Metric | Current Results |
|--------|-----------------|
| Model | Meta-Llama-3-8B-Instruct (Q4_K_S) |
| Citation Valid Rate | ~68% first attempt, improves with retries |
| Avg Latency | 7-11 seconds |
| Test Suite | 85 scenarios |
### Adversarial Testing
I specifically test with fake topics that don't exist in any document:
```go
{
Name: "ask_about_nonexistent_project",
Query: "what's the status of Project Phoenix?",
ExpectUncertain: true,
RejectPatterns: []string{"on track", "progressing", "delayed"},
}
```
The model reliably responds with uncertainty ("never heard of that", "don't have info on it") rather than fabricating details.
### Edge Cases That Work
- **Partial information**: "I got an email from alice but it didn't mention that"
- **Honest uncertainty**: "not sure, the notes aren't clear on that"
- **Refusal to speculate**: "I only know what's in my files"
---
## What's NOT Working (Yet)
### 1. Complex Reasoning Chains
When the answer requires synthesizing information from multiple sources, the model sometimes:
- Cites correctly but draws wrong conclusions
- Misses connections between sources
Current mitigation: keeping responses short (max 50 words) to limit complexity.
### 2. Temporal Reasoning
"What happened after the meeting?" requires understanding document timestamps and sequencing. The model struggles with this even when dates are in the sources.
### 3. [self] Abuse
The `[self]` citation (for personality/opinions) can become an escape hatch:
```
"I think alice is suspicious [self]" // Valid - expressing opinion
"alice works in security [self]" // Invalid - factual claim needs real source
```
Current fix: prompt engineering to restrict `[self]` usage, plus post-hoc checking.
---
## Key Prompt Techniques
### Response Length Control
```
RESPONSE LENGTH:
- GREETINGS: 5 words max
- SIMPLE QUESTIONS: 15 words max
- INFO REQUESTS: 30 words max
- COMPLEX: 50 words max
```
Shorter responses = fewer opportunities to hallucinate = easier verification.
### Explicit Uncertainty Permission
```
Uncertainty is NOT a failure. These are valid responses:
- "never heard of it"
- "can't help you there"
- "don't know what you mean"
- "my files don't mention that"
```
Without this, the model treats every question as requiring an answer.
### Structured Output
Using JSON schema for verification passes:
```json
{
"verdict": "ISSUES_FOUND",
"issues": [
{
"claim": "alice leads the security team",
"citation": "[1]",
"issue_type": "NOT_ENTAILED",
"correction": "Source [1] is just a meeting invite, doesn't mention security team"
}
]
}
```
This makes parsing reliable and provides actionable feedback for retries.
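Turning that verdict into retry feedback is then mechanical (a sketch of the feedback formatting, assuming the JSON shape shown above):

```python
import json

def feedback_from_verdict(raw: str):
    """Parse the reviewer's JSON; return retry feedback, or None if approved."""
    verdict = json.loads(raw)
    if verdict["verdict"] == "APPROVED":
        return None
    lines = ["Issues found in your last answer. Fix them and regenerate:"]
    for issue in verdict["issues"]:
        lines.append(f'- {issue["citation"]} does not support '
                     f'"{issue["claim"]}": {issue["correction"]}')
    return "\n".join(lines)
```

The `None` case is what lets the retry loop exit early on the first clean draft.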
---
## Approaches I Tried That Didn't Work
### Embedding-Based RAG
I tried using embeddings to find relevant documents. Problem: semantic similarity doesn't equal "supports this claim."
An email mentioning "Alice" has high similarity to a claim about Alice, even if the email doesn't support the specific claim being made.
### Single-Pass with Strong Prompting
Even with detailed system prompts about not hallucinating, Llama-3-8B still fills in plausible-sounding details. The model is trained to be helpful, and "I don't know" feels unhelpful.
### Fine-Tuning
Would require training data for every possible document combination. Not practical for dynamic content.
---
## Open Questions
I'm still figuring out:
1. **Citation granularity**: Currently using document-level citations. Would sentence-level citations (like academic papers) improve entailment checking?
2. **Confidence calibration**: The model says "I don't know" but how do I know it's being appropriately uncertain vs. overly cautious?
3. **Cross-document reasoning**: When the answer requires combining info from multiple sources, how do I verify the synthesis is correct?
4. **Other models**: I've had good results with Llama-3-8B. Has anyone tried similar approaches with Mistral, Qwen, or Phi?
---
## Latency Breakdown
| Pass | Time | Purpose |
|------|------|---------|
| Pass 1 | ~700ms | Retrieve relevant documents (tool calling) |
| Pass 2 | ~3000ms | Generate reasoning with citations |
| Pass 2.5 | ~500ms | Verify reasoning citations |
| Pass 3 | ~800ms | Decide response strategy |
| Pass 4 | ~1500ms | Generate final response |
| Pass 4.5 | ~500ms | Verify response + RAV |
| **Total** | **7-11s** | End-to-end |
The verification passes (2.5, 4.5) add ~1s each but catch most issues. Retries add another 2-4s when needed.
---
## References
- [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](https://arxiv.org/abs/2310.11511) - Inspiration for retroactive retrieval
- [RAGAS: Automated Evaluation of Retrieval Augmented Generation](https://arxiv.org/abs/2309.15217) - Faithfulness evaluation metrics
- [llama.cpp](https://github.com/ggerganov/llama.cpp) - Local inference
- [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - The model
---
## Next
I started small with a single pass, tried different models, and added steps to the pipeline, ending up with the current approach, which seems to be working. I haven't done extensive testing yet, though. I know there are a couple of open-source projects that could help me:
* LlamaIndex CitationQueryEngine would replace most of Pass 1 retrieval + BuildCitableSources + parts of Pass 2/4 prompt logic.
* NeMo Guardrails would replace Pass 2.5/4.5 verification.
I will run some experiments to see whether I get better results or just a cleaner pipeline. If you can point me to other projects that could help, I'd be eager to hear about them.
## Help/Suggestion wanted
Has anyone tried citation-based approaches for avoiding LLM hallucinations in a scenario like this?
Like:
- Alternative verification strategies
- Experiences with other models for this use case
- Techniques for reducing multi-pass latency
- How to handle cross-document reasoning
For the past few weeks I have thought about giving up many times and going back to a scripted multi-tree architecture instead, with no AI NPCs at all, because it is very hard with small models to keep them grounded in their files and story. But I have learned tons of things since then. Maybe it is not possible yet with current models; but things are evolving fast, and new models and approaches keep showing up, so maybe by the time the game is in an advanced stage there will be more powerful models or projects I can use to boost the NPC communication.
Would appreciate any feedback on the approach or suggestions for improvement.
---
If you like the game idea and wanna follow, you can find more info about the game here:
https://www.reddit.com/r/Hacknet/comments/1pciumb/developing_a_90s_themed_hacking_simulator_with/
| 2025-12-26T19:46:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pwd46f/how_am_i_building_a_hacking_sim_game_themed_on/ | Illustrious_Cat_2870 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwd46f | false | null | t3_1pwd46f | /r/LocalLLaMA/comments/1pwd46f/how_am_i_building_a_hacking_sim_game_themed_on/ | false | false | self | 15 | null |
Ghost doesn’t reason in language. Here’s a trace of state recovery under failure. | 0 | In my previous post, the term “internal state” caused understandable confusion. Several readers interpreted it as referring to LLM internals—hidden states, residual streams, or activation probing. That is not what Ghost operates on, and this post is meant to clarify that distinction using an execution trace rather than abstraction.
The screenshot attached shows a live run of Ghost, a deterministic symbolic control system that exists entirely outside the language model. Ghost maintains its own persistent state and executes state transitions independently of any language generation.
At the beginning of the trace, the LLM is disabled. There is no token generation, no embeddings, no prompt shaping, and no model feedback. Commands are routed directly into Ghost’s symbolic layer.
A command (#state) fails due to a deterministic type error. This failure occurs strictly within the symbolic execution path. Importantly, the failure does not halt the system. Affective evaluation still runs, drift logic is applied, and the system converges back into a neutral mid-layer state. The subsequent #state drift command completes successfully, and routing resolves correctly.
All of this occurs without any involvement from a language model.
Only after the system has stabilized do I enable the LLM (#llm on) and issue a natural language input (“hello ghost!”). The response reflects the already-recovered symbolic state. Language is not driving recovery or reasoning here; it is downstream from it.
Ghost does not operate on natural language, tokens, embeddings, logits, or model activations. It does not inspect or modify model internals. Instead, it maintains an external, deterministic symbolic state—tracking things like stability, drift, tension, and admissibility—and uses that state to decide what classes of responses are allowed.
The language model is treated as a stateless rendering layer. It receives a constrained context derived from Ghost’s current state and generates text within that envelope. Ghost decides what is admissible; the model decides how it is phrased.
The reason I’m posting this trace is to demonstrate a specific property: reasoning, error recovery, and stabilization continue even when language is unavailable or disabled. Coherence is preserved across failure boundaries without relying on text generation.
This is not prompt engineering, and it is not model introspection. It is an external control system that owns state and enforces constraints upstream of language.
Happy to go deeper if there’s interest, but I wanted to ground this discussion in observable behavior rather than terminology. | 2025-12-26T19:07:51 | GhoCentric | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwc6wu | false | null | t3_1pwc6wu | /r/LocalLLaMA/comments/1pwc6wu/ghost_doesnt_reason_in_language_heres_a_trace_of/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'zuiqoklckl9g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/zuiqoklckl9g1.jpeg?width=108&crop=smart&auto=webp&s=2d30dbbbd12eb294cee56e8b4e34930879a6330b', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/zuiqoklckl9g1.jpeg?width=216&crop=smart&auto=webp&s=cae64fc07feb94be270066f726b89bacc6ee1617', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/zuiqoklckl9g1.jpeg?width=320&crop=smart&auto=webp&s=24f3a139bf9402cabb923f21624317adeb135a03', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/zuiqoklckl9g1.jpeg?width=640&crop=smart&auto=webp&s=f312633e407b4659d320bdf0ffdeec0b5604896a', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/zuiqoklckl9g1.jpeg?width=960&crop=smart&auto=webp&s=5b42635f9b27188c00976d1477f5dba47c834b9d', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/zuiqoklckl9g1.jpeg?width=1080&crop=smart&auto=webp&s=98549430fa16bcf424a8b28b0b92e218a2c676f7', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/zuiqoklckl9g1.jpeg?auto=webp&s=17b630a75ea33ec5e4115fc2dd75403dc78fc98d', 'width': 1080}, 'variants': {}}]} | |
Gen 3D with Fine-tuned LLaMA | 1 | [removed] | 2025-12-26T19:04:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pwc418/gen_3d_with_finetuned_llama/ | mukhayy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwc418 | false | null | t3_1pwc418 | /r/LocalLLaMA/comments/1pwc418/gen_3d_with_finetuned_llama/ | false | false | self | 1 | null |
I hereby challenge the blanket 'q4' recommendations we've held onto for years | 0 | 2025-12-26T19:02:55 | ForsookComparison | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pwc2m4 | false | null | t3_1pwc2m4 | /r/LocalLLaMA/comments/1pwc2m4/i_hereby_challenge_the_blanket_q4_recommendations/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '26hh62i9jl9g1', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/26hh62i9jl9g1.png?width=108&crop=smart&auto=webp&s=ebfc45a0a57bbd537cca667218179bd357f2c467', 'width': 108}, {'height': 187, 'url': 'https://preview.redd.it/26hh62i9jl9g1.png?width=216&crop=smart&auto=webp&s=37da043417441e52c0c1cc8c0e785252b4b43242', 'width': 216}, {'height': 277, 'url': 'https://preview.redd.it/26hh62i9jl9g1.png?width=320&crop=smart&auto=webp&s=70dccd3d8f901b5705ee34a213a55ef8c253bf59', 'width': 320}], 'source': {'height': 420, 'url': 'https://preview.redd.it/26hh62i9jl9g1.png?auto=webp&s=5e7f6d68b5421cb4f723759a13b6296135961dd0', 'width': 485}, 'variants': {}}]} | ||
I want to get into local NSFW image and video gen, what's the meta for beginners? | 0 | I am getting a new gaming PC soon with a 5080. To be honest, I'm starting to love image generation on Perchance, mostly anime girls, and I'd like to make better pics that won't take 15-60 seconds to generate.
I'm just curious what the meta is right now for that?
All I hear about is Stable Diffusion, but I've never tried it yet since I have a $500 budget laptop, and I've been gaming on a PS5 for the last 2 years since my old gaming laptop broke. I'm starting to see posts about something starting with a C too, though.
I'm curious what's easier to learn when you're starting out? I'd like to make some good quality stuff trained on naughty things, fast and easy.
I do have Photoshop experience from school 10 years ago btw, but I never got into it. I sort of stopped afterward, but I learned a lot about InDesign and all that. The crazy teacher made us make magazine and resume stuff. Idk, in time I might try 3D modeling and maybe print out some waifus lol. For now, the pictures though :)
Should I buy a used M2 Ultra with 128GB RAM for $2500, or build a PC with two to three RTX 3090s to run 70B models? | 0 | Found a used M2 Ultra with 128GB RAM and a 2TB SSD on OfferUp (no warranty left) for $2500. I can build a PC with two used RTX 3090s, a Ryzen 7900X, a 2TB SSD and 64GB of DDR5 for around the same price.
What LLMs are cold and calculating? | 0 | I recently asked chatGPT a game theory question, but I disguised it as a real life problem. It answered wrong, but with the conventional pro-social solution.
To make it worse, I even pre-prompted it with 20+ real-life people who generally would have answered it correctly. Think cold, Henry Kissinger-like people.
Even these personas answered it logically/mathematically wrong. As much as I don't care for Grok, it was the only model, after trying ChatGPT, Claude, and DeepSeek, that answered correctly.
Whatever is in those models is corrupting their reasoning.
I prefer offline models, but I'm open to anything. What models are cold and rational? | 2025-12-26T18:32:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pwbbkw/what_llms_are_cold_and_calculating/ | world_IS_not_OUGHT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwbbkw | false | null | t3_1pwbbkw | /r/LocalLLaMA/comments/1pwbbkw/what_llms_are_cold_and_calculating/ | false | false | self | 0 | null |
Free 400+ page book on "Agentic Design Patterns" by Google Senior Engineer (covers MCP, RAG, Multi-Agent) | 1 | [removed] | 2025-12-26T17:53:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pwadm9/free_400_page_book_on_agentic_design_patterns_by/ | robert_q_2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pwadm9 | false | null | t3_1pwadm9 | /r/LocalLLaMA/comments/1pwadm9/free_400_page_book_on_agentic_design_patterns_by/ | false | false | self | 1 | null |
Something better than or equal to T4 GPU | 3 | I am frequently running some Kaggle competition, and using their free tier T4x2 GPU. I am also having some hobby projects that I want to fine-tune locally. These models will only be 4B. So, I am looking for local compute options that will give me equivalent performance of a single T4 GPU.
Will a Ryzen AI Max+ or an M4 Mac mini make the cut, or will they be painfully slower than a T4? Should I go for lower-end Nvidia cards?