Dataset columns (from the viewer stats):

| column | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Qwen3 Next 80B A3B Instruct on RTX 5090
37
With the latest patches you can run the Q2 quant on 32GB VRAM with a 50K context size. Here's how:

```
git clone https://github.com/cturan/llama.cpp.git llama.cpp-qwen3-next
cd llama.cpp-qwen3-next
git checkout qwen3_next
time cmake -B build -DGGML_CUDA=ON
time cmake --build build --config Release --parallel $(nproc --all)
```

Grab the model from HuggingFace: [https://huggingface.co/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF/tree/main](https://huggingface.co/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF/tree/main)

If all of that went according to plan, launch it with:

```
build/bin/llama-server -m ~/models/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF/Qwen__Qwen3-Next-80B-A3B-Instruct-Q2_K.gguf --port 5005 --no-mmap -ngl 999 --ctx-size 50000 -fa on
```

That gives me around 600 t/s for prompt processing and 50-60 t/s for generation. You can also run Q4 with partial CUDA offload; adjust -ngl to 30 or whatever your VRAM allows. The performance is not great, though.
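As a sanity check before downloading a quant, you can ballpark whether weights plus KV cache fit in VRAM. A rough sketch only; the layer and KV dimensions below are invented placeholders, not the real Qwen3-Next config, and actual usage depends on the runtime:

```python
def fits_in_vram(n_params_b: float, bits_per_weight: float,
                 ctx: int, n_layers: int, kv_dim: int,
                 vram_gb: float, overhead_gb: float = 2.0) -> bool:
    """Rough check: model weights + FP16 KV cache + fixed overhead vs. VRAM.

    All figures are ballpark estimates; real memory use depends on the
    runtime, quant layout, and attention architecture.
    """
    weights_gb = n_params_b * bits_per_weight / 8            # GB for weights
    # KV cache: 2 tensors (K and V) * layers * ctx * kv_dim * 2 bytes (fp16)
    kv_gb = 2 * n_layers * ctx * kv_dim * 2 / 1e9
    return weights_gb + kv_gb + overhead_gb <= vram_gb

# Q2_K is roughly 2.6 bits/weight effective; 48 layers and kv_dim=256
# are placeholders, not the model's actual shape.
print(fits_in_vram(80, 2.6, 50_000, 48, 256, 32))  # → True
```

At Q4 (~4.5 bits/weight) the same check comes out False, which matches the post's note that Q4 needs partial offload.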
2025-10-23T07:03:34
https://www.reddit.com/r/LocalLLaMA/comments/1odwj89/qwen3_next_80b_a3b_instruct_on_rtx_5090/
lkarlslund
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odwj89
false
null
t3_1odwj89
/r/LocalLLaMA/comments/1odwj89/qwen3_next_80b_a3b_instruct_on_rtx_5090/
false
false
self
37
null
[Project] Running Gemma3 1B + multimodal Gemma 3n (text/images/audio) on Android for private journaling. Phi-4, DeepSeek R1, Qwen 2.5. Looking for beta testers.
1
Hey r/LocalLLaMA, I built **ClarityAI** - a privacy-focused journaling app that runs the latest LLMs entirely on-device, including **multimodal models that support text, images, AND audio input**. Thought this community would appreciate the technical approach.

**The interesting part:** Running multimodal LLMs on mobile is still bleeding-edge. I wanted AI journal analysis without cloud APIs, so everything runs locally using Google's LiteRT runtime.

**Available Models (all 100% on-device):**

**Instant Download (Ungated):**

* **DeepSeek R1 Distilled 1.5B** (~1.8GB) - Reasoning-specialized
* **Qwen 2.5 1.5B** (~1.6GB) - Strong mid-range performance
* **Phi-4 Mini** (~3.9GB) - Latest from Microsoft (experimental)

**Gated (requires HF approval):**

* **Gemma3 1B** (~557MB) - Incredibly lightweight, 4-bit quantized
* **Gemma 3n E2B** (~3.4GB) - **Multimodal: text + images + audio**
* **Gemma 3n E4B** (~4.7GB) - Larger multimodal variant

**Implementation:**

* **Framework:** LiteRT (Google's mobile inference runtime)
* **Optimization:** TPU acceleration on Pixel devices, GPU/CPU fallback
* **Quantization:** 4-bit for smaller models, mixed precision for larger
* **Performance:**
  * Gemma3 1B: ~1-2 sec on Pixel 9, ~3-4 sec on mid-range
  * Phi-4: ~4-6 sec on Pixel 9, ~8-12 sec on mid-range
  * DeepSeek R1: ~2-3 sec (optimized for reasoning chains)
* **Multimodal:** Gemma 3n can analyze journal photos and voice notes locally
* **Privacy:** Zero telemetry, no network after download

**Architecture:**

* SQLite + RAG-style knowledge base with local embeddings
* Dynamic model selection based on task (reasoning vs. chat vs. multimodal)
* Incremental processing (only new entries analyzed)
* Background model loading to avoid UI lag
* Support for voice journal entries with audio-to-text + sentiment analysis

**What it does:**

* Analyzes journal entries for themes, patterns, insights
* **Image analysis** - attach photos to entries, AI describes/analyzes them
* **Audio journaling** - speak entries, AI transcribes + analyzes tone/sentiment
* Builds searchable knowledge base from your entries
* Mood tracking with AI-powered pattern recognition
* All inference local - works completely offline

**Current status:** Beta-ready, looking for ~20 Android testers (especially Pixel users for TPU testing)

**Why I'm posting here:**

1. **Multimodal on mobile** - This is cutting-edge. Gemma 3n just dropped, and running it locally on phones is still unexplored territory
2. **Model diversity** - DeepSeek R1 for reasoning, Phi-4 for chat, Gemma 3n for multimodal. Curious about your experiences
3. **Performance optimization** - Any tips for running 4GB+ models smoothly on 8GB devices?

**Specific technical questions:**

1. **Gemma 3n multimodal** - Anyone tested this on Android yet? Performance/quality feedback?
2. **DeepSeek R1 distill** - Is 1.5B enough for reasoning tasks, or should I add the 7B version?
3. **Phi-4 vs Phi-3** - Worth the upgrade? Seeing mixed reports on mobile performance
4. **Quantization strategies** - Currently using 4-bit for <2B models. Better approaches?
5. **Model selection heuristics** - Should I auto-route tasks (reasoning → DeepSeek, images → Gemma 3n) or let the user choose?
6. **Audio processing** - Currently preprocessing audio before feeding it to Gemma 3n. Better pipeline?

If you're interested in testing (especially the multimodal features), comment or DM me. Would love feedback from people who understand the trade-offs.

**Tech stack:**

* Kotlin + Jetpack Compose
* LiteRT for inference
* SQLDelight for type-safe queries
* Custom RAG pipeline with local embeddings
* MediaPipe for audio preprocessing
* Ktor for model downloads from HuggingFace

**Bonus:** All models support CPU/GPU/TPU acceleration with runtime switching.
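On the auto-routing question (reasoning → DeepSeek, images → Gemma 3n), one low-tech option is a rule-based router before falling back to user choice. A toy sketch: the model names mirror the post, but the RAM thresholds are invented placeholders, not tested values:

```python
def pick_model(has_image: bool, has_audio: bool, needs_reasoning: bool,
               free_ram_gb: float) -> str:
    """Toy routing heuristic: multimodal input forces a multimodal model,
    reasoning prompts go to the reasoning-tuned distill, and everything
    else falls back to the smallest chat model that fits in memory."""
    if has_image or has_audio:
        # Prefer the larger multimodal variant when memory allows.
        return "gemma-3n-e4b" if free_ram_gb >= 6 else "gemma-3n-e2b"
    if needs_reasoning:
        return "deepseek-r1-distill-1.5b"
    return "gemma3-1b" if free_ram_gb < 3 else "qwen2.5-1.5b"

print(pick_model(has_image=True, has_audio=False,
                 needs_reasoning=False, free_ram_gb=8))  # → gemma-3n-e4b
```

A router like this can still expose a manual override so power users can pin a model.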
2025-10-23T07:00:33
https://www.reddit.com/gallery/1odwhfe
Secret_Difference498
reddit.com
1970-01-01T00:00:00
0
{}
1odwhfe
false
null
t3_1odwhfe
/r/LocalLLaMA/comments/1odwhfe/project_running_gemma3_1b_multimodal_gemma_3n/
false
false
https://b.thumbs.redditm…hLYNsJUbV-zA.jpg
1
null
Selling VPS (GPU options available) for very cheap.
0
Hey everyone, I'm planning to offer affordable VPS access for anyone who needs it, including GPU options if required. The idea is simple: you don't have to pay upfront. You can just pay occasionally while you're using it. The prices are lower than most places, so if you've been looking for a cheaper VPS and/or GPU for your development or other purposes, hit me up or drop a comment.
2025-10-23T06:48:04
https://www.reddit.com/r/LocalLLaMA/comments/1odwal2/selling_vps_gpu_options_available_for_very_cheap/
Comfortable-Wall-465
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odwal2
false
null
t3_1odwal2
/r/LocalLLaMA/comments/1odwal2/selling_vps_gpu_options_available_for_very_cheap/
false
false
self
0
null
I’ve open-sourced part of my BrainAPI project! tackling AI memory, hallucination, and search grounding
7
One of the biggest challenges with current LLMs and "agents" isn't just generating text... it's remembering, reasoning, and verifying what's true. Models can sound smart, but when it comes to consistent memory and accurate retrieval, they often fall apart.

That's what I'm working on with BrainAPI. The idea is to go beyond just vector search or RAG and build a real memory architecture that allows agents to:

* track down information clearly and contextually
* cross-check knowledge over time
* reduce hallucination by connecting to factual sources
* perform fast, structured, grounded searches

I see "memory" as more than just storing past messages; it's about building a long-term cognitive layer where information lives, evolves, and connects. I'd love to make that foundation open, composable, and agent-friendly: something that any AI system can plug into to gain reliable recall, better reasoning, and true continuity.

I've open-sourced one of the core repos here if you want to explore or contribute: [https://github.com/Lumen-Labs/brainapi](https://github.com/Lumen-Labs/brainapi)

Curious how others here think about this! How do you see the future of agent memory and information grounding evolving?
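One simple way to make memory more than message storage is to rank recalled items by semantic similarity blended with recency decay. A hand-rolled sketch of that idea (the weights and half-life are arbitrary; this is not BrainAPI's actual scoring):

```python
import math
import time

def score_memory(similarity: float, age_seconds: float,
                 half_life: float = 7 * 24 * 3600) -> float:
    """Blend semantic similarity with exponential recency decay:
    a memory's recency halves every `half_life` seconds."""
    recency = math.exp(-age_seconds * math.log(2) / half_life)
    return 0.7 * similarity + 0.3 * recency

def recall(memories, now=None):
    """memories: list of (text, similarity_to_query, created_at_unix).
    Returns texts ranked by blended score, best first."""
    now = time.time() if now is None else now
    ranked = sorted(memories,
                    key=lambda m: score_memory(m[1], now - m[2]),
                    reverse=True)
    return [m[0] for m in ranked]
```

With this blend, a moderately relevant memory from today can outrank a slightly more similar one from months ago, which is often what "continuity" wants.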
2025-10-23T06:33:14
https://www.reddit.com/r/LocalLLaMA/comments/1odw2fs/ive_opensourced_part_of_my_brainapi_project/
shbong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odw2fs
false
null
t3_1odw2fs
/r/LocalLLaMA/comments/1odw2fs/ive_opensourced_part_of_my_brainapi_project/
false
false
self
7
{'enabled': False, 'images': [{'id': 'CJjCwu938wLyWJ7PUskkT9Pdibvuly9Zr3KZFewxxEM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CJjCwu938wLyWJ7PUskkT9Pdibvuly9Zr3KZFewxxEM.png?width=108&crop=smart&auto=webp&s=803a7acff15030aa14533dbbb18427aa81938a66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CJjCwu938wLyWJ7PUskkT9Pdibvuly9Zr3KZFewxxEM.png?width=216&crop=smart&auto=webp&s=e7279bda66e6cc636beb038fddd958b07823a84f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CJjCwu938wLyWJ7PUskkT9Pdibvuly9Zr3KZFewxxEM.png?width=320&crop=smart&auto=webp&s=724847e5d75a56cd6565693b2fd3b3b60b08ade8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CJjCwu938wLyWJ7PUskkT9Pdibvuly9Zr3KZFewxxEM.png?width=640&crop=smart&auto=webp&s=7ed0f3fd1bbdb0486ed071f307c85dc5766cfb35', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CJjCwu938wLyWJ7PUskkT9Pdibvuly9Zr3KZFewxxEM.png?width=960&crop=smart&auto=webp&s=76789a0a259bdc10fe326e18180f800ff8713af2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CJjCwu938wLyWJ7PUskkT9Pdibvuly9Zr3KZFewxxEM.png?width=1080&crop=smart&auto=webp&s=923016c8b587374e5e9703710144d256f9730cc1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CJjCwu938wLyWJ7PUskkT9Pdibvuly9Zr3KZFewxxEM.png?auto=webp&s=e07d34b552dc14d12878e230a24f8378bda0444b', 'width': 1200}, 'variants': {}}]}
what are the best models for code generation right now??
16
Hey!! recently a lot of new models have been released and I wanted to know which ones are the best for coding. I’ve heard that sonnet 4.5 and GLM 4.5 are really good, but I’m curious if there are any other models that perform well in different areas, such as frontend design, software architecture, or other coding dimensions. I’m open to both open-source and closed-source models. rn trying to use models that are available on bedrock
2025-10-23T06:25:09
https://www.reddit.com/r/LocalLLaMA/comments/1odvxxa/what_are_the_best_models_for_code_generation/
lavangamm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odvxxa
false
null
t3_1odvxxa
/r/LocalLLaMA/comments/1odvxxa/what_are_the_best_models_for_code_generation/
false
false
self
16
null
Understanding OpenPose: The Easy Way
3
Read the full blog here: [https://www.labellerr.com/blog/understanding-openpose-the-easy-way/](https://www.labellerr.com/blog/understanding-openpose-the-easy-way/)
2025-10-23T05:54:24
https://i.redd.it/6bli93eewswf1.jpeg
Street-Lie-2584
i.redd.it
1970-01-01T00:00:00
0
{}
1odvg45
false
null
t3_1odvg45
/r/LocalLLaMA/comments/1odvg45/understanding_openpose_the_easy_way/
false
false
https://b.thumbs.redditm…Uzv_2rviKjEI.jpg
3
{'enabled': True, 'images': [{'id': 'WTUl2cuYd4tFZwdTcEkYPKreGNsJkn9YdoLSiFxovkk', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/6bli93eewswf1.jpeg?width=108&crop=smart&auto=webp&s=d85ac8d6ab5200565493215ed4b58dfc42cdc67a', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/6bli93eewswf1.jpeg?width=216&crop=smart&auto=webp&s=4d806cf094552c0966ec74724fea52e4d76dd76c', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/6bli93eewswf1.jpeg?width=320&crop=smart&auto=webp&s=85f9d57699458de00b88525e802483c0d17e19cc', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/6bli93eewswf1.jpeg?width=640&crop=smart&auto=webp&s=a0b6bb1f29fe6e211edcd6961113d50ad1a66973', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/6bli93eewswf1.jpeg?width=960&crop=smart&auto=webp&s=7d85858e280020baa744704fb406792da1216792', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/6bli93eewswf1.jpeg?width=1080&crop=smart&auto=webp&s=1b9f7e10a564709f70f150cecd613da465765c05', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/6bli93eewswf1.jpeg?auto=webp&s=87c3ec10957f03d192596be2d9d2dd51a46628df', 'width': 1536}, 'variants': {}}]}
GitHub - deepseek-ai/DeepSeek-OCR: Contexts Optical Compression
1
2025-10-23T05:49:48
https://github.com/deepseek-ai/DeepSeek-OCR
Fun-Wolf-2007
github.com
1970-01-01T00:00:00
0
{}
1odvdfq
false
null
t3_1odvdfq
/r/LocalLLaMA/comments/1odvdfq/github_deepseekaideepseekocr_contexts_optical/
false
false
https://external-preview…ae0cbc5886c16f29
1
{'enabled': False, 'images': [{'id': 'XxeEBptvgSjue7M8vcQDJyhCYvtz4tPQ-salE9xVC0o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XxeEBptvgSjue7M8vcQDJyhCYvtz4tPQ-salE9xVC0o.png?width=108&crop=smart&auto=webp&s=1d7c755bdde57e08e75bf7db21e2602fa0f0ec7d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XxeEBptvgSjue7M8vcQDJyhCYvtz4tPQ-salE9xVC0o.png?width=216&crop=smart&auto=webp&s=bd67ca54198ffe5047e447bce7fc9c2fb8fc8893', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XxeEBptvgSjue7M8vcQDJyhCYvtz4tPQ-salE9xVC0o.png?width=320&crop=smart&auto=webp&s=f503e1a0f3de38fca8ad711c19f3fd75f1461572', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XxeEBptvgSjue7M8vcQDJyhCYvtz4tPQ-salE9xVC0o.png?width=640&crop=smart&auto=webp&s=b1e87d85e6ae98a751e0800703c2cf40e7d488eb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XxeEBptvgSjue7M8vcQDJyhCYvtz4tPQ-salE9xVC0o.png?width=960&crop=smart&auto=webp&s=9ecd549707dfb224bb962d0e745b664ce641cb0e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XxeEBptvgSjue7M8vcQDJyhCYvtz4tPQ-salE9xVC0o.png?width=1080&crop=smart&auto=webp&s=7c41cd296239a230c00afb35a4df80cd11b38f2a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XxeEBptvgSjue7M8vcQDJyhCYvtz4tPQ-salE9xVC0o.png?auto=webp&s=d8470a62dc88778a1b27fc3e711c7ecaacd26468', 'width': 1200}, 'variants': {}}]}
I built 50+ RAGs in 2 years. Here are the architectures that get products out the door!
388
I have been doing ML engineering for different startups in both Europe and the US, and I can tell you... the gap between a RAG demo and a RAG product is almost always the same: people are still using naive retrieval.

Let's be clear: if you actually want to ship a product that works, you must move beyond the basic `sim(BiEncoder(q), BiEncoder(d))` setup. It fails on precision, nuance, and complex queries. Your architecture must solve a specific problem. Here is a technical summary of three advanced patterns.

# Notation Key

* `q, d`: Query, Document
* `BiEncoder(x)`: Bi-encoder model (e.g., SBERT), computes a vector for each input independently.
* `CrossEncoder(q, d)`: Cross-encoder model, computes a joint relevance score.
* `sim(v1, v2)`: Cosine similarity.
* `S_naive = sim(BiEncoder(q), BiEncoder(d))`

# 1. The Retriever-Reranker (The Precision Stack)

This is the most reliable path to production accuracy. It decouples the recall problem from the precision problem.

* **How it works:**
  1. **Stage 1 (Retriever):** Get Top-K candidates using a fast, high-recall hybrid search fused with Reciprocal Rank Fusion (RRF): `RRF_Score(d) = SUM( 1 / (k + rank_r(d)) ) for r in {bm25, vector}`
  2. **Stage 2 (Reranker):** Re-score only the Top-K with the slower, more accurate `CrossEncoder(q, d)`.
* **Pros:** This is the correct way to solve precision. The `CrossEncoder(q, d)` is fundamentally more powerful than `S_naive` and is the only reliable method to handle negation and nuance.
* **Cons:** The latency of a second network call is a minor, predictable cost for the massive gain in accuracy.

There is a nice implementation of this with **Turbotuffer** and **ZeroEntropy** (btw, this has given me the best results so far; I'll post some results from side projects in a few days).

# 2. The Query Transformer (The Recall Stack)

This pattern assumes the query `q` is the problem. It uses an LLM to refine `q` before retrieval.

* **How it works:** An LLM generates n query variants `{q_1, ..., q_n}` (Multi-Query) or a hypothetical document `d_hypo` (HyDE) to search against: `Search Vector = BiEncoder(d_hypo)`
* **Pros:** Fixes bad recall from vague or semantically mismatched user input.
* **Cons:** Adds a costly and slow LLM call before the search has even begun.

# 3. The Graph RAG (The Connections Stack)

A different paradigm focused on explicit, structured relationships.

* **How it works:** Abandons vector similarity for a graph query language: `MATCH (e:Engineer)-[:WORKS_AT]->(c:Company) RETURN e.name`
* **Pros:** Can answer complex, multi-hop questions that vector search fundamentally cannot.
* **Cons:** This is often a distraction. It requires a massive, upfront data-modeling bottleneck (ETL, schema definition). It is rigid, expensive, and defeats the primary purpose of RAG, which is to work with unstructured data.

# TLDR

* **Setup 1 (Retriever-Reranker)** is the production standard for fixing precision.
* **Setup 2 (Query Transformer)** is a costly way to fix bad user queries.
* **Setup 3 (Graph RAG)** solves a different problem (structured data) and is mostly a distraction.
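The Stage 1 RRF fusion step is simple enough to sketch in a few lines of Python (k=60 is the commonly used default; the retriever names here are arbitrary):

```python
def rrf_fuse(rankings: dict[str, list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion over several retrievers' ranked doc-id lists,
    matching RRF_Score(d) = SUM over retrievers r of 1 / (k + rank_r(d)).
    Documents missing from a retriever's list simply contribute nothing."""
    scores: dict[str, float] = {}
    for ranked in rankings.values():
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks high in both lists, so it wins the fusion.
print(rrf_fuse({"bm25": ["a", "b"], "vector": ["b", "c"]}))  # → ['b', 'a', 'c']
```

The fused Top-K then goes to the cross-encoder; RRF only needs ranks, so BM25 and vector scores never have to be calibrated against each other.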
2025-10-23T05:27:40
https://www.reddit.com/r/LocalLLaMA/comments/1odv090/i_built_50_rags_in_2_years_here_are_the/
jremynse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odv090
false
null
t3_1odv090
/r/LocalLLaMA/comments/1odv090/i_built_50_rags_in_2_years_here_are_the/
false
false
self
388
null
[Spark] The Jupyter Server has a memory leak.
4
I was running the Jupyter Notebook server to test things out, but noticed that memory wasn't being released even after I restarted the kernel. Next I rebooted the Spark. On reboot I launched Jupyter and just left it there as I got busy with something else. Came back after 20 minutes to 99% memory usage. Couldn't run anything without getting an out-of-memory error. Shutting down Jupyter would not release the memory for some odd reason.

Workaround: don't run the Jupyter notebook for now. Anyone had any memory issues with it?

Ps. I still think the Spark is a bad purchase at 4K USD, but after juggling family issues, seeing what the guardianship process has cost me, and realizing I haven't taken a real vacation since the pandemic... I figured I might as well spend my money before someone else does. So yeah… impulse bought the Spark. Also curious to see how practical the Spark could be as a portable system I could take to work and use directly as an MCP server, as opposed to taking the RTX 6000 PRO WS in an eGPU.

Pps. I had originally reserved the Asus Ascent GX10 at Nvidia's shop when it was 1999.99 and the others were 2999.99. Looks like they all got bumped by 1000. Moreover, I thought the pricing on the Asus Ascent was a mistake. It looks like Central Computers has it for pre-order at 3K. [Asus Ascent GX10 2999.99](https://www.centralcomputer.com/asus-ascent-gx10-personal-ai-supercomputer-with-nvidia-gb10-grace-blackwell-superchip-128gb-unified-lpddr5x-memory-1tb-pcie.html)
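To confirm whether shutting Jupyter down actually releases anything, you can poll `/proc/meminfo` before and after. A small sketch; the parsing is generic Linux, nothing Spark-specific:

```python
def mem_available_pct(meminfo_text: str) -> float:
    """Return MemAvailable as a percentage of MemTotal, parsed from
    /proc/meminfo-style text (values are reported in kB)."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key.strip()] = int(rest.split()[0])
    return 100.0 * fields["MemAvailable"] / fields["MemTotal"]

# On the Spark (or any Linux box):
#   mem_available_pct(open("/proc/meminfo").read())
```

If the percentage stays pinned after Jupyter exits, something (e.g. a leftover kernel process or pinned allocations) is still holding the memory.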
2025-10-23T05:15:01
https://www.reddit.com/r/LocalLLaMA/comments/1odusla/spark_the_jupyter_server_has_a_memory_leak/
Aroochacha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odusla
false
null
t3_1odusla
/r/LocalLLaMA/comments/1odusla/spark_the_jupyter_server_has_a_memory_leak/
false
false
self
4
{'enabled': False, 'images': [{'id': 'R8pylSINGrdhElxp4oPJnfJEdzz1c_Np5HV_KZGUPq0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/R8pylSINGrdhElxp4oPJnfJEdzz1c_Np5HV_KZGUPq0.png?width=108&crop=smart&auto=webp&s=9a732b5cafca709ccb460a391899e9328424e358', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/R8pylSINGrdhElxp4oPJnfJEdzz1c_Np5HV_KZGUPq0.png?width=216&crop=smart&auto=webp&s=aed31e2eb429505d9d26c1a98d48073453728ca4', 'width': 216}], 'source': {'height': 265, 'url': 'https://external-preview.redd.it/R8pylSINGrdhElxp4oPJnfJEdzz1c_Np5HV_KZGUPq0.png?auto=webp&s=d25cf59fd089e99d1f3b4122428bfcd469854eb3', 'width': 265}, 'variants': {}}]}
I built my own AI coding assistant after realizing I was paying twice — now it’s open source (Codebase MCP)
40
So here's what happened. I was paying around $40/month for an AI coding assistant. Then I realized... I was already paying for Claude. Why was I paying twice for something I could build myself?

So I spent a week hacking together Codebase MCP — an open-source bridge that turns Claude Desktop into a full-on local coding assistant. I wanted something that:

* Uses my existing LLM (Claude) instead of forcing me onto another paid tool
* Runs fully local — no code leaves my machine
* Does semantic code search with local embeddings
* Edits code like Cursor, but with my own rules
* Remembers context between sessions
* Auto-formats & scores edits so I don't have to babysit it

Basically, I wanted to turn Claude into the dev assistant I actually wanted — private, customizable, and free.

It's built with FastAPI + React, uses FAISS + SQLite for vector search and memory, and hooks right into Claude Desktop via MCP. Once connected, Claude suddenly has 13+ tools — semantic search, memory, git manager, edit tools, etc.

Everything runs locally except for code edits, which use Gemini's free tier (and only send the specific file being edited). The rest — search, memory, analysis — all happens on your machine. No cloud logs, no tracking, no vendor lock-in.

I built it mostly because I hate subscription fatigue. And honestly? I like owning my own tools.

Here's the repo if you want to try it: 👉 [https://github.com/danyQe/codebase-mcp](https://github.com/danyQe/codebase-mcp)

It's open source (Apache 2.0), works best for projects under 20k lines of code, and it's production-ready — not just a weekend demo.

Would love feedback from anyone using Claude, Cursor, or any self-hosted AI dev setups. What's been your biggest pain point with AI coding tools so far?
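For illustration, the semantic-search piece can be approximated without FAISS by ranking code chunks with a token-overlap cosine. This is a toy stand-in, not the repo's actual embedding search:

```python
import math
import re
from collections import Counter

def _vec(text: str) -> Counter:
    """Bag-of-identifiers vector: count word-like tokens, lowercased."""
    return Counter(re.findall(r"[a-zA-Z_]\w*", text.lower()))

def _cos(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, chunks: dict[str, str], top_k: int = 3) -> list[str]:
    """Rank named code chunks by similarity to the query, best first."""
    qv = _vec(query)
    ranked = sorted(chunks, key=lambda name: _cos(qv, _vec(chunks[name])),
                    reverse=True)
    return ranked[:top_k]
```

Real embeddings capture synonyms and structure that raw token overlap misses, which is why the project uses local embedding models plus FAISS instead.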
2025-10-23T04:56:04
https://www.reddit.com/r/LocalLLaMA/comments/1odugps/i_built_my_own_ai_coding_assistant_after/
Appropriate_Poet_229
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odugps
false
null
t3_1odugps
/r/LocalLLaMA/comments/1odugps/i_built_my_own_ai_coding_assistant_after/
false
false
self
40
null
Local alternatives to Atlas
1
I was disappointed to learn that Atlas, despite being built on open-source Chromium, is closed source. (Correct me if I'm wrong.) As far as I know, the best option we have for replicating Atlas functionality locally is Playwright, but I didn't have good results from Playwright last time I tried it. Can anyone suggest how to achieve robust Atlas- or Comet-like functionality with local models? Also, I'd appreciate any thoughts on preventing indirect prompt injection with a DIY approach like this. Is it too risky to be practical?
2025-10-23T04:19:39
https://www.reddit.com/r/LocalLLaMA/comments/1odtu5d/local_alternatives_to_atlas/
tongkat-jack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odtu5d
false
null
t3_1odtu5d
/r/LocalLLaMA/comments/1odtu5d/local_alternatives_to_atlas/
false
false
self
1
null
Amd pc
3
I've been at it all day trying to get WSL2 set up with GPU support on my AMD PC (CPU: 7700, GPU: 7900 GRE). I have tried multiple versions of Ubuntu, and I tried to install ROCm from the official AMD repos, but I can't get GPU support working. I was told in a YouTube video that the safest way to run AI LLMs is in Windows 11 WSL2 with Docker. I can already run LLMs in LM Studio and that works fine. I don't know what to do, and I'm new to this. I've been trying with GPT-OSS, regular GPT, and Google, but I can't figure it out.
2025-10-23T04:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1odtpby/amd_pc/
AceCustom1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odtpby
false
null
t3_1odtpby
/r/LocalLLaMA/comments/1odtpby/amd_pc/
false
false
self
3
null
Semantic Compression: A Critical Component of the Local Agent Stack
0
**Why Your Local AI Agent Feels Broken (And How to Fix It)**

You've got a powerful GPU. You've downloaded the latest 8B model. You've set up the slickest inference engine. But when you try to build an actual AI agent—something that remembers who you are, uses tools, maintains context across conversations—it crawls.

The problem isn't your hardware. It's not even your model. It's that we're trying to run agents using an architecture designed for the cloud era. We're feeding our local models massive novels of instructions when they need tight, executable code. We're using RAG for problems that need fuzzy operating systems in the context window.

This isn't about waiting for better models or bigger GPUs. It's about rethinking the entire stack—from how we compress agent state, to how we manage memory, to how inference engines and semantic density multiply each other's gains.

The gap between "a chatbot that runs locally" and "a truly personal AI assistant" isn't model intelligence. It's systems engineering. This paper shows you how to close that gap.
2025-10-23T02:52:57
https://medium.com/@mbonsign/semantic-compression-a-critical-component-of-the-local-agent-stack-ead4fe8b6e02
MikeBeezzz
medium.com
1970-01-01T00:00:00
0
{}
1ods7hj
false
null
t3_1ods7hj
/r/LocalLLaMA/comments/1ods7hj/semantic_compression_a_critical_component_of_the/
false
false
default
0
null
Best open-source TTS model for commercial voice cloning (possible to fine-tune with Argentine Spanish voices)?
2
Hi everyone,

I'm working on a **commercial project** that involves deploying a **Text-to-Speech (TTS)** system **locally** (not cloud-based). I'm looking for an **open-source model** capable of **voice cloning** — ideally one that can be fine-tuned or adapted with **Argentine Spanish** voices to better match local accent and prosody.

A few questions:

1. What's currently the best open-source TTS model for realistic voice cloning that can run locally (single-GPU setups)?
2. How feasible would it be to adapt such a model to Argentine Spanish? What data, audio quality, or hardware specs would typically be required?
3. Any repos, tutorials, or communities you'd recommend that have already experimented with Spanish or Latin American fine-tuning for TTS?

Thanks in advance for any pointers!
2025-10-23T02:52:52
https://www.reddit.com/r/LocalLLaMA/comments/1ods7ff/best_opensource_tts_model_for_commercial_voice/
rucoide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ods7ff
false
null
t3_1ods7ff
/r/LocalLLaMA/comments/1ods7ff/best_opensource_tts_model_for_commercial_voice/
false
false
self
2
null
Need a model for my MacBook Air M4 16Gb
2
Just got a new Mac and later found out that I could run some small LLMs. I got the 10-core GPU version with 16 GB of RAM. I know that's not a lot, but would it be enough for some Polymarket election calculations with data from previous elections and opinion polling?
2025-10-23T02:44:42
https://www.reddit.com/r/LocalLLaMA/comments/1ods1de/need_a_model_for_my_macbook_air_m4_16gb/
klas228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ods1de
false
null
t3_1ods1de
/r/LocalLLaMA/comments/1ods1de/need_a_model_for_my_macbook_air_m4_16gb/
false
false
self
2
null
Preliminary support in llama.cpp for Qualcomm Hexagon NPU
8
2025-10-23T02:19:18
https://github.com/ggml-org/llama.cpp/releases/tag/b6822
SkyFeistyLlama8
github.com
1970-01-01T00:00:00
0
{}
1odriw4
false
null
t3_1odriw4
/r/LocalLLaMA/comments/1odriw4/preliminary_support_in_llamacpp_for_qualcomm/
false
false
https://external-preview…e7939636763420b8
8
{'enabled': False, 'images': [{'id': '1SH06ZC_-U-9PyIEx2h0F1rL7381Vh1_Ps6mFmWthjk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1SH06ZC_-U-9PyIEx2h0F1rL7381Vh1_Ps6mFmWthjk.png?width=108&crop=smart&auto=webp&s=967de5aa1d58741c7aaf3f1c1bad01ab7960da9b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1SH06ZC_-U-9PyIEx2h0F1rL7381Vh1_Ps6mFmWthjk.png?width=216&crop=smart&auto=webp&s=cd2982de1bb455dc145c06181e579b55bea9b83a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1SH06ZC_-U-9PyIEx2h0F1rL7381Vh1_Ps6mFmWthjk.png?width=320&crop=smart&auto=webp&s=19801e7f9a39755de3b5deeb4595f7126bddd65d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1SH06ZC_-U-9PyIEx2h0F1rL7381Vh1_Ps6mFmWthjk.png?width=640&crop=smart&auto=webp&s=9cfef2e806525d04ebf7a7f5c0abfe4d4908006e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1SH06ZC_-U-9PyIEx2h0F1rL7381Vh1_Ps6mFmWthjk.png?width=960&crop=smart&auto=webp&s=27427c60dad2bb5c70c6392c66d793a229d48f32', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1SH06ZC_-U-9PyIEx2h0F1rL7381Vh1_Ps6mFmWthjk.png?width=1080&crop=smart&auto=webp&s=e8bc98f2682f9b1823c990d7bd6d5b9f5f54d64b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1SH06ZC_-U-9PyIEx2h0F1rL7381Vh1_Ps6mFmWthjk.png?auto=webp&s=08da287442a3dd2735307887632887113454ead6', 'width': 1200}, 'variants': {}}]}
I'm done with Aider.
0
So, I have been trying to use aider as a pair-programming tool with Qwen3 models, but it is just a disaster. Editing files without asking for permission, creating new duplicate folders/files... it just messes up the whole project. Does anyone have an open-source alternative to it?
2025-10-23T02:10:02
https://www.reddit.com/r/LocalLLaMA/comments/1odrc8s/im_done_with_aider/
Puzzleheaded_Dark_80
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odrc8s
false
null
t3_1odrc8s
/r/LocalLLaMA/comments/1odrc8s/im_done_with_aider/
false
false
self
0
null
Software export ban
0
[https://x.com/DeItaone/status/1981035523599687730](https://x.com/DeItaone/status/1981035523599687730) >TRUMP ADMINISTRATION CONSIDERING PLAN TO RESTRICT GLOBALLY PRODUCED EXPORTS TO CHINA MADE WITH OR CONTAINING U.S. SOFTWARE, SOURCES SAY Will be a curious situation if this happens and yet China continues to export significant amounts of open AI R&D to the US.
2025-10-23T02:09:57
https://www.reddit.com/r/LocalLLaMA/comments/1odrc6i/software_export_ban/
kaggleqrdl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odrc6i
false
null
t3_1odrc6i
/r/LocalLLaMA/comments/1odrc6i/software_export_ban/
false
false
self
0
null
LLM enterprise search
0
Hi everyone,

We are building PipesHub, a fully open-source platform (Apache 2.0 license) that brings all your business data together and makes it searchable and usable. It connects with apps like Google Drive, Gmail, Slack, Notion, Confluence, Jira, Outlook, SharePoint, Dropbox, and even local file uploads. You can deploy and run it with just one docker compose command.

Apart from using common techniques like hybrid search, knowledge graphs, and rerankers, the other most crucial thing is implementing Agentic RAG. The goal of our indexing pipeline is to make documents retrievable/searchable, but during the query stage we let the agent decide how much data it needs to answer the query. The entire system is built on a **fully event-streaming architecture powered by Kafka**, making indexing and retrieval scalable, fault-tolerant, and real-time across large volumes of data.

**Key features**

* Deep understanding of documents, users, organizations and teams with an enterprise knowledge graph
* Connect to any AI model of your choice including OpenAI, Gemini, Claude, or Ollama
* Use any provider that supports OpenAI compatible endpoints
* Choose from 1,000+ embedding models
* Vision-Language Models and OCR for visual or scanned docs
* Login with Google, Microsoft, OAuth, or SSO
* Rich REST APIs for developers
* All major file types supported, including PDFs with images, diagrams and charts

**Features releasing this month**

* Agent Builder - perform actions like sending mails and scheduling meetings, along with search, deep research, internet search and more
* Reasoning Agent that plans before executing tasks
* 50+ connectors allowing you to connect to all your business apps

We have been working very hard over the last few months to fix bugs and issues, testing with Ollama models like gpt-oss:20b, qwen3:30b and more. We are also coming out of beta early next month. Your feedback is immensely valuable and much appreciated.
Check out our work below and share your thoughts or feedback: [https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai)
2025-10-23T01:26:59
https://www.reddit.com/r/LocalLLaMA/comments/1odqgf8/llm_enterprise_search/
Inevitable-Letter385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odqgf8
false
null
t3_1odqgf8
/r/LocalLLaMA/comments/1odqgf8/llm_enterprise_search/
false
false
self
0
{'enabled': False, 'images': [{'id': 'hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM', 'resolutions': [{'height': 96, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?width=108&crop=smart&auto=webp&s=63a546b8ac654187ee9b0d14224e852ef0c3d692', 'width': 108}], 'source': {'height': 99, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?auto=webp&s=47e8987d3d53065768b4c796fa5af51c7a36d470', 'width': 111}, 'variants': {}}]}
Is Chain of Thought Still An Emergent Behavior?
25
In the famous [Chain of Thought paper](https://arxiv.org/pdf/2201.11903), the authors argued that reasoning is an emergent behavior: models with <10B parameters showed little to no improvement over the baseline with Chain of Thought prompting, but larger models did. This is an old paper whose experiments were run in 2022, and I wonder if its assertion still holds. Since then we have gotten:

- Teacher-student learning (distillation)
- ReACT, which led to training "thinking" models
- better curation of training data mixtures
- better model architectures
- better general-performance models

The results from their experiments, and the conclusions, might well be different if the study were run right now. I tried to find n-shot CoT vs. 0-shot performance comparisons across model scales, but this data is surprisingly hard to find. In my own quick tests with sub-3B models on MMLU and GSM8K, I found no improvement with n-shot CoT prompting.

So I'd love to hear from others:

- Has anyone seen systematic evaluations on this recently?
- Is reasoning still emergent only in larger models?
- Or can smaller models be trained (or distilled) to exhibit CoT-like reasoning reliably?
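For anyone who wants to reproduce this kind of quick test: the comparison boils down to building a 0-shot vs. n-shot CoT prompt and extracting a final numeric answer from the completion. A minimal sketch - the helper names are mine, not from the paper; the `####` marker is GSM8K's final-answer convention:

```python
import re

def build_prompt(question, exemplars=None):
    """0-shot if exemplars is None/empty; n-shot CoT otherwise.

    Each exemplar is a (question, worked_reasoning) pair whose
    reasoning ends with the final answer.
    """
    parts = [f"Q: {q}\nA: {cot}" for q, cot in (exemplars or [])]
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

def extract_answer(completion):
    """Pull a final numeric answer out of a model completion.

    Prefers GSM8K's '#### <number>' marker, falls back to the
    last integer mentioned anywhere in the text.
    """
    m = re.search(r"####\s*(-?[\d,]+)", completion)
    if m:
        return m.group(1).replace(",", "")
    numbers = re.findall(r"-?\d+", completion)
    return numbers[-1] if numbers else None
```

Score 0-shot vs. n-shot by comparing `extract_answer` output against the gold answers; any accuracy gap between the two prompt styles is the CoT effect the paper measured.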
2025-10-23T01:14:30
https://www.reddit.com/r/LocalLLaMA/comments/1odq73r/is_chain_of_thought_still_an_emergent_behavior/
Environmental_Form14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odq73r
false
null
t3_1odq73r
/r/LocalLLaMA/comments/1odq73r/is_chain_of_thought_still_an_emergent_behavior/
false
false
self
25
null
What's the best model that supports tools for local use?
1
My setup is Ollama on 64 gig RAM/ 24 gig VRAM. Thanks.
2025-10-23T00:42:24
https://www.reddit.com/r/LocalLLaMA/comments/1odpiwa/whats_the_best_model_that_supports_tools_for/
Great_Guidance_8448
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odpiwa
false
null
t3_1odpiwa
/r/LocalLLaMA/comments/1odpiwa/whats_the_best_model_that_supports_tools_for/
false
false
self
1
null
SGLang vs vLLM on H200: Which one do you prefer, Faster TTFT and higher TPS?
19
I ran both **SGLang** and **vLLM** on **Qwen3-Coder-30B** with an **NVIDIA H200** and 500GB memory. Here are the numbers:

* **TTFT (Time to First Token):** SGLang 2333ms vs vLLM 2669ms. SGLang is \~12.6% faster to start generating, which you feel in interactive workloads.
* **TPS (Tokens/sec):** SGLang 2688.46 vs vLLM 2020.99. SGLang delivers \~33% higher throughput, meaning more tokens per unit time under load.
* **Token lengths:** SGLang handled **\~4.9%** longer inputs (48.14 vs 45.88) and produced **\~23.7%** longer outputs (72.50 vs 58.63). Even with longer generations, SGLang still leads on TPS, which strengthens the throughput win.
* **Setup time:** vLLM container setup plus model download take **388s** vs SGLang's **523s** - vLLM is \~34.8% faster to get to "ready." If you spin clusters often or bake fresh images, this matters.

Which one do you think is better for production-grade services?
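For clarity on how the two headline metrics are defined: TTFT is first-token arrival minus request start, and TPS here is decode tokens over decode wall time, both computable from streamed token timestamps. A throwaway sketch - function names are mine, not from either serving stack:

```python
def ttft_ms(request_start, token_times):
    """Time to first token, in milliseconds."""
    return (token_times[0] - request_start) * 1000.0

def decode_tps(token_times):
    """Decode throughput: tokens after the first, over decode wall time."""
    if len(token_times) < 2:
        return 0.0
    return (len(token_times) - 1) / (token_times[-1] - token_times[0])
```

Feeding both servers identical prompts and recording per-token arrival times from the streaming API gives an apples-to-apples comparison.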
2025-10-23T00:28:56
https://i.redd.it/0t0vl6ap9rwf1.png
batuhanaktass
i.redd.it
1970-01-01T00:00:00
0
{}
1odp8pe
false
null
t3_1odp8pe
/r/LocalLLaMA/comments/1odp8pe/sglang_vs_vllm_on_h200_which_one_do_you_prefer/
false
false
default
19
{'enabled': True, 'images': [{'id': '0t0vl6ap9rwf1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/0t0vl6ap9rwf1.png?width=108&crop=smart&auto=webp&s=005301a7052c209d65f9c2f8024943915bb50c11', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/0t0vl6ap9rwf1.png?width=216&crop=smart&auto=webp&s=d658659a953333d2fc2bb308001402dae4e4bf5d', 'width': 216}, {'height': 162, 'url': 'https://preview.redd.it/0t0vl6ap9rwf1.png?width=320&crop=smart&auto=webp&s=c7a318fc20be08ab9a2c5218ed4450e7c5039c4f', 'width': 320}, {'height': 325, 'url': 'https://preview.redd.it/0t0vl6ap9rwf1.png?width=640&crop=smart&auto=webp&s=f9118b9eba227185c90cfa8ec519e7381573d4b1', 'width': 640}, {'height': 487, 'url': 'https://preview.redd.it/0t0vl6ap9rwf1.png?width=960&crop=smart&auto=webp&s=0f554d76396605a03fed330b104b0a6bfe82eebc', 'width': 960}, {'height': 548, 'url': 'https://preview.redd.it/0t0vl6ap9rwf1.png?width=1080&crop=smart&auto=webp&s=737551727c0d8a0d1ea8f031a1a9d944e007fe7d', 'width': 1080}], 'source': {'height': 1482, 'url': 'https://preview.redd.it/0t0vl6ap9rwf1.png?auto=webp&s=b3eed5cd3ac9687852966e761a1427a0b6fcc427', 'width': 2916}, 'variants': {}}]}
Copyright concerns regarding LLMs and coding
0
Hi, I've been using LLMs, both local and cloud ones, to write a lot of AI generated code. While I imagine this will be an issue that is mainly sorted out in court, what are the ethical considerations of using AI generated code that has been trained on various open source licensed codebases, such as AGPL, to write closed source code? It seems pretty unethical, even if it's determined to be legal. I'm leaning toward open sourcing all the code that I write with LLMs, since the training data used by the LLMs are almost entirely open source in nature. However, I'm not sure which license to choose? I've recently been changing my projects to GPL, which seems to be a good choice. However, I'm guessing that the licenses used during training represent an even distribution across open source licenses, so there's no single license I could use that represents the training data.
2025-10-22T23:20:33
https://www.reddit.com/r/LocalLLaMA/comments/1odnr17/copyright_concerns_regarding_llms_and_coding/
Far-Incident822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odnr17
false
null
t3_1odnr17
/r/LocalLLaMA/comments/1odnr17/copyright_concerns_regarding_llms_and_coding/
false
false
self
0
null
New 'Markovian Thinking' technique unlocks a path to million-token AI reasoning
31
2025-10-22T23:19:50
https://venturebeat.com/ai/new-markovian-thinking-technique-unlocks-a-path-to-million-token-ai
qzrz
venturebeat.com
1970-01-01T00:00:00
0
{}
1odnqga
false
null
t3_1odnqga
/r/LocalLLaMA/comments/1odnqga/new_markovian_thinking_technique_unlocks_a_path/
false
false
default
31
null
Tensor parallelism with non-matching GPUs
6
Hi all, this might be a stupid/obvious question but I have the opportunity to buy some 3090s at a very good price. The issue is that one is a Zotac, and the other is a Founders Edition. I'm mainly only looking to do inference, but was wondering if the AIB difference between the GPUs would cause performance or stability issues (this will be in a home server, so doesn't need enterprise-level stability, but ykwim) due to one having an OC profile, different firmware/vbios, etc Thanks
2025-10-22T22:30:21
https://www.reddit.com/r/LocalLLaMA/comments/1odmlnr/tensor_parallelism_with_nonmatching_gpus/
xt8sketchy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odmlnr
false
null
t3_1odmlnr
/r/LocalLLaMA/comments/1odmlnr/tensor_parallelism_with_nonmatching_gpus/
false
false
self
6
null
Can you imagine how DeepSeek is sold on Amazon in China?
0
How DeepSeek Reveals the Info Gap on AI China is now seen as one of the top two leaders in AI, together with the US. DeepSeek is one of its biggest breakthroughs. However, how DeepSeek is sold on Taobao, China's version of Amazon, tells another interesting story. On Taobao, many shops claim they sell “unlimited use” of DeepSeek for a one-time $2 payment. If you make the payment, what they send you is just links to some search engine or other AI tools (which are entirely free-to-use!) powered by DeepSeek. In one case, they sent the link to Kimi-K2, which is another model. Yet, these shops have high sales and good reviews. Who are the buyers? They are real people, who have limited income or tech knowledge, feeling the stress of a world that moves too quickly. They see DeepSeek all over the news and want to catch up. But the DeepSeek official website is quite hard for them to use. So they resort to Taobao, which seems to have everything, and they think they have found what they want—without knowing it is all free. These buyers are simply people with hope, trying not to be left behind. Amid all the hype and astonishing progress in AI, we must not forget those who remain buried under the information gap. Saw [this](https://mp.weixin.qq.com/s/Ne7zwBVJLu12OOR_S7vTXw) in WeChat & feel like it’s worth sharing here too.
2025-10-22T22:17:11
https://i.redd.it/ibn9wbsumqwf1.jpeg
MarketingNetMind
i.redd.it
1970-01-01T00:00:00
0
{}
1odmaz5
false
null
t3_1odmaz5
/r/LocalLLaMA/comments/1odmaz5/can_you_imagine_how_deepseek_is_sold_on_amazon_in/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ibn9wbsumqwf1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/ibn9wbsumqwf1.jpeg?width=108&crop=smart&auto=webp&s=24aebb550b615eec93e8677f6c188206630d46aa', 'width': 108}, {'height': 80, 'url': 'https://preview.redd.it/ibn9wbsumqwf1.jpeg?width=216&crop=smart&auto=webp&s=8a346b69f1d5d48679d2cb819258ea5da3ac9556', 'width': 216}, {'height': 119, 'url': 'https://preview.redd.it/ibn9wbsumqwf1.jpeg?width=320&crop=smart&auto=webp&s=1473c1ce703297380efe738424da9bc98f24b645', 'width': 320}, {'height': 238, 'url': 'https://preview.redd.it/ibn9wbsumqwf1.jpeg?width=640&crop=smart&auto=webp&s=1e7b13b7cc7f869ce2eade8b425936bb19b8e810', 'width': 640}, {'height': 358, 'url': 'https://preview.redd.it/ibn9wbsumqwf1.jpeg?width=960&crop=smart&auto=webp&s=60c8df7f35f2a3603e8c6bd084822e83b03f9085', 'width': 960}, {'height': 403, 'url': 'https://preview.redd.it/ibn9wbsumqwf1.jpeg?width=1080&crop=smart&auto=webp&s=73e3a54b9b498f8b8f3a07469be73f5b8c701f0a', 'width': 1080}], 'source': {'height': 403, 'url': 'https://preview.redd.it/ibn9wbsumqwf1.jpeg?auto=webp&s=54bca028674c862025676343425a56afee6d2a7a', 'width': 1080}, 'variants': {}}]}
How to run Qwen3-VL-2B on mobile?
2
Can anyone help me run this directly on a mobile device? I found this package to run gguf models? https://pub.dev/packages/aub_ai And this package to run models in onnx format https://pub.dev/packages/flutter_onnxruntime
2025-10-22T21:50:31
https://www.reddit.com/r/LocalLLaMA/comments/1odlnnq/how_to_run_qwen3vl2b_on_mobile/
sugarfreecaffeine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odlnnq
false
null
t3_1odlnnq
/r/LocalLLaMA/comments/1odlnnq/how_to_run_qwen3vl2b_on_mobile/
false
false
self
2
{'enabled': False, 'images': [{'id': '35AIRtvC2-MWS1wmgvEyH2SVDXgZ4BJv7oW-9iLwxdA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/35AIRtvC2-MWS1wmgvEyH2SVDXgZ4BJv7oW-9iLwxdA.png?width=108&crop=smart&auto=webp&s=61a6139f310bcc3bc65c384eaa5e24e1519ea494', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/35AIRtvC2-MWS1wmgvEyH2SVDXgZ4BJv7oW-9iLwxdA.png?width=216&crop=smart&auto=webp&s=a57a7c3daf678ab3df6a37916d5a028b3bf404aa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/35AIRtvC2-MWS1wmgvEyH2SVDXgZ4BJv7oW-9iLwxdA.png?width=320&crop=smart&auto=webp&s=bde5f9c5645c63100ae4958be2036f7c706d238a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/35AIRtvC2-MWS1wmgvEyH2SVDXgZ4BJv7oW-9iLwxdA.png?width=640&crop=smart&auto=webp&s=9a4dc57c774326f1d4497a43cc384b983438206b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/35AIRtvC2-MWS1wmgvEyH2SVDXgZ4BJv7oW-9iLwxdA.png?width=960&crop=smart&auto=webp&s=aa5f486402cb29066525581f7e70058cd6fcd81f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/35AIRtvC2-MWS1wmgvEyH2SVDXgZ4BJv7oW-9iLwxdA.png?width=1080&crop=smart&auto=webp&s=2230474dff859a0b94dfee997c945512647a7017', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/35AIRtvC2-MWS1wmgvEyH2SVDXgZ4BJv7oW-9iLwxdA.png?auto=webp&s=c05ba331dc9ae7dbb467579c6978574bd058b618', 'width': 1280}, 'variants': {}}]}
Building out first local AI server for business use.
1
I work for a small company of about 5 techs that handles support for some bespoke products we sell, as well as general MSP/ITSP-type work. My boss wants to build out a server that we can use to load in all the technical manuals, integrate with our current knowledgebase, load in historical ticket data, and make all of this queryable. I am thinking Ollama with Onyx for Bookstack is a good start. The problem is I do not know enough about the hardware to know what would get this job done at low cost. I am thinking a Milan-series Epyc and a couple of older AMD Instinct cards, like the 32GB ones. I would be very open to ideas or suggestions, as I need to do this for as low a cost as possible for such a small business. Thanks for reading and for your ideas!
2025-10-22T21:32:49
https://www.reddit.com/r/LocalLLaMA/comments/1odl7ru/building_out_first_local_ai_server_for_business/
Squanchy2112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odl7ru
false
null
t3_1odl7ru
/r/LocalLLaMA/comments/1odl7ru/building_out_first_local_ai_server_for_business/
false
false
self
1
null
What are the best small models with good tool call and good comprehension that can run entirely off CPU/ram
3
I’m hoping to just repurpose an old laptop as a basic LLM assistant of sorts, like Alexa but local. Are there any good models, and a fast enough TTS to pair with them?
2025-10-22T21:25:57
https://www.reddit.com/r/LocalLLaMA/comments/1odl1q1/what_are_the_best_small_models_with_good_tool/
SilentReporter9635
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odl1q1
false
null
t3_1odl1q1
/r/LocalLLaMA/comments/1odl1q1/what_are_the_best_small_models_with_good_tool/
false
false
self
3
null
Llama-bench with Mesa 26.0git on AMD Strix Halo - Nice pp512 gains
14
https://preview.redd.it/… pp512 increase.
2025-10-22T21:19:46
https://www.reddit.com/r/LocalLLaMA/comments/1odkw5h/llamabench_with_mesa_260git_on_amd_strix_halo/
Money_Hand_4199
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odkw5h
false
null
t3_1odkw5h
/r/LocalLLaMA/comments/1odkw5h/llamabench_with_mesa_260git_on_amd_strix_halo/
false
false
https://b.thumbs.redditm…NTyVkaKw-o7Q.jpg
14
null
Design Arena Launches Video-to-Video Arena
0
Looks like Design Arena just added a video-to-video arena. Might be mistaken but I'm pretty sure it's the first video editing arena (doesn't look like LMArena and Artificial Analysis have any equivalents). I'm especially interested because: 1) It's 50% OW -- they've got both Hunyuan and Wan video on there and anecdotally they've done the best (the margins of error on the leaderboard are criminal right now so I'm not trusting it until more votes roll in). 2) They've already got a hidden model on there -- they've got a model called Black Panther on there that I can't find any info about online (it's fast but BAD). 3) They're tracking speed of generations -- this low key looks like a loophole into knowing which model is which so i hope they fix that soon but on the leaderboard it's cool to see stats on gen time. 4) It's FREE -- genuinely this cannot be sustainable I don't know who's eating their inference costs but I will happily enjoy while it lasts. It's still kinda buggy from my experience but curious to hear this sub's thoughts (especially on why the Chinese models are so cracked regardless of modality LOL) https://preview.redd.it/74secndsaqwf1.png?width=1889&format=png&auto=webp&s=8dd61303524c81eb11d793ed56086dfa1a55235d
2025-10-22T21:10:51
https://www.reddit.com/r/LocalLLaMA/comments/1odko5g/design_arena_launches_videotovideo_arena/
Significant-Fan241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odko5g
false
null
t3_1odko5g
/r/LocalLLaMA/comments/1odko5g/design_arena_launches_videotovideo_arena/
false
false
https://b.thumbs.redditm…Pf6yQSC6siUo.jpg
0
null
What are your favorite models to run on 12gb vram (4070 oc)
2
Hey everyone. I'm an avid user of AI in my workflows but haven't tried any of the local models. I have a 4070 and would love to know the best model for coding and general day-to-day tasks that I can run locally. I'm enticed by the 128GB Ryzen chips as well as the M4 Max 512GB. However, I feel like I should get some local experience first. I understand that it won't be as performant as state-of-the-art models, but I'm willing to give it a shot. I would also love to hear of your experiences upgrading to a 4090 or 5090 and what models those have allowed you to run locally. Thanks
2025-10-22T20:48:40
https://www.reddit.com/r/LocalLLaMA/comments/1odk3d9/what_are_your_favorite_models_to_run_on_12gb_vram/
ILooveMangoes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odk3d9
false
null
t3_1odk3d9
/r/LocalLLaMA/comments/1odk3d9/what_are_your_favorite_models_to_run_on_12gb_vram/
false
false
self
2
null
Strix Halo vs DGX Spark - Initial Impressions (long post with TL;DR at the end)
183
There are a lot of separate posts about Strix Halo and DGX Spark, but not too many direct comparisons from the people who are actually going to use them for work. So, after getting Strix Halo and later DGX Spark, I decided to compile my initial impressions after using both Strix Halo (GMKTek Evo x2 128GB) and NVidia DGX Spark as an AI developer, in case it would be useful to someone.

# Hardware

DGX Spark is probably the most minimalist mini-PC I've ever used. It has absolutely no LEDs, not even in the LAN port, and the on/off switch is a button, so unless you ping it over the network or hook up a display, good luck guessing if this thing is on. All ports are in the back: there is no DisplayPort, only a single HDMI port, USB-C (power only), 3x USB-C 3.2 Gen 2 ports, a 10G ethernet port and 2x QSFP ports. The air intake is in the front and exhaust is in the back. It is quiet for the most part, but the fan is quite audible when it's on (though quieter than my GMKTek).

It has a single 4TB PCIe 5.0x4 M.2 2242 SSD - a SAMSUNG MZALC4T0HBL1-00B07, which I couldn't find anywhere for sale in the 2242 form factor, only the 2280 version, but DGX Spark only takes 2242 drives. I wish they went with standard 2280 - weird decision, given that it's a mini-PC, not a laptop or tablet. Who cares if the motherboard is an inch longer! The performance seems good, giving me 4240.64 MB/sec vs 3118.53 MB/sec on my GMKTek (as measured by hdparm). It is user replaceable, but there is only one slot, accessible from the bottom of the device: you need to take the magnetic plate off, and there are some access screws underneath.

The unit is made of metal and gets quite hot under high loads, but not unbearably hot like some reviews mentioned. It cools down quickly, though (metal!). The CPU is a 20-core ARM with 10 performance and 10 efficiency cores. I didn't benchmark them, but other reviews show CPU performance similar to Strix Halo.

# Initial Setup

DGX Spark comes with DGX OS pre-installed (more on this later).
You can set it up interactively using keyboard/mouse/display, or in headless mode via a WiFi hotspot that it creates. I tried to set it up by connecting my trusted Logitech keyboard/trackpad combo that I use to set up pretty much all my server boxes, but once it booted up, it displayed a "Connect the keyboard" message and didn't let me proceed any further. The trackpad portion worked, and volume keys on the keyboard also worked! I rebooted, and was able to enter BIOS (by pressing Esc) just fine, and the keyboard was fully functional there! BTW, it has an AMI BIOS, but doesn't expose anything interesting other than networking and boot options. Booting into DGX OS resulted in the same problem. After some googling, I figured that it shipped with a borked kernel that broke Logitech unified setups, so I decided to proceed in headless mode. I connected to the WiFi hotspot from my Mac (hotspot SSID/password are printed on a sticker on top of the quick start guide) and was able to continue setup there, which was pretty smooth, other than my Mac spamming me with a "connect to internet" popup every minute or so. It then proceeded to update firmware and OS packages, which took about 30 minutes, but eventually finished, and after that my Logitech keyboard worked just fine.

# Linux Experience

DGX Spark runs DGX OS 7.2.3, which is based on Ubuntu 24.04.3 LTS, but uses NVidia's custom kernel - an older one than mainline Ubuntu LTS uses: instead of 6.14.x you get 6.11.0-1016-nvidia. It comes with the CUDA 13.0 development kit and NVidia drivers (580.95.05) pre-installed. It also has NVidia's container toolkit, which includes docker, and GPU passthrough works well. Other than that, it's a standard Ubuntu Desktop installation, with GNOME and everything. SSHd is enabled by default, so after a headless install you can connect to it immediately without any extra configuration. RDP remote desktop doesn't work currently - it connects, but display output is broken.
I tried to boot from a Fedora 43 Beta Live USB, and it worked, sort of. First, you need to disable Secure Boot in BIOS. Then, it boots only in "basic graphics mode", because the built-in nvidia drivers don't recognize the chipset. It also throws other errors complaining about the chipset, processor cores, etc. I think I'll try to install it to an external SSD and see if NVidia's standard drivers will recognize the chip. There is hope:

==============
PLATFORM INFO:
==============
IOMMU: Pass-through or enabled
Nvidia Driver Info Status: Supported(Nvidia Open Driver Installed)
Cuda Driver Version Installed: 13000
Platform: NVIDIA_DGX_Spark, Arch: aarch64(Linux 6.11.0-1016-nvidia)
Platform verification succeeded

As for Strix Halo, it's an x86 PC, so you can run any distro you want. I chose Fedora 43 Beta, currently running with kernel 6.17.3-300.fc43.x86_64. Smooth sailing, up-to-date packages.

# Llama.cpp experience

## DGX Spark

You need to build it from source as there is no CUDA ARM build, but compiling llama.cpp was very straightforward - the CUDA toolkit is already installed, you just need to install development tools, and it compiles just like on any other system with an NVidia GPU. Just follow the instructions, no surprises. However, when I ran the benchmarks, I ran into two issues:

1. Model loading was VERY slow. It took 1 minute 40 seconds to load gpt-oss-120b. For comparison, it takes 22 seconds to load on Strix Halo (both from cold, memory cache flushed).
2. I wasn't getting the same results as ggerganov in this [thread](https://github.com/ggml-org/llama.cpp/discussions/16578). While PP was pretty impressive for such a small system, TG was matching or even slightly worse than my Strix Halo setup with ROCm.
For instance, here are my Strix Halo numbers, compiled with ROCm 7.10.0a20251017, llama.cpp build 03792ad9 (6816), HIP only, no rocWMMA:

```bash
build/bin/llama-bench -m ~/.cache/llama.cpp/ggml-org_gpt-oss-120b-GGUF_gpt-oss-120b-mxfp4-00001-of-00003.gguf -fa 1 -d 0,4096,8192,16384,32768 -p 2048 -n 32 -ub 2048
```

| model | size | params | backend | test | t/s |
| --- | --- | --- | --- | --- | --- |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | pp2048 | 999.59 ± 4.31 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | tg32 | 47.49 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | pp2048 @ d4096 | 824.37 ± 1.16 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | tg32 @ d4096 | 44.23 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | pp2048 @ d8192 | 703.42 ± 1.54 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | tg32 @ d8192 | 42.52 ± 0.04 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | pp2048 @ d16384 | 514.89 ± 3.86 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | tg32 @ d16384 | 39.71 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | pp2048 @ d32768 | 348.59 ± 2.11 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | tg32 @ d32768 | 35.39 ± 0.01 |

The same command on Spark gave me this:

| model | size | params | backend | test | t/s |
| --- | --- | --- | --- | --- | --- |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 | 1816.00 ± 11.21 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 | 44.74 ± 0.99 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 @ d4096 | 1763.75 ± 6.43 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 @ d4096 | 42.69 ± 0.93 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 @ d8192 | 1695.29 ± 11.56 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 @ d8192 | 40.91 ± 0.35 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 @ d16384 | 1512.65 ± 6.35 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 @ d16384 | 38.61 ± 0.03 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 @ d32768 | 1250.55 ± 5.21 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 @ d32768 | 34.66 ± 0.02 |

I tried enabling the Unified Memory switch (GGML_CUDA_ENABLE_UNIFIED_MEMORY=1) - it improved model loading, but resulted in even worse performance. I reached out to ggerganov, and he suggested disabling mmap. I thought I tried it, but apparently not. Well, that fixed it. Model loading improved too - now taking 56 seconds from cold and 23 seconds when it's still in cache.
Updated numbers:

| model | size | params | backend | test | t/s |
| --- | --- | --- | --- | --- | --- |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 | 1939.32 ± 4.03 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 | 56.33 ± 0.26 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 @ d4096 | 1832.04 ± 5.58 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 @ d4096 | 52.63 ± 0.12 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 @ d8192 | 1738.07 ± 5.93 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 @ d8192 | 48.60 ± 0.20 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 @ d16384 | 1525.71 ± 12.34 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 @ d16384 | 45.01 ± 0.09 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | pp2048 @ d32768 | 1242.35 ± 5.64 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | CUDA | tg32 @ d32768 | 39.10 ± 0.09 |

As you can see, much better performance both in PP and TG. As for Strix Halo, mmap/no-mmap doesn't make any difference there.

## Strix Halo

On Strix Halo, the llama.cpp experience is... well, a bit turbulent. You can download a pre-built version for Vulkan, and it works, but the performance is a mixed bag. TG is pretty good, but PP is not great.

```bash
build/bin/llama-bench -m ~/.cache/llama.cpp/ggml-org_gpt-oss-120b-GGUF_gpt-oss-120b-mxfp4-00001-of-00003.gguf -fa 1 -d 0,4096,8192,16384,32768 -p 2048 -n 32 --mmap 0 -ngl 999 -ub 1024
```

**NOTE**: Vulkan likes a batch size of 1024 the most, unlike ROCm, which likes 2048 better.
| model | size | params | backend | test | t/s |
| --- | --- | --- | --- | --- | --- |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | pp2048 | 526.54 ± 4.90 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | tg32 | 52.64 ± 0.08 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | pp2048 @ d4096 | 438.85 ± 0.76 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | tg32 @ d4096 | 48.21 ± 0.03 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | pp2048 @ d8192 | 356.28 ± 4.47 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | tg32 @ d8192 | 45.90 ± 0.23 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | pp2048 @ d16384 | 210.17 ± 2.53 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | tg32 @ d16384 | 42.64 ± 0.07 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | pp2048 @ d32768 | 138.79 ± 9.47 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan | tg32 @ d32768 | 36.18 ± 0.02 |

I tried toolboxes from kyuz0, and some of them were better, but I still felt that I could squeeze more juice out of it. All of them suffered from significant performance degradation when the context was filling up. Then I tried to compile my own using the latest ROCm [build](https://therock-nightly-tarball.s3.amazonaws.com/therock-dist-linux-gfx1151-7.10.0a20251017.tar.gz) from TheRock (on that date). I also built [rocWMMA](https://github.com/ROCm/rocWMMA.git) as recommended by kyuz0 (more on that later).
Llama.cpp compiled without major issues - I had to configure the paths properly, but other than that, it just worked. PP increased dramatically, but TG decreased.

| model                  |      size |   params | backend | ngl | n_ubatch | fa | mmap |            test |            t/s |
| ---------------------- | --------: | -------: | ------- | --: | -------: | -: | ---: | --------------: | -------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 |          pp2048 | 1030.71 ± 2.26 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 |            tg32 |   47.84 ± 0.02 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 |  pp2048 @ d4096 |  802.36 ± 6.96 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 |    tg32 @ d4096 |   39.09 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 |  pp2048 @ d8192 |  615.27 ± 2.18 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 |    tg32 @ d8192 |   33.34 ± 0.05 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 | pp2048 @ d16384 |  409.25 ± 0.67 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 |   tg32 @ d16384 |   25.86 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 | pp2048 @ d32768 |  228.04 ± 0.44 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |     2048 |  1 |    0 |   tg32 @ d32768 |   18.07 ± 0.03 |

But the biggest issue is significant performance degradation with long context, much more than you'd
expect.

Then I stumbled upon Lemonade SDK and their pre-built llama.cpp. I ran that one and got much better results across the board. TG was still below Vulkan, but PP was decent and the degradation wasn't as bad:

| model                  |      size |   params |            test |           t/s |
| ---------------------- | --------: | -------: | --------------: | ------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B |          pp2048 | 999.20 ± 3.44 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B |            tg32 |  47.53 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B |  pp2048 @ d4096 | 826.63 ± 9.09 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B |    tg32 @ d4096 |  44.24 ± 0.03 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B |  pp2048 @ d8192 | 702.66 ± 2.15 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B |    tg32 @ d8192 |  42.56 ± 0.03 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | pp2048 @ d16384 | 505.85 ± 1.33 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B |   tg32 @ d16384 |  39.82 ± 0.03 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | pp2048 @ d32768 | 343.06 ± 2.07 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B |   tg32 @ d32768 |  35.50 ± 0.02 |

So I looked at their compilation options and noticed that they build without rocWMMA. I did the same and got similar performance too!
| model                  |      size |   params | backend |            test |            t/s |
| ---------------------- | --------: | -------: | ------- | --------------: | -------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    |          pp2048 | 1000.93 ± 1.23 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    |            tg32 |   47.46 ± 0.02 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    |  pp2048 @ d4096 |  827.34 ± 1.99 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    |    tg32 @ d4096 |   44.20 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    |  pp2048 @ d8192 |  701.68 ± 2.36 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    |    tg32 @ d8192 |   42.39 ± 0.04 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | pp2048 @ d16384 |  503.49 ± 0.90 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    |   tg32 @ d16384 |   39.61 ± 0.02 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | pp2048 @ d32768 |  344.36 ± 0.80 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    |   tg32 @ d32768 |   35.32 ± 0.01 |

So far, that's the best I could get out of Strix Halo. It's very usable for text-generation tasks.

I also wanted to touch on multi-modal performance. That's where Spark shines. I don't have any specific benchmarks yet, but image processing is much faster on Spark than on Strix Halo, especially in vLLM.

# vLLM Experience

I haven't had a chance to do extensive testing here, but wanted to share some early thoughts.

## DGX Spark

First, I tried to just build vLLM from source as usual.
The build was successful, but it failed with the following error: `ptxas fatal   : Value 'sm_121a' is not defined for option 'gpu-name'`

I decided not to spend too much time on this for now and just launched the vLLM container that NVIDIA provides through their Docker repository. It is built for DGX Spark, so it supports it out of the box. However, it ships version 0.10.1, so I wasn't able to run Qwen3-VL there.

Now, they put the source code inside the container, but it wasn't a git repository - it probably contains some NVIDIA-specific patches - I'll need to see if those could be merged into the main vLLM code. So I just checked out the vLLM main branch and proceeded to build against the existing PyTorch as usual. This time I was able to run it and launch Qwen3-VL models just fine. Both dense and MoE work. I tried FP4 and AWQ quants - everything works, no need to disable CUDA graphs. The performance is decent - I still need to run some benchmarks, but image processing is very fast.

## Strix Halo

Unlike llama.cpp, which just works, the vLLM experience on Strix Halo is much more limited. My goal was to run Qwen3-VL models that are not supported by llama.cpp yet, so I needed to build 0.11.0 or later. There are some existing containers/toolboxes for earlier versions, but I couldn't use them.

So I installed the ROCm PyTorch libraries from TheRock, applied some [patches](https://github.com/kyuz0/amd-strix-halo-vllm-toolboxes/blob/main/Dockerfile.vllm-therock-gfx1151-aotriton) from the kyuz0 toolboxes to avoid an amdsmi package crash, built [ROCm FlashAttention](https://github.com/ROCm/flash-attention.git), and then just followed vLLM's standard installation instructions with existing PyTorch.

I was able to run Qwen3-VL dense models at decent (for dense models) speeds, although initialization takes quite some time until you reduce `--max-num-seqs` to 1 and set tp to 1. Image processing is very slow though, much slower than llama.cpp for the same image, but token generation is about what you'd expect.
Again, model loading is faster than on Spark for some reason (I'd expect it to be the other way around, given the faster SSD and slightly faster memory in Spark). I'm going to rebuild vLLM and re-test/benchmark later.

Some observations:

- FP8 models don't work - they hang on `WARNING 10-22 12:55:04 [fp8_utils.py:785] Using default W8A8 Block FP8 kernel config. Performance might be sub-optimal! Config file not found at /home/eugr/vllm/vllm/vllm/model_executor/layers/quantization/utils/configs/N=6144,K=2560,device_name=Radeon_8060S_Graphics,dtype=fp8_w8a8,block_shape=[128,128].json`
- You need to use `--enforce-eager`, as CUDA graphs crash vLLM. Sometimes they work, but mostly they crash.
- Even with `--enforce-eager`, there are occasional HIP-related crashes here and there.
- AWQ models work, both 4-bit and 8-bit, but only dense ones. AWQ MoE quants require the Marlin kernel, which is not available for ROCm.

# Conclusion / TL;DR

Summary of my initial impressions:

- DGX Spark is an interesting beast for sure.
- Limited extensibility - no USB4, only one M.2 slot, and it's 2242.
- But it has a 200Gbps network interface.
- It's a first generation of such devices, so there are some annoying bugs and incompatibilities.
- Inference-wise, token generation is nearly identical to Strix Halo in both llama.cpp and vLLM, but prompt processing is 2-5x faster than on Strix Halo.
- Strix Halo's prompt-processing performance degrades much faster as context fills.
- Image processing takes longer on Strix Halo, especially with vLLM.
- Model loading into unified RAM is slower on DGX Spark for some reason, both in llama.cpp and vLLM.
- Even though vLLM includes gfx1151 in its supported configurations, it still requires some hacks to compile.
- And even then, the experience is suboptimal: initialization is slow, it crashes, FP8 doesn't work, AWQ for MoE doesn't work.
- If you are an AI developer who uses transformers/PyTorch, or you need vLLM, you are better off with DGX Spark (or just a normal GPU build).
- If you want a power-efficient inference server that can run gpt-oss and similar MoE models at decent speeds, and you don't need to process images often, Strix Halo is the way to go.
- If you want a general-purpose machine, Strix Halo wins too.
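For anyone who wants to reproduce the rocWMMA-free llama.cpp build on gfx1151, here's a rough sketch. Treat it as a starting point rather than my exact commands: the flag names follow llama.cpp's HIP build documentation, and `ROCM_PATH` is a placeholder for wherever you unpacked the TheRock tarball.

```shell
# Sketch of the ROCm/HIP configure step without rocWMMA (paths are illustrative).
ROCM_PATH="${ROCM_PATH:-$HOME/rocm/therock-gfx1151}"
CONFIGURE="cmake -B build \
  -DGGML_HIP=ON \
  -DAMDGPU_TARGETS=gfx1151 \
  -DGGML_HIP_ROCWMMA_FATTN=OFF \
  -DCMAKE_PREFIX_PATH=$ROCM_PATH \
  -DCMAKE_BUILD_TYPE=Release"
echo "$CONFIGURE"   # run this inside a llama.cpp checkout
```

After configuring, `cmake --build build -j` does the actual compile.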
2025-10-22T20:46:12
https://www.reddit.com/r/LocalLLaMA/comments/1odk11r/strix_halo_vs_dgx_spark_initial_impressions_long/
Eugr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odk11r
false
null
t3_1odk11r
/r/LocalLLaMA/comments/1odk11r/strix_halo_vs_dgx_spark_initial_impressions_long/
false
false
self
183
{'enabled': False, 'images': [{'id': 'aN4XkL-MJPOzspOZbJF-LHUTlqDFer6LLeAA6qUoIWk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aN4XkL-MJPOzspOZbJF-LHUTlqDFer6LLeAA6qUoIWk.png?width=108&crop=smart&auto=webp&s=82b51de17e10958d753371a7f7fd9f0ac42abc99', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aN4XkL-MJPOzspOZbJF-LHUTlqDFer6LLeAA6qUoIWk.png?width=216&crop=smart&auto=webp&s=0f768b9f4dc6d1182c220681fd46319f9c3977f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aN4XkL-MJPOzspOZbJF-LHUTlqDFer6LLeAA6qUoIWk.png?width=320&crop=smart&auto=webp&s=425ccf3c5f982cde38a24dde1f3e842fc71c3040', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aN4XkL-MJPOzspOZbJF-LHUTlqDFer6LLeAA6qUoIWk.png?width=640&crop=smart&auto=webp&s=377cf7ca25ff4b2196af2204178159dbace13753', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aN4XkL-MJPOzspOZbJF-LHUTlqDFer6LLeAA6qUoIWk.png?width=960&crop=smart&auto=webp&s=013b8b148dd49b18edd8bf77de77fb9b16e7ea25', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aN4XkL-MJPOzspOZbJF-LHUTlqDFer6LLeAA6qUoIWk.png?width=1080&crop=smart&auto=webp&s=a448dd834f210b9fde4d621242156942559718fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aN4XkL-MJPOzspOZbJF-LHUTlqDFer6LLeAA6qUoIWk.png?auto=webp&s=98df517fc92b9b1cd372a1182a7d59cd29250d82', 'width': 1200}, 'variants': {}}]}
Newer architecture vs raw VRAM for AI workstation
6
I'm building an AI/animation workstation and can't decide between going all-in on the latest tech or maximizing VRAM with older cards. Would love the community's perspective.

**THE DILEMMA:**

**Option A: Go New (Blackwell)**

- 1-2x RTX 5090 or RTX PRO 5000 72GB
- Pros: Blackwell architecture, PCIe 5.0, 2-3x faster single-GPU performance, better power efficiency
- Cons: NO NVLink (unified memory gone), $2,600-5,000 per card, 32-72GB total VRAM

**Option B: Go Proven (Ampere)**

- 4x RTX 3090 with NVLink bridges
- Pros: 96GB unified VRAM, NVLink bandwidth (600GB/s), battle-tested for multi-GPU, $2,800 for all 4 GPUs
- Cons: 2 generations old, PCIe 4.0, higher power consumption (1400W vs 575-1200W)

**MY WORKFLOW:**

- Fine-tuning 30-70B parameter models (LoRA, QLoRA)
- Hobby: Blender, Unreal Engine
- Future: want to experiment with 100B+ models without limitations

**THE CONFLICTING ADVICE:**

- "Always buy latest gen, PCIe 5.0 is the future!"
- "VRAM is king, NVLink or bust for serious AI"
- NVIDIA: (drops NVLink from consumer cards) 😑

**SPECIFIC QUESTIONS:**

1. **Does PCIe 5.0 actually matter?** Will I see meaningful gains over PCIe 4.0 for GPU-bound workloads? From what I've read, GPUs don't even saturate PCIe 3.0 x16 in most cases...
2. **Is losing NVLink a dealbreaker?** For fine-tuning transformers, does the lack of unified memory force painful model sharding? Or have PyTorch/Transformers gotten good enough at handling isolated GPU pools?
3. **Does Blackwell's speed overcome the VRAM gap?** If a 5090 is 2x faster but I have 64GB isolated vs 96GB unified, which completes a 70B fine-tuning job faster?
4. **Am I crazy to spend $5k on 2-gen-old cards?** Or is this actually the smart move while NVLink 3090s are still available?

**BUDGET:** ~$5-8k for GPUs (flexible but trying to be reasonable)

Thanks in advance! 🙏
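For question 3, here's the napkin math I've been doing on whether a 70B QLoRA run even fits. All numbers are rough assumptions (4-bit base weights, bf16 LoRA adapters with fp32 Adam state, a fudge factor for activations/overhead), not measurements:

```python
# Napkin math for QLoRA VRAM needs (rough assumptions, not benchmarks).
def qlora_vram_gb(params_b, weight_bits=4, lora_frac=0.005, overhead_gb=8):
    base = params_b * weight_bits / 8        # quantized base weights, in GB
    # trainable LoRA params: bf16 weights (2 B each) + fp32 Adam moments (8 B each)
    lora = params_b * lora_frac * (2 + 8)
    return base + lora + overhead_gb         # plus activations/CUDA-context fudge

print(qlora_vram_gb(70))   # 46.5 GB
```

By this estimate, even a 4-bit 70B run spills past a single 32GB 5090 but fits comfortably in a 64-96GB pool, which is exactly why I keep going back and forth.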
2025-10-22T20:41:08
https://www.reddit.com/r/LocalLLaMA/comments/1odjw7q/newer_architecture_vs_raw_vram_for_ai_workstation/
icybergenome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odjw7q
false
null
t3_1odjw7q
/r/LocalLLaMA/comments/1odjw7q/newer_architecture_vs_raw_vram_for_ai_workstation/
false
false
self
6
null
How can i training AI model to Pentest (Cyber) without restriction ?
2
So, I'm a beginner in AI, but I have a lot of knowledge in Penetration Testing. I'd like to have a local server to help me with my daily activities and perhaps even sell its use. But I only have 12GB of VRAM and 32GB of RAM, with a Ryzen 5 5600G. Which model would be best for Penetration Testing in this scenario? And how can I train it to be an expert using external resources like the OWASP Guide? I still don't know how to train it. Sorry for the silly question.
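For sizing, a common rule of thumb I've seen (just napkin math - it ignores KV cache and runtime overhead, so leave a GB or two of headroom):

```python
# Approximate on-disk/in-VRAM model size: params (billions) * bits-per-weight / 8 -> GB.
def approx_model_gb(params_b, bits):
    return params_b * bits / 8

print(approx_model_gb(14, 4))   # 7.0 GB -> a 14B model at ~4-bit fits in 12GB of VRAM
print(approx_model_gb(32, 4))   # 16.0 GB -> a 32B model at 4-bit does not
```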
2025-10-22T20:27:58
https://www.reddit.com/r/LocalLLaMA/comments/1odjk0s/how_can_i_training_ai_model_to_pentest_cyber/
SplitEarly2354
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odjk0s
false
null
t3_1odjk0s
/r/LocalLLaMA/comments/1odjk0s/how_can_i_training_ai_model_to_pentest_cyber/
false
false
self
2
null
Completing an RTX 3090 with another GPU for more VRAM at an affordable price, what are the best options?
1
I have an RTX 3090, but I'm reaching the limits of its VRAM, and I was wondering what the best options are to complement it. What are the pros and cons of adding an RTX 3080, for example? Do the cards perform better when they are exactly the same model? Or the same architecture? What are the pros and cons?
2025-10-22T20:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1odjdth/completing_an_rtx_3090_with_another_gpu_for_more/
tomakorea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odjdth
false
null
t3_1odjdth
/r/LocalLLaMA/comments/1odjdth/completing_an_rtx_3090_with_another_gpu_for_more/
false
false
self
1
null
Compute in memory breakthrough from GSI
0
[https://gsitechnology.com/compute-in-memory-computational-devices/](https://gsitechnology.com/compute-in-memory-computational-devices/)

The news says that a Cornell University study validated the company's claims, but I couldn't find the study. The in-memory tech is in SRAM. It would be more fascinating if it were in DRAM or flash, since with SRAM you can't hold large models.
2025-10-22T20:16:26
https://www.reddit.com/r/LocalLLaMA/comments/1odj8w0/compute_in_memory_breakthrough_from_gsi/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odj8w0
false
null
t3_1odj8w0
/r/LocalLLaMA/comments/1odj8w0/compute_in_memory_breakthrough_from_gsi/
false
false
self
0
null
RamaLama: Running LLMs as containers adding MLX support
14
I'm not sure if anyone has played around with it yet, but RamaLama is a CLI for running and building LLMs as container images. We recently added support for MLX in addition to llama.cpp and vLLM (shoutout to kush-gupt)!

We are aiming to be totally runtime- and hardware-agnostic, but it's been an uphill battle, with vLLM support still a little shaky. Still, we've got support for Apple Silicon GPUs, Nvidia GPUs (CUDA), AMD GPUs (ROCm, Vulkan), Intel GPUs, Moore Threads GPUs, and Ascend NPUs. With so much variation, we could really use help finding people with atypical hardware configurations to test against.

**Github**: [https://github.com/containers/ramalama](https://github.com/containers/ramalama)

As an aside, there's going to be a developer forum in a few weeks for new users: [http://ramalama.com/events/dev-forum-1](http://ramalama.com/events/dev-forum-1)
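If you want to kick the tires, a minimal session looks something like this. The model name is just an example, and the commands mirror the README - double-check `ramalama --help` for your version:

```shell
# Minimal RamaLama session sketch (model name is illustrative).
ramalama_demo() {
  ramalama pull ollama://smollm:135m               # fetch the model
  ramalama run ollama://smollm:135m                # chat with it inside a container
  ramalama serve --port 8080 ollama://smollm:135m  # or expose an OpenAI-compatible endpoint
}
# Only run the demo if the CLI is actually installed:
command -v ramalama >/dev/null 2>&1 && ramalama_demo || echo "ramalama not installed - commands above are for illustration"
```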
2025-10-22T19:52:16
https://v.redd.it/x0zcn3zhwpwf1
ProfessionalHorse707
v.redd.it
1970-01-01T00:00:00
0
{}
1odilom
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/x0zcn3zhwpwf1/DASHPlaylist.mpd?a=1763754752%2CZWRlOTBmODBmNDgyNmRjMGQzZGQ1YjJlYTA3NjJhNWVmNTFlNDJjMmVhMzBmZWRkZDViNTQ5ODQ2NDE2ZTY4NQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/x0zcn3zhwpwf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 790, 'hls_url': 'https://v.redd.it/x0zcn3zhwpwf1/HLSPlaylist.m3u8?a=1763754752%2CYzdiYzU0ZjIwYTEwMzVjZWZmNzQyYzZjMTE5NmRhZmJkNTZlYjBhZmE2NTU5NzJhYWJmOGQ1YzVmYWI1MTc1MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x0zcn3zhwpwf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1odilom
/r/LocalLLaMA/comments/1odilom/ramalama_running_llms_as_containers_adding_mlx/
false
false
https://external-preview…ab8b37806799aff3
14
{'enabled': False, 'images': [{'id': 'cTYxeHE0emh3cHdmMXy1caZuZgS62CJImk8F9ntJO8-7vfSdfeZ_sZ1BZhe-', 'resolutions': [{'height': 118, 'url': 'https://external-preview.redd.it/cTYxeHE0emh3cHdmMXy1caZuZgS62CJImk8F9ntJO8-7vfSdfeZ_sZ1BZhe-.png?width=108&crop=smart&format=pjpg&auto=webp&s=7db714c3c3eab660d08ac0967648030abefcafca', 'width': 108}, {'height': 237, 'url': 'https://external-preview.redd.it/cTYxeHE0emh3cHdmMXy1caZuZgS62CJImk8F9ntJO8-7vfSdfeZ_sZ1BZhe-.png?width=216&crop=smart&format=pjpg&auto=webp&s=f550246473f60fe4cb88d324def64f7367780575', 'width': 216}, {'height': 351, 'url': 'https://external-preview.redd.it/cTYxeHE0emh3cHdmMXy1caZuZgS62CJImk8F9ntJO8-7vfSdfeZ_sZ1BZhe-.png?width=320&crop=smart&format=pjpg&auto=webp&s=acf133e7205b160eb514bfa26c8e0b36c3ca72c7', 'width': 320}, {'height': 702, 'url': 'https://external-preview.redd.it/cTYxeHE0emh3cHdmMXy1caZuZgS62CJImk8F9ntJO8-7vfSdfeZ_sZ1BZhe-.png?width=640&crop=smart&format=pjpg&auto=webp&s=a947cb51e1b80ccffcdbcb93484c0af1ec63ee31', 'width': 640}, {'height': 1053, 'url': 'https://external-preview.redd.it/cTYxeHE0emh3cHdmMXy1caZuZgS62CJImk8F9ntJO8-7vfSdfeZ_sZ1BZhe-.png?width=960&crop=smart&format=pjpg&auto=webp&s=c7112e02f0bdc5e51d6c7e5205eff218f5b835ab', 'width': 960}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cTYxeHE0emh3cHdmMXy1caZuZgS62CJImk8F9ntJO8-7vfSdfeZ_sZ1BZhe-.png?format=pjpg&auto=webp&s=21ba19d5fb6d18759253b05f9014db88fd0dfff2', 'width': 984}, 'variants': {}}]}
Troubleshooting Prompt Cache with Llama.cpp Question
2
Hey guys, I've been trying to troubleshoot an odd behavior where llama.cpp doesn't appear to cache the prompt if the initial few messages are longer. I've been able to get it to work as expected if the first 2-3 messages I send are small (around 10-30 tokens), and from there I can send a message of any size. If the initial few messages are too large, I get a low similarity score and it reprocesses the previous message plus my response. Similarly, sending in a different format (say, using the Mistral 7 template while running GLM 4.6) also appears to break the prompt cache, where it worked for me before (about a week ago). I've tried reinstalling both llama.cpp and SillyTavern, and was just wondering if there is a command I'm missing.

Example command I've been testing with: `.\llama-server.exe -m "C:\Models\GLM4.6\GLM-4.6-Q4_K_M-00001-of-00005.gguf" -ngl 92 --flash-attn on --jinja --n-cpu-moe 92 -c 13000`

Any idea what may be causing this or how I could resolve it? Thanks for your time and any input you have, I appreciate it.
2025-10-22T19:40:33
https://www.reddit.com/r/LocalLLaMA/comments/1odiai7/troubleshooting_prompt_cache_with_llamacpp/
DragonfruitIll660
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odiai7
false
null
t3_1odiai7
/r/LocalLLaMA/comments/1odiai7/troubleshooting_prompt_cache_with_llamacpp/
false
false
self
2
null
olmOCR 2: Unit test rewards for document OCR | Ai2
1
[deleted]
2025-10-22T19:40:30
[deleted]
1970-01-01T00:00:00
0
{}
1odiaga
false
null
t3_1odiaga
/r/LocalLLaMA/comments/1odiaga/olmocr_2_unit_test_rewards_for_document_ocr_ai2/
false
false
default
1
null
𝘞𝘩𝘦𝘯 𝘙𝘦𝘥𝘪𝘴 𝘧𝘦𝘭𝘵 𝘵𝘰𝘰 𝘮𝘶𝘤𝘩, 𝘐 𝘶𝘴𝘦𝘥 𝘖𝘖𝘗S 𝘪𝘯𝘴𝘵𝘦𝘢𝘥
1
[removed]
2025-10-22T19:33:10
https://i.redd.it/xgrh9i3mtpwf1.jpeg
Legendary_Outrage
i.redd.it
1970-01-01T00:00:00
0
{}
1odi3cv
false
null
t3_1odi3cv
/r/LocalLLaMA/comments/1odi3cv/𝘞𝘩𝘦𝘯_𝘙𝘦𝘥𝘪𝘴_𝘧𝘦𝘭𝘵_𝘵𝘰𝘰_𝘮𝘶𝘤𝘩_𝘐_𝘶𝘴𝘦𝘥_𝘖𝘖𝘗s_𝘪𝘯𝘴𝘵𝘦𝘢𝘥/
false
false
default
1
{'enabled': True, 'images': [{'id': 'xgrh9i3mtpwf1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/xgrh9i3mtpwf1.jpeg?width=108&crop=smart&auto=webp&s=66524d0246f8dc05e777c221f866e509a62c7cfd', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/xgrh9i3mtpwf1.jpeg?width=216&crop=smart&auto=webp&s=fa4431d4d7648b48a6c6b142af53461811c2c89d', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/xgrh9i3mtpwf1.jpeg?width=320&crop=smart&auto=webp&s=d003190ef136865f4099810b98066357927c5a8b', 'width': 320}, {'height': 467, 'url': 'https://preview.redd.it/xgrh9i3mtpwf1.jpeg?width=640&crop=smart&auto=webp&s=1433853403ef557f256c1b3a1f5b4f4f4719bcd0', 'width': 640}, {'height': 701, 'url': 'https://preview.redd.it/xgrh9i3mtpwf1.jpeg?width=960&crop=smart&auto=webp&s=f755d8bdfef4efa48bc0bc339ceae9b4392da02e', 'width': 960}, {'height': 789, 'url': 'https://preview.redd.it/xgrh9i3mtpwf1.jpeg?width=1080&crop=smart&auto=webp&s=214eaacb78559c61a73562ecc9bf011fd2b2f04f', 'width': 1080}], 'source': {'height': 789, 'url': 'https://preview.redd.it/xgrh9i3mtpwf1.jpeg?auto=webp&s=bea80dfe8343692db3e39da2010f46c8c9979936', 'width': 1080}, 'variants': {}}]}
Meta lays off 600 employees within AI unit
253
2025-10-22T19:30:58
https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html
a_slay_nub
cnbc.com
1970-01-01T00:00:00
0
{}
1odi1c0
false
null
t3_1odi1c0
/r/LocalLLaMA/comments/1odi1c0/meta_lays_off_600_employees_within_ai_unit/
false
false
default
253
{'enabled': False, 'images': [{'id': 'J07EauFcN4nV9LOcRS0eXwdDIcxd3OiFJdlO3Bhl-Rc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/J07EauFcN4nV9LOcRS0eXwdDIcxd3OiFJdlO3Bhl-Rc.jpeg?width=108&crop=smart&auto=webp&s=d9d748204c8c92947ff46013b819672df4c492f0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/J07EauFcN4nV9LOcRS0eXwdDIcxd3OiFJdlO3Bhl-Rc.jpeg?width=216&crop=smart&auto=webp&s=01866ed4756a0ff24272c3d067000125dc5afcf6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/J07EauFcN4nV9LOcRS0eXwdDIcxd3OiFJdlO3Bhl-Rc.jpeg?width=320&crop=smart&auto=webp&s=c20722eaaff413b6eb7a459b17d0bc8aa8a70466', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/J07EauFcN4nV9LOcRS0eXwdDIcxd3OiFJdlO3Bhl-Rc.jpeg?width=640&crop=smart&auto=webp&s=b63bb44a8b15efeaf11cd6f80af319b8caa01688', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/J07EauFcN4nV9LOcRS0eXwdDIcxd3OiFJdlO3Bhl-Rc.jpeg?width=960&crop=smart&auto=webp&s=71c94261a3b1a35f067671ee90b6396708586869', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/J07EauFcN4nV9LOcRS0eXwdDIcxd3OiFJdlO3Bhl-Rc.jpeg?width=1080&crop=smart&auto=webp&s=99c871bf779ace6eb144463d971deb20adbf0a72', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/J07EauFcN4nV9LOcRS0eXwdDIcxd3OiFJdlO3Bhl-Rc.jpeg?auto=webp&s=39ead90d0c7e45ab3f9e8a9f0efaf9e31b6edd5a', 'width': 1920}, 'variants': {}}]}
Looking for a working NVFP4/MXFP4 pretraining recipe for sm121 Nvidia GPUs
0
I am working on pretraining a small model in NVFP4 (or MXFP4) on Blackwell (sm121 not sm120a like the 50xx cards). Nvidia has an example [recipe](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html) for doing this, and Cursor has a nice [blog post](https://cursor.com/blog/kernels) on various MXFP8 training tips that I could learn from. But both are lacking various details that I’ll have to figure out using trial-and-error. Are there any working end-to-end recipes for doing this? Hoping to save time if someone else has done this already.
2025-10-22T19:28:37
https://i.redd.it/u7yw1gzsspwf1.jpeg
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1odhz2s
false
null
t3_1odhz2s
/r/LocalLLaMA/comments/1odhz2s/looking_for_a_working_nvfp4mxfp4_pretraining/
false
false
default
0
{'enabled': True, 'images': [{'id': 'u7yw1gzsspwf1', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/u7yw1gzsspwf1.jpeg?width=108&crop=smart&auto=webp&s=8d3febbfd37b41ca00a7747e06beb84c7b0837c2', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/u7yw1gzsspwf1.jpeg?width=216&crop=smart&auto=webp&s=5333bff1ce8f9058ab6643dce4d5cfd8e973a2df', 'width': 216}, {'height': 276, 'url': 'https://preview.redd.it/u7yw1gzsspwf1.jpeg?width=320&crop=smart&auto=webp&s=5248925386c132841dc4d3677bbe90dd6ae2ab2c', 'width': 320}, {'height': 553, 'url': 'https://preview.redd.it/u7yw1gzsspwf1.jpeg?width=640&crop=smart&auto=webp&s=f0f5e8e499a29c224d60e003141d95a4aa43d487', 'width': 640}, {'height': 830, 'url': 'https://preview.redd.it/u7yw1gzsspwf1.jpeg?width=960&crop=smart&auto=webp&s=d9119c464246bd476668c2d19e054b7d54790301', 'width': 960}, {'height': 933, 'url': 'https://preview.redd.it/u7yw1gzsspwf1.jpeg?width=1080&crop=smart&auto=webp&s=b7779b82404120b6efeb1da01f7e9adab2ce0083', 'width': 1080}], 'source': {'height': 996, 'url': 'https://preview.redd.it/u7yw1gzsspwf1.jpeg?auto=webp&s=b2e8aef9e0128c61cc17e71739c0cd7b4b0e6159', 'width': 1152}, 'variants': {}}]}
⚙️ "𝘞𝘩𝘦𝘯 𝘙𝘦𝘥𝘪𝘴 𝘧𝘦𝘭𝘵 𝘵𝘰𝘰 𝘮𝘶𝘤𝘩, 𝘐 𝘶𝘴𝘦𝘥 𝘖𝘖𝘗 𝘪𝘯𝘴𝘵𝘦𝘢𝘥"
1
[removed]
2025-10-22T19:20:53
https://i.redd.it/0ruwak5frpwf1.jpeg
Legendary_Outrage
i.redd.it
1970-01-01T00:00:00
0
{}
1odhrpx
false
null
t3_1odhrpx
/r/LocalLLaMA/comments/1odhrpx/𝘞𝘩𝘦𝘯_𝘙𝘦𝘥𝘪𝘴_𝘧𝘦𝘭𝘵_𝘵𝘰𝘰_𝘮𝘶𝘤𝘩_𝘐_𝘶𝘴𝘦𝘥_𝘖𝘖𝘗_𝘪𝘯𝘴𝘵𝘦𝘢𝘥/
false
false
default
1
{'enabled': True, 'images': [{'id': '0ruwak5frpwf1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/0ruwak5frpwf1.jpeg?width=108&crop=smart&auto=webp&s=01cf248303e2438086e8d1c75a5fbff6d5e4de13', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/0ruwak5frpwf1.jpeg?width=216&crop=smart&auto=webp&s=eb8ad06249137c96d56e5f667615afc8f45982bb', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/0ruwak5frpwf1.jpeg?width=320&crop=smart&auto=webp&s=16804be0797d708fbad1f873828b652fa91b1f5a', 'width': 320}, {'height': 467, 'url': 'https://preview.redd.it/0ruwak5frpwf1.jpeg?width=640&crop=smart&auto=webp&s=aaf36d65eba5a0595c65a5c3f369931d1ddd39fe', 'width': 640}, {'height': 701, 'url': 'https://preview.redd.it/0ruwak5frpwf1.jpeg?width=960&crop=smart&auto=webp&s=91743dfd2148f85d550563827fcbb6448c5d9ab7', 'width': 960}, {'height': 789, 'url': 'https://preview.redd.it/0ruwak5frpwf1.jpeg?width=1080&crop=smart&auto=webp&s=4183aa669b70716aebaf7ddc0b6f68f399ebd2a1', 'width': 1080}], 'source': {'height': 789, 'url': 'https://preview.redd.it/0ruwak5frpwf1.jpeg?auto=webp&s=52e50466afaad5990ecb027e58538b6dcb9ae47c', 'width': 1080}, 'variants': {}}]}
Gradio-related Riskware alert when installing Chatterbox
3
I'm trying to install Chatterbox from here: https://github.com/psdwizzard/chatterbox-Audiobook

At the *Launch the Application* stage I run a batch file: launch_audiobook.bat

However, it errors with the following:

**Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.**

And my antivirus software (from ESET) flags up:

**Threat Removed**

**A threat (WinGo/Riskware.Frp.AR) was found in a file that Python tried to access**

**The file has been deleted**

On checking ESET's log file, this was caused by the file **frpc_windows_amd64_v0.3** (part of Gradio). I see from looking online that others have had this issue with that file over the years, but I've not found a resolution. Various other antivirus software has also flagged it before: https://www.virustotal.com/gui/file/14bc0ea470be5d67d79a07412bd21de8a0a179c6ac1116d7764f68e942dc9ceb

Is it a false positive? Or is there some workaround just to be safe? Perhaps I should download the file manually, put it into the relevant folder and proceed from there?
2025-10-22T19:17:41
https://www.reddit.com/r/LocalLLaMA/comments/1odholp/gradiorelated_riskware_alert_when_installing/
Twigling
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odholp
false
null
t3_1odholp
/r/LocalLLaMA/comments/1odholp/gradiorelated_riskware_alert_when_installing/
false
false
self
3
{'enabled': False, 'images': [{'id': '7EDYoKVg45DdchO0WzF7XDzgHdJzAPoKXjT3D3qCC18', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7EDYoKVg45DdchO0WzF7XDzgHdJzAPoKXjT3D3qCC18.png?width=108&crop=smart&auto=webp&s=a3f348e677ca24ab570519a58ea25dc2f65baec6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7EDYoKVg45DdchO0WzF7XDzgHdJzAPoKXjT3D3qCC18.png?width=216&crop=smart&auto=webp&s=97fbc3e9e159640817bf72dca56a3e9ba65f7a84', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7EDYoKVg45DdchO0WzF7XDzgHdJzAPoKXjT3D3qCC18.png?width=320&crop=smart&auto=webp&s=477869be7e2ff95967084c7e2aedd4f08777a8fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7EDYoKVg45DdchO0WzF7XDzgHdJzAPoKXjT3D3qCC18.png?width=640&crop=smart&auto=webp&s=6f07afc4890462444794a1dc13499e2dec9bd593', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7EDYoKVg45DdchO0WzF7XDzgHdJzAPoKXjT3D3qCC18.png?width=960&crop=smart&auto=webp&s=be0e4bad4e061fd5f6f03e2ea4adc458281f5612', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7EDYoKVg45DdchO0WzF7XDzgHdJzAPoKXjT3D3qCC18.png?width=1080&crop=smart&auto=webp&s=92e849f6d2cfd1813cd704755da963767d00bcb9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7EDYoKVg45DdchO0WzF7XDzgHdJzAPoKXjT3D3qCC18.png?auto=webp&s=c8382811f3065d351bf33910a53a865837f7fbac', 'width': 1200}, 'variants': {}}]}
Where can I check my essay for plagiarism for free? Tried a few tools, but Ollama really stood out!
28
I've been testing Ollama for a while now, but whenever I push it to generate longer essays (around 1,000–2,000 words), it either starts repeating itself or just stops halfway. I've tried tweaking my prompts in various ways, but nothing seems to hold together for a complete essay. It's honestly frustrating because I really want to use AI as a writing assistant, but for long-form assignments, it feels like I'm constantly fighting against it. Sometimes it can get the introduction right, but by the time I reach the body paragraphs, it loses focus or starts rambling. I've also noticed that adjusting instructions slightly doesn't always solve the problem, which makes the whole process even more time-consuming. At this point, I'm considering just handing the essay off to a professional service like MyPaperHelp, since they handle everything smoothly and reliably without all the trial and error. Has anyone figured out a way to make these tools produce a cohesive essay? Do you break it down section by section? Adjust context length? Or use specific models? Any insights or strategies would be greatly appreciated because I really want to make AI tools work better for long assignments.
2025-10-22T19:11:25
https://www.reddit.com/r/LocalLLaMA/comments/1odhimh/where_can_i_check_my_essay_for_plagiarism_for/
Human_Armadillo_1585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odhimh
false
null
t3_1odhimh
/r/LocalLLaMA/comments/1odhimh/where_can_i_check_my_essay_for_plagiarism_for/
false
false
self
28
null
First impressions and thoughts on the GTR9 Pro (Beelink's 395)
15
tl;dr: Good and bad, some "benchmarks" and details [here](https://gist.github.com/KMouratidis/88456bea439ea8d38f452bb6df289b58). Not sure I'd recommend it. Not yet. Hey y'all! Just like many others I wanted to try the 395, but since I mostly wanted it as a server first (and LLM runner third), I wanted one with 10 Gbps networking. The MS-S1 hadn't come out yet, so I went with the [Beelink GTR9 Pro AMD Ryzen™ AI Max+ 395](https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen-ai-max-395?variant=47842426224882), and \~25 days later it's here. I tried the preinstalled Windows, which functioned for a bit but quickly devolved into a mess that made me want to return it. Thankfully, I wanted it as a server, which means I'll be running Linux, but I had to test it. Plenty of crashes under load, the Intel network card not working, and other weirdness. Turns out there are plenty of known issues that may be hardware or driver related; plenty of posts and speculation in r/BeelinkOfficial, it has been going on for a couple of weeks it seems, and may also affect Linux, but oh well, time to move on. People suggest you use Fedora or Debian Sid, or anything with a recent kernel, and that's probably good advice for most people, but I ain't running Fedora for my server. I used a heavily configured DietPi (so basically Debian) instead, for no other reason than consistency with the rest of my (actually mini\*) servers. Surely the driver situation can't be that bad, right? Actually yes, it's perfectly fine to run Debian and I haven't had an issue yet, although it's early; let's see if it reaches even 10% of the uptime my TrueNAS server has.
After troubleshooting a few issues, installing the (hopefully) correct drivers, and building llama.cpp (lemonade and vLLM will have to wait until the weekend), I quickly tested a bunch of models, and the results I'm getting seem to roughly align with what others are getting ([1](https://www.reddit.com/r/LocalLLaMA/comments/1mqtnz7/comment/n8uhbp3/), [2](https://www.reddit.com/r/LocalLLaMA/comments/1mqtnz7/comment/n8wnxzc/), [3](https://forum.level1techs.com/t/strix-halo-ryzen-ai-max-395-llm-benchmark-results/233796), [4](https://github.com/lhl/strix-halo-testing/tree/main/llm-bench/gpt-oss-120b-F16)). I have documented everything in the [gist](https://gist.github.com/KMouratidis/88456bea439ea8d38f452bb6df289b58) (I think!). Out of the box, the Beelink runs with 96GB allocated as VRAM and can consume up to 170W without me messing with BIOS or Linux settings. In short, the results are exactly as you would expect: * GPT-OSS-120B is probably the best model to run * Flash Attention helps, but not always by a lot * Performance mode didn't do a thing and may even have been worse; graphics overclocking *seems* to help a bit with prefill/pp/input, but not by a lot * ECO still consumes 100W during inference, *but the performance hit can be as little as \~15% for \~45% less max power*, which is kinda insane but well-known by now that max power only gives marginal improvements * You must be dense if you expect to run dense models |Model|Size|Params|Backend|Test|Tokens/s (FA 0)|Tokens/s (FA 1)| |:-|:-|:-|:-|:-|:-|:-| |GLM-4.5-Air (Q4\_K\_XL)|68.01 GiB|110.47B|ROCm|pp512|142.90 ± 1.39|152.65 ± 1.49| |||||tg128|20.31 ± 0.07|20.83 ± 0.12| |Qwen3-30B (Q4\_K\_XL)|16.49 GiB|30.53B|ROCm|pp512|496.63 ± 11.29|503.25 ± 6.42| |||||tg128|63.26 ± 0.28|64.43 ± 0.71| |GPT-OSS-120B (F16)|60.87 GiB|116.83B|ROCm|pp512|636.25 ± 5.49|732.70 ± 5.99| |||||tg128|34.44 ± 0.01|34.60 ± 0.07| Happy to run tests / benchmarks or answer questions, but some stuff may need to wait for the weekend.
\---------- \* Bonus: I sent this photo of the Beelink with my old [Minisforum Z83-F](https://refurbished.minisforum.com/products/minisforum-z83-f-refurbished) to someone, joking about how mini PCs looked in 2015 vs in 2025. She thought the Minisforum was the one from 2025. [Beelink GTR9 Pro \(2025\) dwarfs its little bro, the Minisforum Z83-F \(2015\)](https://preview.redd.it/6hlbhs9lmpwf1.jpg?width=2304&format=pjpg&auto=webp&s=8ab4f30de9ecd33370f73213ebd0d121a055c041)
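Since the ECO claim sounds too good, here is the napkin math behind it. The \~15% / \~45% figures are my rough observations, not precise measurements, and the helper below is purely illustrative:

```python
# Rough efficiency comparison between max power and ECO mode,
# based on my observed numbers (~15% slower at ~45% less power).
def perf_per_watt(tokens_per_s: float, watts: float) -> float:
    """Tokens generated per joule of power draw."""
    return tokens_per_s / watts

max_mode = perf_per_watt(34.6, 170)                 # GPT-OSS-120B tg128 at max power
eco_mode = perf_per_watt(34.6 * 0.85, 170 * 0.55)   # ~15% slower, ~45% less power

print(f"max: {max_mode:.3f} tok/J")
print(f"eco: {eco_mode:.3f} tok/J")
print(f"eco/max efficiency ratio: {eco_mode / max_mode:.2f}x")
```

Under those assumptions ECO comes out roughly 1.5x more efficient per watt, which matches the "marginal gains at max power" observation.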
2025-10-22T19:00:20
https://www.reddit.com/r/LocalLLaMA/comments/1odh7ns/first_impressions_and_thoughts_on_the_gtr9_pro/
kmouratidis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odh7ns
false
null
t3_1odh7ns
/r/LocalLLaMA/comments/1odh7ns/first_impressions_and_thoughts_on_the_gtr9_pro/
false
false
https://b.thumbs.redditm…jG2kDR0kkA7U.jpg
15
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]}
Readline and Shift+Enter for Soft Enters in tmux
5
In case anyone's struggling with getting soft-enters in their terminal-based tools... (and using tmux): I make a lot of CLI tools, but recently have been doing some interactive readline versions. I needed Shift+Enter to do a soft enter (inserting the newline without committing the line -- like what you experience in many chats). While Konsole is sending out \^\[OM (esc+OM) (as seen by just running `cat` and hitting shift+enter), tmux was converting it to just an enter. After many futile chats with many LLMs (I'll spare you the details), I figured tmux itself might have hard-coded it in. Going through their source I found it: key-string.c:{ "KPEnter",KEYC_KP_ENTER|KEYC_KEYPAD }, tty-keys.c:{ "\033OM", KEYC_KP_ENTER|KEYC_KEYPAD }, <--- right there input-keys.c:{ .key = KEYC_KP_ENTER|KEYC_KEYPAD, input-keys.c:{ .key = KEYC_KP_ENTER, tmux.h:KEYC_KP_ENTER, tty-keys.c handles the keys coming from outside tmux Adding this to my .tmux.conf binds KPEnter to send out the same thing Konsole is sending out: bind-key -T root KPEnter send-keys Escape O M Now my own code is able to catch it. For what it's worth, I'm doing it in perl, and this is the code that catches alt+enter and shift+enter now, inserting newline into my text, and letting me continue typing: $term = Term::ReadLine->new("z") or die "Cannot create Term::ReadLine object"; # Define a readline function that inserts a newline when called: $term->add_defun("insert-newline", sub { my ($count, $key) = @_; $term->insert_text("\n"); }); # alt+enter was going through fine as esc-\n, so binding it was direct: $term->parse_and_bind('"\e\C-m": insert-newline'); # ESC+LF # shift+enter now sends esc+O+M which can now be bound: $term->parse_and_bind('"\eOM": insert-newline'); # ESC+O+M
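If you'd rather catch these sequences yourself (e.g. in raw-mode input handling instead of readline), the matching is trivial. A minimal Python sketch, with my own labels for the keys (the mapping covers only the sequences discussed above):

```python
# Map the raw escape sequences discussed above to logical keys.
# ESC O M is what Konsole (and now tmux, with the bind above) emits
# for Shift+Enter / KPEnter; ESC + newline is Alt+Enter.
KEYMAP = {
    b"\x1bOM": "soft-enter",   # Shift+Enter (KPEnter sequence)
    b"\x1b\n": "soft-enter",   # Alt+Enter
    b"\r": "accept-line",      # plain Enter in raw mode
    b"\n": "accept-line",      # plain Enter (cooked mode)
}

def decode_key(buf):
    """Return the logical key for a complete byte sequence, else None."""
    return KEYMAP.get(buf)

assert decode_key(b"\x1bOM") == "soft-enter"
```

In a real input loop you would buffer bytes after seeing ESC until a complete sequence matches, but the lookup itself is all there is to it.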
2025-10-22T18:56:36
https://www.reddit.com/r/LocalLLaMA/comments/1odh47h/readline_and_shiftenter_for_soft_enters_in_tmux/
jaggzh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odh47h
false
null
t3_1odh47h
/r/LocalLLaMA/comments/1odh47h/readline_and_shiftenter_for_soft_enters_in_tmux/
false
false
self
5
null
I Asked Grok, Claude, ChatGPT, and Google to Fix My Code (Are we really doomed?)
107
So yesterday I spent about 3 hours on an existing project, throwing it at Grok, Claude, and Google AI. It’s a painting editor — sort of a Photoshop-ish thing (complete with multi-undo and all that crap). I noticed the zoom in/out was chaotic. It was supposed to zoom around the cursor, but instead, it was jumping all over the place. So first, Grok. It noticed I did GDI+ dynamically and told me there’s no reason for that. The rewrite it came up with to “fix” my issue was a disaster — after multiple back-and-forths, it just kept getting worse. Also, Grok’s tendency to randomly change and add a lot of code didn’t help. Hahaha. Reverted back to my original code. ChatGPT — not enough tokens to feed the entire code on my tier, so ignored. Google AI… now that one has this funny habit of always agreeing with you. It just keeps spitting out the same code and saying, *“Now it’s perfectly fixed, this is the final version, I swear on Larry Page, I found the problem!”* No, it didn’t. To be fair, it was poking in the right places and found the functions that likely needed changing, but the result was still wrong. Again, the problem got even worse. Claude: same issue; it rewrote the code multiple times trying to find the bug, but never found it. But then I asked if maybe I was mixing up coordinates, and boom — Claude immediately said, yep, you’re mixing local and screen coordinates. (Didn't you notice that before?) And indeed, that was the broad culprit. Its fix then was halfway there — zoom in worked, but zoom out… the moment the image fit in the viewport, it started pushing everything to the bottom-right. (That's a new one!) Blah, blah, blah, couldn’t find the issue. So I threw in the towel and looked at the code myself. All these AIs missed that the offset was based on the **image center**. They were all calculating the offset from the top-left corner — and the funny thing is, all the relevant code was *right there* in front of them. I literally gave them everything.
Summary: Claude eventually found my local/screen coordinate mix-up (the reason zooming jumped all over the place — the functions themselves were fine, just working with the wrong coordinates), but none of them figured out the display logic. The offset is from the image center — zero means centered. I assume if I nudged Grok and Google in the right direction, they could eventually find the issue too. (It actually hadn't occurred to me that the coordinate mix-up was the cause until after I thought about it...) Here’s the current state of AI programming, in practice: there’s no way someone who doesn’t already know a thing or two about the project — and general graphics programming — could fix this with AI right now. On their own, all the AIs kept diverging from the right fix, touching half the codebase, when the real fix was just about four lines total. (Correct the screen-to-image coordinates, and when the image fits in the viewport, set the offset to zero — not `(viewport - image)/2`, like every single one of them insisted on doing, even though the original code has it zeroed - they all changed it!!!) Still, AI programming is a big WOW to me. But after 25 years of graphics programming, yeah… that still matters (for now) when things go pear-shaped like this.
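For anyone curious what the center-based version actually looks like, here is a simplified 1D sketch of the math. This is my own illustration of the technique, not the editor's real code; all names and numbers are made up:

```python
# Zoom around the cursor with a center-based offset (offset 0 = centered).
# Mapping: screen = viewport/2 + (p - image/2) * zoom + offset,
# where p is an image-space coordinate.

def screen_to_image(s, zoom, offset, viewport, image):
    """Invert the mapping: which image coordinate sits at screen coord s?"""
    return image / 2 + (s - viewport / 2 - offset) / zoom

def zoom_at(cursor, old_zoom, new_zoom, offset, viewport, image):
    """Return the new offset so the image point under the cursor stays put."""
    p = screen_to_image(cursor, old_zoom, offset, viewport, image)
    new_offset = cursor - viewport / 2 - (p - image / 2) * new_zoom
    # The part the models fought against: once the image fits, just center it.
    if image * new_zoom <= viewport:
        new_offset = 0
    return new_offset

# The image point under the cursor is unchanged after zooming in:
o2 = zoom_at(cursor=300, old_zoom=1.0, new_zoom=2.0, offset=40, viewport=800, image=1000)
p_before = screen_to_image(300, 1.0, 40, 800, 1000)
p_after = screen_to_image(300, 2.0, o2, 800, 1000)
assert abs(p_before - p_after) < 1e-9
```

The 2D version is the same formula per axis; the top-left-corner variant the AIs produced differs only by where the `image / 2` term sits, which is exactly why the code "looked" right to them.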
2025-10-22T18:51:11
https://www.reddit.com/r/LocalLLaMA/comments/1odgyxp/i_asked_grok_claude_chatgpt_and_google_to_fix_my/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odgyxp
false
null
t3_1odgyxp
/r/LocalLLaMA/comments/1odgyxp/i_asked_grok_claude_chatgpt_and_google_to_fix_my/
false
false
self
107
null
What can be run with Mac mini m4?
2
Hey everyone, I am curious whether an agentic coding LLM is possible on my Mac. I am lost about what is what, and I have little knowledge, pardon my ignorance, but I feel a lot of people seek some basic knowledge about which models are small, which ones are agentic, etc. Is there any website to check that?
2025-10-22T18:33:10
https://www.reddit.com/r/LocalLLaMA/comments/1odghg7/what_can_be_run_with_mac_mini_m4/
60finch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odghg7
false
null
t3_1odghg7
/r/LocalLLaMA/comments/1odghg7/what_can_be_run_with_mac_mini_m4/
false
false
self
2
null
olmoOCR 2 released, big quality improvements, fully open training data and code
149
Given the interest in OCR models recently, Ai2's release today should be on your radar. The weights, training data, and training code are all open, and you can try it for free here: [https://olmocr.allenai.org/](https://olmocr.allenai.org/) 📚 Blog: [https://allenai.org/blog/olmocr-2](https://allenai.org/blog/olmocr-2) 💻 Model: [https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8)
2025-10-22T18:22:09
https://allenai.org/blog/olmocr-2
whistling_frank
allenai.org
1970-01-01T00:00:00
0
{}
1odg6pz
false
null
t3_1odg6pz
/r/LocalLLaMA/comments/1odg6pz/olmoocr_2_released_big_quality_improvements_fully/
false
false
default
149
{'enabled': False, 'images': [{'id': 'jMHnzDUDDsA1xIrP_vxD1Z6TLTLk5mgpCRd-v7PwCn4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jMHnzDUDDsA1xIrP_vxD1Z6TLTLk5mgpCRd-v7PwCn4.png?width=108&crop=smart&auto=webp&s=05c112492ff8c44c819cfa0d1bdcfbe9695db642', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jMHnzDUDDsA1xIrP_vxD1Z6TLTLk5mgpCRd-v7PwCn4.png?width=216&crop=smart&auto=webp&s=56b194713366a0577473cbed59da9231e80554e3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jMHnzDUDDsA1xIrP_vxD1Z6TLTLk5mgpCRd-v7PwCn4.png?width=320&crop=smart&auto=webp&s=0a953690666d9cac25108a8ef6d6c326e7dd78cd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jMHnzDUDDsA1xIrP_vxD1Z6TLTLk5mgpCRd-v7PwCn4.png?width=640&crop=smart&auto=webp&s=60397e0ca48d71f38e4f50d1bb3a4a5210618f9a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jMHnzDUDDsA1xIrP_vxD1Z6TLTLk5mgpCRd-v7PwCn4.png?width=960&crop=smart&auto=webp&s=3cf68247732195d82ee97341a30a355898ff6a21', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jMHnzDUDDsA1xIrP_vxD1Z6TLTLk5mgpCRd-v7PwCn4.png?width=1080&crop=smart&auto=webp&s=f0990e19730a584f4b0e786073441e5d586c90ca', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/jMHnzDUDDsA1xIrP_vxD1Z6TLTLk5mgpCRd-v7PwCn4.png?auto=webp&s=5f625b2e54628c272e19b28ab29da77735507717', 'width': 2133}, 'variants': {}}]}
Introducing ExecuTorch 1.0
19
2025-10-22T18:17:14
https://pytorch.org/blog/introducing-executorch-1-0/
dayanruben
pytorch.org
1970-01-01T00:00:00
0
{}
1odg1wm
false
null
t3_1odg1wm
/r/LocalLLaMA/comments/1odg1wm/introducing_executorch_10/
false
false
default
19
null
Quick question for indie devs running LLMs/AI models
1
[removed]
2025-10-22T18:14:11
https://www.reddit.com/r/LocalLLaMA/comments/1odfyy5/quick_question_for_indie_devs_running_llmsai/
ex_aws_builder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odfyy5
false
null
t3_1odfyy5
/r/LocalLLaMA/comments/1odfyy5/quick_question_for_indie_devs_running_llmsai/
false
false
self
1
null
FS: Dual RTX 4090 Puget Systems TRX50 T120-XL • Threadripper 7960X • 128 GB ECC • $7K OBO • UPS Included Free (NY/NJ area)
0
Hi everyone! I’m downsizing and hoping to find a good home for my **Puget Systems TRX50 T120-XL** workstation. Purchased April 2025, used only a few hours for local inference testing. Moving soon and won’t have a 20 A outlet at the new place. **Specs:** • AMD Threadripper 7960X (24c/48t) • ASUS Pro WS TRX50-SAGE WIFI • Dual MSI RTX 4090 Ventus 3X E 24 GB (48 GB total VRAM) • 128 GB DDR5-5600 ECC RAM (4×32 GB Micron Reg ECC) • 4 TB NVMe storage (2× Kingston KC3000 2 TB) • 1600 W Titanium PSU (Super Flower Leadex) • Asetek 836S 360 mm Threadripper AIO • Original Puget crate + lifetime labor support + unused accessories **Free add-on at asking price:** • **CyberPower PR2000RT2UC Smart App Sinewave UPS** (barely used, boxed, $1.3K new) **Asking:** $7,000 OBO (includes UPS) **Location:** NY/NJ area — local pickup preferred, shipping possible in original crates **Timestamp + photos:** [https://imgur.com/a/EhAoEEQ](https://imgur.com/a/EhAoEEQ) Prefer full system sale, but open to ideas or advice if someone knows a safe place where rigs like this move fast.
2025-10-22T18:06:17
https://www.reddit.com/r/LocalLLaMA/comments/1odfram/fs_dual_rtx_4090_puget_systems_trx50_t120xl/
Aromatic_Wolverine86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odfram
false
null
t3_1odfram
/r/LocalLLaMA/comments/1odfram/fs_dual_rtx_4090_puget_systems_trx50_t120xl/
false
false
self
0
{'enabled': False, 'images': [{'id': 'sQiIVOauFf5IMkrd8zhCCKi6o5gkVy2hjpDWq4Mlylk', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/WRWAXud1d7cJp1cnXvP0RSbsKOCE-gYT12-ydgfzY8s.jpg?width=108&crop=smart&auto=webp&s=5283cef4f9719ed83e9219d7ab8b5f8fb9279366', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/WRWAXud1d7cJp1cnXvP0RSbsKOCE-gYT12-ydgfzY8s.jpg?width=216&crop=smart&auto=webp&s=d6a6f3be87bf15ca97ec515de9e611c15a3cebf1', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/WRWAXud1d7cJp1cnXvP0RSbsKOCE-gYT12-ydgfzY8s.jpg?width=320&crop=smart&auto=webp&s=661c888920b37413569a6d0615589b16033d8c10', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/WRWAXud1d7cJp1cnXvP0RSbsKOCE-gYT12-ydgfzY8s.jpg?width=640&crop=smart&auto=webp&s=0697aa43c4a52252aafed0e31ca83290283c7b6f', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/WRWAXud1d7cJp1cnXvP0RSbsKOCE-gYT12-ydgfzY8s.jpg?width=960&crop=smart&auto=webp&s=34acc95a498782b39826090e793b237fe5f7832f', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/WRWAXud1d7cJp1cnXvP0RSbsKOCE-gYT12-ydgfzY8s.jpg?width=1080&crop=smart&auto=webp&s=1009bad3d2a5b1eeb89fdc29591fd94b883dcc5e', 'width': 1080}], 'source': {'height': 2826, 'url': 'https://external-preview.redd.it/WRWAXud1d7cJp1cnXvP0RSbsKOCE-gYT12-ydgfzY8s.jpg?auto=webp&s=364cdd95e75288fb6ac7486badb8a765b10cc399', 'width': 1170}, 'variants': {}}]}
Qwen-MT Open Source?
2
Does anyone know if it is possible to download Qwen-MT? I would like to run translations via my proprietory VM. Thanks
2025-10-22T18:05:23
https://www.reddit.com/r/LocalLLaMA/comments/1odfqfq/qwenmt_open_source/
dr_progress
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odfqfq
false
null
t3_1odfqfq
/r/LocalLLaMA/comments/1odfqfq/qwenmt_open_source/
false
false
self
2
null
Devs, what are your experiences with Qwen3-coder-30b?
27
From code completion, method refactoring, to generating a full MVP project, how well does Qwen3-coder-30b perform? I have a desktop with 32GB DDR5 RAM and I'm planning to buy an RTX 5000 series with at least 16GB of VRAM. Can it handle the quantized version of this model well?
2025-10-22T17:43:11
https://www.reddit.com/r/LocalLLaMA/comments/1odf4ei/devs_what_are_your_experiences_with_qwen3coder30b/
AzRedx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odf4ei
false
null
t3_1odf4ei
/r/LocalLLaMA/comments/1odf4ei/devs_what_are_your_experiences_with_qwen3coder30b/
false
false
self
27
null
Ling-1T is very impressive – why are there no independent benchmarks?
75
Today, I finally had the chance to run some tests with ubergarm’s GGUF version of Ling-1T: [Hugging Face – Ling-1T-GGUF](https://huggingface.co/ubergarm/Ling-1T-GGUF) I focused on mathematical and reasoning tasks, and I have to say: I’m genuinely impressed. I only used the IQ2\_K quants, and Ling-1T solved every problem I threw at it, while keeping costs low thanks to its minimal token usage. But: I can’t find **any** independent benchmarks. No results on Artificial Analysis, LiveBench, Aider’s LLM Leaderboard, EQ-Bench… nothing beyond anecdotal impressions. What are your thoughts? Any ideas why this model seems to fly under the radar?
2025-10-22T17:40:56
https://www.reddit.com/r/LocalLLaMA/comments/1odf249/ling1t_is_very_impressive_why_are_there_no/
Snail_Inference
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odf249
false
null
t3_1odf249
/r/LocalLLaMA/comments/1odf249/ling1t_is_very_impressive_why_are_there_no/
false
false
self
75
{'enabled': False, 'images': [{'id': '3x-nz94zBN5kpeGGP9T-N30FgeUV8wXRE7FCAExZwr4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3x-nz94zBN5kpeGGP9T-N30FgeUV8wXRE7FCAExZwr4.png?width=108&crop=smart&auto=webp&s=d08835d2f8b28160d9c9804f83cd20644e5c59fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3x-nz94zBN5kpeGGP9T-N30FgeUV8wXRE7FCAExZwr4.png?width=216&crop=smart&auto=webp&s=75ae87e9fdc5da1323b816739221e8e5d745f7c7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3x-nz94zBN5kpeGGP9T-N30FgeUV8wXRE7FCAExZwr4.png?width=320&crop=smart&auto=webp&s=584581de1c770e5e8c6ba0b0b692516ac5a2ab02', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3x-nz94zBN5kpeGGP9T-N30FgeUV8wXRE7FCAExZwr4.png?width=640&crop=smart&auto=webp&s=bc36d657b700b3d347e4b73fe8c2ce101236f1eb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3x-nz94zBN5kpeGGP9T-N30FgeUV8wXRE7FCAExZwr4.png?width=960&crop=smart&auto=webp&s=6ab2ea3e3b9417c9721e6354319cc941e6316fc8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3x-nz94zBN5kpeGGP9T-N30FgeUV8wXRE7FCAExZwr4.png?width=1080&crop=smart&auto=webp&s=62afcfee9f7aca54e536e5488d0fc2fe7063ed74', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3x-nz94zBN5kpeGGP9T-N30FgeUV8wXRE7FCAExZwr4.png?auto=webp&s=68ad738637599d8dde4f21c776dbc5cbfaf6a9c9', 'width': 1200}, 'variants': {}}]}
AGI has already been disproven
0
AGI has already been disproven — at least within any foreseeable timeframe, it’s not going to appear. The current generation of agents isn’t capable enough to sustain such an enormous narrative about the future of AI. That’s why many people have shifted their hopes toward embodied intelligence, believing it to be the true path to real-world AI. But I’m not optimistic. Its problems are too obvious: the hardware isn’t capable enough, and the software isn’t truly intelligent yet. On the hardware side, progress has been stagnant for years. Most robots today rely on cameras to imitate motion, yet they have almost no sense of touch. Humans can pick up a sheet of paper steadily because our fingers can sense tiny changes in pressure and friction. Machines can see, but they can’t feel. Without tactile feedback, the more movements they perform, the greater the accumulated error. Even their legs pose real challenges. Modern humanoid robots still use control algorithms from years ago, relying on high-powered motors to maintain balance. If they fall, the consequences are severe. The videos that look stable are often recorded in tightly controlled environments with human intervention. The software issues run even deeper. Large language models can write essays or generate images, but they don’t understand physics. Their world is made of discrete words, not continuous reality. Asking them to control a robot is asking for trouble — a typo in text is a joke, but an error in the physical world is an accident. Current AI systems still lack a true sense of safety boundaries.
2025-10-22T17:37:31
https://www.reddit.com/r/LocalLLaMA/comments/1odeypb/agi_has_already_been_disproven/
uncleofchocolate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odeypb
false
null
t3_1odeypb
/r/LocalLLaMA/comments/1odeypb/agi_has_already_been_disproven/
false
false
self
0
null
Saving Agentic AI Deployment Cost via Knowledge Distillation
1
**Why Knowledge Distillation Matters in Enterprise AI** Large AI models are powerful — but also expensive to deploy and maintain. Running a 7B+ parameter model in production means high GPU memory usage, slow inference, and high operational costs. For enterprise AI systems that need real-time reasoning or on-device execution, this isn’t scalable. That’s where knowledge distillation comes in. Distillation allows us to compress intelligence — training a smaller model (the student) to imitate a larger, more capable model (the teacher). With [ToolBrain](https://github.com/ToolBrain/ToolBrain), this process becomes simple — especially when working with tool-using agents. ToolBrain is a free and open-source framework for teaching LLMs to use tools more effectively with reinforcement learning, and knowledge distillation is a built-in feature. Please read the full article on [Medium](https://medium.com/@lamdot09/saving-agentic-ai-deployment-cost-via-knowledge-distillation-c06d4ffe092e). # Results The following plot shows that a small model can learn from a large model and become very effective at using tools after only a few distillation steps. https://preview.redd.it/md0gazjn7pwf1.png?width=3072&format=png&auto=webp&s=1102de7b2ce2dbc74431c9f26b52a4f75d0daff6
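For context, the core distillation objective is easy to write down. Below is a minimal pure-Python sketch of temperature-scaled KL distillation — this illustrates the general technique only, not ToolBrain's actual API:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    z = [l / T for l in logits]
    m = max(z)                       # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) at temperature T, scaled by T^2 as is standard."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return (T * T) * sum(pt * (math.log(pt) - math.log(ps))
                         for pt, ps in zip(p_t, p_s))

teacher = [2.0, 0.5, -1.0]
assert distill_loss(teacher, teacher) < 1e-12     # identical distributions -> zero loss
assert distill_loss([0.0, 0.0, 0.0], teacher) > 0 # mismatch -> positive loss
```

In practice this term is computed per token over the teacher's and student's logits and minimized alongside (or instead of) the usual cross-entropy loss.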
2025-10-22T17:30:25
https://www.reddit.com/r/LocalLLaMA/comments/1oderhh/saving_agentic_ai_deployment_cost_via_knowledge/
Successful_Table_263
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oderhh
false
null
t3_1oderhh
/r/LocalLLaMA/comments/1oderhh/saving_agentic_ai_deployment_cost_via_knowledge/
false
false
https://b.thumbs.redditm…wsAit61O6v1U.jpg
1
{'enabled': False, 'images': [{'id': 'eSDFXR3PZyHdyzxDOXNQ-Pe9EAFjtUCQedcZFa4l_1M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eSDFXR3PZyHdyzxDOXNQ-Pe9EAFjtUCQedcZFa4l_1M.png?width=108&crop=smart&auto=webp&s=f986c1a46ab7527e55c2501924d04d5ac49f0573', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eSDFXR3PZyHdyzxDOXNQ-Pe9EAFjtUCQedcZFa4l_1M.png?width=216&crop=smart&auto=webp&s=10d2189b5bfb61e3f6eb6a7c48a411715f6e95c8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eSDFXR3PZyHdyzxDOXNQ-Pe9EAFjtUCQedcZFa4l_1M.png?width=320&crop=smart&auto=webp&s=a11d47ea46324657a5868e87a799e7d07278bfe6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eSDFXR3PZyHdyzxDOXNQ-Pe9EAFjtUCQedcZFa4l_1M.png?width=640&crop=smart&auto=webp&s=5cb860824bbf9804057713526e7ec843088b1314', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eSDFXR3PZyHdyzxDOXNQ-Pe9EAFjtUCQedcZFa4l_1M.png?width=960&crop=smart&auto=webp&s=e44b26a08341129b6dabb9e3bdb5f0200f795171', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eSDFXR3PZyHdyzxDOXNQ-Pe9EAFjtUCQedcZFa4l_1M.png?width=1080&crop=smart&auto=webp&s=17349472e59bb75bd6baf2a7d28377efebcc966b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eSDFXR3PZyHdyzxDOXNQ-Pe9EAFjtUCQedcZFa4l_1M.png?auto=webp&s=963bba6172593a650443229022be8b35c8d6207c', 'width': 1200}, 'variants': {}}]}
LM Studio running on Thunderbolt RTX eGPU "device lost" after sleep
1
So I'm struggling with this problem: I'm running LM Studio (0.3.25) on an NVIDIA RTX in a Thunderbolt enclosure. After a clean reboot, everything works as expected. Chatting, it's responding... But when I put my laptop to sleep and wake it up again, LM Studio will (almost?) always stop working. I make sure that - before I put the laptop to sleep or hibernate - I "Eject" the current model, and I close LM Studio. Then AFTER waking from sleep or hibernate, I restart LM Studio and reload the LLM. Everything seems to go fine, but when I send a message to the LLM it first pauses a little and never gets to the stage where it shows a "percentage". Instead, I will get: "Failed to generate AI response" "*This message contains no content. The AI has nothing to say.*" And it seems like ONLY a clean reboot will enable me to use LM Studio again. Now, the curious thing is that for example ComfyUI or Forge (with diffusion image generators) are FINE. So the eGPU IS definitely still available, actually. I wonder what the problem is, and if there is a workaround that allows me to keep using LM Studio WITHOUT going through a full reboot each time...
2025-10-22T17:25:26
https://www.reddit.com/r/LocalLLaMA/comments/1odemmc/lm_studio_running_on_thunderbolt_rtx_egpu_device/
NetworkSpecial3268
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odemmc
false
null
t3_1odemmc
/r/LocalLLaMA/comments/1odemmc/lm_studio_running_on_thunderbolt_rtx_egpu_device/
false
false
self
1
null
Can someone please create a benchmark for spatial information in images?
2
Rant: I'm so annoyed that the image-describing models (like the autocaptioners, but actually any multimodal LLM) are pathetically bad at getting left and right correct. You can easily get them confused by showing them an image of a person facing the camera (i.e. nearly all images with a person). When that person is holding something in the hand (cup of coffee, a sword, anything) or is doing something with that hand (opening a door, adjusting the glasses, anything) the models will most likely mix up left and right. Of course it is "difficult" that the right hand of a person facing the camera is on the left side of the image. But we have full-blown LLMs that are multimodal. They should easily be able to know that. And no, it's not one stupid model. It's Gemini's best (2.5), it's Qwen. And it was all earlier models that I used as captioners as well. So, to be constructive: can someone please generate a benchmark where it is judged how the models handle spatial information? Left and right is obvious but can become really complex, especially when camera left/right is mixed with subject left/right and multiple subjects are in the scene. Up/down and in-front/behind are also interesting use cases. And most interesting is when everything comes together. Actually, I think it shouldn't even be hard to create that benchmark. Using Blender and some scripting it should be possible to create artificial images that would be good enough here. I'm sure the current models will fail clearly. But such a benchmark would perhaps force the model creators to fix this annoying weakness.
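Generating the ground truth for such a benchmark really is trivial: for a person facing the camera, subject-side and image-side simply flip. A toy sketch of what item generation could look like (names and phrasing are just my illustration):

```python
import random

def image_side(subject_side, facing_camera):
    """Which side of the IMAGE a subject's hand appears on (mirror flip)."""
    if facing_camera:  # the subject's right hand shows up on image-left
        return "left" if subject_side == "right" else "right"
    return subject_side

def make_item(rng):
    """Generate one benchmark question with its ground-truth answer."""
    subject_side = rng.choice(["left", "right"])
    facing = rng.choice([True, False])
    prop = rng.choice(["a cup of coffee", "a sword", "a phone"])
    question = (f"A person {'facing' if facing else 'facing away from'} the camera "
                f"holds {prop} in their {subject_side} hand. "
                f"On which side of the image is {prop}?")
    return {"question": question, "answer": image_side(subject_side, facing)}

item = make_item(random.Random(0))
assert item["answer"] in ("left", "right")
```

Pairing each generated item with a Blender render of the same scene would give images plus exact labels, with no human annotation needed.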
2025-10-22T17:09:40
https://www.reddit.com/r/LocalLLaMA/comments/1ode77b/can_someone_please_create_a_benchmark_for_spatial/
StableLlama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ode77b
false
null
t3_1ode77b
/r/LocalLLaMA/comments/1ode77b/can_someone_please_create_a_benchmark_for_spatial/
false
false
self
2
null
The bug that taught me more about PyTorch than years of using it
6
Another banger blog by Elana!
2025-10-22T17:07:05
https://elanapearl.github.io/blog/2025/the-bug-that-taught-me-pytorch/
rajko_rad
elanapearl.github.io
1970-01-01T00:00:00
0
{}
1ode4nh
false
null
t3_1ode4nh
/r/LocalLLaMA/comments/1ode4nh/the_bug_that_taught_me_more_about_pytorch_than/
false
false
default
6
{'enabled': False, 'images': [{'id': 'wIFczsrtcevJhxKysJxxE4CE5_uQHqLT92beux54J5k', 'resolutions': [{'height': 137, 'url': 'https://external-preview.redd.it/wIFczsrtcevJhxKysJxxE4CE5_uQHqLT92beux54J5k.png?width=108&crop=smart&auto=webp&s=0ff66481a5fc32b522d6bfaa190ce9f0c7d1a3e0', 'width': 108}, {'height': 275, 'url': 'https://external-preview.redd.it/wIFczsrtcevJhxKysJxxE4CE5_uQHqLT92beux54J5k.png?width=216&crop=smart&auto=webp&s=f3cbc457de8e887430c053975f2c5ed6ce1b1a0a', 'width': 216}, {'height': 408, 'url': 'https://external-preview.redd.it/wIFczsrtcevJhxKysJxxE4CE5_uQHqLT92beux54J5k.png?width=320&crop=smart&auto=webp&s=8e3ded70ba01b17b8441e9d76b0abe05a27a4052', 'width': 320}], 'source': {'height': 646, 'url': 'https://external-preview.redd.it/wIFczsrtcevJhxKysJxxE4CE5_uQHqLT92beux54J5k.png?auto=webp&s=4dc27c6900d0f98c002ff5bb69e6c206d193acd1', 'width': 506}, 'variants': {}}]}
layer activation tracing
1
I am currently using llama.cpp but am open to other runtimes. I would like to get an understanding of the path a token takes through the decoder, i.e. through which layers of the GGUF file it travels. I know that this will probably look random, but I still want to give it a try. Does anyone know of a software tool that can help me with that?
2025-10-22T16:58:47
https://www.reddit.com/r/LocalLLaMA/comments/1oddw6j/layer_activation_tracing/
Maleficent-Koalabeer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oddw6j
false
null
t3_1oddw6j
/r/LocalLLaMA/comments/1oddw6j/layer_activation_tracing/
false
false
self
1
null
Is it possible to fully fine tuning LLaMA 2 7B on tpu-v4-8
2
I’m trying to reproduce the results from a paper, which trains a LLaMA 2 7B model for code generation on a 30k-sample dataset (10k each from Evol CodeAlpaca (Luo et al., 2023), Code-Alpaca (Chaudhary, 2023), and Tulu 3 Persona Python (Lambert et al., 2025)). The paper uses 8× A100 80 GB GPUs and achieves good performance on HumanEval and HumanEval+. My lab only has access to TPUs, specifically a TPU v4-8, so I’ve been trying to adapt their GitHub repo to run on TPUs, but I keep getting OOM errors. I have tried reducing the max sequence length, and I’ve tried using Fully Sharded Data Parallel (FSDP) via PyTorch XLA, but training either fails with OOM during compilation or gives poor results on the validation set. Is it possible to fully fine-tune a 7B model on a TPU v4-8 using PyTorch? And does what I am doing even make sense?
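For a rough sense of why a v4-8 OOMs, a back-of-envelope estimate helps (a sketch; assumes vanilla Adam and ignores activations and XLA compilation buffers):

```python
def full_ft_memory_gb(n_params, param_bytes=4, grad_bytes=4, optim_bytes=8):
    """Weights + gradients + Adam's two moment buffers; activations excluded."""
    return n_params * (param_bytes + grad_bytes + optim_bytes) / 1e9

# LLaMA 2 7B, everything in fp32: ~112 GB before a single activation.
fp32_gb = full_ft_memory_gb(7e9)

# bf16 weights/grads with fp32 Adam states: ~84 GB, still tight once
# activations are added on top.
bf16_gb = full_ft_memory_gb(7e9, param_bytes=2, grad_bytes=2)

print(f"fp32: {fp32_gb:.0f} GB, bf16 + fp32 Adam: {bf16_gb:.0f} GB")
```

With roughly 32 GB of HBM per v4 chip (on the order of 128 GB for the slice), this suggests full fine-tuning only fits with aggressive sharding, bf16, activation checkpointing, and short sequences; otherwise LoRA-style tuning may be the practical route.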
2025-10-22T16:58:47
https://www.reddit.com/r/LocalLLaMA/comments/1oddw6u/is_it_possible_to_fully_fine_tuning_llama_2_7b_on/
Bhristopherr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oddw6u
false
null
t3_1oddw6u
/r/LocalLLaMA/comments/1oddw6u/is_it_possible_to_fully_fine_tuning_llama_2_7b_on/
false
false
self
2
null
Free GPU memory during local LLM inference without KV cache hogging VRAM
38
We are building [kvcached](https://github.com/ovg-project/kvcached), a library that lets local LLM inference engines such as **SGLang** and **vLLM** free idle KV cache memory instead of occupying the entire GPU. This allows you to run a model locally without using all available VRAM, so other applications can still run or even share the GPU. * ✅ Works out of the box with SGLang and vLLM * 🔧 Support for Ollama and LM Studio is in progress * 🧩 No changes to your model or prompts required * 🚀 Install with pip and it runs out of the box Our code is open source: [https://github.com/ovg-project/kvcached](https://github.com/ovg-project/kvcached) Deep dive blog for those interested in the techniques behind it: [https://yifanqiao.notion.site/Solve-the-GPU-Cost-Crisis-with-kvcached-289da9d1f4d68034b17bf2774201b141](https://yifanqiao.notion.site/Solve-the-GPU-Cost-Crisis-with-kvcached-289da9d1f4d68034b17bf2774201b141) We would love feedback from the local LLM community. If you want to run multiple models on one GPU, combine LLMs with other GPU applications, or simply reduce memory usage, feel free to try it out and ask questions. Happy to discuss and improve together 🙌
2025-10-22T16:40:37
https://www.reddit.com/r/LocalLLaMA/comments/1odddyg/free_gpu_memory_during_local_llm_inference/
ivaniumr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odddyg
false
null
t3_1odddyg
/r/LocalLLaMA/comments/1odddyg/free_gpu_memory_during_local_llm_inference/
false
false
self
38
{'enabled': False, 'images': [{'id': 'Qexkfi7FQ3mBNXUsI349OiBIZqFoa4py3iuRmtBXCE0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qexkfi7FQ3mBNXUsI349OiBIZqFoa4py3iuRmtBXCE0.png?width=108&crop=smart&auto=webp&s=3d4a1569ef45ad35817310e1a90e6a42b16cda86', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qexkfi7FQ3mBNXUsI349OiBIZqFoa4py3iuRmtBXCE0.png?width=216&crop=smart&auto=webp&s=d8d9862fdd2b1236dacacac63e8e42a63714f7d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qexkfi7FQ3mBNXUsI349OiBIZqFoa4py3iuRmtBXCE0.png?width=320&crop=smart&auto=webp&s=0a863c52b48987e4de20a34c634bb49bdcf6b6be', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qexkfi7FQ3mBNXUsI349OiBIZqFoa4py3iuRmtBXCE0.png?width=640&crop=smart&auto=webp&s=3a119b694af4a1e62532c3a6e7da37245e136c66', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qexkfi7FQ3mBNXUsI349OiBIZqFoa4py3iuRmtBXCE0.png?width=960&crop=smart&auto=webp&s=7b95d31f001c8f141342bd1b416c97d01645eb36', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qexkfi7FQ3mBNXUsI349OiBIZqFoa4py3iuRmtBXCE0.png?width=1080&crop=smart&auto=webp&s=770af2be905c3601c038ee454755669b31e10ea7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qexkfi7FQ3mBNXUsI349OiBIZqFoa4py3iuRmtBXCE0.png?auto=webp&s=736417b21d2fcc55b89f9c2b5b42777fb689efb6', 'width': 1200}, 'variants': {}}]}
How are people syncing and indexing data from tools like Gmail or Slack for RAG?
4
I’ve been exploring how to make personal assistants or knowledge tools that understand your email and calendar context. The tricky part is data freshness and scale: do you sync and embed everything in a vector DB, or just fetch data on demand? If you’ve built anything similar: * How do you handle syncing without hitting API limits? * What’s your setup for embedding large text (emails, threads, docs)? * Are there better ways to structure this than just a RAG pipeline? Curious how others are thinking about retrieval and context for personal data.
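One pattern that has worked for me (a hedged sketch, not tied to any particular provider SDK): keep a per-source sync cursor so only new items get embedded, and wrap page fetches in exponential backoff so rate limits slow the sync down instead of failing it:

```python
import time

class RateLimitError(Exception):
    """Raised by the fetcher when the provider returns a 429."""

def fetch_with_backoff(fetch_page, cursor, max_retries=5, base_delay=1.0,
                       sleep=time.sleep):
    """Fetch one page of items, doubling the wait after each rate-limit hit."""
    for attempt in range(max_retries):
        try:
            return fetch_page(cursor)
        except RateLimitError:
            sleep(base_delay * 2 ** attempt)
    raise RuntimeError("rate limit: retries exhausted")

def incremental_sync(fetch_page, embed, state):
    """Pull only items newer than the stored cursor and embed just those."""
    items, new_cursor = fetch_with_backoff(fetch_page, state.get("cursor"))
    for item in items:
        embed(item)                 # upsert into the vector DB keyed by item id
    state["cursor"] = new_cursor    # persist so the next run starts here
    return len(items)
```

Persisting the cursor (e.g. Gmail's historyId or a Slack message timestamp) is what keeps re-embedding costs bounded as the mailbox grows.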
2025-10-22T16:39:45
https://www.reddit.com/r/LocalLLaMA/comments/1oddd4r/how_are_people_syncing_and_indexing_data_from/
BriefCardiologist656
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oddd4r
false
null
t3_1oddd4r
/r/LocalLLaMA/comments/1oddd4r/how_are_people_syncing_and_indexing_data_from/
false
false
self
4
null
Free GPU memory during local LLM inference without KV cache hogging VRAM
2
We are building [kvcached](https://github.com/ovg-project/kvcached), a library that lets local LLM inference engines such as **SGLang** and **vLLM** free idle KV cache memory instead of occupying the entire GPU. This allows you to run a model locally without using all available VRAM, so other applications can still run or even share the GPU. * ✅ Works out of the box with SGLang and vLLM * 🔧 Support for Ollama and LM Studio is in progress * 🧩 No changes to your model or prompts required * 🚀 Install with pip and it runs out of the box Our code is open source: [https://github.com/ovg-project/kvcached](https://github.com/ovg-project/kvcached) Deep dive blog for those interested in the techniques behind it: [https://yifanqiao.notion.site/Solve-the-GPU-Cost-Crisis-with-kvcached-289da9d1f4d68034b17bf2774201b141](https://yifanqiao.notion.site/Solve-the-GPU-Cost-Crisis-with-kvcached-289da9d1f4d68034b17bf2774201b141) We would love feedback from the local LLM community. If you want to run multiple models on one GPU, combine LLMs with other GPU applications, or simply reduce memory usage, feel free to try it out and ask questions. Happy to discuss and improve together 🙌
2025-10-22T16:36:20
https://www.reddit.com/r/LocalLLaMA/comments/1odd9r0/free_gpu_memory_during_local_llm_inference/
ivaniumr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odd9r0
false
null
t3_1odd9r0
/r/LocalLLaMA/comments/1odd9r0/free_gpu_memory_during_local_llm_inference/
false
false
self
2
null
Qwen3-VL-2B GGUF is here
2
GGUFs are available (note: currently only NexaSDK supports the Qwen3-VL-2B GGUF model): [https://huggingface.co/NexaAI/Qwen3-VL-2B-Thinking-GGUF](https://huggingface.co/NexaAI/Qwen3-VL-2B-Thinking-GGUF) [https://huggingface.co/NexaAI/Qwen3-VL-2B-Instruct-GGUF](https://huggingface.co/NexaAI/Qwen3-VL-2B-Instruct-GGUF) Here's a quick demo of it counting circles: 155 t/s on M4 Max. Quickstart in 2 steps: * Step 1: Download [NexaSDK](https://github.com/NexaAI/nexa-sdk) with one click * Step 2: one line of code to run in your terminal: `nexa infer NexaAI/Qwen3-VL-2B-Instruct-GGUF` or `nexa infer NexaAI/Qwen3-VL-2B-Thinking-GGUF`
2025-10-22T16:08:01
https://www.reddit.com/r/LocalLLaMA/comments/1odcib3/qwen3vl2b_gguf_is_here/
AlanzhuLy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odcib3
false
null
t3_1odcib3
/r/LocalLLaMA/comments/1odcib3/qwen3vl2b_gguf_is_here/
false
false
self
2
{'enabled': False, 'images': [{'id': 'W7AGVvstE9pre0GHVfitXRaqvaJcdswOTBT90OSCLds', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/W7AGVvstE9pre0GHVfitXRaqvaJcdswOTBT90OSCLds.png?width=108&crop=smart&auto=webp&s=15a1adaf9db61360d898c6aea7243641c19aa93f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/W7AGVvstE9pre0GHVfitXRaqvaJcdswOTBT90OSCLds.png?width=216&crop=smart&auto=webp&s=d37a58b1d6f062485ab50d0880a6e3231ec7dd71', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/W7AGVvstE9pre0GHVfitXRaqvaJcdswOTBT90OSCLds.png?width=320&crop=smart&auto=webp&s=395490c9487dd99a2ab27a9265ec319cf5fa565d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/W7AGVvstE9pre0GHVfitXRaqvaJcdswOTBT90OSCLds.png?width=640&crop=smart&auto=webp&s=0b8a9fdf290f4fb4c4b047a45249827638e70e25', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/W7AGVvstE9pre0GHVfitXRaqvaJcdswOTBT90OSCLds.png?width=960&crop=smart&auto=webp&s=ea5cd848220307e7c5169cceb6e9abd8697db15c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/W7AGVvstE9pre0GHVfitXRaqvaJcdswOTBT90OSCLds.png?width=1080&crop=smart&auto=webp&s=9e7e7ee1964f5270cf3fe515e47195f3ff20c7e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/W7AGVvstE9pre0GHVfitXRaqvaJcdswOTBT90OSCLds.png?auto=webp&s=810a87fb5c0aba8f6c49d3265df2e25af12208f8', 'width': 1200}, 'variants': {}}]}
The Next Generation of Founders Will Build With AI as a Partner, Not a Tool
0
I think the next wave of great startups won’t just use AI, they’ll build alongside it. That’s how I approached [ember.do](http://ember.do). Instead of adding “AI features,” I let AI act as a co-founder. It reviews your business plan, detects blind spots, and suggests smarter moves. It doesn’t just automate tasks, it thinks with you. We’re entering an era where your second co-founder might not be human, and that’s not scary; it’s powerful. Curious: how many of you are integrating AI deeper into your core product, not just as a side feature?
2025-10-22T16:00:42
https://www.reddit.com/r/LocalLLaMA/comments/1odcawt/the_next_generation_of_founders_will_build_with/
Ok-Fan-6434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odcawt
false
null
t3_1odcawt
/r/LocalLLaMA/comments/1odcawt/the_next_generation_of_founders_will_build_with/
false
false
self
0
null
The security paradox of local LLMs
0
2025-10-22T15:58:18
https://quesma.com/blog/local-llms-security-paradox/
svacko
quesma.com
1970-01-01T00:00:00
0
{}
1odc8h2
false
null
t3_1odc8h2
/r/LocalLLaMA/comments/1odc8h2/the_security_paradox_of_local_llms/
false
false
default
0
{'enabled': False, 'images': [{'id': '7v-C3aY3iVb1D8DFwD4STRtA6plkUIhWsA4zs5JGd5I', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/7v-C3aY3iVb1D8DFwD4STRtA6plkUIhWsA4zs5JGd5I.png?width=108&crop=smart&auto=webp&s=79ec2d5e1bad2dd08c0377e8b5d9c055b34a7f66', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/7v-C3aY3iVb1D8DFwD4STRtA6plkUIhWsA4zs5JGd5I.png?width=216&crop=smart&auto=webp&s=54132c3588ad0004a1c463786f77c5004b9c2f16', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/7v-C3aY3iVb1D8DFwD4STRtA6plkUIhWsA4zs5JGd5I.png?width=320&crop=smart&auto=webp&s=1f3800440146e50b4bf166b8c88e6ad31088da10', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/7v-C3aY3iVb1D8DFwD4STRtA6plkUIhWsA4zs5JGd5I.png?width=640&crop=smart&auto=webp&s=98d7fa83ca4e30bde0d97d40ab162d8d4e4110b1', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/7v-C3aY3iVb1D8DFwD4STRtA6plkUIhWsA4zs5JGd5I.png?width=960&crop=smart&auto=webp&s=1363585ba966cd76fa573824089e794f5d754b94', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/7v-C3aY3iVb1D8DFwD4STRtA6plkUIhWsA4zs5JGd5I.png?width=1080&crop=smart&auto=webp&s=1fb624fd850628656ba069cee9c5f3a9b7116c9a', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/7v-C3aY3iVb1D8DFwD4STRtA6plkUIhWsA4zs5JGd5I.png?auto=webp&s=5413e48a93437f98ccba53df98adccb8d96e21bc', 'width': 1536}, 'variants': {}}]}
LFM2-VL 3B released today
75
New **LFM2-VL 3B** version released by LiquidAI today. * [Blog post](https://www.liquid.ai/blog/lfm2-vl-3b-a-new-efficient-vision-language-for-the-edge) * [HuggingFace ](https://huggingface.co/LiquidAI/LFM2-VL-3B)page * Available quant: [GGUF](https://huggingface.co/LiquidAI/LFM2-VL-3B-GGUF) |**Model**|Average|MMStar|MMMU (val)|MathVista|BLINK|InfoVQA (val)|MMBench (dev en)|OCRBench|POPE|RealWorldQA|MME|MM-IFEval|SEEDBench| |:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-| |**InternVL3\_5-2B**|66.63|57.67|51.78|61.6|50.97|69.29|78.18|834|87.17|60.78|2,128.83|47.31|75.41| |**Qwen2.5-VL-3B**|66.61|56.13|51.67|62.5|48.97|76.12|80.41|824|86.17|65.23|2,163.29|38.62|73.88| |**InternVL3-2B**|66.46|61.1|48.7|57.6|53.1|66.1|81.1|831|90.1|65.1|2,186.40|38.49|74.95| |**SmolVLM2-2.2B**|54.85|46|41.6|51.5|42.3|37.75|69.24|725|85.1|57.5|1792.5|19.42|71.3| |**LFM2-VL-3B**|**67.31**|**57.73**|**45.33**|**62.2**|**51.03**|**67.37**|**79.81**|**822**|**89.01**|**71.37**|**2,050.90**|**51.83**|**76.55**| Table from: [https://www.liquid.ai/blog/lfm2-vl-3b-a-new-efficient-vision-language-for-the-edge](https://www.liquid.ai/blog/lfm2-vl-3b-a-new-efficient-vision-language-for-the-edge)
2025-10-22T15:46:03
https://www.reddit.com/r/LocalLLaMA/comments/1odbwjj/lfm2vl_3b_released_today/
cruncherv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odbwjj
false
null
t3_1odbwjj
/r/LocalLLaMA/comments/1odbwjj/lfm2vl_3b_released_today/
false
false
self
75
{'enabled': False, 'images': [{'id': 'Sm9xk0S8oNZqDFcBuFJNjvOXN7_U_fgEzB4DKF3rxNo', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/Sm9xk0S8oNZqDFcBuFJNjvOXN7_U_fgEzB4DKF3rxNo.png?width=108&crop=smart&auto=webp&s=9d89e03e7cfccf675ff78c5608e33bb5b5e8da54', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/Sm9xk0S8oNZqDFcBuFJNjvOXN7_U_fgEzB4DKF3rxNo.png?width=216&crop=smart&auto=webp&s=ad8d1fd464c11e213da50170117908eb78192807', 'width': 216}, {'height': 229, 'url': 'https://external-preview.redd.it/Sm9xk0S8oNZqDFcBuFJNjvOXN7_U_fgEzB4DKF3rxNo.png?width=320&crop=smart&auto=webp&s=911eb6c668e8602a41ef99f20bef5176491c164d', 'width': 320}, {'height': 459, 'url': 'https://external-preview.redd.it/Sm9xk0S8oNZqDFcBuFJNjvOXN7_U_fgEzB4DKF3rxNo.png?width=640&crop=smart&auto=webp&s=be8312410960633629be94dc06e2de45bc6f66ea', 'width': 640}, {'height': 688, 'url': 'https://external-preview.redd.it/Sm9xk0S8oNZqDFcBuFJNjvOXN7_U_fgEzB4DKF3rxNo.png?width=960&crop=smart&auto=webp&s=658d518b6386cb1a9bbab19d4963728af97a28a5', 'width': 960}, {'height': 774, 'url': 'https://external-preview.redd.it/Sm9xk0S8oNZqDFcBuFJNjvOXN7_U_fgEzB4DKF3rxNo.png?width=1080&crop=smart&auto=webp&s=0e9b3c7acac206d95f04f56a5f419bbf10c5b852', 'width': 1080}], 'source': {'height': 1476, 'url': 'https://external-preview.redd.it/Sm9xk0S8oNZqDFcBuFJNjvOXN7_U_fgEzB4DKF3rxNo.png?auto=webp&s=7a2c97d3a0dcb43c762c6ec5da65dcafde04f91d', 'width': 2058}, 'variants': {}}]}
LFM2-VL 3B released today
1
New **LFM2-VL 3B** version released by LiquidAI today. * [Blog post](https://www.liquid.ai/blog/lfm2-vl-3b-a-new-efficient-vision-language-for-the-edge) * [HuggingFace](https://huggingface.co/LiquidAI/LFM2-VL-3B) page * Available quant: [GGUF](https://huggingface.co/LiquidAI/LFM2-VL-3B-GGUF) |Model|Average|MMStar|MMMU (val)|MathVista|BLINK|InfoVQA (val)|MMBench (dev en)|OCRBench|POPE|RealWorldQA|MME|MM-IFEval|SEEDBench| |:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-| |InternVL3\_5-2B|66.63|57.67|51.78|61.6|50.97|69.29|78.18|834|87.17|60.78|2,128.83|47.31|75.41| |Qwen2.5-VL-3B|66.61|56.13|51.67|62.5|48.97|76.12|80.41|824|86.17|65.23|2,163.29|38.62|73.88| |InternVL3-2B|66.46|61.1|48.7|57.6|53.1|66.1|81.1|831|90.1|65.1|2,186.40|38.49|74.95| |SmolVLM2-2.2B|54.85|46|41.6|51.5|42.3|37.75|69.24|725|85.1|57.5|1792.5|19.42|71.3| |LFM2-VL-3B|67.31|57.73|45.33|62.2|51.03|67.37|79.81|822|89.01|71.37|2,050.90|51.83|76.55| Table from: [https://www.liquid.ai/blog/lfm2-vl-3b-a-new-efficient-vision-language-for-the-edge](https://www.liquid.ai/blog/lfm2-vl-3b-a-new-efficient-vision-language-for-the-edge)
2025-10-22T15:42:26
https://www.reddit.com/r/LocalLLaMA/comments/1odbsuo/lfm2vl_3b_released_today/
cruncherv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odbsuo
false
null
t3_1odbsuo
/r/LocalLLaMA/comments/1odbsuo/lfm2vl_3b_released_today/
false
false
self
1
null
LMStudio - Now has GLM 4.6 Support (CUDA)
32
Hey, just so you know, LMStudio seems to now have GLM 4.6 support. Yay. I'm getting 2.99 tokens a second when generating 3000 tokens using 1 3090 and PC RAM. Model: Unsloth GLM 4.6 UD - Q3\_K\_XL (147.22GB) Hardware setup: single 3090 + 14700K with 192GB RAM DDR5333. (14700K limited to 250Watts) NOTE: Getting a buffer related error when trying to offload layers onto 2x 3090s.
2025-10-22T15:26:02
https://www.reddit.com/r/LocalLLaMA/comments/1odbco8/lmstudio_now_has_glm_46_support_cuda/
YouAreRight007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odbco8
false
null
t3_1odbco8
/r/LocalLLaMA/comments/1odbco8/lmstudio_now_has_glm_46_support_cuda/
false
false
self
32
null
Feasibility Check: Modifying DeepSeek-OCR (2510.18234) into an Instruction-Following Document VLM?
13
Hey everyone I've been digging into the new DeepSeek-OCR paper (arXiv: 2510.18234), and its DeepEncoder looks like a game-changer for handling high-resolution, dense documents with its high-compression ratio. As I understand it, the model in its current form is a pure OCR engine, with a workflow of: Image -> [Encoder -> Decoder] -> Full Text (It seems it's not designed to take text instructions, only image inputs). I'm wondering about the feasibility of modifying this to become an instruction-following Visual Language Model (VLM) for documents. The Core Idea: To change the workflow to: Image + Text Instruction -> Specific Answer For example: * Input: (Image of an invoice) + "Extract the final total." * Output: "$450.72" * Input: (Image of a paper) + "Summarize the abstract." * Output: "The paper introduces a novel optical compression engine..." Proposed High-Level Approach: Since the base model only accepts images, a modification would be necessary: * Keep the DeepEncoder: Leverage the pre-trained DeepEncoder as the powerful, high-resolution vision backbone. * Modify the Architecture: This is the key step. We would need to adapt the model (likely the DeepSeek3B-MoE decoder part) to accept two types of input simultaneously: * The vision_tokens (from the document via the Encoder/Projector). * The text_tokens (from the user's new instruction). * Instruction Fine-Tune: Re-train (SFT) this modified model on a new dataset of (image, instruction, answer) pairs. This would teach the LLM decoder to reason based on the combined inputs, rather than just transcribe the visual input. My Questions: * Is this a sound approach? Does this architectural modification make sense? * Has anyone tried this? I know of models like LLaVA, Donut, etc., but the appeal here is starting with DeepSeek's SOTA document-specific encoder, rather than a general-purpose one like CLIP. * What are the biggest challenges? I assume preventing "catastrophic forgetting" (i.e., making sure it can still do basic OCR) would be one. How hard is it to get the model to properly attend to both the image and text instructions? Would love to hear any thoughts or see if I'm missing a more obvious path. Thanks!
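The input-fusion step in bullet 2 is essentially the LLaVA recipe: project encoder features into the decoder's embedding space and prepend them to the embedded instruction tokens. A minimal numpy sketch (shapes and names are illustrative, not DeepSeek's actual code):

```python
import numpy as np

def build_decoder_inputs(vision_feats, text_ids, proj, embed_table):
    """Concatenate projected vision tokens with embedded instruction tokens.

    vision_feats: (n_vis, d_vis) features from the frozen DeepEncoder
    proj:         (d_vis, d_model) trainable projector into the LM space
    text_ids:     (n_txt,) instruction token ids
    embed_table:  (vocab, d_model) the decoder's embedding matrix
    """
    vis = vision_feats @ proj          # (n_vis, d_model)
    txt = embed_table[text_ids]        # (n_txt, d_model)
    return np.concatenate([vis, txt], axis=0)   # (n_vis + n_txt, d_model)

rng = np.random.default_rng(0)
seq = build_decoder_inputs(
    vision_feats=rng.normal(size=(16, 64)),
    text_ids=np.array([1, 5, 9]),
    proj=rng.normal(size=(64, 32)),
    embed_table=rng.normal(size=(100, 32)),
)
print(seq.shape)  # (19, 32)
```

During SFT the loss is typically masked so it is computed only on the answer tokens; mixing plain OCR pairs into the instruction data is the usual guard against catastrophic forgetting.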
2025-10-22T15:09:45
https://www.reddit.com/r/LocalLLaMA/comments/1odax0g/feasibility_check_modifying_deepseekocr_251018234/
hiiamtin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odax0g
false
null
t3_1odax0g
/r/LocalLLaMA/comments/1odax0g/feasibility_check_modifying_deepseekocr_251018234/
false
false
self
13
null
Running whisper-large-v3-turbo (OpenAI) Exclusively on AMD Ryzen™ AI NPU
39
# About the Demo * **Workflow:** `whisper-large-v3-turbo` transcribes audio; `gpt-oss:20b` generates the summary. Both models are **pre-loaded** on the NPU. * **Settings:** `gpt-oss:20b` reasoning effort = **High**. * **Test system:** ASRock 4X4 BOX-AI340 Mini PC (Kraken Point), 96 GB RAM. * **Software:** FastFlowLM (CLI mode). # About FLM We’re a small team building **FastFlowLM (FLM)** — a fast runtime for running **Whisper (Audio)**, **GPT-OSS (first MoE on NPUs), Gemma3 (vision), Medgemma,** **Qwen3,** **DeepSeek-R1**, **LLaMA3.x,** and others **entirely on the AMD Ryzen AI NPU**. Think **Ollama (maybe llama.cpp since we have our own backend?)**, but deeply optimized for AMD NPUs — with both **CLI** and **Server Mode (OpenAI-compatible)**. ✨ **From Idle Silicon to Instant Power — FastFlowLM (FLM) Makes Ryzen™ AI Shine.** # Key Features * No GPU fallback * **Faster and over 10× more power efficient.** * **Supports context lengths up to 256k tokens (qwen3:4b-2507).** * **Ultra-Lightweight (16 MB). Installs within 20 seconds.** # Try It Out * **GitHub:** [github.com/FastFlowLM/FastFlowLM](https://github.com/FastFlowLM/FastFlowLM) * **Live Demo → Remote machine access on the repo page** * **YouTube Demos:** [FastFlowLM - YouTube](https://www.youtube.com/@FastFlowLM-YT/playlists) We’re iterating fast and would **love your feedback, critiques, and ideas**🙏
2025-10-22T15:07:57
https://youtu.be/0t8ijUPg4A0?si=539G5mrICJNOwe6Z
BandEnvironmental834
youtu.be
1970-01-01T00:00:00
0
{}
1odavba
false
{'oembed': {'author_name': 'FastFlowLM', 'author_url': 'https://www.youtube.com/@FastFlowLM-YT', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/0t8ijUPg4A0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Whisper-large-v3-turbo (OpenAI) — 100% Powered by AMD Ryzen™ AI NPU (CLI Mode)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/0t8ijUPg4A0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Whisper-large-v3-turbo (OpenAI) — 100% Powered by AMD Ryzen™ AI NPU (CLI Mode)', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
t3_1odavba
/r/LocalLLaMA/comments/1odavba/running_whisperlargev3turbo_openai_exclusively_on/
false
false
https://external-preview…59c3aba1535a7d07
39
{'enabled': False, 'images': [{'id': 'y7g9bbQ0RPFXfYX-nbtW899i8fH3DJk9OyQ8tRM7MG4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/y7g9bbQ0RPFXfYX-nbtW899i8fH3DJk9OyQ8tRM7MG4.jpeg?width=108&crop=smart&auto=webp&s=90247130172f0a9db7062bd32f8068772a9e47e4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/y7g9bbQ0RPFXfYX-nbtW899i8fH3DJk9OyQ8tRM7MG4.jpeg?width=216&crop=smart&auto=webp&s=9bf82ff634752d05a628d10170b6fa1f83d54b6e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/y7g9bbQ0RPFXfYX-nbtW899i8fH3DJk9OyQ8tRM7MG4.jpeg?width=320&crop=smart&auto=webp&s=b115fd1754530c90df1effac911d91b79c4dfb2b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/y7g9bbQ0RPFXfYX-nbtW899i8fH3DJk9OyQ8tRM7MG4.jpeg?auto=webp&s=b7ea49aa4bc621a7ce0528f1eca75547e19f4abd', 'width': 480}, 'variants': {}}]}
GPT-OSS-20b TAKE THE HELM! Further experiments in autopilot.
22
[Github...](https://github.com/Deveraux-Parker/GPT-OSS-20b-TAKE-THE-WHEEL) After fiddling around the other day I did a little more messing with gpt-oss-20b and prompting to get it to be a bit more reliable at flying/shooting/controlling the spaceship. The basic idea is that the system calculates bad and good control choices and feeds the AI a list of options with pre-filled "thinking" on the choices that encourage it to make correct choices. It is still given agency and does deviate from perfect flight from time to time (and will eventually crash as you see here). To allow fast-paced decision making, this whole stack is running gpt-oss-20b in VLLM on a 4090, and since each generation is only looking to output a single token (that represents a single control input), it allows the system to run in near-realtime. The look-ahead code tries to predict and mitigate the already low latency and the result is an autopilot that is actually reasonably good at flying the ship. I went ahead and collapsed everything into a single HTML file if you feel like messing with it, and tossed it at the github link above. You'll need an openAI spec API to use it with gpt-oss-20b running on port 8005 (or have to edit the file appropriately to match your own system).
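The near-realtime trick is that each decision is a one-token completion. A hedged sketch of the request body for an OpenAI-compatible server on port 8005 (field values and option labels are illustrative):

```python
import json

def control_request(options_prompt, model="gpt-oss-20b"):
    """Build a chat-completion body that returns exactly one control token."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": options_prompt}],
        "max_tokens": 1,      # one token == one control input
        "temperature": 0.7,   # leaves the model some agency to deviate
    }

body = json.dumps(control_request(
    "Options (pick one letter): W=thrust, A=rotate left, D=rotate right, S=fire"
))
```

POST this to http://localhost:8005/v1/chat/completions; keeping each option label a single token is what makes `max_tokens: 1` sufficient.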
2025-10-22T15:04:54
https://www.youtube.com/watch?v=Yo7GWnGtpoc
teachersecret
youtube.com
1970-01-01T00:00:00
0
{}
1odasbj
false
{'oembed': {'author_name': 'Dbl Spc', 'author_url': 'https://www.youtube.com/@dblspc4756', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Yo7GWnGtpoc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="GPT-OSS-20B TAKE THE WHEEL!!! (Experiments with low latency LLM spaceship control)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Yo7GWnGtpoc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'GPT-OSS-20B TAKE THE WHEEL!!! (Experiments with low latency LLM spaceship control)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1odasbj
/r/LocalLLaMA/comments/1odasbj/gptoss20b_take_the_helm_further_experiments_in/
false
false
https://external-preview…58155b22039888e8
22
{'enabled': False, 'images': [{'id': 'ZspUdJpzHTOkiEa4knHQwTZYSVuilpI-6KVKwszZkLU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZspUdJpzHTOkiEa4knHQwTZYSVuilpI-6KVKwszZkLU.jpeg?width=108&crop=smart&auto=webp&s=29aea4d6abe95ff4e74fea17c308ab5098e0e0c9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZspUdJpzHTOkiEa4knHQwTZYSVuilpI-6KVKwszZkLU.jpeg?width=216&crop=smart&auto=webp&s=ca695533482055028bb53e2d31d18caf886bf6f3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZspUdJpzHTOkiEa4knHQwTZYSVuilpI-6KVKwszZkLU.jpeg?width=320&crop=smart&auto=webp&s=5807a68162444fb98dc8bc5d6f30eaf44af008c3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ZspUdJpzHTOkiEa4knHQwTZYSVuilpI-6KVKwszZkLU.jpeg?auto=webp&s=29c785665aed5d8e43baf59a0a9a2c6d5a4eda31', 'width': 480}, 'variants': {}}]}
Anyone knows the theoretical performance of FP16, 32, 64 FLOP numbers?
0
DGX Spark doesn’t publish FP16, FP32, or FP64 FLOPS numbers on its data sheet; it only lists FP4 FLOPS with sparsity. Meanwhile, RTX 50xx cards don’t publish FP4 FLOPS with sparsity, so there’s no apples-to-apples comparison. Is there any way to measure or estimate their FLOPS limits (theoretical and experimental)? I want to compare their compute power in FLOPS with other Blackwell GPUs. Thank you!
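In the meantime you can measure achieved throughput yourself: count matmul FLOPs and divide by wall time. A CPU/numpy sketch of the method (for GPU FP16/FP32 you would run the same idea with framework tensors and proper device synchronization):

```python
import time
import numpy as np

def matmul_flops(m, n, k):
    """A dense (m,k) @ (k,n) matmul does one multiply and one add per MAC."""
    return 2 * m * n * k

def measured_gflops(m=512, n=512, k=512, reps=10):
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    a @ b                                  # warm-up
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    dt = time.perf_counter() - t0
    return matmul_flops(m, n, k) * reps / dt / 1e9
```

Sweeping the dtype and comparing the measured ceiling against the published FP4-with-sparsity figure gives you the missing apples-to-apples data points.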
2025-10-22T15:04:08
https://www.reddit.com/r/LocalLLaMA/comments/1odarl1/anyone_knows_the_theoretical_performance_of_fp16/
Spare-Solution-787
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odarl1
false
null
t3_1odarl1
/r/LocalLLaMA/comments/1odarl1/anyone_knows_the_theoretical_performance_of_fp16/
false
false
self
0
null
I gave ChatGPT, Claude, Gemini, and others the DISC personality test (you know, for science)
0
So I had a completely normal thought: "What if we gave AI models the same cringey workplace personality test my company makes us take?" Forced them all through 28 questions about whether they're more "persuasive" or "analytical," charted the results, and honestly? The personality differences were way more distinct than I expected. Made a site with all the data: [**tastybots.net**](http://tastybots.net) This is obviously just for fun - not claiming LLMs have "real" personalities. But it's a weirdly entertaining way to compare models. Curious what you all think.
2025-10-22T15:01:04
https://www.reddit.com/r/LocalLLaMA/comments/1odaofr/i_gave_chatgpt_claude_gemini_and_others_the_disc/
InternationalHome300
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1odaofr
false
null
t3_1odaofr
/r/LocalLLaMA/comments/1odaofr/i_gave_chatgpt_claude_gemini_and_others_the_disc/
false
false
self
0
null
LLMs Can Get Brain Rot
0
2025-10-22T14:53:22
https://llm-brain-rot.github.io/
ChiliPepperHott
llm-brain-rot.github.io
1970-01-01T00:00:00
0
{}
1odagxv
false
null
t3_1odagxv
/r/LocalLLaMA/comments/1odagxv/llms_can_get_brain_rot/
false
false
default
0
null
Qwen team is helping llama.cpp again
1,159
2025-10-22T14:44:44
https://i.redd.it/dh1iaky2eowf1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1oda8mk
false
null
t3_1oda8mk
/r/LocalLLaMA/comments/1oda8mk/qwen_team_is_helping_llamacpp_again/
false
false
default
1,159
{'enabled': True, 'images': [{'id': 'dh1iaky2eowf1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/dh1iaky2eowf1.png?width=108&crop=smart&auto=webp&s=ffaaab6751db9e25a8a3784c6b4b469d888f01d9', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/dh1iaky2eowf1.png?width=216&crop=smart&auto=webp&s=60cfc5e379088e0922a97052fabd36205ebe0715', 'width': 216}, {'height': 263, 'url': 'https://preview.redd.it/dh1iaky2eowf1.png?width=320&crop=smart&auto=webp&s=cc89fd59c2e6140263cafb34345625ee394ebf21', 'width': 320}, {'height': 527, 'url': 'https://preview.redd.it/dh1iaky2eowf1.png?width=640&crop=smart&auto=webp&s=addcf456730d4f5ec04b561980fa9d74dfb18d96', 'width': 640}, {'height': 790, 'url': 'https://preview.redd.it/dh1iaky2eowf1.png?width=960&crop=smart&auto=webp&s=86b7933cc4267417d2814d6102a04560b32fe97f', 'width': 960}, {'height': 889, 'url': 'https://preview.redd.it/dh1iaky2eowf1.png?width=1080&crop=smart&auto=webp&s=d0ad01629c22e460a8d0fcadb2cf6b527d8f9536', 'width': 1080}], 'source': {'height': 1540, 'url': 'https://preview.redd.it/dh1iaky2eowf1.png?auto=webp&s=925d77e56c570b169549371e3e2e3af488b10b6c', 'width': 1870}, 'variants': {}}]}
The RoboNuggets Community
0
Are you looking to move past AI theory and start building and earning from automation? The RoboNuggets Community is a dedicated hub focused on making advanced AI and no-code automation accessible to everyone, regardless of technical background. The mission is simple: providing the exact blueprints and training needed to turn your knowledge of tools like ChatGPT and n8n into practical, revenue-generating systems. The core of the program features step-by-step courses and templates for creating powerful automations, such as RAG agents and automated content pipelines. You get to learn directly from a verified n8n Partner and a community of over a thousand active practitioners. If you're an agency owner, a business looking to automate growth, or an aspiring AI builder who wants to monetize this skill, this community is structured to accelerate your results. It's the practical next step for anyone tired of just talking about AI and ready to put it to work to save time and make money.
2025-10-22T14:32:35
https://www.skool.com/robonuggets/about?ref=6e4cb8ffa1d646989a8252abc0567f65%20
TeachingAny4631
skool.com
1970-01-01T00:00:00
0
{}
1od9x3g
false
null
t3_1od9x3g
/r/LocalLLaMA/comments/1od9x3g/the_robonuggets_community/
false
false
default
0
{'enabled': False, 'images': [{'id': 'qY5niy9wAglhH0e4Uaekip9AFpQ_PiSB33aheKPnTC8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/qY5niy9wAglhH0e4Uaekip9AFpQ_PiSB33aheKPnTC8.jpeg?width=108&crop=smart&auto=webp&s=3608638554a6a5b7682a85bf574aed0fb3f1206c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/qY5niy9wAglhH0e4Uaekip9AFpQ_PiSB33aheKPnTC8.jpeg?width=216&crop=smart&auto=webp&s=5022b562ddd7ae8f9cda43516cedfe44c35b470b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/qY5niy9wAglhH0e4Uaekip9AFpQ_PiSB33aheKPnTC8.jpeg?width=320&crop=smart&auto=webp&s=5d52c91110bab1fdefdc25b8c7d5d55f94c439ac', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/qY5niy9wAglhH0e4Uaekip9AFpQ_PiSB33aheKPnTC8.jpeg?width=640&crop=smart&auto=webp&s=8e7374bda752fb56df0c1c05c98d4f04b4271dd1', 'width': 640}, {'height': 541, 'url': 'https://external-preview.redd.it/qY5niy9wAglhH0e4Uaekip9AFpQ_PiSB33aheKPnTC8.jpeg?width=960&crop=smart&auto=webp&s=b100d97f3bf5ee882dcc18130dee7a0dde30c1f2', 'width': 960}, {'height': 609, 'url': 'https://external-preview.redd.it/qY5niy9wAglhH0e4Uaekip9AFpQ_PiSB33aheKPnTC8.jpeg?width=1080&crop=smart&auto=webp&s=de1deb9c7fca024c89895917d539f18cc7a86c65', 'width': 1080}], 'source': {'height': 790, 'url': 'https://external-preview.redd.it/qY5niy9wAglhH0e4Uaekip9AFpQ_PiSB33aheKPnTC8.jpeg?auto=webp&s=85c479abab4641dd90ba4124e028d233769c0235', 'width': 1400}, 'variants': {}}]}
Text 2 SQL benchmark
2
Has anybody tried using the new Spider 2.0 benchmark on Databricks? I have seen that currently it is hosted on Snowflake but would love to use the evaluation script for other ground truth and sql queries
2025-10-22T14:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1od99vo/text_2_sql_benchmark/
LeaveNo7723
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od99vo
false
null
t3_1od99vo
/r/LocalLLaMA/comments/1od99vo/text_2_sql_benchmark/
false
false
self
2
null
Qwen3-VL fixes "on the right track"
12
2025-10-22T14:05:38
https://github.com/ggml-org/llama.cpp/issues/16207#issuecomment-3432273713
beneath_steel_sky
github.com
1970-01-01T00:00:00
0
{}
1od97z0
false
null
t3_1od97z0
/r/LocalLLaMA/comments/1od97z0/qwen3vl_fixes_on_the_right_track/
false
false
https://external-preview…7b2d0aecc3bf0bff
12
{'enabled': False, 'images': [{'id': '5gTRUWKAIf2E3WjfyB5DFLRcJhx5HNbmQ4esZO67n30', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5gTRUWKAIf2E3WjfyB5DFLRcJhx5HNbmQ4esZO67n30.png?width=108&crop=smart&auto=webp&s=aa719287b23d17a0476154b8ca3e56e6d2cb69ef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5gTRUWKAIf2E3WjfyB5DFLRcJhx5HNbmQ4esZO67n30.png?width=216&crop=smart&auto=webp&s=25eac8a81c63c429324d56c8b4f1daaa1c13b054', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5gTRUWKAIf2E3WjfyB5DFLRcJhx5HNbmQ4esZO67n30.png?width=320&crop=smart&auto=webp&s=9a48be32c54d5880db28ff0ec1198c70716150da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5gTRUWKAIf2E3WjfyB5DFLRcJhx5HNbmQ4esZO67n30.png?width=640&crop=smart&auto=webp&s=a5c8c99279301f8d5a84fda6b28f5668078b541a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5gTRUWKAIf2E3WjfyB5DFLRcJhx5HNbmQ4esZO67n30.png?width=960&crop=smart&auto=webp&s=2e4bc2aee5190a172e942719ce43bdcd7e11b17a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5gTRUWKAIf2E3WjfyB5DFLRcJhx5HNbmQ4esZO67n30.png?width=1080&crop=smart&auto=webp&s=43fb68fc7d6d211b77dea4cbf46070da2967a0c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5gTRUWKAIf2E3WjfyB5DFLRcJhx5HNbmQ4esZO67n30.png?auto=webp&s=91b9178aa7d7c8092e287ed85f0be49f06851d19', 'width': 1200}, 'variants': {}}]}
Can we talk about max_tokens (response tokens) for a second? What is a realistic setting when doing document production tasks?
1
So I’m running GLM 4.6 AWQ on a couple of H100s. I set the max context window in vLLM to 128k. In Open WebUI, I’m trying to figure out what the maximum usable output tokens (max_tokens) can be set to because I want GLM to have the output token headroom it needs to produce reasonably long document output. I’m not trying to get it to write a book or anything super long, but I am trying to get it to be able to use the GenFilesMCP to produce DOCX, XLSX, and PPTX files of decent substance. The file production part seems to work without a hitch, but with low max_tokens it doesn’t seem to produce full documents; it seems to produce what almost appear to be chunked documents that have major gaps in them. Example: I asked it to produce a PowerPoint presentation file containing every World Series winner since 1903 (each on its own slide) and include two interesting facts about each World Series. At low max_tokens, it created the PowerPoint document, but when I opened it, it only had like 16 slides. It skipped huge swaths of years randomly. It started at 1903, then went to 1907, 1963, 2007, etc. The slides themselves had what was asked for; it just randomly skipped a bunch of years. So I changed max_tokens to 65535 and then it did it correctly. So I wanted to see what the max allowable would be and raised it up another 32K to 98303, and then it was garbage again, skipping years like before. I guess my big questions are: - I understand that a model has a max context window that obviously counts both input and output tokens against that value; is there a percentage or ratio that you need to allocate to input vs. output tokens if you want long / quality output? - Would “-1” be best for max_tokens to just roll the dice and let it take as much as it wants / needs? - Is there such a thing as an actual usable number of output tokens vs. what the model makers claim it can do?
- What’s the best current local model for producing long output content (like typical office work products), and what are the best settings for max_tokens? - Is there a common do-not-exceed-this-value for max_tokens that everyone has agreed upon?
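For reference, the knob in question is the `max_tokens` field of an OpenAI-compatible chat completion request. A minimal sketch of how that budget gets set, assuming a vLLM server on its default port and the GLM model name used above (both are assumptions, not confirmed by the post):

```python
import json
import urllib.request

# Hypothetical endpoint and model name -- adjust to your vLLM deployment.
URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "zai-org/GLM-4.6",
    "messages": [
        {"role": "user",
         "content": "List every World Series winner since 1903."},
    ],
    # Explicit output headroom. Prompt tokens + max_tokens must stay within
    # the 128k context window configured when the vLLM server was launched.
    "max_tokens": 65535,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment against a live server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The key point the sketch illustrates: `max_tokens` caps only the completion, so the usable value shrinks as the prompt (and any MCP tool chatter) grows within the fixed context window.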
2025-10-22T13:50:46
https://www.reddit.com/r/LocalLLaMA/comments/1od8uaq/can_we_talk_about_max_tokens_response_tokens_for/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od8uaq
false
null
t3_1od8uaq
/r/LocalLLaMA/comments/1od8uaq/can_we_talk_about_max_tokens_response_tokens_for/
false
false
self
1
null
Local AI Directory
1
I recently set up a home server that I’m planning on using for various local AI/ML-related tasks. While looking through Reddit and Github, I found so many tools that it became hard to keep track of them. I’ve been wanting to improve my web dev skills so I built this simple local AI web directory ([https://thelocalaidirectory.com/](https://thelocalaidirectory.com/)). It’s very basic right now, but I’m planning on adding more features like saving applications, ranking by popularity, etc. I’m wondering what you all think… I know there are some really solid directories on Github that already exist, but I figured the ability to filter, search, and save all in one place could be useful for some people. Does anybody think this could be useful for them? Is there another feature you think could be helpful?
2025-10-22T13:39:12
https://www.reddit.com/r/LocalLLaMA/comments/1od8jxp/local_ai_directory/
Frequent-Contract925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od8jxp
false
null
t3_1od8jxp
/r/LocalLLaMA/comments/1od8jxp/local_ai_directory/
false
false
self
1
null
Best open-source LLM (8–14B) for natural English → European language translations on a 15 GB GPU?
3
Hey everyone, I’m looking for an open-source LLM (~8-14B parameters) (or other types of models, if any) that can run on ~15 GB of GPU VRAM and produce fluent, context-aware translations from English → European languages (French, Spanish, Italian, German). I want translations that understand nuance and tone, not just literal word-for-word. I’ve tested: • Qwen‑3 14B (4-bit unsloth) — decent but not perfect. • Seamless M4T Large — too literal/robotic for my needs. Thank you in advance!
2025-10-22T13:34:41
https://www.reddit.com/r/LocalLLaMA/comments/1od8fys/best_opensource_llm_814b_for_natural_english/
SignificanceFlashy50
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od8fys
false
null
t3_1od8fys
/r/LocalLLaMA/comments/1od8fys/best_opensource_llm_814b_for_natural_english/
false
false
self
3
null
YES! Super 80b for 8gb VRAM - Qwen3-Next-80B-A3B-Instruct-GGUF
315
So amazing to be able to run this beast on an 8GB VRAM laptop: [https://huggingface.co/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF](https://huggingface.co/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF) Note that this is not yet supported by the latest llama.cpp, so you need to compile the non-official version as shown in the link above. (Do not forget to add GPU support when compiling). Have fun!
2025-10-22T13:34:41
https://www.reddit.com/r/LocalLLaMA/comments/1od8fz0/yes_super_80b_for_8gb_vram/
Mangleus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od8fz0
false
null
t3_1od8fz0
/r/LocalLLaMA/comments/1od8fz0/yes_super_80b_for_8gb_vram/
false
false
self
315
{'enabled': False, 'images': [{'id': 'CXtUO0R_N41j9Eelg5U78KXslAIkjYs74iALfmzdlj4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CXtUO0R_N41j9Eelg5U78KXslAIkjYs74iALfmzdlj4.png?width=108&crop=smart&auto=webp&s=8800176f5990aecb560785a6dda7e1ab5005d9ad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CXtUO0R_N41j9Eelg5U78KXslAIkjYs74iALfmzdlj4.png?width=216&crop=smart&auto=webp&s=f8904a2e5b01dded6ccef5379b47b695b5eb94b9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CXtUO0R_N41j9Eelg5U78KXslAIkjYs74iALfmzdlj4.png?width=320&crop=smart&auto=webp&s=e983bd8d22083a7c00db52e5a8ac270638bad015', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CXtUO0R_N41j9Eelg5U78KXslAIkjYs74iALfmzdlj4.png?width=640&crop=smart&auto=webp&s=229c42e92a70f56b7b309b1a94a509401343d64a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CXtUO0R_N41j9Eelg5U78KXslAIkjYs74iALfmzdlj4.png?width=960&crop=smart&auto=webp&s=89fedc321475f48c74f5984bce215e6ca09f2cbf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CXtUO0R_N41j9Eelg5U78KXslAIkjYs74iALfmzdlj4.png?width=1080&crop=smart&auto=webp&s=10382ee0e7aca28f74859d09a4079665f63762c0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CXtUO0R_N41j9Eelg5U78KXslAIkjYs74iALfmzdlj4.png?auto=webp&s=55ac272336333b120ee93a54cd860a03683553bc', 'width': 1200}, 'variants': {}}]}
What If You Could Store Harry Potter in a Single Image—and AI Reads It Back Perfectly?
1
[removed]
2025-10-22T13:19:23
https://www.reddit.com/r/LocalLLaMA/comments/1od82eb/what_if_you_could_store_harry_potter_in_a_single/
yeekal126
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od82eb
false
null
t3_1od82eb
/r/LocalLLaMA/comments/1od82eb/what_if_you_could_store_harry_potter_in_a_single/
false
false
self
1
null
Qwen3-Embedding-0.6B -> any cloud inference providers?
6
Are there any cloud inference providers for Qwen/Qwen3-Embedding-0.6B? https://huggingface.co/Qwen/Qwen3-Embedding-0.6B I'm trying to set up low-latency embeddings; in my tests, generating embeddings on CPU results in somewhat high latencies (30-80ms on int8 onnx TEI). When I test with a GPU I get 5ms latencies on a vulkanized AMD Strix Halo and 11-13ms on a vulkanized AMD 780M, which is much better (llama.cpp). Anyways - I might just use cloud for inference. Does any provider host that model?
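For anyone reproducing latency numbers like these, a rough probe against an OpenAI-compatible `/v1/embeddings` endpoint can be sketched as below. The URL is an assumption (a local llama.cpp server on its default port); point it at whatever deployment or provider you are testing:

```python
import json
import time
import urllib.request

# Assumed local endpoint -- replace with your server or a provider's URL.
URL = "http://localhost:8080/v1/embeddings"

def embed_latency_ms(text: str) -> float:
    """Wall-clock time in milliseconds for one embedding request."""
    body = json.dumps({"model": "Qwen/Qwen3-Embedding-0.6B",
                       "input": text}).encode()
    req = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    t0 = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        json.load(resp)  # parse the body so decode time is included
    return (time.perf_counter() - t0) * 1000.0

# Warm up once, then average several calls for a stable number:
# embed_latency_ms("warmup")
# print(sum(embed_latency_ms("hello world") for _ in range(10)) / 10)
```

Averaging several calls after a warm-up matters here: the 5-13ms GPU figures are small enough that first-request overhead would otherwise dominate.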
2025-10-22T13:11:55
https://www.reddit.com/r/LocalLLaMA/comments/1od7vxy/qwen3embedding06b_any_cloud_inference_providers/
bytepursuits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od7vxy
false
null
t3_1od7vxy
/r/LocalLLaMA/comments/1od7vxy/qwen3embedding06b_any_cloud_inference_providers/
false
false
self
6
{'enabled': False, 'images': [{'id': 'HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=108&crop=smart&auto=webp&s=e0a0b9e00a90a64308b392ea065a5666bbc7c99a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=216&crop=smart&auto=webp&s=24794d4dc5e2f816acf136f12041a449ec01d2b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=320&crop=smart&auto=webp&s=ced369b8a0ae3d1cdbe7a030960c50fd3f2cfdd2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=640&crop=smart&auto=webp&s=64a598c0c7e2a44fa02d43ac09c7d63d0a6c1b6b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=960&crop=smart&auto=webp&s=cba6e6727f41b1406c3ce46b365fd9edcbcbf5c5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=1080&crop=smart&auto=webp&s=4a426c4d7704d76efbc418602a12814dc8c29b80', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?auto=webp&s=4399c8976d12b85ffaee7ae14ab7f5725bb2ea12', 'width': 1200}, 'variants': {}}]}
DeepSeek-OCR encoder as a tiny Python package (encoder-only tokens, CUDA/BF16, 1-liner install)
10
If you’re benchmarking the new **DeepSeek-OCR** on local stacks, this package (that I made) exposes the **encoder** directly—skip the decoder and just get the vision tokens. * Encoder-only: returns \[1, N, 1024\] tokens for your downstream OCR/doc pipelines. * Speed/VRAM: BF16 + optional CUDA Graphs; avoids full VLM runtime. * Install: ``` pip install deepseek-ocr-encoder ``` Minimal example (HF Transformers): ``` from transformers import AutoModel from deepseek_ocr_encoder import DeepSeekOCREncoder import torch m = AutoModel.from_pretrained("deepseek-ai/DeepSeek-OCR", trust_remote_code=True, use_safetensors=True, torch_dtype=torch.bfloat16, attn_implementation="eager").eval().to("cuda", dtype=torch.bfloat16) enc = DeepSeekOCREncoder(m, device="cuda", dtype=torch.bfloat16, freeze=True) print(enc("page.png").shape) ``` Links: https://pypi.org/project/deepseek-ocr-encoder/ https://github.com/dwojcik92/deepseek-ocr-encoder
2025-10-22T12:57:19
https://www.reddit.com/r/LocalLLaMA/comments/1od7jll/deepseekocr_encoder_as_a_tiny_python_package/
Exciting_Traffic_667
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od7jll
false
null
t3_1od7jll
/r/LocalLLaMA/comments/1od7jll/deepseekocr_encoder_as_a_tiny_python_package/
false
false
self
10
null
Help with OCR
1
Good afternoon. Could you please advise how to download and install any OCR software (I might have phrased it incorrectly)? I have no programming experience at all. For my thesis, I need to process a large number of scanned newspapers in Russian. I would greatly appreciate your help.
2025-10-22T12:56:22
https://www.reddit.com/r/LocalLLaMA/comments/1od7itz/help_with_ocr/
Brilliant_Salary_234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od7itz
false
null
t3_1od7itz
/r/LocalLLaMA/comments/1od7itz/help_with_ocr/
false
false
self
1
null
M5 MacBook Pro: Up to ~45% PP Improvement. ~25% TG (Ollama Tested)
66
\[[Geekerwan](https://www.youtube.com/watch?v=BKQggt9blGo)\]
2025-10-22T12:55:20
https://i.redd.it/2r0ue3k6unwf1.png
Noble00_
i.redd.it
1970-01-01T00:00:00
0
{}
1od7hyu
false
null
t3_1od7hyu
/r/LocalLLaMA/comments/1od7hyu/m5_macbook_pro_up_to_45_pp_improvement_25_tg/
false
false
default
66
{'enabled': True, 'images': [{'id': '2r0ue3k6unwf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2r0ue3k6unwf1.png?width=108&crop=smart&auto=webp&s=0c286f30dc834954ace33ab0d8fbcf1b7369a59c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2r0ue3k6unwf1.png?width=216&crop=smart&auto=webp&s=a4b025138adcd70afb530f805907a917ebd1f476', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/2r0ue3k6unwf1.png?width=320&crop=smart&auto=webp&s=9a5ec7f2ce6ee89e0d76d23b2ccff786f7febfb0', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/2r0ue3k6unwf1.png?width=640&crop=smart&auto=webp&s=4b506f746bde959cb1cf422094c3babaa4c4113e', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/2r0ue3k6unwf1.png?width=960&crop=smart&auto=webp&s=c58ce5187565921e1b2a54876413a948854ea722', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/2r0ue3k6unwf1.png?width=1080&crop=smart&auto=webp&s=43ba7948e211b2fde75b604e3d6200a7a4142cc3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/2r0ue3k6unwf1.png?auto=webp&s=1f2b40568a1ff0ddf01f759d215ceb86bad40460', 'width': 1920}, 'variants': {}}]}
Is Oxylabs AI studio is better than Firecrawl?
1
[removed]
2025-10-22T12:40:26
https://www.reddit.com/r/LocalLLaMA/comments/1od75k4/is_oxylabs_ai_studio_is_better_than_firecrawl/
Fearless_Buy1873
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od75k4
false
null
t3_1od75k4
/r/LocalLLaMA/comments/1od75k4/is_oxylabs_ai_studio_is_better_than_firecrawl/
false
false
self
1
null
Oxylabs AI studio is way better than Firecrawl
1
[removed]
2025-10-22T12:37:07
https://www.reddit.com/r/LocalLLaMA/comments/1od72yu/oxylabs_ai_studio_is_way_better_than_firecrawl/
Fearless_Buy1873
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od72yu
false
null
t3_1od72yu
/r/LocalLLaMA/comments/1od72yu/oxylabs_ai_studio_is_way_better_than_firecrawl/
false
false
self
1
null
What do LLMs actually tell us?
0
Everyone knows that LLMs predict the next, most probable token *given the context* and the training data. But, what does this generally translate into? [View Poll](https://www.reddit.com/poll/1od6rb8)
2025-10-22T12:22:39
https://www.reddit.com/r/LocalLLaMA/comments/1od6rb8/what_do_llms_actually_tell_us/
Bubbly-Bank-6202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od6rb8
false
null
t3_1od6rb8
/r/LocalLLaMA/comments/1od6rb8/what_do_llms_actually_tell_us/
false
false
self
0
null
Best LLM for 96G RTX Pro 6000 Blackwell?
3
Hi, I just got my hands on an RTX Pro 6000 Blackwell that I want to be running an LLM in the background when it's sitting idle throughout the day. What would be the best-performing model that can fit in its amount of VRAM and, if needed, an additional 128GB of system memory (best not to use)? Only going to use it for general purposes, sort of like an offline replacement that's versatile for whatever I throw at it.
2025-10-22T12:14:02
https://www.reddit.com/r/LocalLLaMA/comments/1od6knb/best_llm_for_96g_rtx_pro_6000_blackwell/
AssociationAdept4052
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od6knb
false
null
t3_1od6knb
/r/LocalLLaMA/comments/1od6knb/best_llm_for_96g_rtx_pro_6000_blackwell/
false
false
self
3
null
When a realization hits after listening to Andrej Karpathy
3
https://preview.redd.it/…er to 1+1 is 2."
2025-10-22T12:08:32
https://www.reddit.com/r/LocalLLaMA/comments/1od6gg8/when_a_realization_hits_after_listening_to_andrej/
martinerous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1od6gg8
false
null
t3_1od6gg8
/r/LocalLLaMA/comments/1od6gg8/when_a_realization_hits_after_listening_to_andrej/
false
false
https://b.thumbs.redditm…p87XfuN1oG_Q.jpg
3
null
How often do you use LLM for repetitive/straightforward tasks more suited for a script?
8
I caught myself asking GPT-OSS-20B to query my local sqlite database just to display the current data. I use OpenCode, and I was reluctant to switch from the terminal to another app to check the database. Every GPT invocation took a solid few seconds, as my hardware is struggling to operate under the 32GB RAM limit. My productivity got impacted to the point I decided to do something about it. So I asked GPT to generate a shell script returning the information I was looking for. Obviously, the execution performance of that script was waaaay higher than using the LLM for that simple task. The bottom line is - sometimes we need a broader perspective to use the right tool for a job. Have you caught yourself picking the convenience over effectiveness?
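The kind of replacement script described above can be a few lines of Python's stdlib `sqlite3`. This is a hedged sketch; the database path and table name are hypothetical placeholders for whatever the actual project uses:

```python
import sqlite3

def show_rows(db_path: str, table: str, limit: int = 10) -> list:
    """Fetch up to `limit` rows from `table` -- a millisecond replacement
    for asking an LLM to run the same query."""
    con = sqlite3.connect(db_path)
    try:
        # Table names can't be bound as parameters; fine for a personal
        # script, but don't pass untrusted input here.
        cur = con.execute(f"SELECT * FROM {table} LIMIT ?", (limit,))
        return cur.fetchall()
    finally:
        con.close()

# Hypothetical usage from a terminal-friendly wrapper:
# for row in show_rows("app.db", "tasks"):
#     print(row)
```

Wrapped in a one-line shell alias, this gives the instant in-terminal view the post was after, with no model invocation at all.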
2025-10-22T11:15:47
https://i.redd.it/pjmi86rkcnwf1.png
ThingRexCom
i.redd.it
1970-01-01T00:00:00
0
{}
1od5eq3
false
null
t3_1od5eq3
/r/LocalLLaMA/comments/1od5eq3/how_often_do_you_use_llm_for/
false
false
default
8
{'enabled': True, 'images': [{'id': 'pjmi86rkcnwf1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/pjmi86rkcnwf1.png?width=108&crop=smart&auto=webp&s=04aa818e48317d06981f7f9c820fda67beec94bd', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/pjmi86rkcnwf1.png?width=216&crop=smart&auto=webp&s=d91d42d017e99cbc00de1031ec4399d219850331', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/pjmi86rkcnwf1.png?width=320&crop=smart&auto=webp&s=df3a86b26e66708f4cb29bee52033adb32272140', 'width': 320}, {'height': 411, 'url': 'https://preview.redd.it/pjmi86rkcnwf1.png?width=640&crop=smart&auto=webp&s=2f008c150ee3ac531bf6dcf325aa8ef25e034b59', 'width': 640}, {'height': 616, 'url': 'https://preview.redd.it/pjmi86rkcnwf1.png?width=960&crop=smart&auto=webp&s=e9841ce6b6059641243c529916cf706dbaac79a8', 'width': 960}, {'height': 693, 'url': 'https://preview.redd.it/pjmi86rkcnwf1.png?width=1080&crop=smart&auto=webp&s=ee412957b52fc1a2f9e4223bef2701b4c0fa9def', 'width': 1080}], 'source': {'height': 1268, 'url': 'https://preview.redd.it/pjmi86rkcnwf1.png?auto=webp&s=7891b182673325f8cef5357213fcd3ce552b4cc5', 'width': 1974}, 'variants': {}}]}
I created a corporate-level chat UI with advanced features
124
2025-10-22T11:14:42
https://v.redd.it/25jaz4q9cnwf1
BlueLemonPixel
/r/LocalLLaMA/comments/1od5dxw/i_created_a_corporatelevel_chat_ui_with_advanced/
1970-01-01T00:00:00
0
{}
1od5dxw
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/25jaz4q9cnwf1/DASHPlaylist.mpd?a=1763853291%2CZTFlMTA5NGM1M2E1NjFkOTA1ZjJlZWJlZDg4NjIzNmY4MzUyNDQ0NDcxM2Q4NmU2MTFhOTkxYzYwNTRmMjJkNA%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/25jaz4q9cnwf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/25jaz4q9cnwf1/HLSPlaylist.m3u8?a=1763853291%2CMDgyNjk3NjZjOGRkMWY4MzhmMjkzMTlhYzkwYjYyMDUwNjQ3M2VkYzE0NDkzMzU5NTMxZjg4YjJlNWE3ZmUwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/25jaz4q9cnwf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1od5dxw
/r/LocalLLaMA/comments/1od5dxw/i_created_a_corporatelevel_chat_ui_with_advanced/
false
false
https://external-preview…8c0adc1d27eaf4bb
124
{'enabled': False, 'images': [{'id': 'djdkNjQ3cTljbndmMfxTetg6CBBfStk5DiU9z2b-nKMFi6u2lJOyUFl3MEW8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/djdkNjQ3cTljbndmMfxTetg6CBBfStk5DiU9z2b-nKMFi6u2lJOyUFl3MEW8.png?width=108&crop=smart&format=pjpg&auto=webp&s=2af3772b382b1d289219982c706086c4d127b399', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/djdkNjQ3cTljbndmMfxTetg6CBBfStk5DiU9z2b-nKMFi6u2lJOyUFl3MEW8.png?width=216&crop=smart&format=pjpg&auto=webp&s=964b5f268fad7e8b1fcc4be8b0a4548688a9d887', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/djdkNjQ3cTljbndmMfxTetg6CBBfStk5DiU9z2b-nKMFi6u2lJOyUFl3MEW8.png?width=320&crop=smart&format=pjpg&auto=webp&s=a8a4e44e07c9c2bd254ca91ac32aac95e1e8df3f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/djdkNjQ3cTljbndmMfxTetg6CBBfStk5DiU9z2b-nKMFi6u2lJOyUFl3MEW8.png?width=640&crop=smart&format=pjpg&auto=webp&s=15f088b6083e2149502d1159784abc23e3e0da61', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/djdkNjQ3cTljbndmMfxTetg6CBBfStk5DiU9z2b-nKMFi6u2lJOyUFl3MEW8.png?width=960&crop=smart&format=pjpg&auto=webp&s=823119bdcb58404a67b03df66cdbb4c9bea05d6e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/djdkNjQ3cTljbndmMfxTetg6CBBfStk5DiU9z2b-nKMFi6u2lJOyUFl3MEW8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4042a6c478092ca8bbe837c1e5005b8a2781eb6c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/djdkNjQ3cTljbndmMfxTetg6CBBfStk5DiU9z2b-nKMFi6u2lJOyUFl3MEW8.png?format=pjpg&auto=webp&s=427d4c4e395e799c473fce3961f032ae82abdbd9', 'width': 1920}, 'variants': {}}]}