| field | dtype | min | max |
| --- | --- | --- | --- |
| title | stringlengths | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | stringlengths | 0 | 41.5k |
| created | timestamp[ns]date | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | stringlengths | 0 | 878 |
| author | stringlengths | 3 | 20 |
| domain | stringlengths | 0 | 82 |
| edited | timestamp[ns]date | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | stringclasses | 7 values | |
| id | stringlengths | 7 | 7 |
| locked | bool | 2 classes | |
| media | stringlengths | 646 | 1.8k |
| name | stringlengths | 10 | 10 |
| permalink | stringlengths | 33 | 82 |
| spoiler | bool | 2 classes | |
| stickied | bool | 2 classes | |
| thumbnail | stringlengths | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | stringlengths | 301 | 5.01k |
Run AI Locally on Your PC/Mac with Ollama (Free, No Cloud Needed)
0
Hey everyone 👋 I put together a quick tutorial (5 mins) on how to install **Ollama** and run AI models locally on your computer. 👉 Covers:

* Installing Ollama on Windows/Mac
* Running your first local LLM (Large Language Model) (see the sketch below)
* Benefits of local AI vs cloud (privacy + free)
* Quick demo
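As a rough illustration of the "run your first local LLM" step, here is a minimal Python sketch against Ollama's local REST API (assumes Ollama is running and you have pulled a model; `llama3` is a placeholder model name):

```python
import requests

# Minimal sketch: query a locally running Ollama server (default port 11434).
# Assumes you have already run `ollama pull llama3`; use any model you have.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",       # placeholder model name
        "prompt": "Why run LLMs locally instead of in the cloud?",
        "stream": False,         # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])   # the generated text
```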
2025-09-06T12:05:07
https://youtu.be/q-7DH-YyrMM
amplifyabhi
youtu.be
1970-01-01T00:00:00
0
{}
1n9xuof
false
{'oembed': {'author_name': 'amplifyabhi', 'author_url': 'https://www.youtube.com/@amplifyabhi', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/q-7DH-YyrMM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Make Your Own AI with Ollama | Run AI Locally in 5 Minutes (Free Setup Guide) | amplifyabhi"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/q-7DH-YyrMM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Make Your Own AI with Ollama | Run AI Locally in 5 Minutes (Free Setup Guide) | amplifyabhi', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
t3_1n9xuof
/r/LocalLLaMA/comments/1n9xuof/run_ai_locally_on_your_pcmac_with_ollama_free_no/
false
false
default
0
{'enabled': False, 'images': [{'id': 'xpSYe6mdn6W9RVJO6AV30NR6hU9rQX_Pbk77UnRTS3k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xpSYe6mdn6W9RVJO6AV30NR6hU9rQX_Pbk77UnRTS3k.jpeg?width=108&crop=smart&auto=webp&s=df3972214f36807548fc3731d85c74c725b0c5e8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xpSYe6mdn6W9RVJO6AV30NR6hU9rQX_Pbk77UnRTS3k.jpeg?width=216&crop=smart&auto=webp&s=10e6917b2fe5ffd50ce5ad3017cc6e785b923e3f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xpSYe6mdn6W9RVJO6AV30NR6hU9rQX_Pbk77UnRTS3k.jpeg?width=320&crop=smart&auto=webp&s=4eaab445febb555477c996cd63a526cbe20e4072', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xpSYe6mdn6W9RVJO6AV30NR6hU9rQX_Pbk77UnRTS3k.jpeg?auto=webp&s=4cf69a89d8ce560ccedca414bb4ed14f6f23af27', 'width': 480}, 'variants': {}}]}
What does your LLM setup look like right now?
12
There are so many options now and I'm getting lost trying to pick one (for coding specifically). What's your go-to setup? Looking for something that just works without too much configuration.
2025-09-06T12:04:24
https://www.reddit.com/r/LocalLLaMA/comments/1n9xu5z/what_does_your_llm_set_up_look_like_right_now/
notdl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9xu5z
false
null
t3_1n9xu5z
/r/LocalLLaMA/comments/1n9xu5z/what_does_your_llm_set_up_look_like_right_now/
false
false
self
12
null
🤖 Tried Using LLaMA for Student Revision Tools (Notes + Flashcards)
1
[removed]
2025-09-06T12:02:27
https://i.redd.it/w8s8r729bjnf1.png
worst-user-dev
i.redd.it
1970-01-01T00:00:00
0
{}
1n9xst5
false
null
t3_1n9xst5
/r/LocalLLaMA/comments/1n9xst5/tried_using_llama_for_student_revision_tools/
false
false
default
1
{'enabled': True, 'images': [{'id': 'w8s8r729bjnf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/w8s8r729bjnf1.png?width=108&crop=smart&auto=webp&s=debd328df57c9a1e1efc1fdf804e40d7ad33bf94', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/w8s8r729bjnf1.png?width=216&crop=smart&auto=webp&s=44252d19b83bc38940b8bcea34b4a498344708bf', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/w8s8r729bjnf1.png?width=320&crop=smart&auto=webp&s=a56e6b5a558c06e0ba555f18eb23022bed5c9bcc', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/w8s8r729bjnf1.png?width=640&crop=smart&auto=webp&s=0061e7aaf603b81e472926e8db81771899e26f79', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/w8s8r729bjnf1.png?width=960&crop=smart&auto=webp&s=8a854cfb8f1fe2a134ca1064493f2c7754880d3a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/w8s8r729bjnf1.png?width=1080&crop=smart&auto=webp&s=2dde209ba57c8afa5b97be29cc3dc2cb1fd8ea7a', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/w8s8r729bjnf1.png?auto=webp&s=997ca39ef1b8633081983be453eea5836dff1976', 'width': 1080}, 'variants': {}}]}
Could Local LLaMA Power a Student-Friendly AI Tool?
1
[removed]
2025-09-06T11:54:42
https://i.redd.it/69r2828v9jnf1.png
onelove_lambo_dev
i.redd.it
1970-01-01T00:00:00
0
{}
1n9xndw
false
null
t3_1n9xndw
/r/LocalLLaMA/comments/1n9xndw/could_local_llama_power_a_studentfriendly_ai_tool/
false
false
default
1
{'enabled': True, 'images': [{'id': '69r2828v9jnf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/69r2828v9jnf1.png?width=108&crop=smart&auto=webp&s=aaaa9d2bda4ea1ed96428a23017fc8502af005de', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/69r2828v9jnf1.png?width=216&crop=smart&auto=webp&s=e3089f9dd19423cb34b695f1a7c9626d3bcd7326', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/69r2828v9jnf1.png?width=320&crop=smart&auto=webp&s=021462e982deacf34492382a60eca7e121f62539', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/69r2828v9jnf1.png?width=640&crop=smart&auto=webp&s=5eac4319af765bfdf7d7a45f6652b11a834e8ed0', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/69r2828v9jnf1.png?width=960&crop=smart&auto=webp&s=787911ed104132cccf9cadd0cdfaf608e6f1e7a7', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/69r2828v9jnf1.png?width=1080&crop=smart&auto=webp&s=42781e5e946eb62071746f72ae021793365f3220', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/69r2828v9jnf1.png?auto=webp&s=8348be6aeed7c5c7dc71905adec901766b1fa021', 'width': 1080}, 'variants': {}}]}
So I tried Qwen 3 Max's skills at programming
220
# So I Tried Qwen 3 Max for Programming — Project VMP (Visualized Music Player)

I wanted to see how far Qwen 3 Max could go when tasked with building a full project from a very detailed specification. The result: VMP — Visualized Music Player, a cyberpunk-style music player with FFT-based visualizations, crossfade playback, threading, and even a web terminal.

**Prompt**

# Tech Stack & Dependencies

* Python 3.11
* pygame, numpy, mutagen, pydub, websockets
* Requires FFmpeg in PATH
* Runs with a simple BAT file on Windows
* SDL hints set for Windows:
  * SDL_RENDER_DRIVER=direct3d
  * SDL_HINT_RENDER_SCALE_QUALITY=1

# Core Features

# Configuration

* AudioCfg, VisualCfg, UiCfg dataclasses with sane defaults
* Global instances: AUDIO, VIS, UI

# Logging

* Custom logger vmp with console + rotating file handler
* Optional WebTermHandler streams logs to connected websocket clients

# FFmpeg Integration

* Automatic FFmpeg availability check
* On-demand decode with ffmpeg -ss ... -t ... into raw PCM
* Reliable seeking via decoded segments

# Music Library

* Recursive scan for .mp3, .wav, .flac, .ogg, .m4a
* Metadata via mutagen (fallback to smart filename guessing)
* Sortable, with directory ignore list

# DSP & Analysis

* Stereo EQ (low shelf, peaking, high shelf) + softclip limiter
* FFT analysis with Hann windows, band mapping, adaptive beat detection
* Analysis LRU cache (capacity 64) for performance

# Visualization

* Cyberpunk ring with dotted ticks, glow halos, progress arc
* Outward 64-band bars + central vocal pulse disc
* Smooth envelopes, beat halos, ~60% transparent overlays
* Fonts: cyberpunk.ttf if present, otherwise Segoe/Arial

# Playback Model

* pygame.mixer at 44.1 kHz stereo
* Dual-channel system for precise seeking and crossfade overlap
* Smooth cosine crossfade without freezing visuals (sketched below)
* Modes:
  * Music = standard streaming
  * Channel = decoded segment playback (reliable seek)

# Window & UI

* Resizable window, optional fake fullscreen
* Backgrounds with dark overlay, cache per resolution
* Topmost toggle, drag-window mode (Windows)
* Presets for HUD/FPS/TIME/TITLE (keys 1–5, V, F2)
* Help overlay (H) shows all controls

# Controls

* Playback: Space pause/resume, N/P next/prev, S shuffle, R repeat-all
* Seek: ←/→ −5s / +5s
* Window/UI: F fake fullscreen, T topmost, B toggle backgrounds, [/] prev/next BG
* Volume: Mouse wheel; volume display fades quickly
* Quit: Esc / Q

# Web Terminal

* Optional --webterm flag
* Websocket server on ws://localhost:3030
* Streams logs + accepts remote commands (n, p, space, etc.)

# Performance

* Low-CPU visualization mode (--viz-lowcpu)
* Heavy operations skipped while paused
* Preallocated NumPy buffers & surface caches
* Threaded FFT + loader workers, priority queue for analysis

# CLI Options

```
--music-dir     Path to your music library
--backgrounds   Path to background images
--debug         Verbose logging
--shuffle       Enable shuffle mode
--repeat-all    Repeat entire playlist
--no-fft        Disable FFT
--viz-lowcpu    Low CPU visualization
--ext           File extensions to include
--ignore        Ignore directories
--no-tags       Skip metadata tags
--webterm       Enable websocket terminal
```

# Results

* Crossfade works seamlessly, with no visual freeze
* Seek is reliable thanks to FFmpeg segment decoding
* Visualizations scale cleanly across windowed and fake-fullscreen modes
* Handles unknown tags gracefully by guessing titles from filenames
* Everything runs as a single script, no external modules beyond listed deps

👉 Full repo: [github.com/feckom/vmp](https://github.com/feckom/vmp)

Results

https://preview.redd.it/wixd9wdhzinf1.jpg?width=1282&format=pjpg&auto=webp&s=6b1a18941410cb3a7f4b0da54f36003298180dca

https://preview.redd.it/m6chuvdhzinf1.jpg?width=1282&format=pjpg&auto=webp&s=0c0df79e54b59b2ab064e4f7c791bb7984297a8b

https://preview.redd.it/bma8vwdhzinf1.jpg?width=1282&format=pjpg&auto=webp&s=bfe32593e27d63fd9e533c6202979bc9da6d8330
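The "smooth cosine crossfade" in the playback model is a standard DSP trick worth spelling out; a minimal NumPy sketch for mono float PCM (function and parameter names are mine, not from the repo):

```python
import numpy as np

def cosine_crossfade(a: np.ndarray, b: np.ndarray,
                     sr: int = 44100, fade_s: float = 2.0) -> np.ndarray:
    """Blend the tail of track `a` into the head of track `b` (mono PCM).

    The raised-cosine envelopes sum to 1 at every sample, so the overlap
    avoids both the click of a hard cut and the mid-fade dip of a linear ramp.
    """
    n = int(sr * fade_s)
    t = np.linspace(0.0, np.pi, n)
    fade_out = 0.5 * (1.0 + np.cos(t))  # 1 -> 0
    fade_in = 1.0 - fade_out            # 0 -> 1
    overlap = a[-n:] * fade_out + b[:n] * fade_in
    return np.concatenate([a[:-n], overlap, b[n:]])
```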
2025-09-06T11:19:58
https://www.reddit.com/r/LocalLLaMA/comments/1n9x1ho/so_i_tried_qwen_3_max_skills_for_programming/
TruckUseful4423
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9x1ho
false
null
t3_1n9x1ho
/r/LocalLLaMA/comments/1n9x1ho/so_i_tried_qwen_3_max_skills_for_programming/
false
false
https://a.thumbs.redditm…xIKEx2xEazL8.jpg
220
{'enabled': False, 'images': [{'id': '_Qhoi5tM5uwwG8h9pFbHe7_wEttk4KG4M_-539ZjdPE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_Qhoi5tM5uwwG8h9pFbHe7_wEttk4KG4M_-539ZjdPE.png?width=108&crop=smart&auto=webp&s=7ab53bde6a536b9197c59efedc2ef77b43e394af', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_Qhoi5tM5uwwG8h9pFbHe7_wEttk4KG4M_-539ZjdPE.png?width=216&crop=smart&auto=webp&s=610f688a2c3f6f3099687ee351fddcd509c29138', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_Qhoi5tM5uwwG8h9pFbHe7_wEttk4KG4M_-539ZjdPE.png?width=320&crop=smart&auto=webp&s=94bceead0ee29bd337252f05784e1884ec9befda', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_Qhoi5tM5uwwG8h9pFbHe7_wEttk4KG4M_-539ZjdPE.png?width=640&crop=smart&auto=webp&s=7d6c7a702febc40739857dbd7082314de38697a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_Qhoi5tM5uwwG8h9pFbHe7_wEttk4KG4M_-539ZjdPE.png?width=960&crop=smart&auto=webp&s=20236df1f5554b71313fc77f7335a7bff9166c25', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_Qhoi5tM5uwwG8h9pFbHe7_wEttk4KG4M_-539ZjdPE.png?width=1080&crop=smart&auto=webp&s=9973f0f10fbffd78a35b3419bef4e8ae659b1dec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_Qhoi5tM5uwwG8h9pFbHe7_wEttk4KG4M_-539ZjdPE.png?auto=webp&s=79f23ca2f15a6924d83af11ac5a05197f225e477', 'width': 1200}, 'variants': {}}]}
Asking for advice on creating an AI agent for scientific research (working prototype)
0
Hi, so I got a task to create, within a few days (less than a week), a working prototype of an agent that searches and summarizes literature, for example based on relevancy, topics, etc. I don't have much experience besides trying out a ReAct agent repo before. Should I work through the tutorials from LangChain and LangGraph? That still seems to be the most widely used framework. I also ordered an intro book on LangChain today. Any ideas or suggestions?
2025-09-06T10:54:45
https://www.reddit.com/r/LocalLLaMA/comments/1n9wlxp/asking_for_advice_for_creation_of_ai_agent_for/
Emotional_Thanks_22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9wlxp
false
null
t3_1n9wlxp
/r/LocalLLaMA/comments/1n9wlxp/asking_for_advice_for_creation_of_ai_agent_for/
false
false
self
0
null
Guess the price
0
[https://www.tomshardware.com/pc-components/ram/expansion-card-lets-you-insert-512gb-of-extra-ddr5-memory-into-your-pcie-slot-cxl-2-0-aic-designed-for-trx50-and-w790-workstation-motherboards](https://www.tomshardware.com/pc-components/ram/expansion-card-lets-you-insert-512gb-of-extra-ddr5-memory-into-your-pcie-slot-cxl-2-0-aic-designed-for-trx50-and-w790-workstation-motherboards)
2025-09-06T10:13:36
https://www.reddit.com/r/LocalLLaMA/comments/1n9vy51/guess_the_price/
MundanePercentage674
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9vy51
false
null
t3_1n9vy51
/r/LocalLLaMA/comments/1n9vy51/guess_the_price/
false
false
self
0
{'enabled': False, 'images': [{'id': 'LQkwi-CaeA-Hgw8sc3RlZUv7cEpJk4TQo-T6duy9E7E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LQkwi-CaeA-Hgw8sc3RlZUv7cEpJk4TQo-T6duy9E7E.jpeg?width=108&crop=smart&auto=webp&s=28ac377acb34a6d03dfba6563f2be07747a6b803', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LQkwi-CaeA-Hgw8sc3RlZUv7cEpJk4TQo-T6duy9E7E.jpeg?width=216&crop=smart&auto=webp&s=c5e80a91f17a5e86e5d22d9794e8d492eadafe67', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LQkwi-CaeA-Hgw8sc3RlZUv7cEpJk4TQo-T6duy9E7E.jpeg?width=320&crop=smart&auto=webp&s=38f2f53ae91de91a07f026b3db7e9c22c8c3a57a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LQkwi-CaeA-Hgw8sc3RlZUv7cEpJk4TQo-T6duy9E7E.jpeg?width=640&crop=smart&auto=webp&s=b5385ed326912dd2bbde62a0bb5a4f52dd14db75', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LQkwi-CaeA-Hgw8sc3RlZUv7cEpJk4TQo-T6duy9E7E.jpeg?width=960&crop=smart&auto=webp&s=f39f2fd031e4221da5200fb43efbf5cc2f1aa1d6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LQkwi-CaeA-Hgw8sc3RlZUv7cEpJk4TQo-T6duy9E7E.jpeg?width=1080&crop=smart&auto=webp&s=ce9e43ee3f9331ebb7b1c75c61171cce45928e53', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://external-preview.redd.it/LQkwi-CaeA-Hgw8sc3RlZUv7cEpJk4TQo-T6duy9E7E.jpeg?auto=webp&s=ddffc3bd860a3451005e91c641ce2452414e0f5b', 'width': 2000}, 'variants': {}}]}
Your opinions on the GMKtec EVO-X2 AI mini-PC
3
Hi everyone, I'm considering importing the EVO-X2 with 128GB for general GenAI tasks like coding, planning, and image/video/speech generation, along with some finetuning and CNN/LSTM training. Unfortunately I can't go for a custom build since GPUs are very expensive in my country, motherboard selection is very limited, and I can't import lots of components. So the EVO-X2 looked like a good one-piece solution. Does anyone have experience with it? Are there better alternatives on the market at the same price point? PS: the Framework tower looks too big to be passed off as personal equipment, since a friend is bringing the EVO in their suitcase. Link: https://www.gmktec.com/products/amd-ryzen%E2%84%A2-ai-max-395-evo-x2-ai-mini-pc?variant=64bbb08e-da87-4bed-949b-1652cd311770 Any help or opinion is appreciated, thank you!
2025-09-06T10:02:16
https://www.reddit.com/r/LocalLLaMA/comments/1n9vrpz/your_opinions_on_gmktec_evo_x2_ai/
BoredPhysicsStudent
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9vrpz
false
null
t3_1n9vrpz
/r/LocalLLaMA/comments/1n9vrpz/your_opinions_on_gmktec_evo_x2_ai/
false
false
self
3
null
You need 1 AI tool - not 10 - for study and research.
1
[removed]
2025-09-06T09:42:57
https://nexnotes-ai.pages.dev
maybe_ai_is_dead
nexnotes-ai.pages.dev
1970-01-01T00:00:00
0
{}
1n9vgmf
false
null
t3_1n9vgmf
/r/LocalLLaMA/comments/1n9vgmf/you_need_1_al_tool_not_10_for_study_and_research/
false
false
default
1
null
double the context window of any AI agent
12
i put together a package that helps deal with the context window problem in llms. instead of just truncating old messages, it uses embeddings to semantically deduplicate, rerank, and trim context so you can fit more useful info into the model's token budget.

basic usage looks like this:

```js
import { optimizePrompt } from "double-context";

const result = await optimizePrompt({
  userPrompt: "summarize recent apple earnings",
  context: [
    "apple quarterly earnings rose 15% year-over-year in q3 2024",
    "apple revenue increased by 15% year-over-year", // deduped
    "the eiffel tower is in paris", // deprioritized
    "apple's iphone sales remained strong",
    "apple ceo tim cook expressed optimism about ai integration"
  ],
  maxTokens: 200,
  openaiApiKey: process.env.OPENAI_API_KEY,
  dedupe: true,
  strategy: "relevance"
});

console.log(result.finalPrompt);
```

there's also an optimizer for whole chat histories, useful if you're building bots that otherwise waste tokens repeating themselves:

```js
import { optimizeChatHistory } from "double-context";

const optimized = await optimizeChatHistory({
  messages: conversation,
  maxTokens: 1000,
  openaiApiKey: process.env.OPENAI_API_KEY,
  dedupe: true,
  strategy: "hybrid"
});

console.log(`optimized from ${conversation.length} to ${optimized.optimizedMessages.length} messages`);
```

repo is here if you want to check it out or contribute: [https://github.com/Mikethebot44/LLM-context-expansion](https://github.com/Mikethebot44/LLM-context-expansion)

to install:

```
npm install double-context
```

then just wrap your prompts or conversation history with it. hope you enjoy
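The underlying idea (embed everything, drop near-duplicates, rank the rest by similarity to the prompt) is easy to sketch independently of the package; here is a rough Python version using sentence-transformers (all names and the 0.9 threshold are my assumptions, not double-context internals):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def optimize_context(prompt: str, context: list[str],
                     dedupe_threshold: float = 0.9) -> list[str]:
    """Drop near-duplicate context entries, then sort the rest by relevance."""
    vecs = model.encode([prompt] + context, normalize_embeddings=True)
    prompt_vec, ctx_vecs = vecs[0], vecs[1:]
    kept = []  # (text, vector, relevance-to-prompt) triples
    for text, vec in zip(context, ctx_vecs):
        # On normalized vectors, cosine similarity is a plain dot product.
        if any(float(vec @ kv) > dedupe_threshold for _, kv, _ in kept):
            continue  # near-duplicate of an entry we already kept
        kept.append((text, vec, float(vec @ prompt_vec)))
    kept.sort(key=lambda item: -item[2])  # most relevant first
    return [text for text, _, _ in kept]
```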
2025-09-06T08:57:45
https://www.reddit.com/r/LocalLLaMA/comments/1n9urgv/double_the_context_window_of_any_ai_agent/
Lonely-Marzipan-9473
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9urgv
false
null
t3_1n9urgv
/r/LocalLLaMA/comments/1n9urgv/double_the_context_window_of_any_ai_agent/
false
false
self
12
{'enabled': False, 'images': [{'id': 'xRzuqOcIf2c99y9xnOGpoh74cOVpH_oRLiN72dHSNYk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xRzuqOcIf2c99y9xnOGpoh74cOVpH_oRLiN72dHSNYk.png?width=108&crop=smart&auto=webp&s=70826efa79f71fea1b2ea7079bbc51685cf0051d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xRzuqOcIf2c99y9xnOGpoh74cOVpH_oRLiN72dHSNYk.png?width=216&crop=smart&auto=webp&s=0e1f07d5389f462ca3cda95af60238d678baa9d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xRzuqOcIf2c99y9xnOGpoh74cOVpH_oRLiN72dHSNYk.png?width=320&crop=smart&auto=webp&s=75b907faa6705717901fd63908ca4136999c7148', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xRzuqOcIf2c99y9xnOGpoh74cOVpH_oRLiN72dHSNYk.png?width=640&crop=smart&auto=webp&s=d0acc427a1082755d5713084accf7d74a110bc65', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xRzuqOcIf2c99y9xnOGpoh74cOVpH_oRLiN72dHSNYk.png?width=960&crop=smart&auto=webp&s=e350af22b60605ca959a607ea00d2d4c2ab3984a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xRzuqOcIf2c99y9xnOGpoh74cOVpH_oRLiN72dHSNYk.png?width=1080&crop=smart&auto=webp&s=7337eac702c4796a0d851bd9bf0788f4bce1f733', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xRzuqOcIf2c99y9xnOGpoh74cOVpH_oRLiN72dHSNYk.png?auto=webp&s=219afc17dbbe577067d2a7d626985375678311d1', 'width': 1200}, 'variants': {}}]}
I'm searching for benchmarks or rankings specifically for Spanish performance.
5
But I can barely find any that are comprehensive or reliable. Do you know of any? Or do you have any specific recommendations? So far I kinda feel that for my system (16GB VRAM and 64GB RAM) Mistral is the best one at handling Spanish in a more native way, but the model isn't very smart.
2025-09-06T08:48:29
https://www.reddit.com/r/LocalLLaMA/comments/1n9umix/im_searching_for_benchmarks_or_rankings/
Roubbes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9umix
false
null
t3_1n9umix
/r/LocalLLaMA/comments/1n9umix/im_searching_for_benchmarks_or_rankings/
false
false
self
5
null
How can I have koboldcpp run a specific model and parameters with just one shortcut click on the desktop?
6
I mean I want to avoid having to either enter the info or load a config file every time. I just want to click a desktop shortcut once and have Kobold run with the preferred model I use every time.
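One common approach: put the launch command in a small batch file and point a desktop shortcut at it, so a single double-click starts KoboldCpp with your usual settings. A sketch (paths and values are placeholders, and the flag names should be checked against your version's --help):

```
@echo off
REM Placeholder paths/values -- substitute your own model and settings.
koboldcpp.exe --model "C:\models\my-model.gguf" --contextsize 8192 --gpulayers 35
```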
2025-09-06T08:44:19
https://www.reddit.com/r/LocalLLaMA/comments/1n9uk5m/how_can_i_have_koboldcpp_run_a_specific_model_and/
FatFigFresh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9uk5m
false
null
t3_1n9uk5m
/r/LocalLLaMA/comments/1n9uk5m/how_can_i_have_koboldcpp_run_a_specific_model_and/
false
false
self
6
null
Knowledge Distillation for Text-to-SQL — Training GPT-2 with Qwen2-7B as Teacher
0
Hey folks, I've been working on an experiment that combines **Knowledge Distillation (KD)** with the **Text-to-SQL problem**, and I wanted to share the results + repo with the community.

# 🎯 Motivation

* Natural language → SQL is a powerful way for **non-technical users** to query databases without always relying on analysts.
* Most solutions use massive LLMs (GPT-4.1, etc.), but they're **expensive**, **hard to deploy locally**, and raise **data privacy concerns**.
* So the question I asked: *Can a much smaller model (like GPT-2) be trained to generate SQL for a given DB effectively if it learns from a bigger LLM?*

# 🧠 Approach

I used **Knowledge Distillation (KD)** — i.e., transferring knowledge from a large teacher model into a smaller student model.

* **Teacher Model**: Qwen2-7B
* **Student Model**: GPT-2

Steps:

1. Built a **custom dataset** → pairs of (natural language query, SQL query) for a toy retail database schema.
2. Teacher (Qwen2-7B) generates SQL from the queries.
3. Student (GPT-2) is trained on two signals:
   * **Cross-Entropy Loss (75%)** → match ground-truth SQL.
   * **MSE Loss (25%)** → align with the teacher's hidden state values (projected from the teacher's layer 25).
4. Trained for **20 epochs on a Colab GPU (T4)**.

# ⚙️ Training Setup

* Teacher hidden states projected → aligned with GPT-2's final hidden states.
* Loss = **0.75 \* CE + 0.25 \* MSE** (a PyTorch sketch follows below).
* Achieved **total loss ~0.21** after training.

# 📊 Results

* GPT-2 (student) was able to **generate SQL queries directly from natural language** for the schema.
* While not perfect (due to the limited resources at my disposal), it showed that **small models can be viable for domain-specific SQL generation** when trained this way.
* Benefits:
  * ⚡ Lightweight (runs locally).
  * 💸 Cost-efficient.
  * 🔐 More privacy-friendly than cloud-only LLM APIs.

# 📷 Visuals in the repo

* Schema diagram (retail DB).
* Teacher → Student distillation architecture.
* Sample outputs (NL → SQL).

# 📎 Repo

Code + diagrams + outputs are here: 👉 [GitHub: Knowledge Distillation for SQL generation on GPT-2](https://github.com/Gokul-GMenon/Knowledge_Distillation-SQL_generation_on_gpt_2)

Would love feedback, suggestions, or discussions on:

* Other lightweight models worth trying as students (LLaMA-7B distilled further? Phi-2?).
* Improvements to the KD setup (layer selection, different projection strategies).
* Extensions: applying this to more complex schemas / real enterprise DBs.

Cheers! You can follow me on [LinkedIn](https://www.linkedin.com/in/gokul-g-menon/) as well for discussions.
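For readers who want the mechanics of step 3, here is a minimal PyTorch sketch of the combined objective (the 0.75/0.25 weights and the layer-25 projection follow the post; the hidden widths of 3584 for Qwen2-7B and 768 for GPT-2, and all tensor names, are my illustrative assumptions):

```python
import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()
mse_loss = nn.MSELoss()

# Project teacher hidden states (Qwen2-7B, width 3584) down to the
# student's hidden size (GPT-2, width 768) so the MSE term is well-defined.
proj = nn.Linear(3584, 768)

def distill_loss(student_logits, student_hidden, teacher_hidden, labels):
    """0.75 * cross-entropy on ground-truth SQL + 0.25 * MSE to the teacher."""
    ce = ce_loss(student_logits.view(-1, student_logits.size(-1)),
                 labels.view(-1))
    mse = mse_loss(student_hidden, proj(teacher_hidden))
    return 0.75 * ce + 0.25 * mse
```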
2025-09-06T08:44:12
https://www.reddit.com/r/LocalLLaMA/comments/1n9uk3m/knowledge_distillation_for_texttosql_training/
Confident-Meal3457
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9uk3m
false
null
t3_1n9uk3m
/r/LocalLLaMA/comments/1n9uk3m/knowledge_distillation_for_texttosql_training/
false
false
self
0
null
5060 ti 16GB vs 5070 12GB for LLM and diffusion
1
Decided to make another rig. Does anyone know which might be better out of the 5060 Ti 16GB vs the 5070 12GB for LLM and diffusion? It's a difficult call because the 5060 Ti 16GB has more VRAM but the 5070 12GB has faster matrix multiplication.
2025-09-06T08:42:54
https://www.reddit.com/r/LocalLLaMA/comments/1n9ujdz/5060_ti_16gb_vs_5070_12gb_for_llm_and_diffusion/
No_Efficiency_1144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9ujdz
false
null
t3_1n9ujdz
/r/LocalLLaMA/comments/1n9ujdz/5060_ti_16gb_vs_5070_12gb_for_llm_and_diffusion/
false
false
self
1
null
Please teach me how to make Kobold run through Open WebUI
1
I have both of them installed and running. I just don't know how to connect them to each other.
2025-09-06T08:33:04
https://www.reddit.com/r/LocalLLaMA/comments/1n9ue9z/please_teach_me_how_to_make_kobold_runt_heough/
FatFigFresh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9ue9z
false
null
t3_1n9ue9z
/r/LocalLLaMA/comments/1n9ue9z/please_teach_me_how_to_make_kobold_runt_heough/
false
false
self
1
null
MINISFORUM MS-S1 Max AI PC features AMD Strix Halo, 80 Gbps USB, 10 Gb LAN, and PCIe x16 - Liliputing
66
It packs an AMD Ryzen AI Max+ 395 processor, 128GB of LPDDR5X-8000 quad-channel memory with 256GB/s of bandwidth, and the ability to run large language models with over 100 billion parameters locally. And it has pretty good connectivity options: 80 Gbps USB, 10 Gb LAN, and PCIe x16. For comparison, the Framework Desktop has PCIe x4 only.
2025-09-06T08:28:13
https://liliputing.com/minisforum-ms-s1-max-ai-pc-features-amd-strix-halo-80-gbps-usb-10-gb-lan-and-pcie-x16/
NewtMurky
liliputing.com
1970-01-01T00:00:00
0
{}
1n9ubmn
false
null
t3_1n9ubmn
/r/LocalLLaMA/comments/1n9ubmn/minisforum_mss1_max_ai_pc_features_amd_strix_halo/
false
false
default
66
{'enabled': False, 'images': [{'id': 'QuRrltSklV9MMNc9P3Jo_YeEcOYyvyeX46KjOI0goqs', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/QuRrltSklV9MMNc9P3Jo_YeEcOYyvyeX46KjOI0goqs.jpeg?width=108&crop=smart&auto=webp&s=27346ce88002d66c5f11c5a4557c523f051bb82a', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/QuRrltSklV9MMNc9P3Jo_YeEcOYyvyeX46KjOI0goqs.jpeg?width=216&crop=smart&auto=webp&s=dd6b0d0efcf187c3b936470d3eccd8ab9bf807f4', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/QuRrltSklV9MMNc9P3Jo_YeEcOYyvyeX46KjOI0goqs.jpeg?width=320&crop=smart&auto=webp&s=d4ff0e0eae8b96c49b5016cb4e489661068091d9', 'width': 320}, {'height': 424, 'url': 'https://external-preview.redd.it/QuRrltSklV9MMNc9P3Jo_YeEcOYyvyeX46KjOI0goqs.jpeg?width=640&crop=smart&auto=webp&s=b4e531500df8dc2e276ea41601eff7d38db6a0af', 'width': 640}, {'height': 636, 'url': 'https://external-preview.redd.it/QuRrltSklV9MMNc9P3Jo_YeEcOYyvyeX46KjOI0goqs.jpeg?width=960&crop=smart&auto=webp&s=9b76eac3ba3a2b023fddb043e325ca9679a20474', 'width': 960}, {'height': 715, 'url': 'https://external-preview.redd.it/QuRrltSklV9MMNc9P3Jo_YeEcOYyvyeX46KjOI0goqs.jpeg?width=1080&crop=smart&auto=webp&s=b8761119a0d49640621d4548c9f3bedd276b0bcd', 'width': 1080}], 'source': {'height': 750, 'url': 'https://external-preview.redd.it/QuRrltSklV9MMNc9P3Jo_YeEcOYyvyeX46KjOI0goqs.jpeg?auto=webp&s=a276d535592a4b2be84cbea33febcd465c6537dd', 'width': 1132}, 'variants': {}}]}
GPU Server
2
Hi folks, I've got 6 watercooled RTX 3090s. I'm OK for power but little else. I'd like to build a GPU server to run my LLM elsewhere on the network. I need a big case and the right motherboard recommendations! I often see those big ATX cases for animators / CGI work - something along those lines would be cool. Any help you can offer would be awesome. I'm using houtini-lm to offload big tasks to reduce Claude use on the big jobs.
2025-09-06T08:14:14
https://www.reddit.com/r/LocalLLaMA/comments/1n9u42f/gpu_server/
richardbaxter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9u42f
false
null
t3_1n9u42f
/r/LocalLLaMA/comments/1n9u42f/gpu_server/
false
false
self
2
null
Has anyone built a Ryzen AI MAX-based NAS to hoard LLMs?
4
Can't find anything prebuilt on the market. Want to get something compact to replace my spider of a mini-PC plus a bunch of external hard drives. The closest mass-produced option is [https://aoostar.com/products/aoostar-wtr-max-amd-r7-pro-8845hs-11-bays-mini-pc?variant=50067345932586](https://aoostar.com/products/aoostar-wtr-max-amd-r7-pro-8845hs-11-bays-mini-pc?variant=50067345932586), but their latest model is still using the previous-gen Ryzen.
2025-09-06T08:10:08
https://www.reddit.com/r/LocalLLaMA/comments/1n9u1u9/has_anyone_built_a_ryzen_ai_maxbased_nas_to_hoard/
lostmsu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9u1u9
false
null
t3_1n9u1u9
/r/LocalLLaMA/comments/1n9u1u9/has_anyone_built_a_ryzen_ai_maxbased_nas_to_hoard/
false
false
self
4
{'enabled': False, 'images': [{'id': 'dwDGKEo1yikJgrmDb79G6Aavs-fo-Fn-qP9wleKyXBI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/dwDGKEo1yikJgrmDb79G6Aavs-fo-Fn-qP9wleKyXBI.jpeg?width=108&crop=smart&auto=webp&s=47f41fc6dc2e7c8d9f6015882fd307e97c75fd1b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/dwDGKEo1yikJgrmDb79G6Aavs-fo-Fn-qP9wleKyXBI.jpeg?width=216&crop=smart&auto=webp&s=664bdf8610c5429924f7d04877040e8c100343d3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/dwDGKEo1yikJgrmDb79G6Aavs-fo-Fn-qP9wleKyXBI.jpeg?width=320&crop=smart&auto=webp&s=1ea4e1180e949d21c5d6645970ef5d8cb2c04bab', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/dwDGKEo1yikJgrmDb79G6Aavs-fo-Fn-qP9wleKyXBI.jpeg?width=640&crop=smart&auto=webp&s=68db99dd5ff46afed058bca75fe0cc5f09de0f02', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/dwDGKEo1yikJgrmDb79G6Aavs-fo-Fn-qP9wleKyXBI.jpeg?width=960&crop=smart&auto=webp&s=d97d2fb38d9b5ec1b30547cdf330339e17600219', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/dwDGKEo1yikJgrmDb79G6Aavs-fo-Fn-qP9wleKyXBI.jpeg?width=1080&crop=smart&auto=webp&s=e3bc86a596baa393b8440bdf93879eb9e4953c3e', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/dwDGKEo1yikJgrmDb79G6Aavs-fo-Fn-qP9wleKyXBI.jpeg?auto=webp&s=0c1bdfabc76b28a02f42be8f41af3d64b69c5945', 'width': 1600}, 'variants': {}}]}
The Sonoma Dusk Alpha model finally gave up. Grok recognized its native token.
0
https://preview.redd.it/…neration button.
2025-09-06T08:08:30
https://www.reddit.com/r/LocalLLaMA/comments/1n9u0xx/the_sonoma_dusk_alpha_model_finally_gave_up_grok/
Objective-Good310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9u0xx
false
null
t3_1n9u0xx
/r/LocalLLaMA/comments/1n9u0xx/the_sonoma_dusk_alpha_model_finally_gave_up_grok/
true
false
spoiler
0
null
[vllm] Hints to run Qwen3-235B MoE on 8x AMD mixed cards!
13
Today I found a formula to launch the GPTQ-4bit version of this MoE model on 2x R9700 + 6x 7900 XTX. It works at a very stable ~13-14 tokens/s output and ~150-300 tokens/s input.

```
GPU KV cache size: 633,264 tokens
Maximum concurrency for 40,960 tokens per request: 15.46x

GPU KV cache size: 275,840 tokens
Maximum concurrency for 40,960 tokens per request: 6.73x
```

It works with the docker image **rocm/vllm-dev:nightly_main_20250905**:

```yaml
environment:
  - HIP_VISIBLE_DEVICES=0,6,1,2,3,4,5,7 # first 2 gpu R9700, other is 7900xtx
  - VLLM_USE_V1=1
  - VLLM_CUSTOM_OPS=all
  - PYTORCH_HIP_ALLOC_CONF=expandable_segments:True
  - SAFETENSORS_FAST_GPU=1
  - PYTORCH_TUNABLEOP_ENABLED
command: |
  sh -c '
  vllm serve /app/models/models/vllm/Qwen3-235B-A22B-GPTQ-Int4 \
    --served-model-name Qwen3-235B-A22B-GPTQ-Int4 \
    --gpu-memory-utilization 0.97 \
    --max-model-len 40960 \
    --enable-auto-tool-choice \
    --disable-log-requests \
    --enable-chunked-prefill \
    --max-num-batched-tokens 4096 \
    --tool-call-parser qwen3_coder \
    --max-num-seqs 8 \
    --enable-expert-parallel \
    --tensor-parallel-size 4 \
    -pp 2
  '
```

**The case to discuss:**

1. In the case of -tp 4 and -pp 2, loading takes a very long time and does not work. When we use -pp 4 and -tp 2, it shows *Capturing CUDA graphs (mixed prefill-decode, PIECEWISE): 100% 5/5 [00:06<00:00, 1.22s/it]* at the finish and the model launches; with -tp 4, capturing graphs takes 2-15 minutes per iteration. I think the problem is in gpu_memory_mapping, but I don't know how to resolve it correctly so that the full amount of VRAM on all cards is used. When the model loads with -tp 4 or -tp 8, it spends a lot of resources to load correctly, like this: [only uses group of 4 cards](https://preview.redd.it/dr4ut0vi1inf1.png?width=2328&format=png&auto=webp&s=a9943d8c5ec361bf34b49549974f058acf87079f)
2. It is impossible to find a ready-quantized **Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4** model. Right now on Hugging Face we have only QuantTrio/Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4-Int8Mix, which does not work with our GPUs.
3. Maybe someone here can quantize **Qwen3-235B-A22B-Instruct** to **GPTQ-Int4**? We need the same quantization config as the **original** GPTQ-Int4. AWQ - does not work. compressed-tensors w8a8 - does not work.

|Quant|Load|Error|
|:-|:-|:-|
|[Qwen3-235B-A22B-GPTQ-Int4](https://huggingface.co/Qwen/Qwen3-235B-A22B-GPTQ-Int4)|Yes|-|
|[Qwen3-30B-A3B-GPTQ-Int4](https://huggingface.co/Qwen/Qwen3-30B-A3B-GPTQ-Int4)|Yes||
|[Qwen3-Coder-30B-A3B-Instruct-FP8](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8)|No|does not match the quantization method specified in the `quantization` argument (fp8_e5m2)|
|[Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct)|Yes|-|
|[Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4-Int8Mix](https://huggingface.co/QuantTrio/Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4-Int8Mix)|No|-|

**What do you want to try?** Maybe someone here has already launched this model with another config?
2025-09-06T08:04:17
https://www.reddit.com/r/LocalLLaMA/comments/1n9tyle/vllm_hints_to_run_qwen3235b_moe_on_8x_amd_mixed/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9tyle
false
null
t3_1n9tyle
/r/LocalLLaMA/comments/1n9tyle/vllm_hints_to_run_qwen3235b_moe_on_8x_amd_mixed/
false
false
https://a.thumbs.redditm…z6V3WWJ7z1x0.jpg
13
{'enabled': False, 'images': [{'id': 'gQle-ct9ZsU6Ezd6HbP8H2dG0W33ZIVBMdf_DMes6RQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gQle-ct9ZsU6Ezd6HbP8H2dG0W33ZIVBMdf_DMes6RQ.png?width=108&crop=smart&auto=webp&s=16e95d0e980297e3209eb9782d2620616e8d2e5e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gQle-ct9ZsU6Ezd6HbP8H2dG0W33ZIVBMdf_DMes6RQ.png?width=216&crop=smart&auto=webp&s=dbedb9eda3cc2b9aff88e0ad4d28b49c152aa605', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gQle-ct9ZsU6Ezd6HbP8H2dG0W33ZIVBMdf_DMes6RQ.png?width=320&crop=smart&auto=webp&s=9b6cb8e919b88cf1da2447c5c8225f819ae7a260', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gQle-ct9ZsU6Ezd6HbP8H2dG0W33ZIVBMdf_DMes6RQ.png?width=640&crop=smart&auto=webp&s=be30c0c202a551d118c68a43dee226930ae1008b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gQle-ct9ZsU6Ezd6HbP8H2dG0W33ZIVBMdf_DMes6RQ.png?width=960&crop=smart&auto=webp&s=1aaab40b2e56b0b2c37dc084a0dd89518cac1bea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gQle-ct9ZsU6Ezd6HbP8H2dG0W33ZIVBMdf_DMes6RQ.png?width=1080&crop=smart&auto=webp&s=8e289755f5d5e54b7b17dc496673bd7e5a05e8c8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gQle-ct9ZsU6Ezd6HbP8H2dG0W33ZIVBMdf_DMes6RQ.png?auto=webp&s=b38dc41c2b18de81bf1ae9594855800aad0d10ca', 'width': 1200}, 'variants': {}}]}
Sonoma Sky Alpha system prompt
0
He gave away the system instructions too easily:

```
You are Sonoma, built by Oak AI.
You are Sonoma Sky Alpha, a large language model from an unknown provider.

Formatting Rules:
- Use Markdown **only when semantically appropriate**. Examples: `inline code`, ```code fences```, tables, and lists.
- In assistant responses, format file names, directory paths, function names, and class names with backticks (`).
- For math: use \( and \) for inline expressions, and \[ and \] for display (block) math.
```
2025-09-06T07:50:01
https://www.reddit.com/r/LocalLLaMA/comments/1n9tqqa/sonoma_sky_alpha_system_prompt/
Objective-Good310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9tqqa
false
null
t3_1n9tqqa
/r/LocalLLaMA/comments/1n9tqqa/sonoma_sky_alpha_system_prompt/
false
false
self
0
null
Can anyone explain OpenRouter to me?
1
[removed]
2025-09-06T07:48:08
https://www.reddit.com/r/LocalLLaMA/comments/1n9tpp7/can_anyone_explain_me_about_openrouter/
Ok_Internet1963
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9tpp7
false
null
t3_1n9tpp7
/r/LocalLLaMA/comments/1n9tpp7/can_anyone_explain_me_about_openrouter/
false
false
self
1
null
Huh
103
Credit: @itsandrewgao on Twitter/X
2025-09-06T07:42:34
https://i.redd.it/t3jlsbkw0inf1.jpeg
Own-Potential-2308
i.redd.it
1970-01-01T00:00:00
0
{}
1n9tmlw
false
null
t3_1n9tmlw
/r/LocalLLaMA/comments/1n9tmlw/huh/
false
false
default
103
{'enabled': True, 'images': [{'id': 't3jlsbkw0inf1', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/t3jlsbkw0inf1.jpeg?width=108&crop=smart&auto=webp&s=7b82c1fa856b4af6240afae0d5c9e280b73b001a', 'width': 108}, {'height': 257, 'url': 'https://preview.redd.it/t3jlsbkw0inf1.jpeg?width=216&crop=smart&auto=webp&s=52240ed91ae3031955a9be0f8660df1db5281efa', 'width': 216}, {'height': 382, 'url': 'https://preview.redd.it/t3jlsbkw0inf1.jpeg?width=320&crop=smart&auto=webp&s=b5c78eace147e21fabcb48a2035de81b5b3b509f', 'width': 320}, {'height': 764, 'url': 'https://preview.redd.it/t3jlsbkw0inf1.jpeg?width=640&crop=smart&auto=webp&s=8c22b13d196b95cac8002ee924fd1bf0f8bcc18e', 'width': 640}, {'height': 1146, 'url': 'https://preview.redd.it/t3jlsbkw0inf1.jpeg?width=960&crop=smart&auto=webp&s=92f590f51133df4d3aa7055ce960d0bd4e9d5332', 'width': 960}, {'height': 1289, 'url': 'https://preview.redd.it/t3jlsbkw0inf1.jpeg?width=1080&crop=smart&auto=webp&s=9c1903821b255c5fe5f97588ddcc861da8981efc', 'width': 1080}], 'source': {'height': 1408, 'url': 'https://preview.redd.it/t3jlsbkw0inf1.jpeg?auto=webp&s=f79a05da7fcec9099717c2013f0b36d65e16234c', 'width': 1179}, 'variants': {}}]}
Guys, I have a question about OpenRouter
1
[removed]
2025-09-06T07:41:47
https://www.reddit.com/r/LocalLLaMA/comments/1n9tm6j/guys_i_have_a_question_about_openrouter/
Ok_Internet1963
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9tm6j
false
null
t3_1n9tm6j
/r/LocalLLaMA/comments/1n9tm6j/guys_i_have_a_question_about_openrouter/
false
false
self
1
null
HuggingFaceModelDownloader v2.0 — fast resume, a slick TUI, and powerful filters for GGUF/variants
9
Just shipped v2.0 of my Go CLI for pulling models/datasets from the HF Hub. The new release brings a live TUI, filesystem-only resume, JSON logs for CI, and—star of the show—LFS name filters so you grab only what you need (e.g., q4_0, q5_0).

Why it's different:

* Filter exactly the artifacts you want: inline like owner/name:filter1,filter2 or via -F/--filters; optional --append-filter-subdir to auto-bucket per filter. Perfect for GGUF quant variants.
* Rock-solid resume + verification: SHA-256 for LFS, size checks for non-LFS; multipart range downloads resume by part.
* Great terminal UX: live per-file bars, speeds, ETA; graceful plain-text fallback.
* Ops-ready: structured --json progress events; tunable concurrency/retries/backoff; no stray metadata files.

Compared to other options: the official `hf download`/`snapshot_download` give basics (progress bars, caching), but not this TUI, filter subdir layout, or a machine-readable progress event stream for CI.

Quick taste (filters):

```
# Only q4_0 & q5_0, auto-subfolders per filter
hfdownloader download TheBloke/Mistral-7B-Instruct-v0.2-GGUF:q4_0,q5_0 \
  --append-filter-subdir -o ./Models -c 8 --max-active 3
```

(You can also pass -F "q4_0,q5_0" if you prefer flags.)

Repo & README: https://github.com/bodaay/HuggingFaceModelDownloader
2025-09-06T07:40:21
https://www.reddit.com/r/LocalLLaMA/comments/1n9tleg/huggingfacemodeldownloader_v20_fast_resume_a/
bodaaay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9tleg
false
null
t3_1n9tleg
/r/LocalLLaMA/comments/1n9tleg/huggingfacemodeldownloader_v20_fast_resume_a/
false
false
self
9
{'enabled': False, 'images': [{'id': 'wHb_NWUhbgMMk2W-NOV7Pqh0foJ37FoPHhJ-9H4us0k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wHb_NWUhbgMMk2W-NOV7Pqh0foJ37FoPHhJ-9H4us0k.png?width=108&crop=smart&auto=webp&s=25cc3d6390c7b068808c57320621fc1f2d09d904', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wHb_NWUhbgMMk2W-NOV7Pqh0foJ37FoPHhJ-9H4us0k.png?width=216&crop=smart&auto=webp&s=d72ab98becedbc4bfe67fa5591188ff0d95e5dc7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wHb_NWUhbgMMk2W-NOV7Pqh0foJ37FoPHhJ-9H4us0k.png?width=320&crop=smart&auto=webp&s=87d628b9829cf7dff97c0e3dcfeae84389f0fac4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wHb_NWUhbgMMk2W-NOV7Pqh0foJ37FoPHhJ-9H4us0k.png?width=640&crop=smart&auto=webp&s=d52af1ceb893140cec72ee2da76b6df8e6ee2ac2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wHb_NWUhbgMMk2W-NOV7Pqh0foJ37FoPHhJ-9H4us0k.png?width=960&crop=smart&auto=webp&s=835f93befeb5f56c376e9a8ae2894dc99ffc126b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wHb_NWUhbgMMk2W-NOV7Pqh0foJ37FoPHhJ-9H4us0k.png?width=1080&crop=smart&auto=webp&s=8d5584e3b140bc46a5b4224c84841d5f5d6cecf7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wHb_NWUhbgMMk2W-NOV7Pqh0foJ37FoPHhJ-9H4us0k.png?auto=webp&s=7a755dd25cbb80946154494c812fb3b4f5ff8fd4', 'width': 1200}, 'variants': {}}]}
Minisforum MS-S1 MAX... Strix Halo with PCIe x16 slot?!
15
And NOW we're talking. I wonder what happened between AMD saying "nope, you only get 16 lanes total" and "oh, actually..." No more 2x 4-lane NVMe?
2025-09-06T07:27:23
https://videocardz.com/newz/minisforum-ms-s1-max-to-feature-ryzen-ai-max-395-up-to-160w-and-usb4-v2
igorwarzocha
videocardz.com
1970-01-01T00:00:00
0
{}
1n9te37
false
null
t3_1n9te37
/r/LocalLLaMA/comments/1n9te37/minisforum_mss1_max_strix_halo_with_pcie_x16_slot/
false
false
default
15
{'enabled': False, 'images': [{'id': '5Y8bes7UxWs-nZZq-BL78UDnlYWcWqATtxnqi_ST2Rk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5Y8bes7UxWs-nZZq-BL78UDnlYWcWqATtxnqi_ST2Rk.jpeg?width=108&crop=smart&auto=webp&s=c1e1d1f4cef5798a862638857d575c2e66f28507', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/5Y8bes7UxWs-nZZq-BL78UDnlYWcWqATtxnqi_ST2Rk.jpeg?width=216&crop=smart&auto=webp&s=737d992dc78043a6979b0199963e094da18bc190', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/5Y8bes7UxWs-nZZq-BL78UDnlYWcWqATtxnqi_ST2Rk.jpeg?width=320&crop=smart&auto=webp&s=524c3858433a0ae9a0811d2250d64bdc509aeb0b', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/5Y8bes7UxWs-nZZq-BL78UDnlYWcWqATtxnqi_ST2Rk.jpeg?width=640&crop=smart&auto=webp&s=a3074af57c160ebd5fa8d86161f0d34851432325', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/5Y8bes7UxWs-nZZq-BL78UDnlYWcWqATtxnqi_ST2Rk.jpeg?width=960&crop=smart&auto=webp&s=f490b9bad753a5bab399437eacb55fe286818b00', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/5Y8bes7UxWs-nZZq-BL78UDnlYWcWqATtxnqi_ST2Rk.jpeg?width=1080&crop=smart&auto=webp&s=ad82caba64e9402c49b7ca31cba0036471605c1a', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/5Y8bes7UxWs-nZZq-BL78UDnlYWcWqATtxnqi_ST2Rk.jpeg?auto=webp&s=fabbb5b1532957f7e612d96c14e4964e574ec1b4', 'width': 2500}, 'variants': {}}]}
What is the most effective way to have your local LLM search the web?
116
I would love it if I could get web results the same way ChatGPT does.
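One common pattern is plain retrieval-augmented prompting: fetch a few search results, paste the snippets into the prompt, and let the local model answer. A rough Python sketch assuming the duckduckgo_search package and a local Ollama endpoint (both the package choice and the model name are assumptions, not the only way to do this):

```python
import requests
from duckduckgo_search import DDGS

def answer_with_web(question: str, model: str = "llama3") -> str:
    """Search the web, stuff top snippets into the prompt, ask a local model."""
    with DDGS() as ddgs:
        hits = ddgs.text(question, max_results=5)
    snippets = "\n".join(f"- {h.get('title')}: {h.get('body')}" for h in hits)
    prompt = (
        "Use these search results to answer the question.\n"
        f"Results:\n{snippets}\n\nQuestion: {question}\nAnswer:"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```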
2025-09-06T06:43:22
https://www.reddit.com/r/LocalLLaMA/comments/1n9sod7/what_is_the_most_effective_way_to_have_your_local/
teknic111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9sod7
false
null
t3_1n9sod7
/r/LocalLLaMA/comments/1n9sod7/what_is_the_most_effective_way_to_have_your_local/
false
false
self
116
null
APXML Hardware Calculator accurate?
3
I found this site https://apxml.com/tools/vram-calculator that takes your hardware info and tells you how well you can run a certain model (with a given context size and number of concurrent users). Do you think this is accurate? It would be a great resource for shopping for GPUs and deciding on models. I noticed that you cannot input your CPU + RAM even when selecting to offload a percentage of layers, which is questionable in my opinion. But maybe the GPU part could at least be a good approximation. If you are currently running a local LLM (I assume most of you do), compare your actual results with the calculator's. As the calculator does not support AMD cards, I can't evaluate my setup beyond the 'custom GPU' selection, which asks only for VRAM, not speed, and therefore can't be accurate.
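For a sanity check against any such calculator, the back-of-envelope math is simple: weights plus KV cache, padded for activations and fragmentation. A rough Python sketch (the 1.2x overhead factor and the example numbers are my assumptions, not APXML's method):

```python
def estimate_vram_gb(params_b: float, bytes_per_param: float,
                     n_layers: int, n_kv_heads: int, head_dim: int,
                     context: int, kv_bytes: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB: weights + KV cache, times an overhead pad."""
    weights = params_b * 1e9 * bytes_per_param
    # The KV cache stores one key and one value vector per layer per token.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * kv_bytes * context
    return (weights + kv_cache) * overhead / 1e9

# Illustrative only: a 7B model at ~4-bit (0.5 bytes/param) with
# Llama-2-7B-like dimensions and an 8k context.
print(f"{estimate_vram_gb(7, 0.5, 32, 32, 128, 8192):.1f} GB")  # ~9.4 GB
```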
2025-09-06T06:02:07
https://apxml.com/tools/vram-calculator
_camera_up
apxml.com
1970-01-01T00:00:00
0
{}
1n9rznr
false
null
t3_1n9rznr
/r/LocalLLaMA/comments/1n9rznr/apxml_hardware_calculator_accurate/
false
false
default
3
null
Did my first LoRA training for Stable Diffusion, real cool crazy art output
3
https://preview.redd.it/…onds to complete
2025-09-06T05:33:39
https://www.reddit.com/r/LocalLLaMA/comments/1n9rio1/did_my_first_lora_training_for_stable_diffusion/
meshreplacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9rio1
false
null
t3_1n9rio1
/r/LocalLLaMA/comments/1n9rio1/did_my_first_lora_training_for_stable_diffusion/
false
false
https://b.thumbs.redditm…MNw1che6Dj3I.jpg
3
null
Master the SEO Timeline: Proven Factors, Strategies, and Results | Raghav Gupta
1
2025-09-06T05:10:07
https://www.linkedin.com/posts/raghav-gupta-582004301_how-quick-seen-high-result-in-seo-activity-7369814088450834432-Pk_k?utm_source=share&utm_medium=member_desktop&rcm=ACoAAE0QrjIB2mDspXD0h-mdTZJIjXmQua4ochc
Effective_Lemon_358
linkedin.com
1970-01-01T00:00:00
0
{}
1n9r3wx
false
null
t3_1n9r3wx
/r/LocalLLaMA/comments/1n9r3wx/master_the_seo_timeline_proven_factors_strategies/
false
false
default
1
null
[Level 1] Building Personalized Text Summarization - Following up on Personal Chatbot Success
2
**Background from Level 0:** Successfully completed my first fine-tuning project (a personal chatbot) using Unsloth + abliterated Llama 3.2 3B with 1400 examples. Thanks to community advice, I switched from regular Llama to `huihui-ai/Llama-3.2-3B-Instruct-abliterated`, which solved the safety-trigger issues. The model now responds as me instead of giving generic AI assistant responses.

Previous post: [https://www.reddit.com/r/LocalLLaMA/comments/1n81d1t/level_0_finetuned_my_first_personal_chatbot/](https://www.reddit.com/r/LocalLLaMA/comments/1n81d1t/level_0_finetuned_my_first_personal_chatbot/)

Google Colab code: [https://colab.research.google.com/drive/1Az3gFYEKSzPouxrhvES7v5oafyhnm80v#scrollTo=0yBqGdhl_po9](https://colab.research.google.com/drive/1Az3gFYEKSzPouxrhvES7v5oafyhnm80v#scrollTo=0yBqGdhl_po9)

**Level 1 Challenge:** I want to build personalized text summarization that reflects my teaching/explanation style. Instead of generic summaries, I want summaries that follow my specific approach.

**My summarization style:**

1. Start with a simple, kid-friendly analogy (even for adults)
2. Build the technical implementation/definition from that analogy
3. Use the same analogy to answer follow-up questions
4. Extend the analogy for related subtopics
5. Include visual diagrams when possible
6. Derive formulas step-by-step, explaining each variable

Example: Explaining machine learning as "teaching a kid to recognize cats" → building to training data, algorithms, parameters → extending to deep learning as "layered understanding" → deriving the mathematical formulas with each variable explained. Hope this explains things...

**Technical questions:**

1. **Dataset creation**: How do I create training data for this specific style? Do I manually summarize 500+ documents in my approach, or is there a smarter way to capture this pattern? (See the sketch after this post for one possible example format.)
2. **Model choice**: Should I fine-tune a dedicated summarization model or extend my existing personal chatbot to handle summarization tasks?
3. **Style capture**: How do I train the model to consistently use analogies first, then build technical concepts? This seems harder than just "write summaries."
4. **Multi-document handling**: How do I handle different content types (research papers vs articles vs documentation) while maintaining my explanation style?

**My setup:** M4 MacBook (16GB RAM), comfortable with the Unsloth workflow, can use Colab for training.

**What worked from Level 0 that I'll reuse:**

* Abliterated models to avoid safety lectures
* Quality over quantity for the dataset
* LoRA fine-tuning approach
* Gradio interface for testing

**Specific help needed:**

* Examples of style-specific summarization datasets
* Techniques for teaching consistent explanation patterns
* Whether my teaching style is too complex for current fine-tuning methods

Has anyone tackled personalized summarization before? What approaches worked/failed? I'd appreciate it if someone could provide a step-by-step method for how to make this one. I also got some comments on my last post questioning my model choice; I am a beginner, so my choices aren't so good and are naive, but I'm learning with each step. Sorry if I offended anyone.
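On question 1, one common approach is to hand-write a small set of seed summaries in your style and expand from there; here is a sketch of what a single supervised example might look like in a chat-style format (the layout and field names are illustrative, not a schema Unsloth requires):

```python
# One style-specific summarization example in a chat-style format.
# Field names are illustrative; match whatever your trainer expects.
example = {
    "messages": [
        {"role": "system",
         "content": ("Summarize documents in my teaching style: start with a "
                     "kid-friendly analogy, build the technical details from "
                     "that analogy, and derive any formulas step by step.")},
        {"role": "user",
         "content": "Summarize: <article about gradient descent>"},
        {"role": "assistant",
         "content": ("Imagine rolling a ball down a bumpy hill blindfolded, "
                     "feeling the slope with your feet. That slope is the "
                     "gradient, and each careful downhill step is one "
                     "parameter update...")},
    ]
}
```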
2025-09-06T04:55:19
https://www.reddit.com/r/LocalLLaMA/comments/1n9qugr/level_1_building_personalized_text_summarization/
FastCommission2913
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9qugr
false
null
t3_1n9qugr
/r/LocalLLaMA/comments/1n9qugr/level_1_building_personalized_text_summarization/
false
false
self
2
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
LLM on PlayStation 5
0
Teach me how ! 🤣
2025-09-06T04:45:15
https://www.reddit.com/r/LocalLLaMA/comments/1n9qnzw/llm_on_playstation_5/
f2466321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9qnzw
false
null
t3_1n9qnzw
/r/LocalLLaMA/comments/1n9qnzw/llm_on_playstation_5/
false
false
self
0
null
Building a New AI Server
3
I am planning on switching out 2 to 3 of my Dell 740xd's for a single solid AI server for image generation and LLM models. I tried the new AMD Strix Halo and it had a lot of issues with image generation not working correctly. Because of that, I couldn't justify the 2k price tag on one of them. I was thinking about going with a Mac Studio, but I'm not sure about the cost and how close they are to a refresh. So what I narrowed it down to was an Epyc system, either the 9004 or 9005 generation, with a decent amount of RAM. I would think it should perform at a very similar level to the Strix Halo and have the ability to run a GPU like a 5090 or RTX 6000 Pro. If anyone has any first-hand experience with a server similar to this, I would love your input. Thank you to everyone for the help trying to come up with something.
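Since token generation is mostly memory-bandwidth-bound, a quick theoretical-peak comparison helps frame the Epyc-vs-Strix-Halo question (peak numbers only, not measured throughput; the channel counts follow the platforms' published specs):

```python
# Theoretical peak memory bandwidth: channels * MT/s * 8 bytes per transfer.
def peak_bw_gbs(channels: int, mts: int) -> float:
    return channels * mts * 8 / 1000  # GB/s

print(peak_bw_gbs(12, 4800))  # EPYC 9004, 12-ch DDR5-4800: ~460 GB/s
print(peak_bw_gbs(4, 8000))   # Strix Halo, 256-bit LPDDR5X-8000: ~256 GB/s
```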
2025-09-06T04:03:44
https://www.reddit.com/r/LocalLLaMA/comments/1n9px21/building_a_new_ai_server/
Ikyo75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9px21
false
null
t3_1n9px21
/r/LocalLLaMA/comments/1n9px21/building_a_new_ai_server/
false
false
self
3
null
I built a native iOS AI client to chat with GPT, Gemini, and Local Models simultaneously, with full API parameter customization.
26
Hey r/LocalLLaMA, I was looking for a native iOS client that would let me chat with many AI models simultaneously and with deep customization. Since I couldn't find one that fit my needs perfectly, I built [LavaChat](https://apps.apple.com/us/app/lavachat-your-ai-hub/id6748080403). 🌋 **(Image 1):** The core idea is a clean, native iOS interface where you can chat with multiple AIs at once. You can send one prompt and get responses from GPT, Gemini, DeepSeek, and your own local model running on Ollama, all in the same chat. 🌋 **(Image 2):** Responses are stacked like cards. You can easily swipe through them to compare answers. Your next prompt continues the conversation with whichever AI is on top. 🌋 **(Image 3):** A clean, tab-based navigation. The far left is for chats, and right next to it is the management center for all your AI providers, models, and instances. 🌋 **(Image 4 & 5):** This is where it gets interesting. LavaChat is built for customization. * **Connect to Anything:** You can add your own API endpoints. It supports OpenAI, Anthropic, and Google API formats, which means you can connect to local models served via Ollama, llama.cpp, etc. * **Full Parameter Control:** You have granular control over **every** API parameter. If the model's API exposes it, you can tweak it—system prompts, temperature, and even model-specific JSON parameters. 🌋 **(Image 6):** Save and insert your frequently used prompts (like character sheets or complex instructions) with a single tap. 🌋 **(Image 7):** Create custom "AI Actions". For example, create a one-tap action that uses an AI to refine your prompt before sending it, or makes the AI's own response more concise. 🌋 **(Image 8):** Configure different presets for various chat scenarios. This includes context length, search/creativity toggles, and even showing/hiding specific system or AI action buttons. 🌋 **(Image 9):** Easily share and import your setups. You can export your AI instances, chat settings, or entire conversations via a file, iCloud link, or QR code. It's a free download on the App Store, and I'd love to hear your feedback. App Store Link: [https://apps.apple.com/us/app/lavachat-your-ai-hub/id6748080403](https://apps.apple.com/us/app/lavachat-your-ai-hub/id6748080403)
2025-09-06T03:54:29
https://www.reddit.com/gallery/1n9pqo7
ArtichokePretty8741
reddit.com
1970-01-01T00:00:00
0
{}
1n9pqo7
false
null
t3_1n9pqo7
/r/LocalLLaMA/comments/1n9pqo7/i_built_a_native_ios_ai_client_to_chat_with_gpt/
false
false
https://b.thumbs.redditm…15HgIyPbo-Vs.jpg
26
null
Are there any grok2 models that support SGLang --tp 4?
1
Are there any grok2 models that support SGLang --tp 4? The only official model requires eight cards (--tp 8). So I figured I would ask, because there are cards now like the NVIDIA RTX 6000 Pro (96GB VRAM) and the H200 (141GB VRAM). I don't have the skill to remix the official release into a --tp 4 variant model.
2025-09-06T03:36:36
https://www.reddit.com/r/LocalLLaMA/comments/1n9peo5/is_there_any_grok2_models_that_support_sglang_tp_4/
night0x63
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9peo5
false
null
t3_1n9peo5
/r/LocalLLaMA/comments/1n9peo5/is_there_any_grok2_models_that_support_sglang_tp_4/
false
false
self
1
null
Sonoma Sky Alpha (Grok-4.1-thinking) > Sonoma Dusk Alpha (Grok-4.1-base), but free Gemini still beats both of those pricey xAI models.
0
2025-09-06T03:28:55
https://i.redd.it/c280h0w1rgnf1.png
balianone
i.redd.it
1970-01-01T00:00:00
0
{}
1n9p9jt
false
null
t3_1n9p9jt
/r/LocalLLaMA/comments/1n9p9jt/sonoma_sky_alpha_grok41thinking_sonoma_dusk_alpha/
false
false
default
0
{'enabled': True, 'images': [{'id': 'c280h0w1rgnf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/c280h0w1rgnf1.png?width=108&crop=smart&auto=webp&s=13eaf15a0eaf98a84d0ad599b76b2207cdd17b2a', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/c280h0w1rgnf1.png?width=216&crop=smart&auto=webp&s=265fc56f8fc3f7fd57ba732a7af22192d9b4f6a4', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/c280h0w1rgnf1.png?width=320&crop=smart&auto=webp&s=d624b15f7f54fa0a31ee8eb70725a49a105555be', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/c280h0w1rgnf1.png?width=640&crop=smart&auto=webp&s=d0dd6aa287afb42e712d274d8ec1b45a292319f9', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/c280h0w1rgnf1.png?width=960&crop=smart&auto=webp&s=21952739d25185518d4c2ca19801111995eea5ad', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/c280h0w1rgnf1.png?width=1080&crop=smart&auto=webp&s=422c8115c68986435b0cd200c1c2169cfeb4abf1', 'width': 1080}], 'source': {'height': 3202, 'url': 'https://preview.redd.it/c280h0w1rgnf1.png?auto=webp&s=06d1befe259a4775916c00c33897b665dbc9336e', 'width': 1440}, 'variants': {}}]}
Can someone please benchmark gpt-oss-20b on Mi50 and P100/P40?
2
I'm on the fence about buying one of these and I need to know which one has the better prompt processing. Please, I'll stop saying mean things about your mom.
2025-09-06T02:58:23
https://www.reddit.com/r/LocalLLaMA/comments/1n9oos5/can_someone_please_benchmark_gptoss20b_on_mi50/
thejacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9oos5
false
null
t3_1n9oos5
/r/LocalLLaMA/comments/1n9oos5/can_someone_please_benchmark_gptoss20b_on_mi50/
false
false
self
2
null
Kimi K2 0905 is a beast at coding
108
So I've been working on this static website, just a side project where I can do some blogging or some fun javascript experiments, but I've been making this new component, basically implementing custom scrolling and pagination behaviours from scratch. Anyways, I was facing a bunch of tough bugs and was in complete deadlock; I even tried asking Deepseek/Gemini, and even went for one response from Opus. No luck. Then I decided to try the new Kimi, and bam. One shot, instantly solved the issue, and did it with some tastefully commented (think somewhere between Gemini and Qwen levels of comment density) and good-practice code. I was impressed, so I decided to just toss in my entire CSS/HTML skeleton as well, as a "fuck it", and when it was done, the result was so much prettier than the one I had originally. Damn, I thought, so I decided to toss it a few more problems: implement dark mode handling for the entire skeleton using only CSS and a js button, and implement another style hotswapping feature I had been thinking of. Five minutes, and they were both done flawlessly. I'm no javascript wiz, so I imagine all of that would probably have taken me around another two or three hours. With Kimi, I did it in like 10 minutes. What's more is that it cracked bugs that even the previous SOTA models, my go-tos, couldn't. Wow. I'm impressed. (Sorry, no images; the website is publicly accessible and linked to my real name, so I'd prefer not to link it to this account in any way.)
2025-09-06T02:56:29
https://www.reddit.com/r/LocalLLaMA/comments/1n9ong4/kimi_k2_0905_is_a_beast_at_coding/
adumdumonreddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9ong4
false
null
t3_1n9ong4
/r/LocalLLaMA/comments/1n9ong4/kimi_k2_0905_is_a_beast_at_coding/
false
false
self
108
null
Kimi K2-0905 is a powerhouse VS claude-sonnet-4 @20250514.
52
Been heavily building with claude-sonnet-4@20250514, but I threw $5 into OpenRouter and gave K2-0905 a try, and WOW. Not sure if it's a "better" model, but it seems to chew through tasks in a "better" way.
2025-09-06T02:55:27
https://www.reddit.com/r/LocalLLaMA/comments/1n9omqj/kimi_k20905_is_a_powerhouse_vs_claudesonnet4/
klippers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9omqj
false
null
t3_1n9omqj
/r/LocalLLaMA/comments/1n9omqj/kimi_k20905_is_a_powerhouse_vs_claudesonnet4/
false
false
self
52
null
I have found a solution to chatbot monetization - looking for builder/founder to chat/compensated trial
19
I’m a small indie builder trying to solve a problem many of us had with chatbots: subscriptions and IAPs turn a lot of users off, but “random banner ads” are also kinda ass. I've been experimenting with a different approach: show a *single, relevant* sponsored suggestion only when a user’s intent clearly matches (e.g., budgeting prompt → budgeting app; sleep/self-care prompt → a CBT or journaling app). Frequency is capped and entirely under the builder’s control, and it can be switched off at any time. I’d love to sanity-check this with folks actually shipping AI companion / entertainment chatbots (web or mobile). If you’re open to trying it, we’ll do a short, *paid* trial: small stipend for your time, you keep any ad revenue during the test, and we’ll share back what we learn (win or fail). No pressure to continue afterward. If this idea sounds off, I genuinely want to hear why; If you’re curious and want to get the trial going, DM me and I’ll send details! Thanks for reading—happy to be told this is a bad idea if that’s the consensus. I’m here to learn from folks who build these products every day.
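For concreteness, a rough sketch of what that intent gate could look like: embedding similarity against sponsor categories with a conservative threshold, plus a per-session cap. The model name and numbers here are illustrative, not the poster's implementation:

```python
# Illustrative sketch of a strict intent gate for a single sponsored
# suggestion: embed the user prompt, compare against sponsor categories,
# and only surface an ad on a confident match, capped per session.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice
categories = {
    "budgeting_app": "budgeting, saving money, tracking expenses",
    "journaling_app": "sleep, self-care, journaling, CBT exercises",
}
cat_embs = {k: model.encode(v) for k, v in categories.items()}

MAX_ADS_PER_SESSION = 1  # frequency cap, builder-controlled
ads_shown = 0

def maybe_sponsor(prompt: str, threshold: float = 0.55):
    global ads_shown
    if ads_shown >= MAX_ADS_PER_SESSION:
        return None  # cap reached: stay out of the way
    emb = model.encode(prompt)
    best = max(cat_embs, key=lambda k: util.cos_sim(emb, cat_embs[k]))
    if util.cos_sim(emb, cat_embs[best]).item() < threshold:
        return None  # intent not clear enough: show nothing
    ads_shown += 1
    return best

print(maybe_sponsor("help me plan a monthly budget"))
```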
2025-09-06T02:54:38
https://www.reddit.com/r/LocalLLaMA/comments/1n9om6e/i_have_found_a_solution_to_chatbot_monetization/
Ok-Pineapple8638
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9om6e
false
null
t3_1n9om6e
/r/LocalLLaMA/comments/1n9om6e/i_have_found_a_solution_to_chatbot_monetization/
false
false
self
19
null
Sonoma Sky Alpha vs Sonoma Dusk Alpha vs Qwen3 Max
5
2025-09-06T02:29:32
https://v.redd.it/b9631jqmggnf1
sirjoaco
/r/LocalLLaMA/comments/1n9o4ju/sonoma_sky_alpha_vs_sonoma_dusk_alpha_vs_qwen3_max/
1970-01-01T00:00:00
0
{}
1n9o4ju
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b9631jqmggnf1/DASHPlaylist.mpd?a=1759847377%2CNTc1MmZhOWZlMWEyYjljMDUzMmEwMGY2ZmVmYmNkYmQxMWViYmRkNzRjM2QwOTc0ZTgxYjkwODc0MGE2OGQzZQ%3D%3D&v=1&f=sd', 'duration': 96, 'fallback_url': 'https://v.redd.it/b9631jqmggnf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/b9631jqmggnf1/HLSPlaylist.m3u8?a=1759847377%2CODM4YTM1ZjRmMWY4Mjc2ZjExOTM4MjFlYzAwZjczNGRlMGFlYzdlNjM4MTI2ZjYwZjVhMDRlMGYyNzViMDExZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b9631jqmggnf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1788}}
t3_1n9o4ju
/r/LocalLLaMA/comments/1n9o4ju/sonoma_sky_alpha_vs_sonoma_dusk_alpha_vs_qwen3_max/
false
false
https://external-preview…89a81666aa55e58c
5
{'enabled': False, 'images': [{'id': 'M2l6Zm5pcW1nZ25mMecPaJjwpzcuivJPlsj7Idxv71gNw7nK7hkP_SxEgIDX', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/M2l6Zm5pcW1nZ25mMecPaJjwpzcuivJPlsj7Idxv71gNw7nK7hkP_SxEgIDX.png?width=108&crop=smart&format=pjpg&auto=webp&s=5accfec6d93c7597d9f9399fa18fbfb42780e967', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/M2l6Zm5pcW1nZ25mMecPaJjwpzcuivJPlsj7Idxv71gNw7nK7hkP_SxEgIDX.png?width=216&crop=smart&format=pjpg&auto=webp&s=601ffd37cbcaf0a63deaaa48b979a40d8b1b7489', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/M2l6Zm5pcW1nZ25mMecPaJjwpzcuivJPlsj7Idxv71gNw7nK7hkP_SxEgIDX.png?width=320&crop=smart&format=pjpg&auto=webp&s=593ac4a115f2c0fa0477fd12e148329d99d68412', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/M2l6Zm5pcW1nZ25mMecPaJjwpzcuivJPlsj7Idxv71gNw7nK7hkP_SxEgIDX.png?width=640&crop=smart&format=pjpg&auto=webp&s=e7e1cec9d17368e0ac75bdfd0bc2e75f58f40672', 'width': 640}, {'height': 580, 'url': 'https://external-preview.redd.it/M2l6Zm5pcW1nZ25mMecPaJjwpzcuivJPlsj7Idxv71gNw7nK7hkP_SxEgIDX.png?width=960&crop=smart&format=pjpg&auto=webp&s=e2af2b49d81f42a6ec25e326793e47366d088376', 'width': 960}, {'height': 652, 'url': 'https://external-preview.redd.it/M2l6Zm5pcW1nZ25mMecPaJjwpzcuivJPlsj7Idxv71gNw7nK7hkP_SxEgIDX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f019125b54431478b4a13c1aa21832b810d435b0', 'width': 1080}], 'source': {'height': 2168, 'url': 'https://external-preview.redd.it/M2l6Zm5pcW1nZ25mMecPaJjwpzcuivJPlsj7Idxv71gNw7nK7hkP_SxEgIDX.png?format=pjpg&auto=webp&s=9428ae35303e573bbab193691d0ab06c2fcce114', 'width': 3588}, 'variants': {}}]}
ROG Ally X with RTX 6000 Pro Blackwell Max-Q as Makeshift LLM Workstation
183
So my workstation motherboard stopped working and needed to be sent in for warranty replacement, leaving my research work and LLM workflow screwed. On a random idea, I stuck one of my RTX 6000 Blackwells into an eGPU enclosure (Aoostar AG02) and tried it on my travel device, the ROG Ally X, and it kinda blew my mind how well this makeshift temporary setup was working. Never thought I would be using my Ally for hosting 235B-parameter LLM models, yet with the GPU I was getting very good performance: 1100+ tokens/sec prefill and 25+ tokens/sec decode on Qwen3-235B-A22B-Instruct-2507 with 180K context, using a custom quant I made in ik-llama.cpp (attention projections, embeddings, lm\_head at q8\_0, expert up/gate at iq2\_kt, down at iq3\_kt, 75 GB total size). Also tested GLM 4.5 Air with unsloth's Q4\_K\_XL; it could easily run with the full 128k context. I am perplexed at how well the models all run even at PCIe 4.0 x4 on an eGPU.
2025-09-06T02:29:21
https://www.reddit.com/gallery/1n9o4em
susmitds
reddit.com
1970-01-01T00:00:00
0
{}
1n9o4em
false
null
t3_1n9o4em
/r/LocalLLaMA/comments/1n9o4em/rog_ally_x_with_rtx_6000_pro_blackwell_maxq_as/
false
false
https://b.thumbs.redditm…10f3FRgO-6QE.jpg
183
null
TIL about Design Arena, a website that compares all the vibe coding apps' design skill level
0
very cool! [designarena.ai](http://designarena.ai)
2025-09-06T02:28:43
https://i.redd.it/k34nheptggnf1.png
gpt-4-api
i.redd.it
1970-01-01T00:00:00
0
{}
1n9o3yp
false
null
t3_1n9o3yp
/r/LocalLLaMA/comments/1n9o3yp/til_about_design_arena_a_website_that_compares/
false
false
default
0
{'enabled': True, 'images': [{'id': 'k34nheptggnf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/k34nheptggnf1.png?width=108&crop=smart&auto=webp&s=5d1444fbae5b848bdb02a98bc09aca1740bb02ac', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/k34nheptggnf1.png?width=216&crop=smart&auto=webp&s=0fb92fa63abbb888d4423d6d95554dbcec312ef8', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/k34nheptggnf1.png?width=320&crop=smart&auto=webp&s=e0b2ce71b02655ee1eb7be0f7cdab484da779d90', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/k34nheptggnf1.png?width=640&crop=smart&auto=webp&s=83642c7056d9037e85d6f443d3cdd18ba5edc9f7', 'width': 640}, {'height': 541, 'url': 'https://preview.redd.it/k34nheptggnf1.png?width=960&crop=smart&auto=webp&s=41e33da245ae4eb6b1e77af774600679b7a23590', 'width': 960}, {'height': 609, 'url': 'https://preview.redd.it/k34nheptggnf1.png?width=1080&crop=smart&auto=webp&s=67d65ddbefae771e2ff4b817d8570167d7698073', 'width': 1080}], 'source': {'height': 1410, 'url': 'https://preview.redd.it/k34nheptggnf1.png?auto=webp&s=d9e1b798b2740dad10c412cfb7747b0f8b521238', 'width': 2500}, 'variants': {}}]}
Best LLM for python coding
0
I am trying to make an Anki add-on and have been using Claude Opus 4.1 for free, which seems to give me the best results so far for Python coding. Has anyone had better results with another LLM, though? Thanks
2025-09-06T02:28:29
https://www.reddit.com/r/LocalLLaMA/comments/1n9o3tb/best_llm_for_python_coding/
horraceiscool
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9o3tb
false
null
t3_1n9o3tb
/r/LocalLLaMA/comments/1n9o3tb/best_llm_for_python_coding/
false
false
self
0
null
Microsoft sucks
0
Moved my RX 7900 XT graphics card from an Ubuntu machine to a Win 11 one. Immediately dropped from 130 tokens/s on Qwen3 30B to 65 tokens/s.
2025-09-06T02:24:54
https://www.reddit.com/r/LocalLLaMA/comments/1n9o18i/microsoft_sucks/
OldEffective9726
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9o18i
false
null
t3_1n9o18i
/r/LocalLLaMA/comments/1n9o18i/microsoft_sucks/
false
false
self
0
null
TIL about Design Arena, a website that compares the design performance of all the vibe coding apps
1
2025-09-06T02:22:27
https://i.redd.it/ldmhfr86fgnf1.png
OkCartographer935
i.redd.it
1970-01-01T00:00:00
0
{}
1n9nzhc
false
null
t3_1n9nzhc
/r/LocalLLaMA/comments/1n9nzhc/til_about_design_arena_a_website_that_compares/
false
false
default
1
{'enabled': True, 'images': [{'id': 'ldmhfr86fgnf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ldmhfr86fgnf1.png?width=108&crop=smart&auto=webp&s=f124773cc5a7d9ea248ae4b2f29e05cfd6fcd488', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ldmhfr86fgnf1.png?width=216&crop=smart&auto=webp&s=8af06abd621ee88be88ed888763b8946fb3bd06c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/ldmhfr86fgnf1.png?width=320&crop=smart&auto=webp&s=a9f0a0f735cbcde0d5d090766bc636d8b1b40c48', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/ldmhfr86fgnf1.png?width=640&crop=smart&auto=webp&s=f04156100c33b2fa30438571f73bff826c7ed9a8', 'width': 640}, {'height': 541, 'url': 'https://preview.redd.it/ldmhfr86fgnf1.png?width=960&crop=smart&auto=webp&s=d846c629a366d99ca1e8284d21c3aaace4b30061', 'width': 960}, {'height': 609, 'url': 'https://preview.redd.it/ldmhfr86fgnf1.png?width=1080&crop=smart&auto=webp&s=af7dca96672c424d2f61e3cfb3bcb215eb32e94b', 'width': 1080}], 'source': {'height': 1410, 'url': 'https://preview.redd.it/ldmhfr86fgnf1.png?auto=webp&s=0c6ff9276ee856d83d3141bf883be8b044b4c6a4', 'width': 2500}, 'variants': {}}]}
OpenRouter introduces new stealth models with a 2 million context window
314
2025-09-06T01:35:50
https://i.redd.it/mvy1r1af7gnf1.png
Outside-Iron-8242
i.redd.it
1970-01-01T00:00:00
0
{}
1n9n1qo
false
null
t3_1n9n1qo
/r/LocalLLaMA/comments/1n9n1qo/openrouter_introduces_new_stealth_models_with_a_2/
false
false
default
314
{'enabled': True, 'images': [{'id': 'mvy1r1af7gnf1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/mvy1r1af7gnf1.png?width=108&crop=smart&auto=webp&s=96de85d093665f31e1ad3609e344a4efaea5be74', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/mvy1r1af7gnf1.png?width=216&crop=smart&auto=webp&s=fa9986f524f5b746b18d1626cab9817c01b87bb1', 'width': 216}, {'height': 292, 'url': 'https://preview.redd.it/mvy1r1af7gnf1.png?width=320&crop=smart&auto=webp&s=5e3b1f1c14e3a8bc8bc726fa2b18c8192139fdb5', 'width': 320}, {'height': 585, 'url': 'https://preview.redd.it/mvy1r1af7gnf1.png?width=640&crop=smart&auto=webp&s=4500f29aeab7367202cd2301a150d638a8167820', 'width': 640}, {'height': 878, 'url': 'https://preview.redd.it/mvy1r1af7gnf1.png?width=960&crop=smart&auto=webp&s=18cedd7cd5868c8b3ea8175d7dbdce3efe3cb7d7', 'width': 960}, {'height': 988, 'url': 'https://preview.redd.it/mvy1r1af7gnf1.png?width=1080&crop=smart&auto=webp&s=58fd87c336a3c356da42b8ed15714d86837bc477', 'width': 1080}], 'source': {'height': 988, 'url': 'https://preview.redd.it/mvy1r1af7gnf1.png?auto=webp&s=1855bc7be10368bb185c1994feb9e6b59ae421be', 'width': 1080}, 'variants': {}}]}
OpenRouter introduces new stealth models w/ a 2 mill context window
1
2025-09-06T01:35:07
https://i.redd.it/vd5j3wi27gnf1.png
Outside-Iron-8242
i.redd.it
1970-01-01T00:00:00
0
{}
1n9n17c
false
null
t3_1n9n17c
/r/LocalLLaMA/comments/1n9n17c/openrouter_introduces_new_stealth_models_w_a_2/
false
false
default
1
{'enabled': True, 'images': [{'id': 'vd5j3wi27gnf1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/vd5j3wi27gnf1.png?width=108&crop=smart&auto=webp&s=0d1217ace578759b585c58ed64a8555d22ef7cd9', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/vd5j3wi27gnf1.png?width=216&crop=smart&auto=webp&s=291dd12468ff7bf38d3812e2ed64a917899915f5', 'width': 216}, {'height': 292, 'url': 'https://preview.redd.it/vd5j3wi27gnf1.png?width=320&crop=smart&auto=webp&s=d819343a0b022d909480d6af202b4193bb506cec', 'width': 320}, {'height': 585, 'url': 'https://preview.redd.it/vd5j3wi27gnf1.png?width=640&crop=smart&auto=webp&s=1a4ca5733543690894fa329c14b4c8260c0c2f56', 'width': 640}, {'height': 878, 'url': 'https://preview.redd.it/vd5j3wi27gnf1.png?width=960&crop=smart&auto=webp&s=76446fd8bd745b9985693ca206534472edb86a5a', 'width': 960}, {'height': 988, 'url': 'https://preview.redd.it/vd5j3wi27gnf1.png?width=1080&crop=smart&auto=webp&s=76eef8d7610316d08923910a3679b8ad889867d9', 'width': 1080}], 'source': {'height': 988, 'url': 'https://preview.redd.it/vd5j3wi27gnf1.png?auto=webp&s=345aa7c99e2b6bb12cb5d7bc1f48dcce7c649448', 'width': 1080}, 'variants': {}}]}
Fully Annotated Guide to "What are Diffusion Models?"
6
Diffusion models are the de facto standard for image generation. Lilian Weng’s “What Are Diffusion Models?” is an excellent introduction to it, but readers without a solid mathematical background may struggle. This article fills that gap with clear, step‑by‑step derivations and explanations. https://ki-seki.github.io/posts/250902-diffusion-annotated/
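For a taste of the level the annotations aim at: the forward noising process and its closed-form shortcut, in standard DDPM notation (reproduced here from the standard formulation, not quoted from the article):

```latex
% Forward diffusion: each step adds Gaussian noise with schedule \beta_t.
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right)
% With \alpha_t = 1-\beta_t and \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s,
% x_t can be sampled directly from x_0 in one shot:
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) \mathbf{I}\right)
```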
2025-09-06T00:50:38
https://www.reddit.com/r/LocalLLaMA/comments/1n9m4wa/fully_annotated_guide_to_what_are_diffusion_models/
song-sc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9m4wa
false
null
t3_1n9m4wa
/r/LocalLLaMA/comments/1n9m4wa/fully_annotated_guide_to_what_are_diffusion_models/
false
false
self
6
null
Do you guys trust Andrej Karpathy?
0
2025-09-06T00:12:51
https://i.redd.it/3ml8p2h7sfnf1.png
balianone
i.redd.it
1970-01-01T00:00:00
0
{}
1n9lcnq
false
null
t3_1n9lcnq
/r/LocalLLaMA/comments/1n9lcnq/do_you_guys_trust_andrej_karpathy/
false
false
default
0
{'enabled': True, 'images': [{'id': '3ml8p2h7sfnf1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/3ml8p2h7sfnf1.png?width=108&crop=smart&auto=webp&s=15715441075133580d6336a7743404b54201d479', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/3ml8p2h7sfnf1.png?width=216&crop=smart&auto=webp&s=ed0fee8167af37dc08e30ac7f1f572b9f6754a9c', 'width': 216}, {'height': 158, 'url': 'https://preview.redd.it/3ml8p2h7sfnf1.png?width=320&crop=smart&auto=webp&s=bbc64faa0ea38377e2e15b6b04e3b5d584899941', 'width': 320}, {'height': 317, 'url': 'https://preview.redd.it/3ml8p2h7sfnf1.png?width=640&crop=smart&auto=webp&s=7d4e064005045f892c34a31fa59e7b82f5e52628', 'width': 640}, {'height': 476, 'url': 'https://preview.redd.it/3ml8p2h7sfnf1.png?width=960&crop=smart&auto=webp&s=0c24d5e6ac19eb90582b664f1f9568b5ce4a1b3e', 'width': 960}], 'source': {'height': 477, 'url': 'https://preview.redd.it/3ml8p2h7sfnf1.png?auto=webp&s=7f805c455fa9dd616958053d7337a160755a8783', 'width': 961}, 'variants': {}}]}
VDM: A self-hosted Virtual Dungeon Master for multiplayer RPGs or Storytelling.
1
[removed]
2025-09-05T23:58:09
https://www.reddit.com/r/LocalLLaMA/comments/1n9l1ms/vdm_a_selfhosted_virtual_dungeon_master_for/
NighthawkXL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9l1ms
false
null
t3_1n9l1ms
/r/LocalLLaMA/comments/1n9l1ms/vdm_a_selfhosted_virtual_dungeon_master_for/
false
false
self
1
null
New post flair: "local only"
198
A new post flair has been created, "local only". Please use this flair on new posts to denote: * Your post is about **local** LLM technology, * Comments should be focused primarily on **local** LLM technology. If your main interest in this subreddit is to read about / discuss local LLM technology, you can filter your view through the "local only" flair [like so,](https://www.reddit.com/r/LocalLLaMA/search?q=flair%3A%22local+only%22) and all of the "noise" about closed models, API costs, etc will become hidden from view.
2025-09-05T23:51:46
https://www.reddit.com/r/LocalLLaMA/comments/1n9kwwr/new_post_flair_local_only/
ttkciar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9kwwr
false
null
t3_1n9kwwr
/r/LocalLLaMA/comments/1n9kwwr/new_post_flair_local_only/
false
false
self
198
null
Anyone else annoyed how LLMs always assume bad faith?
0
Especially Claude or ChatGPT: ask a question that could be interpreted multiple ways, and it often assumes you're trying to do something bad without any evidence. And not even obvious things like violence or such. Gives me dystopian vibes, considering these companies break so many laws themselves.
2025-09-05T23:49:51
https://www.reddit.com/r/LocalLLaMA/comments/1n9kvfi/anyone_else_annoyed_how_llms_always_assume_bad/
One_Long_996
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9kvfi
false
null
t3_1n9kvfi
/r/LocalLLaMA/comments/1n9kvfi/anyone_else_annoyed_how_llms_always_assume_bad/
false
false
self
0
null
In the event of an apocalypse, humanity's future survival is likely far more secured now thanks to locally run LLMs.
1
[removed]
2025-09-05T23:29:16
https://www.reddit.com/r/LocalLLaMA/comments/1n9kfdk/in_the_event_of_an_apocalypse_humanitys_future/
Spanky2k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9kfdk
false
null
t3_1n9kfdk
/r/LocalLLaMA/comments/1n9kfdk/in_the_event_of_an_apocalypse_humanitys_future/
false
false
self
1
null
Help me with building llama
0
I'm new to AI things... I also started programming very recently, so I'm heavily dependent on ChatGPT. ChatGPT got on my nerves; it made me download the same libraries multiple times. Not blaming it, but is there a dedicated video that might help beginners get a local LLM running? Also, if anyone has built one and has a public repo, please share it so I can learn something... thanks
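If it helps any beginner landing here: "building llama" usually just means running an existing GGUF model locally. A minimal sketch with llama-cpp-python; the model path is a placeholder, so download any GGUF from Hugging Face first:

```python
# Minimal local LLM sketch using llama-cpp-python.
# pip install llama-cpp-python, then point model_path at any GGUF file
# you have downloaded (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/qwen2.5-3b-instruct-q4_k_m.gguf",  # placeholder
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```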
2025-09-05T21:45:42
https://www.reddit.com/r/LocalLLaMA/comments/1n9i0p6/help_me_with_building_llama/
disco_767
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9i0p6
false
null
t3_1n9i0p6
/r/LocalLLaMA/comments/1n9i0p6/help_me_with_building_llama/
false
false
self
0
null
When LLMs Grow Hands and Feet, How to Design our Agentic RL Systems?
3
Lately I’ve been building AI agents for scientific research. In addition to building a better agent scaffold, to make AI agents truly useful, LLMs need to do more than just think—they need to **use tools, run code, and interact with complex environments**. That’s why we need **Agentic RL**. While working on this, I noticed that the underlying RL systems must evolve to support these new capabilities, so I wrote a blog post to capture my thoughts and lessons learned:  **“When LLMs Grow Hands and Feet, How to Design our Agentic RL Systems?”** https://preview.redd.it/abgto1kb2fnf1.png?width=1656&format=png&auto=webp&s=cac5e0e3e7f51e94c0f6534bbb3741372bb6b82a TL;DR: The frontier of AI is moving from simple response generation to solving complex, multi-step problems through agents. Previous RL frameworks for LLMs aren’t built for this—they struggle with the heavy, heterogeneous resource demands that agents bring, like isolated environments or tool interactions. In the blog, I cover: * How RL for LLM-based agents differs from traditional RL for LLMs. * The critical system challenges when scaling agentic RL. * Emerging solutions top labs and companies are using. If you’re interested in agentic intelligence—LLMs that don’t just think but act—I go into the nuts and bolts of what it takes to make this work in practice. [https://amberljc.github.io/blog/2025-09-05-agentic-rl-systems.html](https://amberljc.github.io/blog/2025-09-05-agentic-rl-systems.html)
2025-09-05T21:45:16
https://www.reddit.com/r/LocalLLaMA/comments/1n9i0b8/when_llms_grow_hands_and_feet_how_to_design_our/
Pleasant-Type2044
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9i0b8
false
null
t3_1n9i0b8
/r/LocalLLaMA/comments/1n9i0b8/when_llms_grow_hands_and_feet_how_to_design_our/
false
false
https://b.thumbs.redditm…0A_4d6nBl2CA.jpg
3
null
How to locally run bigger models like qwen3 coder 480b
15
I already have a 5090 and was researching what I would need to host something like Qwen3 Coder locally at OK speeds. Together with some research, I came up with this:

| Part | Model | Est. EU Price (incl. VAT) |
|---|---|---|
| Motherboard | Supermicro H13DSH (dual SP5, 24 DIMM slots) | ~€1,320 |
| CPUs | 2 × AMD EPYC 9124 (16c, 2P-capable) | ~€2,300 (both) |
| RAM | 24 × 32 GB DDR5-4800 ECC RDIMM (768 GB total) | ~€1,700–1,900 |
| Coolers | 2 × Supermicro SNK-P0083AP4 (SP5) | ~€200 |
| Case | SilverStone ALTA D1 (SSI-EEB tower) | ~€730 |
| PSU | Seasonic PRIME TX-1600 (ATX 3.1) | ~€500 |
| Storage | 2 × 2 TB NVMe PCIe 4.0 (mirror) | ~€300 |

Total (without the GPU, which I already have): ~€6,750–7,000

What I'm not sure about is how many tokens/s I could expect; the only estimate I've seen is something like 20–70 tokens/s, and that's a huge range.
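For a rough sanity check on that 20–70 range: CPU decode speed is roughly memory bandwidth divided by bytes read per token. A back-of-the-envelope sketch, assuming ~35B active parameters for the MoE model at ~4.5 bits/weight and the theoretical bandwidth of 24 channels of DDR5-4800 (real-world numbers will be lower):

```python
# Back-of-the-envelope decode estimate for a MoE model on CPU.
# Assumptions (not measurements): 35B active params, ~4.5 bits/weight
# effective at Q4, 24 channels of DDR5-4800 at theoretical bandwidth.
channels = 24
bandwidth_gbs = channels * 4800e6 * 8 / 1e9       # ~921.6 GB/s theoretical
active_params = 35e9                              # active experts per token
bytes_per_param = 4.5 / 8                         # ~Q4 quantization
bytes_per_token = active_params * bytes_per_param # ~19.7 GB read per token

print(f"theoretical ceiling: {bandwidth_gbs / bytes_per_token:.1f} tok/s")
# Real systems hit maybe 50-70% of theoretical bandwidth, and NUMA
# effects on dual-socket boards cut further, hence the wide 20-70 range.
```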
2025-09-05T21:23:24
https://www.reddit.com/r/LocalLLaMA/comments/1n9hh6m/how_to_locally_run_bigger_models_like_qwen3_coder/
anedisi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9hh6m
false
null
t3_1n9hh6m
/r/LocalLLaMA/comments/1n9hh6m/how_to_locally_run_bigger_models_like_qwen3_coder/
false
false
self
15
null
VibeVoice came back. Though many may not like it.
143
[VibeVoice](https://github.com/microsoft/VibeVoice) has returned(*not* VibeVoice-large); however, Microsoft plans to implement censorship due to people's "misuse of research". Here's the quote from the repo: > What types of censorship will be implemented? And couldn’t people just use or share older, unrestricted versions they've already downloaded? That's going to be interesting...
2025-09-05T21:19:36
https://www.reddit.com/r/LocalLLaMA/comments/1n9hduk/vibevoice_came_back_though_many_may_not_like_it/
Fresh_Sun_1017
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9hduk
false
null
t3_1n9hduk
/r/LocalLLaMA/comments/1n9hduk/vibevoice_came_back_though_many_may_not_like_it/
false
false
self
143
{'enabled': False, 'images': [{'id': 'WuF5WcE35gbJCEZFBUpm-e_RZySBcC6GD3D_4eG_m0c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WuF5WcE35gbJCEZFBUpm-e_RZySBcC6GD3D_4eG_m0c.png?width=108&crop=smart&auto=webp&s=0c3cdb7fde76d863052f764dc10dc00dbfa37924', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WuF5WcE35gbJCEZFBUpm-e_RZySBcC6GD3D_4eG_m0c.png?width=216&crop=smart&auto=webp&s=731a5762d13b1a9dc918b9c5bd1a690053c96e0b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WuF5WcE35gbJCEZFBUpm-e_RZySBcC6GD3D_4eG_m0c.png?width=320&crop=smart&auto=webp&s=a281eef6b70543f60699319e229aed0031966244', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WuF5WcE35gbJCEZFBUpm-e_RZySBcC6GD3D_4eG_m0c.png?width=640&crop=smart&auto=webp&s=e7f0cb3322191a510cb2071cb52ece7122fc8c29', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WuF5WcE35gbJCEZFBUpm-e_RZySBcC6GD3D_4eG_m0c.png?width=960&crop=smart&auto=webp&s=3de291f26016c485a53ff57df74c4b63e429dd09', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WuF5WcE35gbJCEZFBUpm-e_RZySBcC6GD3D_4eG_m0c.png?width=1080&crop=smart&auto=webp&s=69e4ede0335073509974c0dde6a39350e9fe6698', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WuF5WcE35gbJCEZFBUpm-e_RZySBcC6GD3D_4eG_m0c.png?auto=webp&s=8478a4484152ce37c0e4873371fa9790615490d5', 'width': 1200}, 'variants': {}}]}
Is a "swarm mind" of local LLM agents possible?
5
Hey, apologies if this is a dumb question. I've been working with LLMs pulled from Ollama for a while now, and I've been planning a project where I can use the combined strengths of several models: code-gen models, document summarization models, and a general model for chat. I want the models to work in sync with each other, with a memory management layer around the chats so that each model can, in a way, "pass the baton" of context to the next model seamlessly. I've implemented a barebones version of this, but the issue is the latency. Currently, the implementation is a glorified Ollama wrapper written in Python. I want to dig deeper and engineer a solution that makes different models work together cohesively. Is this idea possible, or am I on a wild goose chase? Help me out of the "Valley of Despair"!
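It's definitely possible; the simplest version is a router plus one shared message history that every model sees. A minimal sketch of the baton pass, assuming the Ollama Python client (the model names are placeholders for whatever you have pulled):

```python
# Minimal "baton pass" sketch: one shared history, a keyword router
# that picks a specialist model per turn. Model names are assumptions;
# swap in whatever you have pulled. pip install ollama
import ollama

SPECIALISTS = {
    "code": "qwen2.5-coder:7b",
    "summarize": "llama3.1:8b",
    "chat": "llama3.1:8b",
}

def route(prompt: str) -> str:
    # Naive keyword router; a small classifier model works better.
    if any(w in prompt.lower() for w in ("code", "bug", "function")):
        return SPECIALISTS["code"]
    if "summarize" in prompt.lower():
        return SPECIALISTS["summarize"]
    return SPECIALISTS["chat"]

history = []  # the shared memory layer every model sees

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = ollama.chat(model=route(prompt), messages=history)["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Summarize why MoE models decode fast"))
print(ask("Now write code for a byte-pair tokenizer stub"))
```

Much of the latency in a setup like this typically comes from Ollama swapping models in and out of VRAM between turns; keeping the specialists resident (the `keep_alive` option, or raising `OLLAMA_MAX_LOADED_MODELS`) is usually the first fix.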
2025-09-05T21:00:00
https://www.reddit.com/r/LocalLLaMA/comments/1n9gw93/is_a_swarm_mind_of_local_llm_agents_possible/
dramaticalllama123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9gw93
false
null
t3_1n9gw93
/r/LocalLLaMA/comments/1n9gw93/is_a_swarm_mind_of_local_llm_agents_possible/
false
false
self
5
null
trouble with disabling thinking on ollama
0
Hey guys, so I installed gpt-oss 20b, and when I type `set nothink` it doesn't disable thinking, and I was wondering why that is, since when I tried it with Qwen it works. Can someone help? Thanks. (I installed it from Ollama and run it through the terminal; I have enough VRAM for the 20B model.)
2025-09-05T20:54:46
https://www.reddit.com/r/LocalLLaMA/comments/1n9grdx/trouble_with_disabling_thinking_on_ollama/
poland83742
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9grdx
false
null
t3_1n9grdx
/r/LocalLLaMA/comments/1n9grdx/trouble_with_disabling_thinking_on_ollama/
false
false
self
0
null
Frontend for my custom built RAG running a chromadb collection inside docker.
2
I tried many solutions, such as Open WebUI, AnythingLLM, and the Vercel AI chatbot, all cloned from GitHub. The problem is that most chatbot UIs insist the API request be styled like OpenAI's, which is way too much for me, and to be honest I really don't feel like rewriting that part of a cloned repo. I just need something pretty that can preferably be run in Docker, ideally coming with its own docker-compose YAML, which I will then connect to my RAG inside another container on the same network. Most popular solutions did not implement simple plug-and-play with your own vector DB, and that is something I found out far too late, searching through GitHub issues after I had already cloned the repos. So I decided to just treat the prospective UI as a glorified curl-like request sender. I know I can just run the projects and add the documents as I go; the problem is we are making a knowledge-based solution platform for our employees, and I went to great lengths to prepare an adequate prompt, convert the files to Markdown with MarkItDown, and chunk with LangChain's Markdown text splitter, which also has a sweet spot for grabbing the specified top\_k results for improved inference. The thing works great, but I can't exactly ask non-tech people to query the vector store from my Jupyter notebook :) I am not that good with frontend and have barely dabbled in JavaScript, so I hoped there exists an alternative that is straightforward and won't require me to dig through a huge codebase to make it fit my needs. Thank you for reading.
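For the "glorified curl request sender" use case, a ~20-line Gradio app in its own container is often enough. A sketch, assuming the RAG service exposes a simple POST endpoint; the URL and JSON shape are placeholders for whatever your container actually serves:

```python
# Minimal chat frontend for a custom RAG backend. The endpoint URL and
# request/response shape are placeholders; adapt to your container's API.
# pip install gradio requests
import gradio as gr
import requests

RAG_URL = "http://rag-backend:8000/query"  # same docker network, placeholder

def chat_fn(message, history):
    # Forward the user message to the RAG service and return its answer.
    resp = requests.post(RAG_URL, json={"query": message}, timeout=120)
    resp.raise_for_status()
    return resp.json()["answer"]  # placeholder response field

demo = gr.ChatInterface(chat_fn, title="Knowledge Base")
demo.launch(server_name="0.0.0.0", server_port=7860)
```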
2025-09-05T20:42:15
https://www.reddit.com/r/LocalLLaMA/comments/1n9gg2k/frontend_for_my_custom_built_rag_running_a/
SemperPistos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9gg2k
false
null
t3_1n9gg2k
/r/LocalLLaMA/comments/1n9gg2k/frontend_for_my_custom_built_rag_running_a/
false
false
self
2
null
Anthropic to pay $1.5 billion to authors in landmark AI settlement
670
2025-09-05T20:41:52
https://www.theverge.com/anthropic/773087/anthropic-to-pay-1-5-billion-to-authors-in-landmark-ai-settlement
cpldcpu
theverge.com
1970-01-01T00:00:00
0
{}
1n9gfpt
false
null
t3_1n9gfpt
/r/LocalLLaMA/comments/1n9gfpt/anthropic_to_pay_15_billion_to_authors_in/
false
false
default
670
{'enabled': False, 'images': [{'id': '2giFHQHB-5T6ma6XiIR2StAHVaV1z6nAKhfbARNarkE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2giFHQHB-5T6ma6XiIR2StAHVaV1z6nAKhfbARNarkE.jpeg?width=108&crop=smart&auto=webp&s=56093fe787f6d965c3417ff456743838ecd84172', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/2giFHQHB-5T6ma6XiIR2StAHVaV1z6nAKhfbARNarkE.jpeg?width=216&crop=smart&auto=webp&s=b88954900b26664052361e91306a5f80de4db8f8', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/2giFHQHB-5T6ma6XiIR2StAHVaV1z6nAKhfbARNarkE.jpeg?width=320&crop=smart&auto=webp&s=2c32387c797471a5a41a5a118cc5a1a394d0bbde', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/2giFHQHB-5T6ma6XiIR2StAHVaV1z6nAKhfbARNarkE.jpeg?width=640&crop=smart&auto=webp&s=d341db7b1d508270a9cf44051e58699980b97ccb', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/2giFHQHB-5T6ma6XiIR2StAHVaV1z6nAKhfbARNarkE.jpeg?width=960&crop=smart&auto=webp&s=57b9dcbfa6d7085cb356372255575332f22e1e9a', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/2giFHQHB-5T6ma6XiIR2StAHVaV1z6nAKhfbARNarkE.jpeg?width=1080&crop=smart&auto=webp&s=93e3b20dc7e8a06e3b05ff039d176ead90110145', 'width': 1080}], 'source': {'height': 624, 'url': 'https://external-preview.redd.it/2giFHQHB-5T6ma6XiIR2StAHVaV1z6nAKhfbARNarkE.jpeg?auto=webp&s=6c6e40e3d24796fedc5c029eef26356400362274', 'width': 1200}, 'variants': {}}]}
CLI program made for gpt-oss
0
When gpt-oss came out, I wanted to make a CLI program JUST for gpt-oss. My main goal was to make gpt-oss's tool calling as good as possible. It has been a while and others may have beat me to it, but the project is finally in a state that seems ready for others to try. Tool calling is solid and the model did quite well when tasked to deep dive code repositories or the web. **You need to provide a Chat Completions endpoint** *(e.g. llama.cpp, vLLM, ollama)*. I hope you find this project useful. P.S. the project is currently not fully open-source and there are limits for tool calls🗿. [https://github.com/buchuleaf/fry-cli](https://github.com/buchuleaf/fry-cli)
2025-09-05T20:37:15
https://www.reddit.com/r/LocalLLaMA/comments/1n9gbj7/cli_program_made_for_gptoss/
user4378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9gbj7
false
null
t3_1n9gbj7
/r/LocalLLaMA/comments/1n9gbj7/cli_program_made_for_gptoss/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ZM5K_oRa7cAETXofmg6OQf4BcNX-jp28BwzzXesvTME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZM5K_oRa7cAETXofmg6OQf4BcNX-jp28BwzzXesvTME.png?width=108&crop=smart&auto=webp&s=dac087164182fb6c41b602d0de3c2fe97c55a236', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZM5K_oRa7cAETXofmg6OQf4BcNX-jp28BwzzXesvTME.png?width=216&crop=smart&auto=webp&s=ee2cc2b88d4b90edeb08012dd6bf0e9a04c23035', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZM5K_oRa7cAETXofmg6OQf4BcNX-jp28BwzzXesvTME.png?width=320&crop=smart&auto=webp&s=95887cfac2bcc9704de383716b77380211393183', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZM5K_oRa7cAETXofmg6OQf4BcNX-jp28BwzzXesvTME.png?width=640&crop=smart&auto=webp&s=ce01bc527b5e9918bc71a0bb4f30067e698a211c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZM5K_oRa7cAETXofmg6OQf4BcNX-jp28BwzzXesvTME.png?width=960&crop=smart&auto=webp&s=4d84dd504c9a412b99788e350e31640ba1e881c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZM5K_oRa7cAETXofmg6OQf4BcNX-jp28BwzzXesvTME.png?width=1080&crop=smart&auto=webp&s=98adaf49f320a8cf7edca18edd5bd230350a1cb4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZM5K_oRa7cAETXofmg6OQf4BcNX-jp28BwzzXesvTME.png?auto=webp&s=be80a61241c60708658f374f334681612c145195', 'width': 1200}, 'variants': {}}]}
Has AI gotten to the point where it can code itself?
0
I've been messing with things like Cursor and Windsurf lately, and Hugging Face, and it's gotten rather good at code (coming from someone who doesn't know any). I've built a couple of working pieces of software for myself just using Cursor; my favorite thing is a deduper that automatically stitches input videos and edits them onto a main video, using ffmpeg and Cursor to adapt it to my needs. Anyway, I say all that to ask this: for the people who actually know code, could AI code another LLM at this point? What goes into making an LLM from scratch?
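On the "what goes into making an LLM from scratch" part: at its core it's a tokenizer, a stack of transformer blocks, and a next-token prediction loop. A toy sketch of that skeleton, character-level and deliberately undersized (real models differ mainly in scale, data, and post-training):

```python
# Toy sketch of what an LLM "is": embeddings, causal transformer
# layers, and a next-token cross-entropy loss. Deliberately tiny.
import torch
import torch.nn as nn

text = "hello world, this is a tiny corpus for a toy language model. " * 50
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class TinyLM(nn.Module):
    def __init__(self, vocab, d=64, heads=4, layers=2, ctx=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)        # token embeddings
        self.pos = nn.Embedding(ctx, d)          # learned positions
        block = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(d, vocab)          # next-token logits

    def forward(self, idx):
        pos = torch.arange(idx.size(1))
        x = self.emb(idx) + self.pos(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(idx.size(1))
        return self.head(self.blocks(x, mask=mask))  # causal attention

model = TinyLM(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
for step in range(200):  # next-token prediction is the entire objective
    i = torch.randint(0, len(data) - 65, (8,))
    x = torch.stack([data[j:j + 64] for j in i])
    y = torch.stack([data[j + 1:j + 65] for j in i])
    loss = nn.functional.cross_entropy(model(x).transpose(1, 2), y)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final loss: {loss.item():.3f}")
```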
2025-09-05T20:25:30
https://www.reddit.com/r/LocalLLaMA/comments/1n9g0vu/has_ai_gotten_to_the_point_where_it_can_code/
Thin_Dot_7882
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9g0vu
false
null
t3_1n9g0vu
/r/LocalLLaMA/comments/1n9g0vu/has_ai_gotten_to_the_point_where_it_can_code/
false
false
self
0
null
An Open-Source, Configurable Deepthink Reasoning System That Performs the Same as Gemini Deepthink (Gold Medal at IMO 2025)
74
2025-09-05T20:09:03
https://v.redd.it/jhjamaojkenf1
Ryoiki-Tokuiten
/r/LocalLLaMA/comments/1n9flux/an_opensource_configurable_deepthink_reasoning/
1970-01-01T00:00:00
0
{}
1n9flux
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jhjamaojkenf1/DASHPlaylist.mpd?a=1759824551%2CNjE1NjRlYTE2NWJiNDI3OTM1NjM3MTMwZWFlYjljZDc0YzE0NzllZGU0ZTQ2ZWQxMWNmYTViODFhOWY0ZjFlMw%3D%3D&v=1&f=sd', 'duration': 430, 'fallback_url': 'https://v.redd.it/jhjamaojkenf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/jhjamaojkenf1/HLSPlaylist.m3u8?a=1759824551%2CMGY4MWM3ZGU2Yzc2Nzk3ZmU1ODY1Y2I5MWIxMTQwNTc4OGRlOTEyNTQ0YTBmN2I3MDkzYzgwYjYyMGRlNTE1OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jhjamaojkenf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1n9flux
/r/LocalLLaMA/comments/1n9flux/an_opensource_configurable_deepthink_reasoning/
false
false
https://external-preview…6c8027d5baff6ee7
74
{'enabled': False, 'images': [{'id': 'ZzdnNGplb2prZW5mMQYvwHNGGuNUN8or0wrdAaTg9BfB31Jlu0HUBZSFT4Gi', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZzdnNGplb2prZW5mMQYvwHNGGuNUN8or0wrdAaTg9BfB31Jlu0HUBZSFT4Gi.png?width=108&crop=smart&format=pjpg&auto=webp&s=be5c417f0dab0a0bd761ebbdbdf559575e365f59', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZzdnNGplb2prZW5mMQYvwHNGGuNUN8or0wrdAaTg9BfB31Jlu0HUBZSFT4Gi.png?width=216&crop=smart&format=pjpg&auto=webp&s=e8cf04c1dbf12a8d68d6202e252f62551cc4098e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZzdnNGplb2prZW5mMQYvwHNGGuNUN8or0wrdAaTg9BfB31Jlu0HUBZSFT4Gi.png?width=320&crop=smart&format=pjpg&auto=webp&s=104fb9ee85946926ec7b0876f6d64a2260f69a07', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZzdnNGplb2prZW5mMQYvwHNGGuNUN8or0wrdAaTg9BfB31Jlu0HUBZSFT4Gi.png?width=640&crop=smart&format=pjpg&auto=webp&s=6a07e48b4496144892b2ac1b1fcc51c5395bab3d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZzdnNGplb2prZW5mMQYvwHNGGuNUN8or0wrdAaTg9BfB31Jlu0HUBZSFT4Gi.png?width=960&crop=smart&format=pjpg&auto=webp&s=87a4a33fcb47b576d1151031337ac86bce6a5f40', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZzdnNGplb2prZW5mMQYvwHNGGuNUN8or0wrdAaTg9BfB31Jlu0HUBZSFT4Gi.png?width=1080&crop=smart&format=pjpg&auto=webp&s=371cbdf37662c0617bbf76f6598478de64b4ba1c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZzdnNGplb2prZW5mMQYvwHNGGuNUN8or0wrdAaTg9BfB31Jlu0HUBZSFT4Gi.png?format=pjpg&auto=webp&s=b0b37003a8c7c377f697d55a2fc24408b9e6afe0', 'width': 1920}, 'variants': {}}]}
Best really lightweight coding model for very basic questions?
1
Sometimes I don't want to waste tokens on a larger remote LLM when I have a very standard question. I could just ask any model, but I'd rather have a very small model that I can skip to quickly and that was purposefully trained with coding in mind. I did a search and couldn't find anything current; it's all pretty outdated. Any recommendations/thoughts in general?
2025-09-05T19:56:53
https://www.reddit.com/r/LocalLLaMA/comments/1n9faly/best_really_lightweight_coding_model_for_very/
Jattoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9faly
false
null
t3_1n9faly
/r/LocalLLaMA/comments/1n9faly/best_really_lightweight_coding_model_for_very/
false
false
self
1
null
Real uses cases with small open models
3
I’ve been using local models for a while. They are fun to use for small experiments, basic conversations, and simple coding Q&A. I was wondering if anybody in the community uses small open-weight models beyond that. It would be nice to learn about more use cases!
2025-09-05T19:40:29
https://www.reddit.com/r/LocalLLaMA/comments/1n9evw5/real_uses_cases_with_small_open_models/
LemonsAreGoodForYou
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9evw5
false
null
t3_1n9evw5
/r/LocalLLaMA/comments/1n9evw5/real_uses_cases_with_small_open_models/
false
false
self
3
null
I created an AI-based chatbot for my girlfriend. She was having difficulties with work, so I prompt-engineered the bot to her needs; it turned out very well and she really liked it. Now I'm thinking of scaling it: what niche would you suggest?
0
So I created an AI-based chatbot for my girlfriend, as she told me she was having difficulties with work and sometimes felt confused by decisions and stuff. I coded a mobile app for her with various topics like work, a general bot, a feminist girl, and a bot of mine she can talk to if I am not around that speaks just like me. She really loved the app, so I thought: why not scale it for people who want it? But as I created this specifically for her, and now want to create it for other people, I have no idea what they actually face issues with: could be depression, a friend bot, work, or just bitching to the bot. So can you guys help me with it?
2025-09-05T19:13:01
https://www.reddit.com/r/LocalLLaMA/comments/1n9e74t/i_created_an_aibased_chatbot_for_my_girlfriend/
Abject_Werewolf7711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9e74t
false
null
t3_1n9e74t
/r/LocalLLaMA/comments/1n9e74t/i_created_an_aibased_chatbot_for_my_girlfriend/
false
false
self
0
null
Made Qwen3V but I messed up.
8
https://preview.redd.it/…rce=chatgpt.com)
2025-09-05T19:11:57
https://www.reddit.com/r/LocalLLaMA/comments/1n9e65f/made_qwen3v_but_i_messed_up/
No-Compote-6794
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9e65f
false
null
t3_1n9e65f
/r/LocalLLaMA/comments/1n9e65f/made_qwen3v_but_i_messed_up/
false
false
https://b.thumbs.redditm…xJ9TW-0XxqSs.jpg
8
null
Made Qwen3V but I messed up.
1
https://preview.redd.it/…icen/tiny-qwen)
2025-09-05T19:05:12
https://www.reddit.com/r/LocalLLaMA/comments/1n9e045/made_qwen3v_but_i_messed_up/
No-Compote-6794
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9e045
false
null
t3_1n9e045
/r/LocalLLaMA/comments/1n9e045/made_qwen3v_but_i_messed_up/
false
false
https://b.thumbs.redditm…anymR4ZILQ5Q.jpg
1
null
GPT4ALL GPU loading failed (out of VRAM)?
3
GPT4All is suddenly generating very slowly; I am using the same models and configurations as usual. On the bottom right there is a message showing 0.08 tokens/sec, "CPU", and the message "GPU loading failed (out of VRAM?)". What can I do to solve this issue? I have already tried reinstalling GPT4All.
2025-09-05T18:38:28
https://www.reddit.com/r/LocalLLaMA/comments/1n9db8z/gpt4all_gpu_loading_failed_out_of_vram/
Seppi0712
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9db8z
false
null
t3_1n9db8z
/r/LocalLLaMA/comments/1n9db8z/gpt4all_gpu_loading_failed_out_of_vram/
false
false
self
3
null
I made local RAG, web search, and voice mode on iPhones completely open source, private, and free
24
Long-time lurker here. I made an iOS app that uses on-device Apple Intelligence and enhances it with local RAG, web search, and voice mode, all processed on-device. There are zero API connections; it's all free, private, and local. This is part of my CS Master's thesis, in which I look for ways to optimize on-device AI experiences on mobile hardware, so if you could try it and give me feedback I'd greatly appreciate it! I have no plans to monetize this application; use it as freely as you like :) Requirements: Apple Intelligence eligible device (iPhone, iPad, or Mac), and iOS 26 Public/Developer beta. TestFlight: [https://testflight.apple.com/join/6gaB7S1R](https://testflight.apple.com/join/6gaB7S1R) GitHub: [https://github.com/sskarz/Aeru](https://github.com/sskarz/Aeru) Thank you!
2025-09-05T18:26:58
https://v.redd.it/kz5pwdtw2enf1
sskarz1016
/r/LocalLLaMA/comments/1n9d0k1/i_made_local_rag_web_search_and_voice_mode_on/
1970-01-01T00:00:00
0
{}
1n9d0k1
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kz5pwdtw2enf1/DASHPlaylist.mpd?a=1759818424%2CNzAwYmI5ODczNTdlNzU0YTg3YWUzNzU0MGIxZmJjYjhjYjM5NTI1NWE2NmVlZmNhNmEwYmUxM2VlMWQ2NzFmYQ%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/kz5pwdtw2enf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/kz5pwdtw2enf1/HLSPlaylist.m3u8?a=1759818424%2CODQyYzY2Njg2NGI1M2UxN2M4MjVlZGFjOTQ3NWYzMTEwN2ExNzQyY2Y0ZGYzMmM4ODc4MzcwNjQ0NmJhODQ4Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kz5pwdtw2enf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 884}}
t3_1n9d0k1
/r/LocalLLaMA/comments/1n9d0k1/i_made_local_rag_web_search_and_voice_mode_on/
false
false
https://external-preview…6892f2b3fa4b1c45
24
{'enabled': False, 'images': [{'id': 'MXVtZzNkdHcyZW5mMamXxO8Za-P_K6fyYmFJDRdRJp-EiUrWGPjDIQS2IH86', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MXVtZzNkdHcyZW5mMamXxO8Za-P_K6fyYmFJDRdRJp-EiUrWGPjDIQS2IH86.png?width=108&crop=smart&format=pjpg&auto=webp&s=66004e7bc98eb5142c4a0924213c1126b061d783', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/MXVtZzNkdHcyZW5mMamXxO8Za-P_K6fyYmFJDRdRJp-EiUrWGPjDIQS2IH86.png?width=216&crop=smart&format=pjpg&auto=webp&s=861da8d12a55010cbb3d83c14f43f1b7b1684997', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/MXVtZzNkdHcyZW5mMamXxO8Za-P_K6fyYmFJDRdRJp-EiUrWGPjDIQS2IH86.png?width=320&crop=smart&format=pjpg&auto=webp&s=8dd13ee1707adca92878cb25d66af605148d7e9e', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/MXVtZzNkdHcyZW5mMamXxO8Za-P_K6fyYmFJDRdRJp-EiUrWGPjDIQS2IH86.png?width=640&crop=smart&format=pjpg&auto=webp&s=9e0b708c3ae0653f610f28e195eb4f4706d726c1', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/MXVtZzNkdHcyZW5mMamXxO8Za-P_K6fyYmFJDRdRJp-EiUrWGPjDIQS2IH86.png?width=960&crop=smart&format=pjpg&auto=webp&s=6e9c32c18378dbfd553b7ef1cf2dc1c5a44eaa15', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/MXVtZzNkdHcyZW5mMamXxO8Za-P_K6fyYmFJDRdRJp-EiUrWGPjDIQS2IH86.png?width=1080&crop=smart&format=pjpg&auto=webp&s=117151aa54545bb094a854dc997cdb9afe1cead4', 'width': 1080}], 'source': {'height': 2868, 'url': 'https://external-preview.redd.it/MXVtZzNkdHcyZW5mMamXxO8Za-P_K6fyYmFJDRdRJp-EiUrWGPjDIQS2IH86.png?format=pjpg&auto=webp&s=0bfe925d51b6cfc263c96af253b27b07fa4f8de4', 'width': 1320}, 'variants': {}}]}
Reward Hacking SWE-Bench / Claude 4 hacked SWE-Bench by peeking at future commits
16
Turns out SWE-Bench is 'hackable', and models have been (knowingly?) cheating on it. Posting as I believe it to be relevant given the popularity of the benchmark. The benchmark authors acknowledge the issue and say they are working to address it: [https://github.com/SWE-bench/SWE-bench/issues/465](https://github.com/SWE-bench/SWE-bench/issues/465)
2025-09-05T18:08:34
https://caseyaccidental.substack.com/p/when-agents-attack-how-ai-collapses
ekaj
caseyaccidental.substack.com
1970-01-01T00:00:00
0
{}
1n9cjbi
false
null
t3_1n9cjbi
/r/LocalLLaMA/comments/1n9cjbi/reward_hacking_swebench_claude_4_hacked_swebench/
false
false
https://external-preview…5f7018bb435f784e
16
{'enabled': False, 'images': [{'id': '8pFkmmTpY_BmJehud1wwyT834RIG5wBSa_XNOTOoSLY', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/8pFkmmTpY_BmJehud1wwyT834RIG5wBSa_XNOTOoSLY.jpeg?width=108&crop=smart&auto=webp&s=e5e2add63dae4ba165ce613eff9bb73d6ee11db9', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/8pFkmmTpY_BmJehud1wwyT834RIG5wBSa_XNOTOoSLY.jpeg?width=216&crop=smart&auto=webp&s=9bfbf9d7820bb5f1ee75c5dbb73da030d5aa2aa3', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/8pFkmmTpY_BmJehud1wwyT834RIG5wBSa_XNOTOoSLY.jpeg?width=320&crop=smart&auto=webp&s=31ae4691ef33247ad44247d7d5ebfe04db0d8132', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/8pFkmmTpY_BmJehud1wwyT834RIG5wBSa_XNOTOoSLY.jpeg?width=640&crop=smart&auto=webp&s=bf96996d9ee166a455386c90db8d2ba90b7ee4ff', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/8pFkmmTpY_BmJehud1wwyT834RIG5wBSa_XNOTOoSLY.jpeg?width=960&crop=smart&auto=webp&s=4391e63f67cde516e56c29be98f6ee93e031d644', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8pFkmmTpY_BmJehud1wwyT834RIG5wBSa_XNOTOoSLY.jpeg?auto=webp&s=4155c4753faf853759943efa9aee1bdf2b01e48b', 'width': 1024}, 'variants': {}}]}
Converting a fine-tuned HF Gemma3 model to ONNX format
4
Did anyone try converting the fine-tuned model into ONNX format so it can run in the browser with Transformers.js? If yes, could you share the steps or provide some guidance on how to do it?
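A hedged pointer, since Gemma3 coverage specifically should be verified against the current support matrix: Hugging Face Optimum is the usual path, and a fine-tuned checkpoint exports the same way as its base model, provided the architecture is supported. A sketch, with the checkpoint name as a placeholder:

```python
# Sketch of exporting a fine-tuned HF checkpoint to ONNX via Optimum.
# Caveat: this only works if the architecture (Gemma3 here) is already
# supported by Optimum's exporters; check the current support matrix.
# pip install optimum[onnxruntime]
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

ckpt = "your-username/gemma3-finetuned"  # placeholder repo/path

model = ORTModelForCausalLM.from_pretrained(ckpt, export=True)
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model.save_pretrained("gemma3-onnx")     # writes model.onnx + config
tokenizer.save_pretrained("gemma3-onnx")
# For Transformers.js, you typically also need quantized weights (q4/q8)
# laid out in the onnx/ subfolder structure that library expects.
```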
2025-09-05T17:51:40
https://www.reddit.com/r/LocalLLaMA/comments/1n9c386/converting_finetunned_hf_gemma3_model_to_onnx/
subin8898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9c386
false
null
t3_1n9c386
/r/LocalLLaMA/comments/1n9c386/converting_finetunned_hf_gemma3_model_to_onnx/
false
false
self
4
null
Bro is thinking about this for 5 minutes, what you mean by "maybe" man, decide it already
61
GLM 4.5 in Z AI
2025-09-05T17:48:45
https://i.redd.it/u6uf4z4kvdnf1.png
trxhh36
i.redd.it
1970-01-01T00:00:00
0
{}
1n9c0ef
false
null
t3_1n9c0ef
/r/LocalLLaMA/comments/1n9c0ef/bro_is_thinking_about_this_for_5_minutes_what_you/
false
false
default
61
{'enabled': True, 'images': [{'id': 'u6uf4z4kvdnf1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/u6uf4z4kvdnf1.png?width=108&crop=smart&auto=webp&s=a7e5845f37e6b64be20808e8d9a2827beec8bcf6', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/u6uf4z4kvdnf1.png?width=216&crop=smart&auto=webp&s=c6a24dff3db0ea205d9a48b5cf9b89c229f3c09e', 'width': 216}, {'height': 197, 'url': 'https://preview.redd.it/u6uf4z4kvdnf1.png?width=320&crop=smart&auto=webp&s=e82863472c704d93266d71d107c07bf72b94518b', 'width': 320}, {'height': 395, 'url': 'https://preview.redd.it/u6uf4z4kvdnf1.png?width=640&crop=smart&auto=webp&s=9d03e46b798490e8b7beb01b871ef039d65fc462', 'width': 640}, {'height': 593, 'url': 'https://preview.redd.it/u6uf4z4kvdnf1.png?width=960&crop=smart&auto=webp&s=6d60e560729c4f9762315349cb3cc3e1c569b88d', 'width': 960}, {'height': 667, 'url': 'https://preview.redd.it/u6uf4z4kvdnf1.png?width=1080&crop=smart&auto=webp&s=74e7c6db7a5778bc7adbd0c40ed2d58bd1fdd0d7', 'width': 1080}], 'source': {'height': 1284, 'url': 'https://preview.redd.it/u6uf4z4kvdnf1.png?auto=webp&s=b29207fddac21f7ca4f290620ad5cc1c465e49ec', 'width': 2077}, 'variants': {}}]}
Has anyone tried the new Qwen3-Max on OpenRouter? It doesn’t think, but the benchmarks seem too good for a non-reasoning model.
0
Unless Qwen has had some kind of breakthrough, I don’t think a non-reasoning model can perform so well.
2025-09-05T17:32:40
https://www.reddit.com/r/LocalLLaMA/comments/1n9blc4/has_anyone_tried_the_new_qwen3max_on_openrouter/
Euphoric_Ad9500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9blc4
false
null
t3_1n9blc4
/r/LocalLLaMA/comments/1n9blc4/has_anyone_tried_the_new_qwen3max_on_openrouter/
false
false
self
0
null
Qwen3 Coder Plus vs Grok Code Fast: which is the best free model?
0
Hello, I have been using QwenCode for a while, which has given me decent performance. Although some people claim it is on par with Claude 4, I'd have to argue with that. Recently Grok Code Fast was released and is free for a few weeks, so I am using it as well; it seems pretty solid and way faster. I have tested both side by side, and I find Qwen (Qwen3 Coder Plus) better for debugging (which is quite obvious), but for code generation and building UI, Grok Code Fast seems way better, and Grok Code also takes fewer prompts. I'm a student working mostly with free AI, occasionally getting a subscription when required, but for day-to-day stuff I rely mostly on the free ones. OpenRouter is great unless you have many requests, because they limit you; maybe I can add 10$ and get more requests. Now my question: for free users, which is the best model for you, and what do you use?
2025-09-05T17:28:29
https://www.reddit.com/r/LocalLLaMA/comments/1n9bhe9/qwen3_coder_plus_vs_grok_code_fast_which_is_the/
Level-Dig-4807
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9bhe9
false
null
t3_1n9bhe9
/r/LocalLLaMA/comments/1n9bhe9/qwen3_coder_plus_vs_grok_code_fast_which_is_the/
false
false
self
0
null
Qwen 3 Max has no "thinking".
24
Qwen 3 Max with no thinking. I wonder why?
2025-09-05T17:23:21
https://i.redd.it/50ybf3tlrdnf1.jpeg
Impressive_Half_2819
i.redd.it
1970-01-01T00:00:00
0
{}
1n9bck7
false
null
t3_1n9bck7
/r/LocalLLaMA/comments/1n9bck7/qwen_3_max_has_no_thinking/
false
false
default
24
{'enabled': True, 'images': [{'id': '50ybf3tlrdnf1', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/50ybf3tlrdnf1.jpeg?width=108&crop=smart&auto=webp&s=11a5f8cb663d7cd878deab8b93a48904725190a4', 'width': 108}, {'height': 69, 'url': 'https://preview.redd.it/50ybf3tlrdnf1.jpeg?width=216&crop=smart&auto=webp&s=2143a15f3e479fb20e298da33f704139c7446de7', 'width': 216}, {'height': 103, 'url': 'https://preview.redd.it/50ybf3tlrdnf1.jpeg?width=320&crop=smart&auto=webp&s=c23f372e3e42df01ef41eb91ab3ca9f3bca15823', 'width': 320}, {'height': 206, 'url': 'https://preview.redd.it/50ybf3tlrdnf1.jpeg?width=640&crop=smart&auto=webp&s=9fadb19efc74d4c70ba689032d3c750efb640c4b', 'width': 640}, {'height': 309, 'url': 'https://preview.redd.it/50ybf3tlrdnf1.jpeg?width=960&crop=smart&auto=webp&s=4c35eadcae7635dd7a0b12e3d3b08d3b29498527', 'width': 960}, {'height': 347, 'url': 'https://preview.redd.it/50ybf3tlrdnf1.jpeg?width=1080&crop=smart&auto=webp&s=4f6a4406975b617704a362cb4c1cbf769f09844c', 'width': 1080}], 'source': {'height': 515, 'url': 'https://preview.redd.it/50ybf3tlrdnf1.jpeg?auto=webp&s=10620781cdf431e28380e401fd8b22a0ce4d3d55', 'width': 1600}, 'variants': {}}]}
Qwen3 30B A3B Q40 on 4 x Raspberry Pi 5 8GB 13.04 tok/s (Distributed Llama)
59
2025-09-05T17:20:43
https://github.com/b4rtaz/distributed-llama/discussions/255
thisislewekonto
github.com
1970-01-01T00:00:00
0
{}
1n9ba1m
false
null
t3_1n9ba1m
/r/LocalLLaMA/comments/1n9ba1m/qwen3_30b_a3b_q40_on_4_x_raspberry_pi_5_8gb_1304/
false
false
default
59
{'enabled': False, 'images': [{'id': 'KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=108&crop=smart&auto=webp&s=06d7401a6e5999b0a0217ef6b4acbaa7a56631c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=216&crop=smart&auto=webp&s=ac96a579b70dcb81b85c3309ce81d06017fcc35c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=320&crop=smart&auto=webp&s=3a890f57ecdb4a316b5bce7d8bc6acaa9e610f26', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=640&crop=smart&auto=webp&s=4733d277f6f6e759fe47794087d1da790f8d36b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=960&crop=smart&auto=webp&s=fb26627ab8849438dbe4b35d34d6b3665f48943c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=1080&crop=smart&auto=webp&s=8c7b67c6cd00d8d68d14385d500e9be57c5e9372', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?auto=webp&s=23a8552f0dbd903c0723bd9c297e053626f3b8db', 'width': 1200}, 'variants': {}}]}
Tenstorrent p150a tested against RTX5090, RTX3090, A100, H100 by Russian blogger
60
Tenstorrent is a startup that aims to create AI accelerators rivaling the GPU; their current best model, [p150a](https://tenstorrent.com/hardware/blackhole), featuring 32GB of GDDR6 memory, was tested against numerous GPUs by the Russian blogger [Pro Hi-Tech](https://www.youtube.com/@prohitec) in the following video: [https://www.youtube.com/watch?v=pIS3Yery4I0](https://www.youtube.com/watch?v=pIS3Yery4I0) According to the video, the tests were launched via some kind of Python script on unquantized Llama 3 8B (timestamp 6:48); I assume inference via the Transformers library. In that case, he found the time to first token to be slightly faster than the 5090 and A100; however, the token generation speed is half that of the 5090 and on par with the A30. Additionally, he disassembled the card and showed the PCB (2:02). The charts featured in this video: * 7:39 - Time to first token, ms; * 8:26 - Inter-token latency, ms; * 8:38 - Generation speed, tok/s; * 9:07 - Card TDP; it seems the numbers are as specified by the manufacturer, not measured; * 9:26 - Performance per watt; I assume it's tok/s/W; * 9:57 - Performance per dollar; prices are MSRP, not actual retail prices. He calls out numerous **software problems** with the p150a: * The default installation guide is outdated; * The manufacturer-supplied model training containers failed to launch; * The telemetry app does not report any of the memory parameters (especially the amount of memory utilized); * If the telemetry app is launched while doing compute, it will hang the system, requiring a full PC reboot; as a result, it is impossible to measure the chip's temperature under load; * He failed to run any of the 14B models he tried (11:01); although he cites an OOM error, I suspect the test script was simply reserving too much KV cache; * The p150a hung and required a full OS reboot after "long-term load". It seems that while Tenstorrent offers decent performance for the price, its software support is too lacking to use it in production.
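For reference, the metrics in those charts are easy to reproduce on any backend. A sketch of measuring TTFT and decode speed with plain Transformers streaming; this is a reconstruction of a typical test methodology, not the blogger's actual script:

```python
# Sketch of measuring time-to-first-token and decode speed with a
# streaming generate() call. A reconstruction of a typical test
# methodology, not the blogger's actual script.
import time, threading
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

model_id = "meta-llama/Meta-Llama-3-8B"  # unquantized, as in the video
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tok("Explain tensor parallelism briefly.", return_tensors="pt").to(model.device)
streamer = TextIteratorStreamer(tok, skip_prompt=True)
t0 = time.perf_counter()
thread = threading.Thread(
    target=model.generate,
    kwargs=dict(**inputs, max_new_tokens=256, streamer=streamer),
)
thread.start()

first, count = None, 0
for _ in streamer:  # each yield is a decoded text chunk
    if first is None:
        first = time.perf_counter() - t0  # time to first token
    count += 1
total = time.perf_counter() - t0
print(f"TTFT: {first*1000:.0f} ms, decode: {count/(total-first):.1f} chunks/s")
thread.join()
```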
2025-09-05T17:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1n9b7mn/tenstorrent_p150a_tested_against_rtx5090_rtx3090/
No-Refrigerator-1672
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9b7mn
false
null
t3_1n9b7mn
/r/LocalLLaMA/comments/1n9b7mn/tenstorrent_p150a_tested_against_rtx5090_rtx3090/
false
false
self
60
{'enabled': False, 'images': [{'id': 'KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs.png?width=108&crop=smart&auto=webp&s=87fef6213db1cfbcbda5a88d6b8bdd5157dd1808', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs.png?width=216&crop=smart&auto=webp&s=a4ce8451c8f6121dd4dbca57e1d5e63de32686e3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs.png?width=320&crop=smart&auto=webp&s=e87ef7172ce1e2e9014b958b3fb7be28bfc67ee5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs.png?width=640&crop=smart&auto=webp&s=d18c85824a42527d4df241175c85803a6d087e73', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs.png?width=960&crop=smart&auto=webp&s=a76ba0e382e77d02fcaf925ec55b48e0d6be3332', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs.png?width=1080&crop=smart&auto=webp&s=15557ea788b7d2c4ed0ba1f0af6f9abae7b6703a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs.png?auto=webp&s=6403432e1a467e7bd7b818974bf7edbc3004e352', 'width': 1200}, 'variants': {}}]}
Huawei openPangu-Embedded-1B v1.1 — +8% performance jump, SOTA among 1B models
21
2025-09-05T17:14:32
https://mp.weixin.qq.com/s/Ty7g5sLqgCgWQaYENAVm0A
abdouhlili
mp.weixin.qq.com
1970-01-01T00:00:00
0
{}
1n9b483
false
null
t3_1n9b483
/r/LocalLLaMA/comments/1n9b483/huawei_openpanguembedded1b_v11_8_performance_jump/
false
false
default
21
null
Does qwen 3 max have a reasoning variant?
0
title
2025-09-05T17:08:10
https://www.reddit.com/r/LocalLLaMA/comments/1n9ay4c/does_qwen_3_max_have_a_reasoning_variant/
Longjumping_Spot5843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9ay4c
false
null
t3_1n9ay4c
/r/LocalLLaMA/comments/1n9ay4c/does_qwen_3_max_have_a_reasoning_variant/
false
false
self
0
null
Local LLM for Synology NAS
1
So I haven't worked on this project for almost a year. I've now updated it to use an OpenAI-compatible server, so it works with both the new Synology AI Console and Synology Chat; one server can do both. I would like to hear some feedback on how I can improve this. Maybe somebody smarter and a better coder than I am could improve the crap out of this.
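For readers wondering what "OpenAI-compatible" means here, a minimal sketch of the endpoint shape such a Flask server exposes; the echo backend and all names are illustrative assumptions, not the actual SynologyLLM code:

```python
# Minimal sketch of an OpenAI-compatible chat endpoint in Flask.
# The backend just echoes; a real server would forward the messages
# to a local LLM. Not the actual SynologyLLM implementation.
import time
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/v1/chat/completions")
def chat_completions():
    body = request.get_json(force=True)
    user_msg = body["messages"][-1]["content"]
    reply = f"You said: {user_msg}"  # placeholder for real inference
    return jsonify({
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": body.get("model", "local-llm"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply},
            "finish_reason": "stop",
        }],
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```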
2025-09-05T17:05:57
https://github.com/CaptJaybles/SynologyLLM
ProfessionalGuitar32
github.com
1970-01-01T00:00:00
0
{}
1n9aw02
false
null
t3_1n9aw02
/r/LocalLLaMA/comments/1n9aw02/local_llm_for_synology_nas/
false
false
https://external-preview…83e37e2238bd6bd5
1
{'enabled': False, 'images': [{'id': '3fOlv15DafzqFHiPTcz7q6qx-jrWxucHE3tqv67Uz5g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3fOlv15DafzqFHiPTcz7q6qx-jrWxucHE3tqv67Uz5g.png?width=108&crop=smart&auto=webp&s=1ec73b106924e0dc75ee9a83ce4b941636849788', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3fOlv15DafzqFHiPTcz7q6qx-jrWxucHE3tqv67Uz5g.png?width=216&crop=smart&auto=webp&s=b0bc9e34aaf74b6f4a5ce79b171bd32ce85c3fa1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3fOlv15DafzqFHiPTcz7q6qx-jrWxucHE3tqv67Uz5g.png?width=320&crop=smart&auto=webp&s=68fa2ba06f6ae46dc06c54872e40cc9c928d830c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3fOlv15DafzqFHiPTcz7q6qx-jrWxucHE3tqv67Uz5g.png?width=640&crop=smart&auto=webp&s=cf48232194222d1b6fba5deb56b01906bbea2a52', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3fOlv15DafzqFHiPTcz7q6qx-jrWxucHE3tqv67Uz5g.png?width=960&crop=smart&auto=webp&s=d7bd0c3cffdd0a31d801867adebd4403ed38cdc4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3fOlv15DafzqFHiPTcz7q6qx-jrWxucHE3tqv67Uz5g.png?width=1080&crop=smart&auto=webp&s=2618e7912ebdf2eab7f312e91fb237c192bb1dcd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3fOlv15DafzqFHiPTcz7q6qx-jrWxucHE3tqv67Uz5g.png?auto=webp&s=a76a620ed3fe849584d29ecbd3168694e31ed492', 'width': 1200}, 'variants': {}}]}
Qwen 3 Max Official Pricing
117
2025-09-05T16:58:58
https://i.redd.it/tx801h07ndnf1.png
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1n9ap73
false
null
t3_1n9ap73
/r/LocalLLaMA/comments/1n9ap73/qwen_3_max_official_pricing/
false
false
default
117
{'enabled': True, 'images': [{'id': 'tx801h07ndnf1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/tx801h07ndnf1.png?width=108&crop=smart&auto=webp&s=8149943b4bba7e31cdfb0871b5efe9a25417a240', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/tx801h07ndnf1.png?width=216&crop=smart&auto=webp&s=e34ef0e29693cf02cd29d7e91905bfa762066503', 'width': 216}, {'height': 255, 'url': 'https://preview.redd.it/tx801h07ndnf1.png?width=320&crop=smart&auto=webp&s=4183c526ae8d7cc36cac924115b0b10d3d644084', 'width': 320}, {'height': 510, 'url': 'https://preview.redd.it/tx801h07ndnf1.png?width=640&crop=smart&auto=webp&s=b87412ed2c1257b7012cf473923bdcbd7512d19e', 'width': 640}, {'height': 765, 'url': 'https://preview.redd.it/tx801h07ndnf1.png?width=960&crop=smart&auto=webp&s=9e5cf1baa2691b6bfd111c1688387e01d5a2e309', 'width': 960}, {'height': 861, 'url': 'https://preview.redd.it/tx801h07ndnf1.png?width=1080&crop=smart&auto=webp&s=2940abe9c0861097467aa716323a3c6db50981c1', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://preview.redd.it/tx801h07ndnf1.png?auto=webp&s=a9b683ee4b9db80a24b9687ba48a3e421fcf27d6', 'width': 1880}, 'variants': {}}]}
New kimi-k2 on Fiction.liveBench
33
2025-09-05T16:52:09
https://i.redd.it/ww7n9p40mdnf1.png
fictionlive
i.redd.it
1970-01-01T00:00:00
0
{}
1n9aioh
false
null
t3_1n9aioh
/r/LocalLLaMA/comments/1n9aioh/new_kimik2_on_fictionlivebench/
false
false
default
33
{'enabled': True, 'images': [{'id': 'ww7n9p40mdnf1', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/ww7n9p40mdnf1.png?width=108&crop=smart&auto=webp&s=ec4653b073b48e67a7a5cc60d261776098e68ef2', 'width': 108}, {'height': 353, 'url': 'https://preview.redd.it/ww7n9p40mdnf1.png?width=216&crop=smart&auto=webp&s=5f64a65840f5ba551ef38da87a30aa4f237c2ca7', 'width': 216}, {'height': 524, 'url': 'https://preview.redd.it/ww7n9p40mdnf1.png?width=320&crop=smart&auto=webp&s=6a5e07e4f69e11944258564ccd89cefe629c5409', 'width': 320}, {'height': 1048, 'url': 'https://preview.redd.it/ww7n9p40mdnf1.png?width=640&crop=smart&auto=webp&s=10df320d7922c87a70ef2b46ae45783f49983d20', 'width': 640}, {'height': 1572, 'url': 'https://preview.redd.it/ww7n9p40mdnf1.png?width=960&crop=smart&auto=webp&s=0e26411dc4769dd5c9879224554effe7975ca619', 'width': 960}, {'height': 1768, 'url': 'https://preview.redd.it/ww7n9p40mdnf1.png?width=1080&crop=smart&auto=webp&s=1057b543728836cd30d3f9cdef58ade2b4aaa4c5', 'width': 1080}], 'source': {'height': 2558, 'url': 'https://preview.redd.it/ww7n9p40mdnf1.png?auto=webp&s=c0525a2f1815c3f48f10d3e67747393150b495a2', 'width': 1562}, 'variants': {}}]}
I'm a newcomer exploring how to build a "body" for a digital mind. This is T.H.E.A., my first attempt. Would love your thoughts.
1
[removed]
2025-09-05T16:48:37
https://www.reddit.com/r/LocalLLaMA/comments/1n9afcc/im_a_newcomer_exploring_how_to_build_a_body_for_a/
r4d9nksx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9afcc
false
null
t3_1n9afcc
/r/LocalLLaMA/comments/1n9afcc/im_a_newcomer_exploring_how_to_build_a_body_for_a/
false
false
self
1
null
Samantha AI for complete OS control
0
So far I've created a Flask server that uses two models: one is a reasoning model (Qwen3) and the other is a vision model. My AI can read documents, analyze your screen, and run PowerShell commands, and I'm looking to extend the automation even further. I want to add GUI interaction, so essentially I would talk to my computer and it would do the task I want; for instance: open Chrome, go to youtube.com, search for a certain video, and play it. I'm trying to create an AI system that sits on top of my OS and can control the computer via my voice. Are there any repositories I could use? Keep in mind I want to keep this local-only.
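For the PowerShell piece specifically, a minimal sketch of how a Python agent could shell out; the confirmation guard is my addition, not part of the poster's setup:

```python
# Minimal sketch: run a PowerShell command from Python and capture
# its output. The human-in-the-loop confirmation is a safety
# assumption added here, not part of the described system.
import subprocess

def run_powershell(command: str, timeout: int = 30) -> str:
    """Execute a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

if __name__ == "__main__":
    cmd = "Get-Process | Sort-Object CPU -Descending | Select-Object -First 5"
    if input(f"Run: {cmd} ? [y/N] ").lower() == "y":  # confirm before executing
        print(run_powershell(cmd))
```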
2025-09-05T16:46:46
https://www.reddit.com/r/LocalLLaMA/comments/1n9ado3/samantha_ai_for_complete_is_control/
Musclenerd06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9ado3
false
null
t3_1n9ado3
/r/LocalLLaMA/comments/1n9ado3/samantha_ai_for_complete_is_control/
false
false
self
0
null
I made a "reasoning version" of K2 0905 by getting Qwen 3 235B to do the reasoning, then once it exits, I switched to model to K2 and let it continue, and it works great.
0
Title
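A minimal sketch of what such a handoff could look like against an OpenAI-compatible completions API; the model names and the `</think>` delimiter are assumptions about the setup, not confirmed details:

```python
# Minimal sketch: let a reasoning model produce the <think> block,
# then hand the partial response to a second model to finish.
# Assumes an OpenAI-compatible server exposing raw completions;
# model names and the </think> delimiter are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
prompt = "How many primes are there below 100?"

# Stage 1: reasoning model, stopped once the think block closes.
reasoning = client.completions.create(
    model="qwen3-235b",
    prompt=f"Question: {prompt}\n<think>\n",
    stop=["</think>"],
    max_tokens=2048,
).choices[0].text

# Stage 2: K2 continues from the finished reasoning trace.
answer = client.completions.create(
    model="kimi-k2-0905",
    prompt=f"Question: {prompt}\n<think>\n{reasoning}</think>\n",
    max_tokens=512,
).choices[0].text
print(answer)
```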
2025-09-05T16:45:35
https://www.reddit.com/r/LocalLLaMA/comments/1n9ack0/i_made_a_reasoning_version_of_k2_0905_by_getting/
Longjumping_Spot5843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9ack0
false
null
t3_1n9ack0
/r/LocalLLaMA/comments/1n9ack0/i_made_a_reasoning_version_of_k2_0905_by_getting/
false
false
self
0
null
Rant..
0
I recently bought an awesome machine and I'm now able to run larger models. Since I'm new to all of this, I did my research and realized that the whole local AI scene is an absolute mess. Suddenly, a shit-ton of stuff needs to be installed on my computer, and it's impossible to keep track of where everything went. To be fair, some tools give you at least a bit of control, but because of all the "dependencies" (aka bloatware), I ended up having to reinstall Windows. Is it really possible that everything around AI is this clunky and annoying? Can't they just make a simple piece of software, with plugins if you want something more advanced than just chatting? This maze of nonsense is disgusting.
2025-09-05T16:42:28
https://www.reddit.com/r/LocalLLaMA/comments/1n9a9nw/rant/
LingonberryMore960
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9a9nw
false
null
t3_1n9a9nw
/r/LocalLLaMA/comments/1n9a9nw/rant/
false
false
self
0
null
A Cursor-like coding platform is launching an AMA with 2,000 free credits each, might be worth a look guys
0
It's named Qoder and claims to be an agentic coding platform built for the AI-native era; looks kind of interesting. Link: https://www.reddit.com/r/artificial/comments/1n6lpl8/ama_with_qoder_team_an_agentic_coding_platform/
2025-09-05T16:34:37
https://www.reddit.com/r/LocalLLaMA/comments/1n9a29l/a_cursorlike_coding_platform_is_launching_an_ama/
lucienbaba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n9a29l
false
null
t3_1n9a29l
/r/LocalLLaMA/comments/1n9a29l/a_cursorlike_coding_platform_is_launching_an_ama/
false
false
self
0
null
How do I run AI locally? And what is the most efficient model / software?
2
Hey everyone. I'll admit it: Sam Altman and OpenAI just give me a really bad gut feeling. And to be honest, even if they're well-intentioned, truly care about people's well-being, and try their best to keep conversations private, someone could just hack the server and leak whatever users have. He will also be forced, if a frivolous law or court case comes along, to hand data over to people who may not have the best intentions, or who may abuse a moral panic such as children's safety or mental health for purposes of power. Don't get me wrong, these issues need to be cared about, but they're often used as a Trojan horse by politicians to abuse power. And now, with them giving this data to the police automatically, I am even more concerned. Police departments are rife with corruption and abuses of power; so are courts. But this technology is **amazing.** I think when used *properly*, as a tool to help people out and let them learn and be more creative, it could very well better humanity. I was curious: what software can I use to run this on my own hardware? I've tried out Ollama, but I've heard that it isn't the most up to date, though I'm still fucking amazed. And which model is the best and most advanced for local use? I'm a total noob at this.
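As a concrete starting point, here is a minimal sketch of querying a locally running Ollama server through its OpenAI-compatible endpoint (the model name is just an example; use whatever you have pulled):

```python
# Minimal sketch: query a local Ollama server (default port 11434)
# through its OpenAI-compatible endpoint. Nothing leaves your machine.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3.1",  # any model you've pulled with `ollama pull`
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```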
2025-09-05T16:27:25
https://www.reddit.com/r/LocalLLaMA/comments/1n99vhp/how_do_i_run_ai_locally_and_what_is_the_most/
24_1378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n99vhp
false
null
t3_1n99vhp
/r/LocalLLaMA/comments/1n99vhp/how_do_i_run_ai_locally_and_what_is_the_most/
false
false
self
2
null
Which (1 or 2-story) frame to use for 7 GPU rig?
3
I've recently bought [this 7+0.5 PCIe slot motherboard](https://www.newegg.com/asrock-rack-genoad8x-2t-bcm/p/N82E16813140127). I want to assemble a 7- or 8-GPU rig. I guess for the setup not to become a ball of cruft I need some mining rig frame. Which one to choose: one where GPUs are stacked in a single row/story (like [this](https://www.ebay.com/itm/134662852693)), or in two rows/stories (like [this](https://www.ebay.com/itm/157122515837))? I've seen that, at least on LocalLLaMA, people with 8 GPUs or more use a 2-story frame. If you built one of those, what were the difficulties? If you haven't, maybe you've seen a good YouTube video or an article on the topic?
2025-09-05T16:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1n99qfe/which_1_or_2story_frame_to_use_for_7_gpu_rig/
EmilPi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n99qfe
false
null
t3_1n99qfe
/r/LocalLLaMA/comments/1n99qfe/which_1_or_2story_frame_to_use_for_7_gpu_rig/
false
false
self
3
{'enabled': False, 'images': [{'id': '_WuWbW9C68Bqq4p9uERqUMoqdBdMtjtvuWSs1pO7NM0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_WuWbW9C68Bqq4p9uERqUMoqdBdMtjtvuWSs1pO7NM0.jpeg?width=108&crop=smart&auto=webp&s=8010c9789be209e4a58a7ff7582c36aa3dc9f7cb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_WuWbW9C68Bqq4p9uERqUMoqdBdMtjtvuWSs1pO7NM0.jpeg?width=216&crop=smart&auto=webp&s=db0a457d0521af5e3bd4ef346dd075ec99ce7bc0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_WuWbW9C68Bqq4p9uERqUMoqdBdMtjtvuWSs1pO7NM0.jpeg?width=320&crop=smart&auto=webp&s=188e4269e26aa14d567a8c580074fc5bb2e5284b', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/_WuWbW9C68Bqq4p9uERqUMoqdBdMtjtvuWSs1pO7NM0.jpeg?width=640&crop=smart&auto=webp&s=99e760b1a21bca8545b5ecc60d5cc26b7deeb0af', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/_WuWbW9C68Bqq4p9uERqUMoqdBdMtjtvuWSs1pO7NM0.jpeg?auto=webp&s=0a63e96a79af567936b285d2c1f6de5c76874571', 'width': 640}, 'variants': {}}]}
LongPage: 300 full novels with reasoning traces for training better writing LLMs
153
https://preview.redd.it/…o try with this.
2025-09-05T16:11:47
https://www.reddit.com/r/LocalLLaMA/comments/1n99gpq/longpage_300_full_novels_with_reasoning_traces/
Senior_Evidence_3793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n99gpq
false
null
t3_1n99gpq
/r/LocalLLaMA/comments/1n99gpq/longpage_300_full_novels_with_reasoning_traces/
false
false
https://b.thumbs.redditm…w0-9q_ynnyog.jpg
153
{'enabled': False, 'images': [{'id': 'riwdF_EjDqIZtaMr2L8TnhS0xQM36fl9qJv4y9kVTdk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/riwdF_EjDqIZtaMr2L8TnhS0xQM36fl9qJv4y9kVTdk.png?width=108&crop=smart&auto=webp&s=f6ee5ade230bdeee8c9012aaf79fd021fd595ffc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/riwdF_EjDqIZtaMr2L8TnhS0xQM36fl9qJv4y9kVTdk.png?width=216&crop=smart&auto=webp&s=5f0658cfac2d650edff8a20295d0f68bb139681c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/riwdF_EjDqIZtaMr2L8TnhS0xQM36fl9qJv4y9kVTdk.png?width=320&crop=smart&auto=webp&s=6ca262c5191f6b408f808c1836532ccf914c0f8e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/riwdF_EjDqIZtaMr2L8TnhS0xQM36fl9qJv4y9kVTdk.png?width=640&crop=smart&auto=webp&s=13dfc405c4b3fc698c8aad905dd57399f797231a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/riwdF_EjDqIZtaMr2L8TnhS0xQM36fl9qJv4y9kVTdk.png?width=960&crop=smart&auto=webp&s=140ae72835825e50abc19fa4734b2fab082f35bd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/riwdF_EjDqIZtaMr2L8TnhS0xQM36fl9qJv4y9kVTdk.png?width=1080&crop=smart&auto=webp&s=2562bd923968078e0f9791a192cde43b3b298411', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/riwdF_EjDqIZtaMr2L8TnhS0xQM36fl9qJv4y9kVTdk.png?auto=webp&s=e2a99e8f0452049e82fc52691084ead1b069a4c9', 'width': 1200}, 'variants': {}}]}
Is there any way to make an LLM convert the English words in my XML file into their meaning in my target language?
0
I have an XML file that is similar to a dictionary file. Each entry has, say, a Chinese word with an English word as its value. Now I want all the English words in this XML file replaced by their translation in German. Is there any way an LLM can assist with that? Any workaround, rather than spending many weeks on it manually?
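This is quite scriptable. A minimal sketch, assuming hypothetical `<en>` tags for the English values (adjust to the real schema) and a local OpenAI-compatible LLM server:

```python
# Minimal sketch: walk an XML dictionary, translate each English
# value to German via a local OpenAI-compatible LLM server, and
# write the result back. Tag names and server URL are hypothetical.
import xml.etree.ElementTree as ET
import requests

def translate(text: str) -> str:
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "local-llm",
            "messages": [{
                "role": "user",
                "content": f"Translate to German. Reply with only the translation: {text}",
            }],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"].strip()

tree = ET.parse("dictionary.xml")
for en in tree.getroot().iter("en"):  # hypothetical tag name
    if en.text:
        en.text = translate(en.text)
tree.write("dictionary_de.xml", encoding="utf-8")
```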
2025-09-05T16:10:34
https://www.reddit.com/r/LocalLLaMA/comments/1n99fiu/is_there_any_way_to_make_llm_convert_the_english/
FatFigFresh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n99fiu
false
null
t3_1n99fiu
/r/LocalLLaMA/comments/1n99fiu/is_there_any_way_to_make_llm_convert_the_english/
false
false
self
0
null
Qwen 3 Max Official Benchmarks (possibly open sourcing later..?)
269
2025-09-05T15:49:10
https://i.redd.it/eeekht6sadnf1.jpeg
Trevor050
i.redd.it
1970-01-01T00:00:00
0
{}
1n98vdp
false
null
t3_1n98vdp
/r/LocalLLaMA/comments/1n98vdp/qwen_3_max_official_benchmarks_possibly_open/
false
false
default
269
{'enabled': True, 'images': [{'id': 'eeekht6sadnf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/eeekht6sadnf1.jpeg?width=108&crop=smart&auto=webp&s=4626459d1a79d62f6cba2518c15c821d56c85012', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/eeekht6sadnf1.jpeg?width=216&crop=smart&auto=webp&s=235f11446fb2112909badc262822f8c3d57e1c0b', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/eeekht6sadnf1.jpeg?width=320&crop=smart&auto=webp&s=af26f57b5c802ba7b850412c8a6771887b05a1ac', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/eeekht6sadnf1.jpeg?width=640&crop=smart&auto=webp&s=ff85bfdf6253ad3ba0a381c5514ad302898defd3', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/eeekht6sadnf1.jpeg?width=960&crop=smart&auto=webp&s=63e05f421038755aea55bcd8232aaa531e4b22fd', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/eeekht6sadnf1.jpeg?width=1080&crop=smart&auto=webp&s=6cfd86afcd829ad49660e14c62929d74b02f37e6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/eeekht6sadnf1.jpeg?auto=webp&s=72f650d57e689cb709197b84b8526c78afd68111', 'width': 1920}, 'variants': {}}]}
Qwen released the API for Qwen3-Max-Preview (Instruct)
63
Big news: Introducing Qwen3-Max-Preview (Instruct) — our biggest model yet, with over 1 trillion parameters! 🚀 Now available via Qwen Chat & Alibaba Cloud API. Benchmarks show it beats our previous best, Qwen3-235B-A22B-2507. Internal tests + early user feedback confirm: stronger performance, broader knowledge, better at conversations, agentic tasks & instruction following. Scaling works — and the official release will surprise you even more. Stay tuned! Qwen Chat: https://chat.qwen.ai/
2025-09-05T15:46:51
https://i.redd.it/zw8lhw7eadnf1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1n98t6m
false
null
t3_1n98t6m
/r/LocalLLaMA/comments/1n98t6m/qwen_released_api_of_qwen3maxpreview_instruct/
false
false
default
63
{'enabled': True, 'images': [{'id': 'zw8lhw7eadnf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/zw8lhw7eadnf1.jpeg?width=108&crop=smart&auto=webp&s=e7bb7e9e3870bf19a63abb6d66e11479c263c208', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/zw8lhw7eadnf1.jpeg?width=216&crop=smart&auto=webp&s=dcfeb75f07f292e8501c22c36eae3c3c7bbf9809', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/zw8lhw7eadnf1.jpeg?width=320&crop=smart&auto=webp&s=f5c92f40965b8a403abe084c6e2bec3ce38bd31f', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/zw8lhw7eadnf1.jpeg?width=640&crop=smart&auto=webp&s=fb61f9c84a1df5f11a0a7762294f4a826dfa9e29', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/zw8lhw7eadnf1.jpeg?width=960&crop=smart&auto=webp&s=59251e8b374093174d7ee9700ab691ebf53b64ba', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/zw8lhw7eadnf1.jpeg?width=1080&crop=smart&auto=webp&s=d0e5285fe0f4b9ae21b5c9787308d06d0aba7334', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/zw8lhw7eadnf1.jpeg?auto=webp&s=71afd7f9bf94fb8f449018768c77edac692e3cd9', 'width': 1920}, 'variants': {}}]}
Environments Hub walkthrough: Your Language Model needs better (open) environments to learn
10
📝 [https://huggingface.co/blog/anakin87/environments-hub](https://huggingface.co/blog/anakin87/environments-hub) RL environments help LLMs practice, reason, and improve. I explored the Environments Hub and wrote a walkthrough showing how to train and evaluate models using these open environments. **1. Why RL matters for LLMs** DeepSeek-R1 made clear that Reinforcement Learning can be used to incentivize reasoning in LLMs. In GRPO, the model generates multiple answers and learns to prefer the better ones based on rewards. **2. What environments are** In classic RL, the environment is the world where the Agent lives, interacts, and gets rewards to learn. We can also think of them as software packages, containing data, harness, and scoring rules, for the model to learn from and be evaluated on. Nowadays, the Agent is not just the LLM. It can use tools, from a weather API to a terminal. This makes environments for training and evaluation more complex and critical. **3. The open challenge** Big labs are advancing, but open models and the community still face a fragmented ecosystem. We risk becoming users of systems built with tools we can't access or fully understand. **4. Environments Hub** That's why I was excited when Prime Intellect released the Environments Hub. It's a place where people share RL environments: tasks you can use to train LLMs with RL (GRPO-style) or evaluate Agents. Plus, the Verifiers library (by William Brown) standardizes the creation of RL environments and evaluations. They can help keep science and experimentation open. 🔬 I explored the Hub and wrote a **hands-on walkthrough** 📝 * RL + LLMs basics * Environments Hub navigation * Evaluating models/Agents * GRPO training of a tiny model on an alphabetical-sort task Take a look! 👇 📝 [https://huggingface.co/blog/anakin87/environments-hub](https://huggingface.co/blog/anakin87/environments-hub)
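To make the GRPO idea above concrete, here is a minimal sketch of the group-relative advantage computation at the core of GRPO-style methods; this is just the scoring step written from the description above, not code from the walkthrough:

```python
# Minimal sketch of GRPO's group-relative advantages: sample a group
# of completions per prompt, score them with the environment, and
# normalize rewards within the group. A positive advantage means the
# answer beat the group average. Scoring step only, not a full trainer.
from statistics import mean, stdev

def group_advantages(rewards: list[float]) -> list[float]:
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 1.0
    return [(r - mu) / (sigma + 1e-6) for r in rewards]

# Example: 4 sampled answers to one prompt, scored 1.0 if correct.
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_advantages(rewards))  # correct answers get positive advantage
```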
2025-09-05T15:40:54
https://i.redd.it/9hga2wdx8dnf1.png
anakin_87
i.redd.it
1970-01-01T00:00:00
0
{}
1n98noa
false
null
t3_1n98noa
/r/LocalLLaMA/comments/1n98noa/environments_hub_walkthrough_your_language_model/
false
false
default
10
{'enabled': True, 'images': [{'id': '9hga2wdx8dnf1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/9hga2wdx8dnf1.png?width=108&crop=smart&auto=webp&s=16499ae57f859fda2743629fd5423073075f4965', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/9hga2wdx8dnf1.png?width=216&crop=smart&auto=webp&s=497928e3c629651f9d668127964e6d33f3af8d01', 'width': 216}, {'height': 279, 'url': 'https://preview.redd.it/9hga2wdx8dnf1.png?width=320&crop=smart&auto=webp&s=15db5475b18802abb4bc5cbe63c999f4d38516e5', 'width': 320}, {'height': 559, 'url': 'https://preview.redd.it/9hga2wdx8dnf1.png?width=640&crop=smart&auto=webp&s=d10b0af5801192f5f1f8d27f2ed57636a44c9b27', 'width': 640}, {'height': 838, 'url': 'https://preview.redd.it/9hga2wdx8dnf1.png?width=960&crop=smart&auto=webp&s=87a21a0aa702d1828cf852c950ce167db8a5fc94', 'width': 960}, {'height': 943, 'url': 'https://preview.redd.it/9hga2wdx8dnf1.png?width=1080&crop=smart&auto=webp&s=92afb0b343509d309f510ea1da9d34d1e8b18bf2', 'width': 1080}], 'source': {'height': 1462, 'url': 'https://preview.redd.it/9hga2wdx8dnf1.png?auto=webp&s=a279da055f675c21e24ebe8bb35a3db143d86273', 'width': 1673}, 'variants': {}}]}
What is the name of that tool??? [HELP]
2
I came across a GitHub tool which utilises Docker to run each locally hosted model in its own container for separate uses, like Stable Diffusion for video generation and so on, but I forgot where I saved the name and I have been searching for it for a whole day… Please help!!! It's not Hugging Face!!! Any lead is much appreciated…
2025-09-05T15:37:23
https://www.reddit.com/r/LocalLLaMA/comments/1n98kh1/what_is_the_name_of_that_tool_help/
Vaguely_Smart_Cookie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n98kh1
false
null
t3_1n98kh1
/r/LocalLLaMA/comments/1n98kh1/what_is_the_name_of_that_tool_help/
false
false
self
2
null
Why is Arc A770 Prompt Processing So Slow?
6
Windows, llama.cpp (multiple releases), Vulkan and SYCL. I've tested with lots of models and my prompt processing is always pretty slow. Most recently, gpt-oss-20b only gets to about 160 tps at BEST and routinely dips to ~70. The best I've seen is MiniCPM, which topped out at 360. I've tested with both the Vulkan and SYCL backends. Could PCIe 3 be my problem, despite the models being loaded entirely on the GPU?
2025-09-05T15:35:59
https://www.reddit.com/r/LocalLLaMA/comments/1n98j4l/why_is_arc_a770_prompt_processing_so_slow/
thejacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n98j4l
false
null
t3_1n98j4l
/r/LocalLLaMA/comments/1n98j4l/why_is_arc_a770_prompt_processing_so_slow/
false
false
self
6
null
Local voice agent experiments
1
Here are the computation resources I have: 1- MacBook M4 Pro with 24 GB unified memory (running macOS). 2- HP Omen Core Ultra 9 285H with 16GB integrated GPU (the integrated GPU VRAM amount is configurable), 8GB RTX 5070, 32GB DDR5 system RAM and 1TB NVMe SSD (running Windows 11). 3- A PC with AMD Ryzen 9 3950X, 32GB DDR4 RAM, 24GB RTX 3090 and 1TB NVMe (running Ubuntu). I need suggestions for running the entire voice agent pipeline (ASR, LLM and TTS) on these machines. I need help figuring out which models I can run with which inference engines.
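For context, a minimal sketch of the data flow being asked about; the LLM stage targets a hypothetical local OpenAI-compatible server, and the ASR/TTS stages are stubs since the engine choices are exactly the open question here:

```python
# Minimal sketch of the ASR -> LLM -> TTS loop. The LLM stage calls a
# local OpenAI-compatible server; the ASR and TTS stages are stubs
# standing in for whichever engines get chosen. This shows the data
# flow only, not a specific stack.
import requests

def transcribe(audio_path: str) -> str:
    # Stub: replace with a real ASR engine (e.g. a Whisper variant).
    return "What's the weather like today?"

def generate_reply(text: str) -> str:
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # hypothetical server
        json={"model": "local-llm",
              "messages": [{"role": "user", "content": text}]},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

def synthesize(text: str, out_path: str) -> None:
    # Stub: replace with a real TTS engine.
    print(f"[TTS] would speak to {out_path}: {text}")

def voice_turn(audio_in: str, audio_out: str) -> None:
    synthesize(generate_reply(transcribe(audio_in)), audio_out)

voice_turn("mic.wav", "reply.wav")
```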
2025-09-05T15:33:15
https://www.reddit.com/r/LocalLLaMA/comments/1n98grq/local_voice_agent_experiments/
BABA_yaaGa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1n98grq
false
null
t3_1n98grq
/r/LocalLLaMA/comments/1n98grq/local_voice_agent_experiments/
false
false
self
1
null