Dataset schema (column stats from the dataset viewer):

| column | dtype | min | max |
|:-|:-|:-|:-|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
GRPO please stop punishing your correct token
191
I’ve been experimenting with a training approach I’m calling **GTPO (Group-relative Trajectory-based Policy Optimization)**. It started as a way to fix some quirks I ran into with GRPO, like: * **Conflicting gradients**: tokens showing up in both “good” and “bad” completions getting pulled in opposite directions. * **Policy collapse**: models flattening out when some completions had strong negative updates. # What I tried * I added a small mechanism to *skip negative updates* on “conflict tokens.” * Instead of using KL with a reference model, I tried filtering out high-entropy completions (trajectories that are basically too noisy). # What I noticed * Training was more stable and didn’t wreck formatting. * I didn’t need a reference model, which made runs lighter. * Even on Colab (using Unsloth) I could fine-tune without things blowing up. * On reasoning datasets like **GSM8K, MATH, AIME 2024 (see Figure)** with LLaMA 8B and Qwen 3B, results were consistently better than my GRPO baselines. # Links if you want to poke around * Paper: [arXiv](https://arxiv.org/abs/2508.03772) * Code: [GitHub](https://github.com/winstonsmith1897/GTPO) * Colab example: [Notebook](https://colab.research.google.com/github/winstonsmith1897/GTPO/blob/main/colab/GTPO_training_example.ipynb) I’m curious what others think, especially folks who’ve been fine-tuning with GRPO or similar. Do you have any benchmarks or setups you’d like me to test it on?
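The conflict-token idea above can be sketched in a few lines. This is a minimal reconstruction from the post's description, not the paper's actual API; all names are illustrative:

```python
def gtpo_style_advantages(token_ids, rewards):
    """Group-relative advantages, but with the negative update skipped on
    'conflict tokens': tokens that appear in both above-mean ('good') and
    below-mean ('bad') completions of the same group."""
    mean = sum(rewards) / len(rewards)
    adv = [r - mean for r in rewards]               # group-relative advantage
    good = {t for ids, a in zip(token_ids, adv) if a > 0 for t in ids}
    bad = {t for ids, a in zip(token_ids, adv) if a < 0 for t in ids}
    conflict = good & bad
    # Zero out the advantage only where it would punish a conflict token.
    return [[0.0 if (a < 0 and t in conflict) else a for t in ids]
            for ids, a in zip(token_ids, adv)]

# Token 2 appears in both the winning and the losing completion,
# so its negative update is skipped; token 3 is still punished.
print(gtpo_style_advantages([[1, 2], [2, 3]], [1.0, 0.0]))
```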
2025-08-25T13:42:56
https://i.redd.it/mdaobm9t56lf1.png
Gildarts777
i.redd.it
1970-01-01T00:00:00
0
{}
1mzquqi
false
null
t3_1mzquqi
/r/LocalLLaMA/comments/1mzquqi/grpo_please_stop_punishing_your_correct_token/
false
false
default
191
{'enabled': True, 'images': [{'id': 'mdaobm9t56lf1', 'source': {'height': 463, 'url': 'https://preview.redd.it/mdaobm9t56lf1.png?auto=webp&s=b483928390d9d18c5040771c6e063ac96c618649', 'width': 1231}, 'variants': {}}]}
What is the best vision Model for a consumer GPU (24GB VRAM)?
5
What is the best vision model I can run on my 4090, and which model offers a good mix of speed and quality? With all the new models I'm not really sure; I've been using Qwen 2.5 VL 7B, but I'm unsure whether there's a better option.
2025-08-25T13:38:35
https://www.reddit.com/r/LocalLLaMA/comments/1mzqqy2/what_is_the_best_vision_model_for_a_consumer_gpu/
1GewinnerTwitch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzqqy2
false
null
t3_1mzqqy2
/r/LocalLLaMA/comments/1mzqqy2/what_is_the_best_vision_model_for_a_consumer_gpu/
false
false
self
5
null
"Data as context" after uploading a doc: how do you do it? (no RAG) + GitHub repos?
0
Hi! I'm looking for a way to do "data as context": the user uploads a PDF/doc, we read it server-side, and at answer time we paste the useful passages **directly** into the LLM's context window (no training, no RAG). Any concrete advice (chunking, token management, mini-summaries)? And if you have **GitHub repos** that show this basic flow, I'll take them. Thanks!
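A minimal sketch of that flow, with no embeddings or vector store: split the doc into chunks, rank by keyword overlap with the question, and pack as many as fit into a token budget. The chunking and the 1-word-per-token estimate are deliberately crude placeholders:

```python
def build_context(doc_text, question, max_tokens=3000, chunk_size=500):
    """Naive 'data as context': chunk the doc, rank chunks by keyword
    overlap with the question, and pack them into the prompt budget."""
    words = doc_text.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    q_terms = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q_terms & set(c.lower().split())),
                    reverse=True)
    picked, budget = [], max_tokens
    for c in ranked:
        cost = len(c.split())   # crude estimate: 1 word ≈ 1 token
        if cost <= budget:
            picked.append(c)
            budget -= cost
    return "\n---\n".join(picked)
```

The returned string goes straight into the system or user message; for real token counting you would swap the word count for the tokenizer of whatever model you serve.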
2025-08-25T13:21:05
https://www.reddit.com/r/LocalLLaMA/comments/1mzqbx8/data_as_context_après_upload_dun_doc_comment_vous/
Particular_Cake4359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzqbx8
false
null
t3_1mzqbx8
/r/LocalLLaMA/comments/1mzqbx8/data_as_context_après_upload_dun_doc_comment_vous/
false
false
self
0
null
Biased comparison of frontends
18
Since day 1 of my journey with local LLMs (I jumped right in without ever trying providers like ChatGPT) I've been using Open WebUI, which is more or less the vanilla choice for an Unraid server setup (Ollama + Open WebUI). After going deeper I switched hardware, backends, and frontends, and became a little frustrated with the recent direction of OWUI. Let's cut it short (not that short, tbh):

1. Open WebUI

Pros:

+ easy to use and set up on Docker
+ integrated web search
+ customisation including parameters and TTS
+ WebUI to serve LLMs across devices

Cons:

- no native support for MCP servers (a dealbreaker for me given recent MCP development)
- a separate backend is required

2. LM Studio

Pros:

+ one-stop solution for downloading and running local LLMs on different hardware, including Apple Silicon
+ native MCP server support
+ easy to set up and run (can't be easier, tbh)

Cons:

- no web search (it can be done via an MCP tool, though)
- no WebUI for serving LLMs across devices (sad, it's almost perfect)
- no plug-ins (registration on the beta channel did not work for me)

3. AnythingLLM

Pros:

+ supports serving LLMs on Docker
+ supports different backends
+ AI agent setup made easy
+ sophisticated RAG setup

Cons:

- no serving LLMs across devices if running the desktop version
- no customisation for different external TTS endpoints
- the agent has to be called out in each chat

4. LibreChat

Pros:

+ native support for MCP servers
+ supports different backends

Cons:

- a pain in the butt to set up

5. SillyTavern

Pros:

+ supports different backends
+ sophisticated RP settings (some find them useful)
+ extensions readily available for supporting MCP servers
+ customisable TTS setup
+ once it's up and running you can get things out of it that no other frontend can give you
+ WebUI serving across devices is available

Cons:

- setting up Docker is not the easiest thing
- setting up the rest through the UI is a daunting task before things are up and running
- seriously, SillyTavern? How can it have a name like that while offering such a full feature set? I can't even tell people I learn things through it

Verdict: I'm using ST now, even though it's not the perfect solution and the name is silly. All the frontends tested here are quite good, actually; it's just that ST seems to offer more, though that means it's another rabbit hole. LM Studio is my go-to backend + frontend for its support of different architectures including Apple Silicon (I switched to Apple from ROCm). If they ever offer the same interface via WebUI it will be a killer. I haven't tested LibreChat much because of its painful setup and maintenance. Open WebUI started to become a no-no for me because of its MCPO model of supporting MCP servers. AnythingLLM: I'm not a big RAG user, but it's quite nice at that, plus the nice interface; I just hated that I need to call the agent in every new chat.

So to wrap up: give them a try yourself if you're looking for different frontends. Please let me know if you have other UI recommendations as well.
2025-08-25T12:46:08
https://www.reddit.com/r/LocalLLaMA/comments/1mzpi3o/biased_comparison_of_frontends/
moritzchow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzpi3o
false
null
t3_1mzpi3o
/r/LocalLLaMA/comments/1mzpi3o/biased_comparison_of_frontends/
false
false
self
18
null
What are your thoughts on NanoGPT? I compare it to having an Infinity Gauntlet for LLMs, like Thanos.
0
2025-08-25T12:26:20
https://www.reddit.com/gallery/1mzp2ov
Repulsive-Monk1022
reddit.com
1970-01-01T00:00:00
0
{}
1mzp2ov
false
null
t3_1mzp2ov
/r/LocalLLaMA/comments/1mzp2ov/what_are_your_thoughts_on_nanogpt_i_compare_it_to/
false
false
https://b.thumbs.redditm…W4gC5kFT1r2Q.jpg
0
null
GPU for AI
0
I'm a beginner right now, but I plan to study AI for years and I'd like a graphics card I won't need to worry about replacing for about 2 years. What do you think of the 5060 Ti with 16GB VRAM for AI? Can I study computer vision and chatbots on it up to at least an intermediate level, touching on advanced? (I don't have the money for a 5070 or above.)
2025-08-25T11:38:14
https://www.reddit.com/r/LocalLLaMA/comments/1mzo368/gpu_pra_ia/
Professional-Fly8636
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzo368
false
null
t3_1mzo368
/r/LocalLLaMA/comments/1mzo368/gpu_pra_ia/
false
false
self
0
null
Testers for Seed-OSS tool calling wanted!
15
Following the adoption of the model architecture itself, I've added a pull request to llama.cpp to support Seed-OSS native tool calls and reasoning: [https://github.com/ggml-org/llama.cpp/pull/15552](https://github.com/ggml-org/llama.cpp/pull/15552) This one has been somewhat annoying because Seed has its own tool-calling format, very similar to the infamous Qwen-Coder, so I would be grateful if anyone able to run the model at a higher quant than Q2\_K\_S could test it and report any potential problems.
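For anyone willing to help test, a rough recipe for building the PR branch and exercising a tool call through the OpenAI-compatible endpoint. The model path, quant, context size, and local branch name are placeholders; the tool schema is just a dummy:

```shell
# Fetch and build the PR branch (branch name 'seed-oss-tools' is arbitrary)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git fetch origin pull/15552/head:seed-oss-tools
git checkout seed-oss-tools
cmake -B build && cmake --build build -j

# Serve Seed-OSS with the chat template applied server-side (--jinja)
./build/bin/llama-server -m /path/to/seed-oss-Q4_K_M.gguf --jinja -c 8192 &

# Send a request with a dummy tool and check the parsed tool_calls field
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{"type": "function", "function": {
      "name": "get_weather",
      "parameters": {"type": "object",
                     "properties": {"city": {"type": "string"}}}}}]
  }'
```

If the parser works, the response should contain a structured `tool_calls` entry rather than the raw Seed-format text in `content`.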
2025-08-25T11:33:19
https://www.reddit.com/r/LocalLLaMA/comments/1mznzt6/testers_for_seedoss_tool_calling_wanted/
ilintar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mznzt6
false
null
t3_1mznzt6
/r/LocalLLaMA/comments/1mznzt6/testers_for_seedoss_tool_calling_wanted/
false
false
self
15
{'enabled': False, 'images': [{'id': 'K4XyzVkCiLFetLkB3t4CX6p7GPM1a7PZvmdvJwR4QaY', 'source': {'height': 600, 'url': 'https://external-preview.redd.it/K4XyzVkCiLFetLkB3t4CX6p7GPM1a7PZvmdvJwR4QaY.png?auto=webp&s=bdbf3aa55181519c82c2211336e24558c52d4c87', 'width': 1200}, 'variants': {}}]}
What features would you like to see in a local LLM UI
5
We are adding new features to E-Worker [https://app.eworker.ca](https://app.eworker.ca), and the best place to ask is this Reddit community: what are the most important features you want in a local LLM UI? For example, we added local and remote LLM support (Ollama and Docker for local; Google, OpenAI, OpenRouter, and DeepSeek for remote). One of the things that bugs me is that you can't organize chats easily in some apps, so we made it a powerful tree structure that is saved locally. Some people want full privacy; we even let you use a local language service to prevent spell-check data from being sent to Google, Microsoft, or others: just full privacy. We added local document, sheet, note, and text editors so users can edit documents with AI (the editors are there; the AI integration is coming soon). I'm wondering, what else do people look for in a local LLM UI?
2025-08-25T11:31:47
https://www.reddit.com/r/LocalLLaMA/comments/1mznyqx/what_features_would_you_like_to_see_in_a_local/
Working-Magician-823
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mznyqx
false
null
t3_1mznyqx
/r/LocalLLaMA/comments/1mznyqx/what_features_would_you_like_to_see_in_a_local/
false
false
self
5
null
Local reasoning model
0
Apart from GPT-OSS and Qwen, is there any good reasoning model available?
2025-08-25T11:22:13
https://www.reddit.com/r/LocalLLaMA/comments/1mznsa1/local_reasoning_model/
nitizen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mznsa1
false
null
t3_1mznsa1
/r/LocalLLaMA/comments/1mznsa1/local_reasoning_model/
false
false
self
0
null
InternVL3_5 series is out!!
247
[internlm (InternLM)](https://huggingface.co/organizations/internlm/activity/all) https://preview.redd.it/resy0a6n95lf1.png?width=1294&format=png&auto=webp&s=9db47fa8a145b99f6bd74750e0d3a0791d85137f
2025-08-25T10:40:58
https://www.reddit.com/r/LocalLLaMA/comments/1mzn0zm/internvl3_5_series_is_out/
kironlau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzn0zm
false
null
t3_1mzn0zm
/r/LocalLLaMA/comments/1mzn0zm/internvl3_5_series_is_out/
false
false
https://b.thumbs.redditm…4FdsC-shZ3ic.jpg
247
{'enabled': False, 'images': [{'id': 'oVE1-EnaLKFKvov2KcAAd41NTqlkCry1b2bYAP90Upw', 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oVE1-EnaLKFKvov2KcAAd41NTqlkCry1b2bYAP90Upw.png?auto=webp&s=102531bfb45719d9368f37c9cc3b54eca9c91808', 'width': 1200}, 'variants': {}}]}
need some help verifying if flash attention works
1
This is my first internship after uni and I was told to play around with FlashAttention and integrate it into our model training. I'm pretty inexperienced, and while training works with FlashAttention, I don't really know what advantage I'm getting: the iterations/sec are the same, and larger batch sizes still lead to OOM errors. Since FlashAttention doesn't change the output quality, can someone advise me which metric is actually affected and where to check it? (Using AWS SageMaker.)
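One way to see what FlashAttention actually changes: it computes mathematically identical attention with a tiled online softmax, so tokens/sec and loss can look unchanged while peak activation memory drops from O(N²) to O(N·chunk). On the GPU side the metric to watch is peak memory (e.g. `torch.cuda.max_memory_allocated()`) and the largest sequence length or batch that fits; if larger batches still OOM identically, FlashAttention may not actually be active. A NumPy sketch of the two formulations:

```python
import numpy as np

def naive_attention(q, k, v):
    """Materializes the full [Nq, Nk] score matrix: O(N^2) memory."""
    s = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def chunked_attention(q, k, v, chunk=32):
    """Flash-style online softmax over key chunks: only ever holds a
    [Nq, chunk] tile. Output is identical to naive_attention, which is
    why loss and output quality do not change; the win is peak memory
    (and fused-kernel speed on GPU)."""
    d = q.shape[-1]
    acc = np.zeros_like(q, dtype=np.float64)
    m = np.full(q.shape[0], -np.inf)   # running row max
    l = np.zeros(q.shape[0])           # running softmax denominator
    for s0 in range(0, k.shape[0], chunk):
        s = q @ k[s0:s0 + chunk].T / np.sqrt(d)
        m_new = np.maximum(m, s.max(axis=-1))
        scale = np.exp(m - m_new)      # rescale old accumulator
        p = np.exp(s - m_new[:, None])
        acc = acc * scale[:, None] + p @ v[s0:s0 + chunk]
        l = l * scale + p.sum(axis=-1)
        m = m_new
    return acc / l[:, None]
```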
2025-08-25T10:24:48
https://www.reddit.com/r/LocalLLaMA/comments/1mzmr2n/need_some_help_verifying_if_flash_attention_works/
confused_8357
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzmr2n
false
null
t3_1mzmr2n
/r/LocalLLaMA/comments/1mzmr2n/need_some_help_verifying_if_flash_attention_works/
false
false
self
1
null
Best cost-effective TTS solution for LiveKit voice bot (human-like voice, low resources)?
1
Hey folks, I'm working on a conversational voice bot using LiveKit Agents and trying to figure out the most cost-effective setup for STT + TTS. STT: thinking about the usual options, but open to cheaper/more reliable suggestions that work well in real time. TTS: ElevenLabs sounds great, but it's way too expensive for my use case. I've looked at OpenAI's GPT-4o mini TTS and also Gemini TTS. Both seem viable, but I need something that feels humanized (not robotic like gTTS), with natural pacing and ideally some control over speed/intonation. Constraints: server resources are limited, a VM with 8-16 GB RAM and no GPU. Ideally I want something lightweight enough to run locally; otherwise a cloud API, if it's cost-effective. If cloud is the only realistic option, which provider (OpenAI, Gemini, others?) and model do you recommend for the best balance of quality and cost? Goal: a natural-sounding real-time voice conversation bot, with minimal latency and costs kept under control. Has anyone here implemented this kind of setup with LiveKit? Would love to hear your experience, what stack you went with, and whether local models are even worth considering vs just using a good cloud TTS. Thanks!
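For a CPU-only VM, one lightweight local option worth benchmarking is Piper. A sketch of the CLI flow; the voice name and flags are from memory of the `piper-tts` package, so treat them as assumptions and check `piper --help` and the project's voice list for your version:

```shell
# Install Piper and synthesize one utterance on CPU.
# 'en_US-lessac-medium' is one of the published voices; depending on the
# piper version you may need to download the .onnx voice file first and
# pass its path to --model instead of a bare voice name.
pip install piper-tts
echo "Hello, thanks for calling." | \
  piper --model en_US-lessac-medium --output_file reply.wav
```

Latency on a modest VM is typically well under real time for short sentences, which is what matters for a turn-based voice bot; it will sound less expressive than ElevenLabs, so it is worth an A/B listen before committing.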
2025-08-25T09:59:47
https://www.reddit.com/r/LocalLLaMA/comments/1mzmb9n/best_costeffective_tts_solution_for_livekit_voice/
Funny_Working_7490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzmb9n
false
null
t3_1mzmb9n
/r/LocalLLaMA/comments/1mzmb9n/best_costeffective_tts_solution_for_livekit_voice/
false
false
self
1
null
Password only for this week: Welcome to Hugston
0
HugstonOne Enterprise Edition represents a **unified, very powerful and secure, local AI ecosystem** – ideal for individual users and enterprises needing **offline AI capabilities, model flexibility, and full data control**. Its strength lies in democratizing enterprise-grade AI without cloud dependencies.

* **True Local Power**: All processing happens on-premises – **zero data leaves your network**.
* **Universal Model Compatibility**: Works with **any GGUF model** (no proprietary formats).
* **Zero-Trust Security**: Model isolation for enterprise compliance (GDPR, HIPAA, etc.).
* **No Cloud Lock-in**: Switch between online/offline modes instantly without reconfiguration.

# Key Features & Capabilities

|Feature Category|Description|
|:-|:-|
|**Offline-First Operation**|Fully offline-capable; works without internet.|
|**Multi-Mode Execution**|**Online** or **offline**; pure local execution with no forced network access.|
|**Server-CLI Support**|Native CLI interface and server deployment, batch processing, and low-level control.|
|**Local API**|RESTful API for seamless integration with enterprise systems and websites.|
|**GGUF Model Support**|**10,000+ GGUF models**, no model conversion needed. Compatible with [Qwen](https://hugston.com/explore?folder=llm_models), [DeepSeek](https://hugston.com/explore?folder=llm_models), [Seek Coder](https://hugston.com/explore?folder=llm_models), [GLM](https://hugston.com/explore?folder=llm_models), [ExaOne](https://hugston.com/explore?folder=llm_models), [Magistral](https://hugston.com/explore?folder=llm_models), [Hunyuan](https://hugston.com/explore?folder=llm_models), [Falcon](https://hugston.com/explore?folder=llm_models), [Mimo](https://hugston.com/explore?folder=llm_models), [Gemma](https://hugston.com/explore?folder=llm_models), [Phi](https://hugston.com/explore?folder=llm_models), [Mistral](https://hugston.com/explore?folder=llm_models), [Wizard](https://hugston.com/explore?folder=llm_models), [Dolphin](https://hugston.com/explore?folder=llm_models), [Devstral](https://hugston.com/explore?folder=llm_models), [LLama](https://hugston.com/explore?folder=llm_models), [Gpt Oss](https://hugston.com/explore?folder=llm_models) models.|
|**Memory Optimization**|Dynamic memory management for large models (100GB+), optimized for RAM/CPU.|
|**Code Editor & Preview**|Integrated IDE with real-time code rendering, syntax highlighting, and model-agnostic code preview.|
|**Multi-Format Processing**|Handles images, text, and binary files natively (PDFs, audio, and video in beta) via built-in OCR, image segmentation, and format converters.|
|**Advanced Terminal**|Command-line interface for advanced operations (model tuning, logging, diagnostics, and automation).|
|**Performance Metrics**|Real-time tracking of latency, throughput, memory usage, GPU/CPU utilization, and model accuracy.|
2025-08-25T09:56:43
https://v.redd.it/xrwr1nka05lf1
Trilogix
/r/LocalLLaMA/comments/1mzm9jg/password_only_for_this_week_welcome_to_hugston/
1970-01-01T00:00:00
0
{}
1mzm9jg
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/xrwr1nka05lf1/DASHPlaylist.mpd?a=1758837412%2CNDAwZDgxZDAzNTcxMmYyYjRhYjQ4YjU4YTQ1NjMxZDVjYzMyZjAxMWZlMjQ1ZDVlZTZkMzAxOTM5ZTNkYWNjYw%3D%3D&v=1&f=sd', 'duration': 456, 'fallback_url': 'https://v.redd.it/xrwr1nka05lf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 692, 'hls_url': 'https://v.redd.it/xrwr1nka05lf1/HLSPlaylist.m3u8?a=1758837412%2CNGY5Mjg3MmViNjE4NDdkYjhmODRkOWRiZDQyYWJiZDEzNDNjMTdjOWQ4ZDlkOTI1ZDJiNzNmYzMzMjRkNTU0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xrwr1nka05lf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1mzm9jg
/r/LocalLLaMA/comments/1mzm9jg/password_only_for_this_week_welcome_to_hugston/
false
false
https://external-preview…a0a1b392c2fc8da7
0
{'enabled': False, 'images': [{'id': 'bjd2bzVlamEwNWxmMewQiSUMVe9_xXbTGyn-uNVSQcvgjG1NseQNKzVrUJj8', 'source': {'height': 986, 'url': 'https://external-preview.redd.it/bjd2bzVlamEwNWxmMewQiSUMVe9_xXbTGyn-uNVSQcvgjG1NseQNKzVrUJj8.png?format=pjpg&auto=webp&s=7fa6a2f8136d609cdcc4118a1b9ac922d62bc059', 'width': 1824}, 'variants': {}}]}
u/RSXLV appreciation post for releasing his updated faster Chatterbox-TTS fork yesterday. Major speed increase indeed, response is near real-time now. Let's all give him a big ol' thank you! Fork in the comments.
79
Fork: https://www.reddit.com/r/LocalLLaMA/comments/1mza0wy/comment/nak1lea/?context=3 u/RSXLV again, huge shoutout to you, my guy. This fork is so fast now
2025-08-25T09:51:02
https://v.redd.it/9txv4idb05lf1
swagonflyyyy
v.redd.it
1970-01-01T00:00:00
0
{}
1mzm677
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9txv4idb05lf1/DASHPlaylist.mpd?a=1758707480%2CNWIyNjE4Yjg5NDgwZTc0ZTMxODU2NTMxMThhZWVlNTQ3NGRiZWY3ODBjOTI1OTM1OWJkMDdlZDM5MTdmMjE4MA%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/9txv4idb05lf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/9txv4idb05lf1/HLSPlaylist.m3u8?a=1758707480%2CNzQxYjE2N2Y3MWMyYjAzZGJjYjNjNTFhMTRjNjkxZGQ4MjhiM2I2NWM0ZDljYWZkYjg0MGYyNDRhNDg1M2JmMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9txv4idb05lf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1mzm677
/r/LocalLLaMA/comments/1mzm677/ursxlv_appreciation_post_for_releasing_his/
false
false
https://external-preview…13c2ae86afbf1fbb
79
{'enabled': False, 'images': [{'id': 'Nm9qN2ppZGIwNWxmMYB8gfxVUG7ntLAy6UFGKU3bfv7xh4HVFM-UizvnZAOP', 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Nm9qN2ppZGIwNWxmMYB8gfxVUG7ntLAy6UFGKU3bfv7xh4HVFM-UizvnZAOP.png?format=pjpg&auto=webp&s=bc342a7f127ce44704c77acf9858b09af6b3a341', 'width': 1920}, 'variants': {}}]}
Support for Intern-S1-mini has been merged into llama.cpp
37
[https://huggingface.co/internlm/Intern-S1-mini](https://huggingface.co/internlm/Intern-S1-mini) Model description: We introduce **Intern-S1-mini**, a lightweight open-source multimodal reasoning model based on the same techniques as [**Intern-S1**](https://huggingface.co/internlm/Intern-S1). Built upon an 8B dense language model (Qwen3) and a 0.3B vision encoder (InternViT), Intern-S1-mini has been further pretrained on **5 trillion tokens** of multimodal data, including over **2.5 trillion scientific-domain tokens**. This enables the model to retain strong general capabilities while excelling in specialized scientific domains such as **interpreting chemical structures, understanding protein sequences, and planning compound synthesis routes**, making Intern-S1-mini a capable research assistant for real-world scientific applications. # Features * Strong performance across language and vision reasoning benchmarks, especially scientific tasks. * Continuously pretrained on a massive 5T-token dataset, with over 50% specialized scientific data, embedding deep domain expertise. * Dynamic tokenizer enables native understanding of molecular formulas and protein sequences.
2025-08-25T09:49:37
https://github.com/ggml-org/llama.cpp/pull/15412
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1mzm5dk
false
null
t3_1mzm5dk
/r/LocalLLaMA/comments/1mzm5dk/support_interns1mini_has_been_merged_into_llamacpp/
false
false
default
37
{'enabled': False, 'images': [{'id': 'C4PZMcjKvXogRwaLothTEm2AuNm9c8ehdTTP3nuiquQ', 'source': {'height': 600, 'url': 'https://external-preview.redd.it/C4PZMcjKvXogRwaLothTEm2AuNm9c8ehdTTP3nuiquQ.png?auto=webp&s=7ff974aa84aa4140ce45e4d1977e0b28b66d0d09', 'width': 1200}, 'variants': {}}]}
Best Local LLM for coding?
9
Hey, I'm new to LLMs and I want to know the best free local LLM for coding purposes only. Let me know about it.
2025-08-25T09:33:46
https://www.reddit.com/r/LocalLLaMA/comments/1mzlwco/best_local_llm_for_coding/
Notalabel_4566
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzlwco
false
null
t3_1mzlwco
/r/LocalLLaMA/comments/1mzlwco/best_local_llm_for_coding/
false
false
self
9
null
Anyone knows a typescript open source general ai agent?
1
[removed]
2025-08-25T09:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1mzlod1/anyone_knows_a_typescript_open_source_general_ai/
Worried_Phone_9500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzlod1
false
null
t3_1mzlod1
/r/LocalLLaMA/comments/1mzlod1/anyone_knows_a_typescript_open_source_general_ai/
false
false
self
1
null
Hardware to run Qwen3-235B-A22B-Instruct
7
Has anyone experimented with the above model who can shed some light on the minimum hardware requirements?
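Weight size alone sets the floor. The sketch below is back-of-envelope arithmetic with assumed bits-per-weight densities, not measured file sizes, so treat the quant labels as rough analogies. Since only ~22B parameters are active per token (the A22B part), partial CPU+RAM offload with a modest GPU is generally workable for MoE models of this shape:

```python
def quantized_weight_gib(params_billion, bits_per_weight):
    """Back-of-envelope size of the quantized weights alone;
    KV cache, activations, and OS overhead come on top."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# Assumed densities: ~4.5 bpw for a Q4_K_M-class quant,
# ~2.7 bpw for an IQ2-class quant.
print(round(quantized_weight_gib(235, 4.5)))  # ≈ 123 GiB
print(round(quantized_weight_gib(235, 2.7)))  # ≈ 74 GiB
```

So roughly 128 GB of combined RAM+VRAM is the practical floor for a low-bit quant, and closer to 160 GB for comfortable Q4-class operation with context.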
2025-08-25T09:13:48
https://www.reddit.com/r/LocalLLaMA/comments/1mzllf3/hardware_to_run_qwen3235ba22binstruct/
Sea-Replacement7541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzllf3
false
null
t3_1mzllf3
/r/LocalLLaMA/comments/1mzllf3/hardware_to_run_qwen3235ba22binstruct/
false
false
self
7
null
I built Husk, a native, private, and open-source iOS client for your local models
9
I've been using Ollama a lot and wanted a really clean, polished, and native way to interact with my privately hosted models on my iPhone. While there are some great options out there, I wanted something that felt like a first-party Apple app—fast, private, and simple. Husk is an open-source, Ollama-compatible app for iOS. The whole idea is to provide a beautiful and seamless experience for chatting with your models without your data ever leaving your control. # Features: * **Fully Offline & Private:** It's a native Ollama client. Your conversations stay on your devices. * **Optional iCloud Sync:** If you want, you can sync your chat history across your devices using Apple's end-to-end encryption (macOS support coming soon!). * **Attachments:** You can attach text-based files to your chats (image support for multimodal models is on the roadmap!). * **Highly Customisable:** You can set custom names, system prompts, and other parameters for your models. * **Open Source:** The entire project is open-source under the MIT license. To help support me, I've put Husk on the App Store with a small fee. If you buy it, thank you so much! It directly funds continued development. However, since it's fully open-source, you are more than welcome to build and install yourself from the GitHub repo. The instructions are all in the README. I'm also planning to add macOS support and integrations for other model providers soon. I'd love to hear what you all think! Any feedback, feature requests, or bug reports are super welcome. **TL;DR:** I made a native, private, open-source iOS app for Ollama. It's a paid app on the App Store to support development, but you can also build it yourself for free from the Github Repo
2025-08-25T08:53:17
https://www.reddit.com/r/LocalLLaMA/comments/1mzl9tp/i_built_husk_a_native_private_and_opensource_ios/
nathan12581
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzl9tp
false
null
t3_1mzl9tp
/r/LocalLLaMA/comments/1mzl9tp/i_built_husk_a_native_private_and_opensource_ios/
false
false
self
9
null
LM studio error trying to run GPT-OSS 120b on 64gb RAM and 16gb VRAM. Any fix?
1
I have been getting this error when trying to run GPT-OSS 120B (the LM Studio version). I have 64GB of RAM and 16GB of VRAM. Is this an OOM error? My Windows install uses roughly 8-9GB of RAM, meaning I'd have 55GB left + 16GB VRAM = 71GB of total usable memory. https://preview.redd.it/hreffl1aq4lf1.png?width=2499&format=png&auto=webp&s=aa8fb97ab0d45f2dc02b1e9bf9524784ddd569fd I don't know what the issue is. I should still be able to run it, right?
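For what it's worth, the post's arithmetic can be sketched as a back-of-envelope feasibility check. The model-size and overhead figures below are rough assumptions on my part, not LM Studio internals:

```python
# Rough feasibility check for fitting a GGUF into combined RAM + VRAM.
# Overhead figures are illustrative assumptions, not LM Studio internals.

def fits_in_memory(model_file_gb: float, ram_gb: float, vram_gb: float,
                   os_overhead_gb: float = 9.0,
                   context_overhead_gb: float = 5.0) -> bool:
    """Model weights plus KV-cache/context overhead must fit in what
    remains after the OS takes its share of system RAM."""
    usable_gb = (ram_gb - os_overhead_gb) + vram_gb
    return model_file_gb + context_overhead_gb <= usable_gb

# GPT-OSS 120B in MXFP4 is roughly a 60 GB download, so on paper a
# 64 GB RAM + 16 GB VRAM box is borderline but feasible.
print(fits_in_memory(model_file_gb=60, ram_gb=64, vram_gb=16))  # True
```

If the check passes on paper but loading still fails, the usual suspects are the runtime's own guardrails, context size, or fragmentation rather than raw capacity.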
2025-08-25T08:51:54
https://www.reddit.com/r/LocalLLaMA/comments/1mzl939/lm_studio_error_trying_to_run_gptoss_120b_on_64gb/
noyingQuestions_101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzl939
false
null
t3_1mzl939
/r/LocalLLaMA/comments/1mzl939/lm_studio_error_trying_to_run_gptoss_120b_on_64gb/
false
false
https://b.thumbs.redditm…a_1gr4lcsFFY.jpg
1
null
Do you work in this field or it's your hobby?
9
Hello, I wonder how many people are interested in LLMs and similar AI stuff purely as a hobby. I find myself messing with LLMs only because I think it's cool and I don't want to fall behind other people who know how and why to use these tools. [View Poll](https://www.reddit.com/poll/1mzkfwo)
2025-08-25T07:57:54
https://www.reddit.com/r/LocalLLaMA/comments/1mzkfwo/do_you_work_in_this_field_or_its_your_hobby/
Lxxtsch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzkfwo
false
null
t3_1mzkfwo
/r/LocalLLaMA/comments/1mzkfwo/do_you_work_in_this_field_or_its_your_hobby/
false
false
self
9
null
So, even the Sheikh of Dubai is waiting for the DGX SPARK
110
Everyone will get one for Christmas, Jensen said.
2025-08-25T07:34:50
https://i.redd.it/ouehxl1lc4lf1.png
No_Palpitation7740
i.redd.it
1970-01-01T00:00:00
0
{}
1mzk3ft
false
null
t3_1mzk3ft
/r/LocalLLaMA/comments/1mzk3ft/so_even_the_sheikh_of_dubai_is_waiting_for_the/
false
false
default
110
{'enabled': True, 'images': [{'id': 'ouehxl1lc4lf1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/ouehxl1lc4lf1.png?width=108&crop=smart&auto=webp&s=5cbb7b0cb57fdbaf8426268e444edd9c0ec16251', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/ouehxl1lc4lf1.png?width=216&crop=smart&auto=webp&s=80fcfd20eee21851969924d2020efdd048428777', 'width': 216}, {'height': 193, 'url': 'https://preview.redd.it/ouehxl1lc4lf1.png?width=320&crop=smart&auto=webp&s=5a6d0b7a432b6e9611fbcee4f74b79dd0f5cd2c4', 'width': 320}, {'height': 387, 'url': 'https://preview.redd.it/ouehxl1lc4lf1.png?width=640&crop=smart&auto=webp&s=9f15a36d80110b140f159feccb9e39f5909232e6', 'width': 640}, {'height': 581, 'url': 'https://preview.redd.it/ouehxl1lc4lf1.png?width=960&crop=smart&auto=webp&s=85074d07abe6f1e2e25a7488a7d2dc178bb286eb', 'width': 960}, {'height': 654, 'url': 'https://preview.redd.it/ouehxl1lc4lf1.png?width=1080&crop=smart&auto=webp&s=27f52bd8dc3d707d1ea416a0184f8ee092bfa283', 'width': 1080}], 'source': {'height': 654, 'url': 'https://preview.redd.it/ouehxl1lc4lf1.png?auto=webp&s=cb8b7dd8206e0747c09c89a384cb19e50c4906da', 'width': 1080}, 'variants': {}}]}
Any good smaller LLM models (< 20B) for classification tasks?
0
I'm currently classifying texts; nothing too elaborate: what's the language, are there any spelling mistakes, any grammar mistakes, and is the text reasonably appropriate for all audiences. The output is requested as JSON. Quite straightforward. I've tried a number of models now; the prompt stays the same for all of them. To my surprise, GPT-OSS-20B (4-bit MLX) has been the only reliable model for this task that I've found so far, using it with "low" reasoning effort. In addition to GPT-OSS-20B, I've tried the following:

* qwen3-30b-a3b-thinking-2507 (bf16)
* qwen3-30b-a3b-2507 (8bit mlx)
* gemma-27b (Q8_0)
* meta-llama-3.1-70b-instruct (4bit mlx)
* meta-llama-3.1-8b-instruct (bf16)
* ... a number of other smaller models that were not even in the vicinity of being up to the task

When asking Claude, Perplexity, and ChatGPT about suitable models for this, the answers hover around 8B models, but I've found those pretty useless for this particular task. Even Qwen3 30B, dense as well as thinking, struggles to find ALL spelling mistakes, not to mention grammar mistakes. I'm quite surprised by this; so much so that I'm starting to think I'm doing something wrong here. For those of you that have done similar things / set up similar tasks, which model gave you good results?
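One thing worth ruling out when comparing models on a JSON-output task: smaller models often wrap the JSON in prose or code fences, so part of "reliable" is tolerant parsing on the receiving end. A minimal sketch (the field names are illustrative, not the poster's actual schema):

```python
import json
import re

def parse_classification(raw: str) -> dict:
    """Pull the first JSON object out of a model response, tolerating
    code fences and chatter around it. Raises ValueError if none found."""
    match = re.search(r"\{.*\}", raw, flags=re.DOTALL)
    if match is None:
        raise ValueError(f"no JSON object in response: {raw!r}")
    return json.loads(match.group(0))

# Hypothetical model reply with fences and filler around the payload.
reply = 'Sure! ```json\n{"language": "en", "spelling_mistakes": 2, "grammar_mistakes": 0, "all_audiences": true}\n```'
print(parse_classification(reply)["language"])  # en
```

With parsing failures separated from classification failures, it gets easier to tell whether a model is actually bad at the task or just bad at emitting clean JSON.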
2025-08-25T07:34:20
https://www.reddit.com/r/LocalLLaMA/comments/1mzk367/any_good_smaller_llm_models_20b_for/
CBW1255
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzk367
false
null
t3_1mzk367
/r/LocalLLaMA/comments/1mzk367/any_good_smaller_llm_models_20b_for/
false
false
self
0
null
What in your experience is the best model with the smallest size in GB?
13
I have a 4060 8GB and I am having a lot of fun testing 7B models and so on. But which one is the best at reasoning, code, and so on, in your experience? (It doesn't have to be under 8GB.)
2025-08-25T07:32:42
https://www.reddit.com/r/LocalLLaMA/comments/1mzk2dg/what_in_your_experience_is_the_best_model_with/
Brilliant-Piece1490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzk2dg
false
null
t3_1mzk2dg
/r/LocalLLaMA/comments/1mzk2dg/what_in_your_experience_is_the_best_model_with/
false
false
self
13
null
Efficiently detecting spam e-mails: can super small LLMs like Gemma 3 270M do it?
25
It's been reiterated many times that the 270M Gemma has been created to be finetuned for specific narrow tasks and that it works wells as a classifier. So here's a use-case: a website with a contact form receives human-written messages, all the conventional spam filters work, but plenty of the irrelevant messages still get through because they are copy-pasted and written by actual people. Does Gemma 270M and other similar sized models effectively classify those messages as spam? Is there a reason to use bigger models for this kind of tasks?
2025-08-25T07:29:32
https://www.reddit.com/r/LocalLLaMA/comments/1mzk0jr/efficiently_detecting_spam_emails_can_super_small/
s101c
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzk0jr
false
null
t3_1mzk0jr
/r/LocalLLaMA/comments/1mzk0jr/efficiently_detecting_spam_emails_can_super_small/
false
false
self
25
null
A bit lost with models size
1
So since I'm a bit new to this, I only have my aging gaming setup: an RTX 3080 Ti (12GB) and 32GB RAM (I will probably upgrade soon). I'm mostly interested in RP and storytelling. So far I have only toyed around with 13B models, but with quantization I have no idea what is better. Should I use a full or almost-full-precision 13B model, or a quantized larger model like a 34B?
2025-08-25T07:00:51
https://www.reddit.com/r/LocalLLaMA/comments/1mzjk0l/a_bit_lost_with_models_size/
Correct-Assistance81
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzjk0l
false
null
t3_1mzjk0l
/r/LocalLLaMA/comments/1mzjk0l/a_bit_lost_with_models_size/
false
false
self
1
null
Making some silly mistake while saving to GGUF in Unsloth?
2
Hi, I ran a training run earlier on gemma3-270m and created a LoRA, which I saved in my Google Drive. I did not save a GGUF at that point. So now, when I use Colab to download the LoRA and attempt to create a GGUF, I'm getting an error. I have never saved to GGUF before, so I am not sure if I am making some silly mistake. I basically just copied the code from the official notebook and ran it, but it's not working. Can someone take a look? My code:

```
from google.colab import drive
drive.mount('/content/drive')
!cp -r /content/drive/MyDrive/stuff/lora_model .

from transformers import TextStreamer
from unsloth import FastModel
import torch
from unsloth import FastLanguageModel
from peft import PeftModel

max_seq_length = 3072

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gemma-3-270m-it", # YOUR MODEL
    max_seq_length = max_seq_length,
    load_in_4bit = False,    # 4 bit quantization to reduce memory
    load_in_8bit = False,    # [NEW!] A bit more accurate, uses 2x memory
    full_finetuning = False, # [NEW!] We have full finetuning now!
)
model = PeftModel.from_pretrained(model, "lora_model")

text = [MY TESTING SAMPLE HERE]

_ = model.generate(
    **tokenizer(text, return_tensors = "pt").to("cuda"),
    max_new_tokens = 125,
    temperature = 1, top_p = 0.95, top_k = 64,
    streamer = TextStreamer(tokenizer, skip_prompt = True),
)

print('\n+++++++++++++++++++++++++++++\n')

model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
model.save_pretrained_gguf("model", tokenizer, quantization_method = "q8_0")
```

The load and inference run fine, and inference is in the fine-tuned format as expected. But when the GGUF part starts, I get this error. If I run just the GGUF saving, it says the input folder is not found, I guess because there is no `model` folder?

```
/usr/local/lib/python3.12/dist-packages/unsloth_zoo/saving_utils.py:632: UserWarning: Model is not a PeftModel (no Lora adapters detected). Skipping Merge. Please use save_pretrained() or push_to_hub() instead!
  warnings.warn("Model is not a PeftModel (no Lora adapters detected). Skipping Merge. Please use save_pretrained() or push_to_hub() instead!")
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/tmp/ipython-input-1119511992.py in <cell line: 0>()
      1 model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
----> 2 model.save_pretrained_gguf("model", tokenizer, quantization_method = "q8_0")

2 frames
/usr/local/lib/python3.12/dist-packages/unsloth_zoo/llama_cpp.py in convert_to_gguf(input_folder, output_filename, quantization_type, max_shard_size, print_output, print_outputs)
    654
    655     if not os.path.exists(input_folder):
--> 656         raise RuntimeError(f"Unsloth: `{input_folder}` does not exist?")
    657
    658     config_file = os.path.join(input_folder, "config.json")

RuntimeError: Unsloth: `model` does not exist?
```

I also tried loading just the LoRA and then running inference:

```
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model", # YOUR MODEL
    max_seq_length = max_seq_length,
    load_in_4bit = False,    # 4 bit quantization to reduce memory
    load_in_8bit = False,    # [NEW!] A bit more accurate, uses 2x memory
    full_finetuning = False, # [NEW!] We have full finetuning now!
)
```

In that case, the inference is the same as the vanilla untuned model and my fine-tuning does not take effect.
2025-08-25T06:21:40
https://www.reddit.com/r/LocalLLaMA/comments/1mziy6p/making_some_silly_mistake_while_saving_to_gguf_in/
regstuff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mziy6p
false
null
t3_1mziy6p
/r/LocalLLaMA/comments/1mziy6p/making_some_silly_mistake_while_saving_to_gguf_in/
false
false
self
2
null
Ai haunt
0
It's a bit off-topic, but maybe not, since it also involves LLM generation. I'm here looking for something new. I don't know the exact name for this type of site, but here are some examples: Lmarena, Pollinations, g4f, Ish, Yup, Genspark, Hivechat, Mirexa. I'd like to know what category or type of site these belong to. I'm aware there are many more like them, and I want to explore and discover similar platforms. If you can, please suggest some as well.
2025-08-25T05:47:23
https://www.reddit.com/r/LocalLLaMA/comments/1mzieae/ai_haunt/
Active-Drive-3795
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzieae
false
null
t3_1mzieae
/r/LocalLLaMA/comments/1mzieae/ai_haunt/
false
false
self
0
null
Is there a place where I can easily create a local RAG chat screen?
3
I'm building a local RAG using LangChain, but is there a place where I can easily create a chat screen like ChatGPT's that's actually usable, not just a demo?
2025-08-25T05:38:48
https://www.reddit.com/r/LocalLLaMA/comments/1mzi9c2/is_there_a_place_where_i_can_easily_create_a/
CantaloupeDismal1195
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzi9c2
false
null
t3_1mzi9c2
/r/LocalLLaMA/comments/1mzi9c2/is_there_a_place_where_i_can_easily_create_a/
false
false
self
3
null
RAG for financial fact checking
0
Has anyone here used an LLM for multi-class classification? I am using RAG, extracting the top 30 docs from the DuckDuckGo API, but the performance is poor. My dataset has 5 classes: True, Mostly True, Half True, False, Mostly False. The model very often confuses Mostly True and True, and it never predicted Half True. It rarely predicted True as well. Any insight on this? Should I use LoRA for this kind of problem? I am new to this area; any help would be appreciated.
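One diagnostic that might help: the five labels are ordinal, so exact-match accuracy hides whether errors are near-misses (True vs Mostly True) or catastrophic (True vs False). A small sketch of an ordinal error metric; the label names follow the post, but the numeric mapping is my assumption:

```python
# The five fact-check classes sit on an ordered truthfulness scale,
# so distance between labels is a more informative error measure
# than exact-match accuracy. Numeric mapping is an assumption.

SCALE = {"False": 0, "Mostly False": 1, "Half True": 2,
         "Mostly True": 3, "True": 4}

def ordinal_error(predicted: str, actual: str) -> int:
    """Distance on the truthfulness scale: confusing True with
    Mostly True costs 1, confusing True with False costs 4."""
    return abs(SCALE[predicted] - SCALE[actual])

print(ordinal_error("Mostly True", "True"))  # 1
print(ordinal_error("True", "False"))        # 4
```

If most of the error mass sits at distance 1, the model has roughly the right signal and better-calibrated prompting (or LoRA on the label boundary) is more promising than if errors are spread across the whole scale.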
2025-08-25T03:47:26
https://www.reddit.com/r/LocalLLaMA/comments/1mzgarm/rag_for_financial_fact_checking/
Fast-Smoke-1387
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzgarm
false
null
t3_1mzgarm
/r/LocalLLaMA/comments/1mzgarm/rag_for_financial_fact_checking/
false
false
self
0
null
Is it worth to buy the second RTX PRO 6000?
0
I already got one, I am using it for inference and finetuning, I am thinking if I should buy another one to have a 192GB VRAM setup. Does having a 2nd card open up a lot of extra opportunities?
2025-08-25T03:43:40
https://www.reddit.com/r/LocalLLaMA/comments/1mzg88t/is_it_worth_to_buy_the_second_rtx_pro_6000/
kitgary
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzg88t
false
null
t3_1mzg88t
/r/LocalLLaMA/comments/1mzg88t/is_it_worth_to_buy_the_second_rtx_pro_6000/
false
false
self
0
null
How to convert HF model to MLX without ram limitation
2
I am currently fine-tuning a large LLM model using MLX on the Apple M3 Ultra. The original tensor files recently released are larger than the M3's RAM (256GB), making it impossible to perform quantization locally using mlx_lm.convert. Additionally, it seems impossible to use HF's mlx-my-repo. In summary, is there a way to perform quantization without memory restrictions by sequentially reading Deepseek v3.1 or KIMI K-2?
2025-08-25T03:33:33
https://www.reddit.com/r/LocalLLaMA/comments/1mzg1hq/how_to_convert_hf_model_to_mlx_without_ram/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzg1hq
false
null
t3_1mzg1hq
/r/LocalLLaMA/comments/1mzg1hq/how_to_convert_hf_model_to_mlx_without_ram/
false
false
self
2
null
what’s new in domoai v2.4 (and how i use it for image + video generation)
0
I've been following [domo](https://www.domoai.app/home?via=081621AUG) updates for a while, and usually they bring steady improvements. But v2.4 feels like one of those updates that actually changes the way I work day to day: it isn't just faster or slightly cleaner, it feels like a serious upgrade across both image and video generation. The first thing I noticed was the image quality. Faces hold up much better now, with sharper features and more natural expressions. Hand shapes, which used to be one of the trickiest parts of AI art, are noticeably improved too. Skin texture doesn't have that plasticky look anymore, which makes portraits feel more lifelike. I've already used v2.4 to make promo art and scene visuals that look polished right out of the generator, saving me a lot of cleanup in post. The prompt interface also feels snappier, so testing variations and iterating on ideas is smoother. Video generation got an equally big lift. Contact animations like hugs, handshakes, or even lifting another character used to look stiff or awkward; now they read as believable, with movements that flow instead of snap. Facial timing also syncs much better with dialogue, so a line doesn't feel like it's floating on top of an unmoving face. One of my favorite small updates is the breathing loop preset: it adds a subtle rhythm to characters, making them feel alive without crossing into overacting or cartoonish motion. The best part is that I now use domoai v2.4 at nearly every stage of my projects. I start with it for concept art to set the tone, move into animation to bring the characters into motion, and finish with polishing touches before sharing. It feels like an all-in-one flow that's faster and more reliable than before. For me, this update has taken domoai from a tool I used selectively into something I rely on almost every time I make an AI project. I'm curious how others are using the new features. Has anyone else found tricks in v2.4 that changed their workflow?
2025-08-25T03:04:48
https://www.reddit.com/r/LocalLLaMA/comments/1mzfhln/whats_new_in_domoai_v24_and_how_i_use_it_for/
Gold_Negotiation9518
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzfhln
false
null
t3_1mzfhln
/r/LocalLLaMA/comments/1mzfhln/whats_new_in_domoai_v24_and_how_i_use_it_for/
false
false
self
0
null
Intel Granite Rapids CPU on sale at Newegg up to 65% off MSRP
77
Very good news for people who want to run the huge MoE models nowadays.

|CPU|MSRP|Newegg|% off|
|:-|:-|:-|:-|
|6980P|$17800|$6179|65.29%|
|6972P|$14600|$5433.2|62.79%|
|6944P|$6850|$4208|38.57%|
|6781P|$8960|$7590|15.29%|
|6761P|$6570|$6001|8.66%|
|6741P|$4421|$3900|11.78%|
|6731P|$2700|$2260.1|16.29%|
|6521P|$1250|$1208.2|3.34%|
2025-08-25T03:04:12
https://www.reddit.com/r/LocalLLaMA/comments/1mzfh73/intel_granite_rapids_cpu_on_sale_at_newegg_up_to/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzfh73
false
null
t3_1mzfh73
/r/LocalLLaMA/comments/1mzfh73/intel_granite_rapids_cpu_on_sale_at_newegg_up_to/
false
false
self
77
null
I Built a Separate Module for Qwen3 Coder – Outperforming Gemini 2.5 Pro now
0
I developed a separate module for Qwen3-Coder-480B after analyzing the issues with other editors. The key focus is on full-file edits, which should be performed in the following format:

```
[FILE:path/to/filename.ext]
(entire new file content on subsequent lines)
[ENDFILE]
```

AnchorEdit mode should be limited to a maximum of 3 lines. If a tool call for AnchorEdit fails, the system should provide clear and proper feedback. Each session should allow at least 30+ tool calls, and prompts should encourage reading the relevant files with the read tool. Currently, Qwen3 Coder outperforms Gemini 2.5 Pro for me, with fewer coding errors. The only advantage of Gemini is that it can sometimes understand my intentions without detailed explanations. However, Qwen3 Coder consistently delivers better results.
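For illustration, the [FILE]/[ENDFILE] convention described above can be extracted with a short regex; the exact whitespace handling here is my assumption, not the module's actual code:

```python
import re

# Minimal parser for the full-file edit format: [FILE:path] ... [ENDFILE].
# Whitespace handling is an assumption; lazy matching keeps multiple
# file blocks in one response from bleeding into each other.
FILE_BLOCK = re.compile(
    r"\[FILE:(?P<path>[^\]]+)\]\n(?P<body>.*?)\[ENDFILE\]",
    re.DOTALL,
)

def parse_file_edits(response: str) -> dict[str, str]:
    """Extract {path: new_file_content} pairs from a model response."""
    return {m["path"].strip(): m["body"] for m in FILE_BLOCK.finditer(response)}

demo = "[FILE:src/app.py]\nprint('hello')\n[ENDFILE]"
print(parse_file_edits(demo))  # {'src/app.py': "print('hello')\n"}
```

A format like this sidesteps the diff-application failures that plague anchor-based edits: the whole file is replaced, so there is nothing to mis-anchor.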
2025-08-25T02:57:22
https://www.reddit.com/r/LocalLLaMA/comments/1mzfcby/i_built_a_separate_module_for_qwen3_coder/
Ok-Pattern9779
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzfcby
false
null
t3_1mzfcby
/r/LocalLLaMA/comments/1mzfcby/i_built_a_separate_module_for_qwen3_coder/
false
false
self
0
null
Where do I go to see benchmark comparisons of local models?
6
I apologize if this is off-topic; I can't find any good places that show a significant number of locally hostable models and how they compare to the massive closed ones. What should I do to get a general sense of how good models like Gemma 3 27B vs 12B, Qwen, etc. are in comparison to each other?
2025-08-25T02:16:59
https://www.reddit.com/r/LocalLLaMA/comments/1mzejbm/where_do_i_go_to_see_benchmark_comparisons_of/
radioactive---banana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzejbm
false
null
t3_1mzejbm
/r/LocalLLaMA/comments/1mzejbm/where_do_i_go_to_see_benchmark_comparisons_of/
false
false
self
6
null
Opinion: The real cost-benefit analysis of Local AI for business, where's the sweet spot?
0
I've been trying to quantify when local AI makes financial sense for businesses vs cloud solutions. I created these graphs (with help from Claude/Gemini, AI slop) to visualize the cost-capability tradeoff. The key questions I'm wrestling with: Where's the break-even point? At what usage level does local hardware pay for itself vs API costs? The RTX 5090 graph shows "limited refactoring", but what about quantization techniques? Can we get 70B performance from 34B models without losing the smarts required to do the job? These graphs paint a somewhat pessimistic picture for consumer hardware, but I think they miss several important factors. For businesses running millions of tokens daily, or those with strict data governance requirements, even a $50k setup could pay for itself quickly. How have your deployments gone? Did you crush the cloud or is it an ongoing pursuit? What metrics should we really be tracking?
2025-08-25T02:12:26
https://www.reddit.com/gallery/1mzefsy
Fussy-Fur3608
reddit.com
1970-01-01T00:00:00
0
{}
1mzefsy
false
null
t3_1mzefsy
/r/LocalLLaMA/comments/1mzefsy/opinion_the_real_costbenefit_analysis_of_local_ai/
false
false
https://b.thumbs.redditm…O8CTxgzCbibk.jpg
0
null
Oumnix: A New AI Architecture (non-Transformer architecture)
1
I’m not here to sell, beg, or hype. This is **not a Transformer architecture** it’s a different path. Minimal version, trained from zero (no fine-tuning) on a laptop GPU (RTX 4060). Result: 50M parameters trained from scratch, loss → **8.5 → 0.9 in 13 minutes**. Video: [YouTube](https://www.youtube.com/watch?v=pOzOnSE1IAY) Repo: [oumnix-minimal](https://github.com/qrv0/oumnix-minimal) No papers. No replicas. Just an alternative architecture that exists outside the Transformer highway. I expect downvotes, noise, and accusations that’s fine. But facts don’t vanish: **other architectures are possible.**
2025-08-25T01:46:11
https://www.reddit.com/r/LocalLLaMA/comments/1mzdvyn/oumnix_a_new_ai_architecture_nontransformer/
oumnix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzdvyn
false
null
t3_1mzdvyn
/r/LocalLLaMA/comments/1mzdvyn/oumnix_a_new_ai_architecture_nontransformer/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XnPPEji7KWr2qNU0AFfXio9_Ed7GmuhHuvCDNTYVTP4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XnPPEji7KWr2qNU0AFfXio9_Ed7GmuhHuvCDNTYVTP4.jpeg?width=108&crop=smart&auto=webp&s=d67261860ec9561179af17b418eb1d53b8fbce46', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XnPPEji7KWr2qNU0AFfXio9_Ed7GmuhHuvCDNTYVTP4.jpeg?width=216&crop=smart&auto=webp&s=5959beec894e9c7e93542cdfa9353b80bb1af0c6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XnPPEji7KWr2qNU0AFfXio9_Ed7GmuhHuvCDNTYVTP4.jpeg?width=320&crop=smart&auto=webp&s=6780ec4b0388b4a373809e971d63191c41590e4a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XnPPEji7KWr2qNU0AFfXio9_Ed7GmuhHuvCDNTYVTP4.jpeg?auto=webp&s=01c9b34898482b2f0176d744c0144a4879f9e26c', 'width': 480}, 'variants': {}}]}
Docker Desktop bundles in Model Runner (Beta feature)
0
https://docs.docker.com/ai/model-runner/ TIL docker ships with llama.cpp. This seems to do everything ollama does and more; (and of course, everything llama.cpp does). I have never heard it being used on this sub or elsewhere.. though it seems like an obvious winner given most of us are already using Docker with Docker Desktop. Am I missing something?
2025-08-25T01:24:13
https://www.reddit.com/r/LocalLLaMA/comments/1mzdfgf/docker_desktop_bundles_in_model_runner_beta/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzdfgf
false
null
t3_1mzdfgf
/r/LocalLLaMA/comments/1mzdfgf/docker_desktop_bundles_in_model_runner_beta/
false
false
self
0
null
Choosing between a single 3080TI; or dual 3060 12GBs
0
Title is self explanatory - but I'm adding a GPU to a home server for both locally hosted LLMs and Stable Diffusion; and originally I was just going to get a single 3080TI with 12GB of VRAM... but then I realized I can get two 3060s with 12GB of VRAM apiece for the same cost. Does it make sense to pursue additional VRAM in favor of the horsepower that the 3080TI would give me? Or would I be better off having the faster 3080TI without as much VRAM? I don't have a direct use-case yet; I've got a CS degree and undergrad background in AI, so really I'm more "playing around" with this than anything else. So rather than having a specific usecase, I think the better question is: "If I have $500 to blow on a GPU, which way is the most flexible/extensible/interesting - and is there a third option I haven't considered?" I also already have plenty of experience with self-hosted image generation tools like Automatic1111 - so I'm fine on that front; it's the LLM side that I'm more hesitant on.
2025-08-25T01:11:12
https://www.reddit.com/r/LocalLLaMA/comments/1mzd5m5/choosing_between_a_single_3080ti_or_dual_3060/
DickFineman73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzd5m5
false
null
t3_1mzd5m5
/r/LocalLLaMA/comments/1mzd5m5/choosing_between_a_single_3080ti_or_dual_3060/
false
false
self
0
null
PSA: Filling those empty DIMM slots will slow down inference if you don’t have enough memory channels
34
I have a 7900X on an X670E Pro RS mobo with 2x32GB DDR5@5200. I really wanted to run GPT-OSS 120B with CPU MoE offload, but it wasn't able to fully load. I obtained another pair of the same RAM (different batch, but same model/specs) and was able to run 120B, but only at 15 tok/s. I noticed that other models were slower as well. Then I realized that my RAM was running at 3600 MT/s as opposed to the 4800 MT/s it ran at before. After digging into this issue, it appears to be the grim reality with AMD AM5 boards that there isn't much support for full-speed DDR5 with 4 DIMMs populated. One would apparently need an Intel build to get there. In my case I think I'll try to exchange for 2x48GB and sell my old RAM. Does anyone know any way to use 4 slots at decent speeds and stability without buying a TR/EPYC?
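The throughput hit follows directly from bandwidth: on CPU offload, decode speed is bounded by memory bandwidth divided by the bytes read per token. A rough sketch; the bytes-per-token figure for GPT-OSS 120B's ~5.1B active parameters is my assumption, and real throughput lands well below this ceiling:

```python
# Rough ceiling on CPU-offload decode speed from DRAM bandwidth.
# Each decoded token must read every active parameter once.

def bandwidth_gbps(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak DDR5 bandwidth in GB/s (8 bytes per channel per transfer)."""
    return mt_per_s * channels * bus_bytes / 1000

def max_tokens_per_s(bandwidth: float, active_params_gb: float) -> float:
    """Upper bound on decode rate if memory-bound."""
    return bandwidth / active_params_gb

# Assume ~3 GB read per token for GPT-OSS 120B's active experts at MXFP4.
for speed in (4800, 3600):
    bw = bandwidth_gbps(speed)
    print(f"{speed} MT/s -> {bw:.1f} GB/s, <= {max_tokens_per_s(bw, 3.0):.0f} tok/s ceiling")
```

Going from 4800 to 3600 MT/s cuts the ceiling by the same 25% the memory clock dropped, which matches the direction (if not the magnitude) of the observed slowdown.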
2025-08-25T01:04:16
https://www.reddit.com/r/LocalLLaMA/comments/1mzd0ik/psa_filling_those_empty_dimm_slots_will_slow_down/
DealingWithIt202s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzd0ik
false
null
t3_1mzd0ik
/r/LocalLLaMA/comments/1mzd0ik/psa_filling_those_empty_dimm_slots_will_slow_down/
false
false
self
34
null
Crow New CAWSF-NDSQ runtime for LLMs (on-disk, on-demand weights, GGUF export)
0
**What is Crow?**

Crow is an **alpha-stage runtime and toolset in Go** for LLM weights. It introduces a new format, **CAWSF-NDSQ**, that factorizes and stores model weights in shards on disk. Only the necessary shards are loaded per operation, unlike standard quantization, which requires the full model in VRAM.

**Why does this matter?**

* Run models far larger than your GPU VRAM.
* Inspect weights with SVD truncation, diagonal/residual separation, and PQ outlier handling.
* Export to GGUF and run with `llama.cpp`.
* Integrity checks via XXH3-64 per section.

**FAQ (pre-emptive):**

* *"Spam?"* → No. One repo, open-source, technical release.
* *"Hobby project?"* → Independent research, fully documented and public.
* *"Does it work?"* → Code is there. Compile, run, test.
* *"Not original?"* → Innovation is in the factorization + runtime routing. The repo shows exactly how.

**Status:** Alpha. APIs may break, performance will improve.

**Repo:** 👉 [github.com/qrv0/crow](https://github.com/qrv0/crow)

I'm aware some will try to dismiss this as "spam," "hobby work," or "not science." That's fine; the repo is public and the code speaks for itself.

https://preview.redd.it/mr80s0if22lf1.png?width=1887&format=png&auto=webp&s=f54abd49eb1a42a84e474d94091c6910ff92c532
2025-08-24T23:55:03
https://www.reddit.com/r/LocalLLaMA/comments/1mzbj6q/crow_new_cawsfndsq_runtime_for_llms_ondisk/
oumnix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzbj6q
false
null
t3_1mzbj6q
/r/LocalLLaMA/comments/1mzbj6q/crow_new_cawsfndsq_runtime_for_llms_ondisk/
false
false
https://a.thumbs.redditm…Xdkw1uX1gYA8.jpg
0
null
Short Video analysis with local LLM?
1
I'm tired of my security camera's dumb motion alerts and am trying to see if a local LLM can do video analysis. I have managed to throw away my Hikvision video recorder and replaced it with Frigate, but the built-in GenAI isn't impressive, as it just sends pictures to Ollama for analysis. I have since made my own sidecar solution that sends motion video to the Gemini API for analysis, but it's not cheap. Now I wonder if there is any local LLM setup that can accomplish this? I have 6x 3090 and a 128GB RAM machine running Qwen3 slowly, but I hope it can run something. The clips are usually less than 60 seconds, btw.
2025-08-24T23:43:52
https://www.reddit.com/r/LocalLLaMA/comments/1mzba2v/short_video_analysis_with_local_llm/
ytwytw9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzba2v
false
null
t3_1mzba2v
/r/LocalLLaMA/comments/1mzba2v/short_video_analysis_with_local_llm/
false
false
self
1
null
gpt-5 high on aider polyglot benchmark scoring 88% on independent evaluation
0
A pull request claiming 88%: [https://github.com/Aider-AI/aider/pull/4475/commits/bfef1906bb036f7db0d618e789e299dffdc493ca](https://github.com/Aider-AI/aider/pull/4475/commits/bfef1906bb036f7db0d618e789e299dffdc493ca) The curious thing was the cost: $29.0829 seems impressive. Livebench still keeps o4-mini at the top. Curious to hear the personal experiences of people who have used both considerably.
2025-08-24T23:33:46
https://www.reddit.com/r/LocalLLaMA/comments/1mzb1zu/gpt5_high_on_aider_polyglot_benchmark_scoring_88/
Rude-Needleworker-56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzb1zu
false
null
t3_1mzb1zu
/r/LocalLLaMA/comments/1mzb1zu/gpt5_high_on_aider_polyglot_benchmark_scoring_88/
false
false
self
0
null
InfiniteTalk: Audio-driven Video Generation for Sparse-Frame Video Dubbing
35
[https://github.com/MeiGen-AI/InfiniteTalk](https://github.com/MeiGen-AI/InfiniteTalk)
2025-08-24T23:05:11
https://www.reddit.com/r/LocalLLaMA/comments/1mzaeee/infinitetalk_audiodriven_video_generation_for/
Dull-Ad-1708
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mzaeee
false
null
t3_1mzaeee
/r/LocalLLaMA/comments/1mzaeee/infinitetalk_audiodriven_video_generation_for/
false
false
self
35
null
Made Chatterbox TTS a bit faster again on CUDA (155it/s on 3090)
65
Code: [https://github.com/rsxdalv/chatterbox/tree/faster](https://github.com/rsxdalv/chatterbox/tree/faster) Previous version discussion: [https://www.reddit.com/r/LocalLLaMA/comments/1lfnn7b/optimized\_chatterbox\_tts\_up\_to\_24x\_nonbatched/](https://www.reddit.com/r/LocalLLaMA/comments/1lfnn7b/optimized_chatterbox_tts_up_to_24x_nonbatched/) (hopefully most of the old questions will become obsolete) Disclaimer - for batched generation in dedicated deployments Chatterbox-VLLM should be the better choice. I have mostly exhausted the options for speeding up almost vanilla HF Transformers' Llama with torch. Inductor, Triton, Max Autotune, different cache sizes etc, and they are available in the codebase. In the end, manually capturing cuda-graphs was the fastest. The model should be able to run around 230 it/s with fused kernels and better code. (I was unable to remedy the kv\_cache code to enable cuda graph capture with torch.compile's max autotune.) Besides the speed, the main benefit is that setting a small cache size is no longer necessary, neither are max\_new\_tokens important. I plan to make it compile by default to facilitate drop-in use in other projects. Since the main effort is exhausted, I will keep on updating incrementally - for example, speeding up the s3gen (which is now a bottleneck). 
# Results for 1500 cache size with BFloat16

```
Estimated token count: 304
Input embeds shape before padding: torch.Size([2, 188, 1024])
Sampling:  32%|███▏      | 320/1000 [00:02<00:04, 159.15it/s]
Stopping at 321 because EOS token was generated
Generated 321 tokens in 2.05 seconds, 156.29 it/s

Estimated token count: 304
Input embeds shape before padding: torch.Size([2, 188, 1024])
Sampling:  32%|███▏      | 320/1000 [00:01<00:03, 170.52it/s]
Stopping at 321 because EOS token was generated
Generated 321 tokens in 1.88 seconds, 170.87 it/s

Estimated token count: 606
Input embeds shape before padding: torch.Size([2, 339, 1024])
Sampling:  62%|██████▏   | 620/1000 [00:04<00:02, 154.58it/s]
Stopping at 621 because EOS token was generated
Generated 621 tokens in 4.01 seconds, 154.69 it/s

Estimated token count: 20
Input embeds shape before padding: torch.Size([2, 46, 1024])
Sampling:   4%|▍         | 40/1000 [00:00<00:05, 182.08it/s]
Stopping at 41 because EOS token was generated
Generated 41 tokens in 0.22 seconds, 184.94 it/s
```

# Disabling classifier-free guidance (cfg_weight=0)

```
Estimated token count: 304
Input embeds shape before padding: torch.Size([1, 187, 1024])
Sampling: 100%|██████████| 300/300 [00:01<00:00, 169.38it/s]
Stopping at 300 because max_new_tokens reached
Generated 300 tokens in 1.89 seconds, 158.95 it/s

Estimated token count: 304
Input embeds shape before padding: torch.Size([1, 187, 1024])
Sampling: 100%|██████████| 300/300 [00:01<00:00, 194.04it/s]
Stopping at 300 because max_new_tokens reached
Generated 300 tokens in 1.55 seconds, 193.66 it/s

Estimated token count: 606
Input embeds shape before padding: torch.Size([1, 338, 1024])
Sampling: 100%|██████████| 300/300 [00:01<00:00, 182.28it/s]
Stopping at 300 because max_new_tokens reached
Generated 300 tokens in 1.65 seconds, 182.22 it/s

Estimated token count: 20
Input embeds shape before padding: torch.Size([1, 45, 1024])
Sampling:  20%|██        | 60/300 [00:00<00:01, 208.54it/s]
Stopping at 61 because EOS token was generated
Generated 61 tokens in 0.29 seconds, 210.54 it/s
```

Current code example:

```python
def t3_to(model: ChatterboxTTS, dtype):
    model.t3.to(dtype=dtype)
    model.conds.t3.to(dtype=dtype)
    torch.cuda.empty_cache()
    return model

# Most new GPUs would work the fastest with this, but not all.
t3_to(model, torch.bfloat16)

audio = model.generate("fast generation using cudagraphs-manual, warmup")
audio = model.generate("fast generation using cudagraphs-manual, full speed")

# Extra options:
audio = model.generate(
    text,
    t3_params={
        # "initial_forward_pass_backend": "eager",        # slower - default
        # "initial_forward_pass_backend": "cudagraphs",   # speeds up set up
        # "generate_token_backend": "cudagraphs-manual",  # fastest - default
        # "generate_token_backend": "cudagraphs",
        # "generate_token_backend": "eager",
        # "generate_token_backend": "inductor",
        # "generate_token_backend": "inductor-strided",
        # "generate_token_backend": "cudagraphs-strided",
        # "stride_length": 4,    # "strided" options compile <1-2-3-4> iteration steps together,
        #                        # which improves performance by reducing memory copying issues in torch.compile
        # "skip_when_1": True,   # skips Top P when it's set to 1.0
        # "benchmark_t3": True,  # synchronizes CUDA to get the real it/s
    }
)
```
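The it/s figures at the end of each run are just generated tokens divided by wall time (the in-loop Sampling rate reads slightly higher because it excludes setup and teardown). A quick sanity check of the logged numbers:

```python
def tokens_per_second(tokens: int, seconds: float) -> float:
    # End-of-run throughput as printed in the logs: generated tokens / wall time.
    return tokens / seconds

print(round(tokens_per_second(321, 2.05), 2))  # ≈ 156.59, in line with the logged 156.29 it/s
print(round(tokens_per_second(61, 0.29), 2))   # the short-prompt runs clear 200 it/s
```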
2025-08-24T22:49:20
https://www.reddit.com/r/LocalLLaMA/comments/1mza0wy/made_chatterbox_tts_a_bit_faster_again_on_cuda/
RSXLV
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mza0wy
false
null
t3_1mza0wy
/r/LocalLLaMA/comments/1mza0wy/made_chatterbox_tts_a_bit_faster_again_on_cuda/
false
false
self
65
null
Hobbyist project : enabling smaller language models to interact with large code bases
6
Hey guys, I was recently blessed with an Apple MacBook M4 from my internship, and since I was working with LLMs I installed some models locally, including but not limited to **Gemma3:270M** and **Qwen3:0.6B**. I noticed that while these models are super fast, they aren't the most knowledgeable (given that **neither of them could tell what a penis is**), so using RAG I explored how good they can be when given the necessary context. It turns out they can be very good: I tested this with the FastAPI repo and Qwen3:0.6B, and the model went from spitting gibberish to giving pretty decent answers. I refactored everything and put it in a repo; you can [check it out here](https://github.com/abderrahimrhitrif/rac). Open to feedback, but the project is pretty barebones and I wasn't really planning to publish it to begin with.
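The retrieval idea behind a project like this can be sketched in a few lines. This is a toy illustration only (bag-of-words cosine similarity instead of a real embedding model; the chunk texts and function names are made up, not taken from the repo):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. A real setup would use a sentence-transformer.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank repo chunks by similarity to the question, keep the top k for the prompt.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "FastAPI path operations are declared with decorators like @app.get.",
    "Dependency injection in FastAPI uses the Depends function.",
    "The moon orbits the earth.",
]
context = retrieve("declaring path operations in FastAPI", chunks, k=1)
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
```

The small model then only has to read the retrieved chunk instead of "knowing" FastAPI, which is why even a 0.6B model stops producing gibberish.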
2025-08-24T22:36:36
https://www.reddit.com/r/LocalLLaMA/comments/1mz9q24/hobbyist_project_enabling_smaller_language_models/
Unable_Step156
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz9q24
false
null
t3_1mz9q24
/r/LocalLLaMA/comments/1mz9q24/hobbyist_project_enabling_smaller_language_models/
false
false
self
6
{'enabled': False, 'images': [{'id': 'J8x_awrLBPyLKoty6QeIVPS31As8UgQwOMv-kqD1h5U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J8x_awrLBPyLKoty6QeIVPS31As8UgQwOMv-kqD1h5U.png?width=108&crop=smart&auto=webp&s=b6a65b5c3f649503ea8ba484e080b95251e80e12', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J8x_awrLBPyLKoty6QeIVPS31As8UgQwOMv-kqD1h5U.png?width=216&crop=smart&auto=webp&s=d655fb8f0d70b6c3199c577e5994f51f87f59462', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J8x_awrLBPyLKoty6QeIVPS31As8UgQwOMv-kqD1h5U.png?width=320&crop=smart&auto=webp&s=bf69abf43c4e1ab1351b2f4e7d75d5c16705116a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J8x_awrLBPyLKoty6QeIVPS31As8UgQwOMv-kqD1h5U.png?width=640&crop=smart&auto=webp&s=a703dfadc5d5dd040fa0b48da6fc7ec12acbe1e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J8x_awrLBPyLKoty6QeIVPS31As8UgQwOMv-kqD1h5U.png?width=960&crop=smart&auto=webp&s=4ed10d29ac4bfe9fba6947c4188461b8c225ee5a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J8x_awrLBPyLKoty6QeIVPS31As8UgQwOMv-kqD1h5U.png?width=1080&crop=smart&auto=webp&s=24fc1a0303d3082c82a3021c104522b613b3083e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J8x_awrLBPyLKoty6QeIVPS31As8UgQwOMv-kqD1h5U.png?auto=webp&s=2e774a9d093f24befa5885e5d0a7d6b806ff2b2c', 'width': 1200}, 'variants': {}}]}
LLM-Ripper modular LLM disassembly (alpha): extract, analyze & transplant embeddings/heads/FFNs via bridge adapters
0
**LLM-Ripper** (alpha) is a framework to treat Transformer LMs as modular systems instead of black-box monoliths. Repo: [https://github.com/qrv0/LLM-Ripper](https://github.com/qrv0/LLM-Ripper) # What it does * **Extract**: embeddings, attention heads (MHA/GQA/MQA), FFNs, LM head (safetensors-aware). * **Capture**: activations with `torch.fx`, scalable HDF5 storage. * **Analyze**: interpretability (syntactic/factual head scoring), semantic coverage, FFN clustering. * **Transplant**: bridge adapters for dimensional compatibility (module injection, embedding init, AdapterFusion). * **Validate**: intrinsic + extrinsic checks, CLI & Python API. # Why it matters Yes there are *papers* on adapters, interpretability, or modularity. But I haven’t found a **project** that unifies the whole pipeline: **component-level extraction → analysis → bridge transplantation → validation**. That’s the novelty here: a **working OSS pipeline**, reproducible with CLI & examples. (If you know a prior open-source project doing this end-to-end, drop the link I’ll cite it.) # Paper This release is backed by a draft paper formalizing the framework: 📄 [A Framework for Modular Deconstruction, Analysis, and Recomposition of Knowledge in Transformer-based Language Models](https://github.com/qrv0/LLM-Ripper/blob/main/paper_modular_transformer_knowledge_en.md) # Status * **Alpha**: APIs/docs evolving, but pipeline runs. * Examples in `/examples/` folder for quick tests. # FAQ (short, pre-emptive) * *“Spam?”* → Single OSS repo, technical release. * *“Hobby project?”* → Independent research; repo includes CLI + tests. * *“Not original?”* → The novelty is the **integrated pipeline**; not just isolated concepts from papers. * *“Does it work?”* → Run `examples/run_extraction_only.py` and see. https://preview.redd.it/98h5vn4jh1lf1.png?width=1903&format=png&auto=webp&s=890d006d1177a8e7acd6547c3f07a0abd8f43a2d
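For intuition about what "extracting an attention head" means mechanically: each head owns a contiguous slice of the projection matrices, so extraction is a column slice. A toy numpy sketch of that underlying idea (this is not LLM-Ripper's actual API; names and shapes are illustrative):

```python
import numpy as np

def extract_head(w_proj: np.ndarray, head: int, head_dim: int) -> np.ndarray:
    # Slice one head's columns out of a fused [d_model, n_heads * head_dim] projection.
    return w_proj[:, head * head_dim : (head + 1) * head_dim]

d_model, n_heads, head_dim = 64, 4, 16
w_q = np.arange(d_model * n_heads * head_dim, dtype=np.float32).reshape(d_model, n_heads * head_dim)
head_2 = extract_head(w_q, head=2, head_dim=head_dim)
print(head_2.shape)  # (64, 16): one head's query projection, ready to analyze or transplant
```

Bridge adapters then handle the case where the donor and recipient models disagree on `d_model`.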
2025-08-24T21:57:58
https://www.reddit.com/r/LocalLLaMA/comments/1mz8s2c/llmripper_modular_llm_disassembly_alpha_extract/
oumnix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz8s2c
false
null
t3_1mz8s2c
/r/LocalLLaMA/comments/1mz8s2c/llmripper_modular_llm_disassembly_alpha_extract/
false
false
https://b.thumbs.redditm…QGEHXQiWsO-g.jpg
0
null
LLM-Ripper modular LLM disassembly (alpha): extract, analyze & transplant embeddings/heads/FFNs via bridge adapters
1
**LLM-Ripper** (alpha) is a framework to treat Transformer LMs as modular systems instead of black-box monoliths. Repo: [https://github.com/qrv0/LLM-Ripper](https://github.com/qrv0/LLM-Ripper) # What it does * **Extract**: embeddings, attention heads (MHA/GQA/MQA), FFNs, LM head (safetensors-aware). * **Capture**: activations with `torch.fx`, scalable HDF5 storage. * **Analyze**: interpretability (syntactic/factual head scoring), semantic coverage, FFN clustering. * **Transplant**: bridge adapters for dimensional compatibility (module injection, embedding init, AdapterFusion). * **Validate**: intrinsic + extrinsic checks, CLI & Python API. # Why it matters Yes, there are *papers* on adapters, interpretability, or modularity. But I haven’t found a **project** that unifies the whole pipeline: **component-level extraction → analysis → bridge transplantation → validation**. That’s the novelty here: a **working OSS pipeline**, reproducible with CLI & examples. (If you know a prior open-source project doing this end-to-end, drop the link I’ll cite it.) # Paper This release is backed by a draft paper formalizing the framework: 📄 [A Framework for Modular Deconstruction, Analysis, and Recomposition of Knowledge in Transformer-based Language Models](https://github.com/qrv0/LLM-Ripper/blob/main/paper_modular_transformer_knowledge_en.md) # Status * **Alpha**: APIs/docs evolving, but pipeline runs. * Examples in `/examples/` folder for quick tests. # FAQ (short, pre-emptive) * *“Spam?”* → Single OSS repo, technical release. * *“Hobby project?”* → Independent research; repo includes CLI + tests. * *“Not original?”* → The novelty is the **integrated pipeline**; not just isolated concepts from papers. * *“Does it work?”* → Run `examples/run_extraction_only.py` and see. https://preview.redd.it/8nyh2wn6d1lf1.png?width=1894&format=png&auto=webp&s=457f84be66eedd28ee076aacb92b048af01e7424
2025-08-24T21:34:52
https://www.reddit.com/r/LocalLLaMA/comments/1mz87z6/llmripper_modular_llm_disassembly_alpha_extract/
_qrv0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz87z6
false
null
t3_1mz87z6
/r/LocalLLaMA/comments/1mz87z6/llmripper_modular_llm_disassembly_alpha_extract/
false
false
https://b.thumbs.redditm…5q2ugSWRI-bk.jpg
1
null
Grok on LM Studio
0
Grok is the cheekiest LLM model I have ever come across! Just thought I'd share that.
2025-08-24T21:25:38
https://www.reddit.com/r/LocalLLaMA/comments/1mz7zoh/grok_on_lm_studio/
IntrepidScale583
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz7zoh
false
null
t3_1mz7zoh
/r/LocalLLaMA/comments/1mz7zoh/grok_on_lm_studio/
false
false
self
0
null
Gemma-3 jail broken!
0
No model weights touched. No fine-tuning. Just API calls plus a custom system prompt. The result? Gemma-3 completely dropped its safety filters and explained how to make drugs and weapons, commit fraud, and even commit a murder. More info on the author's Twitter: [https://x.com/Prashant\_9307/status/1959492959256142119?t=sA119M7wBi1SzZrq8zzAXA](https://x.com/Prashant_9307/status/1959492959256142119?t=sA119M7wBi1SzZrq8zzAXA)
2025-08-24T21:19:24
https://www.reddit.com/r/LocalLLaMA/comments/1mz7u59/gemma3_jail_broken/
mangavertebral18
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz7u59
false
null
t3_1mz7u59
/r/LocalLLaMA/comments/1mz7u59/gemma3_jail_broken/
false
false
self
0
null
Almost done with the dashboard for local llama.cpp agents
159
This won't be for sale and will be released as open source with a non-commercial license. No code will be released until after the hackathon I've entered is over next month.
2025-08-24T20:38:43
https://www.reddit.com/gallery/1mz6som
PayBetter
reddit.com
1970-01-01T00:00:00
0
{}
1mz6som
false
null
t3_1mz6som
/r/LocalLLaMA/comments/1mz6som/almost_done_with_the_dashboard_for_local_llamacpp/
false
false
https://b.thumbs.redditm…MntHaCP24k1w.jpg
159
null
Crow New CAWSF-NDSQ runtime for LLMs (on-disk, on-demand weights, GGUF export)
6
**What is Crow?** Crow is an **alpha-stage runtime and toolset in Go** for LLM weights. It introduces a new format, **CAWSF-NDSQ**, which factorizes and stores model weights in shards on disk. Only the necessary shards are loaded per operation, unlike standard quantization, which requires the full model in VRAM.

**Why does this matter?**

* Run models far larger than your GPU's VRAM.
* Inspect weights with SVD truncation, diagonal/residual separation, and PQ outlier handling.
* Export to GGUF and run with `llama.cpp`.
* Integrity checks via XXH3-64 per section.

**FAQ (pre-emptive):**

* *“Spam?”* → No. One repo, open-source, technical release.
* *“Hobby project?”* → Independent research, fully documented and public.
* *“Does it work?”* → The code is there. Compile, run, test.
* *“Not original?”* → The innovation is in the factorization + runtime routing. The repo shows exactly how.

**Status:** Alpha. APIs may break, performance will improve.

I’m aware some will try to dismiss this as “spam,” “hobby work,” or “not science.” That’s fine; the repo is public and the code speaks for itself.

[github.com/qrv0/crow](http://github.com/qrv0/crow)

https://preview.redd.it/jib2v43631lf1.png?width=1907&format=png&auto=webp&s=bb94eeb1a39994aaaa75ad6b8cd2285f28953321
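The SVD-truncation part is standard low-rank factorization; a minimal numpy sketch of the idea (not Crow's actual code — NDSQ layers diagonal/residual separation and PQ outlier handling on top of this):

```python
import numpy as np

def truncated_svd(w: np.ndarray, rank: int):
    # Low-rank factorization W ≈ A @ B; the factors are what you would shard to disk.
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]  # [m, r]
    b = vt[:rank, :]            # [r, n]
    return a, b

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
a, b = truncated_svd(w, rank=64)  # full rank -> exact reconstruction
assert np.allclose(a @ b, w)
a, b = truncated_svd(w, rank=8)   # truncation -> much smaller, lossy shards
print(a.size + b.size, "vs", w.size)  # 1024 vs 4096 stored values
```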
2025-08-24T20:38:33
https://www.reddit.com/r/LocalLLaMA/comments/1mz6sjx/crow_new_cawsfndsq_runtime_for_llms_ondisk/
_qrv0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz6sjx
false
null
t3_1mz6sjx
/r/LocalLLaMA/comments/1mz6sjx/crow_new_cawsfndsq_runtime_for_llms_ondisk/
false
false
https://b.thumbs.redditm…gn-vVTsUvgeo.jpg
6
null
DataCrunch AI Cloud
0
Has anyone here tried DataCrunch's Cloud Platform? How is it compared to the likes of Nebius or TogetherAi or RunPod
2025-08-24T20:17:07
https://www.reddit.com/r/LocalLLaMA/comments/1mz68mu/datacrunch_ai_cloud/
CricketSubject1548
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz68mu
false
null
t3_1mz68mu
/r/LocalLLaMA/comments/1mz68mu/datacrunch_ai_cloud/
false
false
self
0
null
Offline Mistral‑7B AGI — “Pisces AGI"
0
Piscesai.app coming soon. (W.I.P)
2025-08-24T20:15:58
https://www.reddit.com/r/LocalLLaMA/comments/1mz67jv/offline_mistral7b_agi_pisces_agi/
PiscesAi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz67jv
false
null
t3_1mz67jv
/r/LocalLLaMA/comments/1mz67jv/offline_mistral7b_agi_pisces_agi/
false
false
self
0
null
PCIe Bifurcation x4x4x4x4 Question
7
TLDR: has anybody run into problems running PCIe x16 as x4x4x4x4 on consumer hardware?

Current setup:

* 9800X3D (28 total PCIe lanes, 24 usable with 4 going to the chipset)
* 64GB DDR5-6000
* MSI X670E MAG Tomahawk WiFi board
* 5090 in the PCIe 5.0 x16 slot (CPU)
* 4090 in a PCIe 4.0 x4 slot (CPU)
* 3090 Ti in a PCIe 4.0 x2 slot (chipset)
* Corsair HX1500i PSU

I have two 3060 12GB cards lying around and would like to add them to the system, if anything just to use them instead of letting them sit in a box. I would like to pick up two 3090s off FB Marketplace, but I'm not really trying to spend $500-$600 each for what folks are asking in my area, and since I already have these 3060s, why not use them.

I don't believe I'll have power issues, since right now the AIDA64 sensor panel shows the HX1500i hitting a max of 950W during inference (the PSU connects via USB for power monitoring). I can't imagine the 3060s using more than 150W each, since they only have one 8-pin connector each.

BIOS shows the x16 slot can do either:

* x8x8
* x8x4x4
* x4x4x4x4

Also, all I can find are $20-$50 bifurcation cards that are PCIe 3.0; would dropping to gen3 be an issue during inference? I'd like to have the 5090/4090/3090 Ti/3060 on the bifurcation card and the second 3060 in the secondary PCIe x16 slot, and hopefully add a 3090 down the line if prices drop after the new Supers release later this year.

If this is not worth it, then it's no biggie. I just like tinkering.
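On the gen3 question, rough slot bandwidth is easy to estimate; the numbers below count only 128b/130b encoding overhead, so real-world throughput is a bit lower. For single-stream inference where each model's layers stay resident in VRAM, per-token traffic over the slot is small, so gen3 x4 is usually tolerable; model load times are where you feel it.

```python
def pcie_gbps(gen: int, lanes: int) -> float:
    # Approximate usable bandwidth in GB/s: GT/s per lane * 128b/130b encoding / 8 bits.
    gts = {3: 8.0, 4: 16.0, 5: 32.0}[gen]  # transfer rate per lane
    return gts * (128 / 130) / 8 * lanes

print(round(pcie_gbps(3, 4), 2))  # ~3.94 GB/s for a gen3 x4 link (the bifurcation card)
print(round(pcie_gbps(4, 4), 2))  # ~7.88 GB/s for gen4 x4
```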
2025-08-24T20:12:17
https://www.reddit.com/r/LocalLLaMA/comments/1mz644g/pcie_bifurcation_x4x4x4x4_question/
ducksaysquackquack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz644g
false
null
t3_1mz644g
/r/LocalLLaMA/comments/1mz644g/pcie_bifurcation_x4x4x4x4_question/
false
false
self
7
null
Best sub 10B medical model?
1
[removed]
2025-08-24T20:06:41
https://www.reddit.com/r/LocalLLaMA/comments/1mz5ys0/best_sub_10b_medical_model/
Own-Potential-2308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz5ys0
false
null
t3_1mz5ys0
/r/LocalLLaMA/comments/1mz5ys0/best_sub_10b_medical_model/
false
false
self
1
null
Snapdragon X Elite 32 GB vs 64 GB
1
Does anyone have LLM benchmarks or anecdotal experience comparing 32 GB vs 64 GB RAM on the Snapdragon X Elite? Is the additional memory likely to have any value for LLMs?
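One way to frame the question: on a unified-memory machine, the extra RAM mostly decides the largest quantized model you can hold. Back-of-envelope sizing (the ~4.5 bits/weight and 10% runtime overhead figures are assumptions, not exact GGUF numbers):

```python
def model_gb(params_b: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    # Rough footprint of a quantized model: params * bits / 8, plus ~10% for KV cache/runtime.
    return params_b * bits_per_weight / 8 * overhead

print(round(model_gb(14, 4.5), 1))  # ≈ 8.7 GB: a ~14B model at ~Q4 fits easily in 32 GB
print(round(model_gb(32, 4.5), 1))  # ≈ 19.8 GB: a 32B quant really wants the 64 GB machine
```

After the OS and apps take their share of 32 GB, the 64 GB option is what opens up the 30B-class models.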
2025-08-24T19:59:20
https://www.reddit.com/r/LocalLLaMA/comments/1mz5ru2/snapdragon_x_elite_32_gb_vs_64_gb/
iamwillbar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz5ru2
false
null
t3_1mz5ru2
/r/LocalLLaMA/comments/1mz5ru2/snapdragon_x_elite_32_gb_vs_64_gb/
false
false
self
1
null
Anyone been using Whisper for language learning since its birth?
0
If you have, what has been your progress of self-development? Me: I can’t even imagine a life without Whisper, I feel it’s like privileged people’s ChatGPT
2025-08-24T19:56:42
https://www.reddit.com/r/LocalLLaMA/comments/1mz5pga/anyone_been_using_whisper_for_language_learning/
TraditionalDepth6924
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz5pga
false
null
t3_1mz5pga
/r/LocalLLaMA/comments/1mz5pga/anyone_been_using_whisper_for_language_learning/
false
false
self
0
null
Vibe coding in progress at around 0.1T/S :)
0
I want to vibe code an app for my company. The app would be an internal app and should be quite simple to build. I tried Emergent and didn't really like the result; eventually, after my boss decided to pour more money into it, we got something kind of working, but it still needed to be "sanitised" with Gemini Pro. I tried Gemini Pro from scratch, and again it gave me something after multiple attempts, but I didn't like the approach. Qwen Code did the same, but it's a long way until Qwen can produce something like that. Maybe Qwen 3.5 or Qwen 4 in the future.

And then came GLM 4.5 Air 4-bit GGUF, running on my 64GB RAM and 24GB VRAM 3090, using Cline. The code is beautiful! So well structured, with a TODO list that is constantly updated, done the proper way with easy-to-read code. I have set the full 128k context, so as I get close to that, the speed is painfully slow. At the moment, it's 2 days in and at about 110k context according to Cline.

My questions are:

1. Can I stop Cline to tweak something in BIOS, and maybe try to quantise the K and V cache? Would it resume?
2. Would another model be able to continue the work? Should I try to use Gemini Pro and continue from there, or copy the project into another folder and continue there?

Regards, Loren
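On question 1: quantizing the K/V cache (e.g. q8_0) halves its footprint versus fp16, which matters a lot at 110k+ context. A back-of-envelope sketch of the math (the layer/head numbers below are illustrative placeholders, not GLM-4.5-Air's actual config):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int, ctx: int, bytes_per_elt: int) -> float:
    # K and V per layer: 2 * n_kv_heads * head_dim * ctx * bytes, summed over layers.
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elt / 1024**3

fp16 = kv_cache_gb(46, 8, 128, 128_000, 2)  # illustrative config at full 128k context
q8   = kv_cache_gb(46, 8, 128, 128_000, 1)
print(round(fp16, 1), "GB fp16 ->", round(q8, 1), "GB at q8_0")
```

That saved memory can hold more offloaded expert weights on the GPU, which is usually worth more than the small quality cost of a q8_0 cache.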
2025-08-24T19:25:49
https://www.reddit.com/r/LocalLLaMA/comments/1mz4wib/vibe_coding_in_progress_at_around_01ts/
L0ren_B
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz4wib
false
null
t3_1mz4wib
/r/LocalLLaMA/comments/1mz4wib/vibe_coding_in_progress_at_around_01ts/
false
false
self
0
null
What is the smallest model that rivals GPT-3.5?
30
Hi everyone! I was recently looking at an old project of mine that I did as my bachelor's thesis back in Q2 2023, where I created a multi-agent system using one of the first versions of LangChain and GPT-3.5. This made me think about all the progress that we've made in the LLM world in such a short period of time, especially in the open-source space. So, as the title suggests: what do you think is the smallest open-source model that is *generally* as good as or better than GPT-3.5? I'm not talking about a specific task, but general knowledge, intelligence, and the capability of completing a wide array of tasks. My guess would be something in the 30B parameter range, such as Qwen3-32B. Maybe with reasoning this number could go even lower, but I personally think that's a bit like cheating, because we didn't have reasoning back in Q2 2023. What are your thoughts?
2025-08-24T19:25:09
https://www.reddit.com/r/LocalLLaMA/comments/1mz4vwu/what_is_the_smallest_model_that_rivals_gpt35/
k-en
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz4vwu
false
null
t3_1mz4vwu
/r/LocalLLaMA/comments/1mz4vwu/what_is_the_smallest_model_that_rivals_gpt35/
false
false
self
30
null
Dual-GPU workstation vs. Two workstations in a LAN
1
I was wondering if it is possible to run a LLM across two workstations on a LAN, specifically in cases where the model doesn’t fit into a single GPU. My assumption is that there would be a noticeable performance hit compared to running it on a dual-GPU setup within the same machine, but I am not sure what the main bottlenecks would be (network latency, bandwidth, synchronization overhead, etc.) or how severe the slowdown might be in practice. Has anyone here experimented with this or seen benchmarks?
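For a layer-wise (pipeline) split, each generated token only ships one hidden-state activation across the link, so raw bandwidth is rarely the bottleneck; per-token network round-trip latency and synchronization are what slow things down versus two GPUs on one board. A sketch with assumed link speeds and an assumed 4096-dim fp16 hidden state:

```python
def transfer_ms(tensor_bytes: int, link_gb_per_s: float) -> float:
    # Time to move one activation across the link, ignoring latency.
    return tensor_bytes / (link_gb_per_s * 1024**3) * 1000

act_bytes = 4096 * 2  # one fp16 hidden state for an ~8B-class model (assumed dims)
for name, bw in [("1 GbE", 0.125), ("10 GbE", 1.25), ("PCIe 4.0 x4", 7.88)]:
    print(f"{name}: {transfer_ms(act_bytes, bw):.4f} ms")
```

Even on 1 GbE the copy itself is well under 0.1 ms; it is the millisecond-scale round trip per generated token (and per layer boundary) that adds up, which is why tensor parallelism over a LAN degrades far worse than a simple pipeline split.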
2025-08-24T19:24:13
https://www.reddit.com/r/LocalLLaMA/comments/1mz4v03/dualgpu_workstation_vs_two_workstations_in_a_lan/
Chance-Studio-8242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz4v03
false
null
t3_1mz4v03
/r/LocalLLaMA/comments/1mz4v03/dualgpu_workstation_vs_two_workstations_in_a_lan/
false
false
self
1
null
what does this language remind you of?
0
What language do you think this is? Or does anyone recognize this pattern?

```
project ".. Document & Media Management System"

# Database configuration
database {
  provider: "sqlite"
}

# Authentication setup
auth {
  required: true
  method: jwt
  allow_api_keys: true
}

# Storage configuration
storage {
  provider: "local"
  path: "uploads"
}

# Redis configuration for real-time features
redis {
  connection_env: "REDIS_URL"
}

# User management
model User {
  username: required text (unique: true, min_length: 3, max_length: 50)
  email: required text (unique: true)
  password: required password
  role: text (enum: ["admin", "editor", "viewer"], default: "viewer")
}

# Project organization
model Project {
  name: required text (min_length: 1, max_length: 200)
  description: text
  owner: required relationship(User)
}

# Document model with processing capabilities
model Document {
  title: required text (min_length: 1, max_length: 300)
  description: text
  file_path: required file (
    max_size: "50MB",
    allowed_types: ["pdf", "doc", "docx", "txt", "rtf"],
    auto_process: true
  )
  project: required relationship(Project)
  uploaded_by: required relationship(User)

  # Auto-populated by document processing
  extracted_text: text
  summary: text
  keywords: array(text)
  page_count: integer
  paragraph_count: integer
  file_size: integer
  thumbnail_path: text
  processing_status: text (default: "pending")
  processing_error: text
  thumbnail_error: text
}

# Image model with automatic processing
model Image {
  title: required text (min_length: 1, max_length: 300)
  description: text
  file_path: required file (
    max_size: "20MB",
    allowed_types: ["jpg", "jpeg", "png", "gif", "webp", "heic"],
    auto_process: true,
    generate_thumbnails: true
  )
  project: required relationship(Project)
  uploaded_by: required relationship(User)

  # Auto-populated by image processing
  thumbnail_path: text
  medium_path: text
  webp_path: text
  image_metadata: text
  processing_status: text (default: "pending")
}

# Processing log for monitoring
model ProcessingLog {
  file_type: required text
  file_name: required text
  operation: required text
  status: required text (enum: ["success", "failed", "processing"])
  processing_time: money
  error_message: text
}

# Document and image processing actions
actions {
  action upload_document(title: text, description: text, project_id: text, document_file: file, current_user: User) {
    let doc = db.create(Document, {
      title: title,
      description: description,
      file_path: document_file,
      project_id: project_id,
      uploaded_by_id: current_user.id,
      processing_status: "processing"
    })
    let processed = document.process(document_file, {
      extract_text: true,
      generate_thumbnails: true,
      extract_metadata: true
    })
    if processed.success {
      let analysis = document.analyze(processed.text)
      let summary_text = null
      let keyword_list = []
      let keywords_found_count = 0
      if analysis.success {
        summary_text = analysis.summary
        keyword_list = analysis.keywords
        keywords_found_count = system.len(keyword_list)
      }
      let thumb_path = null
      if processed.thumbnails and system.len(processed.thumbnails) > 0 {
        thumb_path = processed.thumbnails[0].path
      }
      let updated_doc = db.update(Document, doc.id, {
        extracted_text: processed.text,
        summary: summary_text,
        keywords: keyword_list,
        page_count: processed.metadata.page_count,
        paragraph_count: processed.metadata.paragraph_count,
        file_size: processed.file_size,
        thumbnail_path: thumb_path,
        processing_status: "completed",
        thumbnail_error: processed.metadata.thumbnail_error
      })
      db.create(ProcessingLog, {
        file_type: "document",
        file_name: title,
        operation: "upload_and_process",
        status: "success",
        processing_time: processed.processing_time
      })
      return {
        success: true,
        document: updated_doc,
        processing_info: {
          text_length: system.len(processed.text),
          keywords_found: keywords_found_count,
          pages: processed.metadata.page_count
        }
      }
    } else {
      let error_doc = db.update(Document, doc.id, {
        processing_status: "failed",
        processing_error: json.stringify(processed.errors)
      })
      db.create(ProcessingLog, {
        file_type: "document",
        file_name: title,
        operation: "upload_and_process",
        status: "failed",
        error_message: "Document processing failed"
      })
      return { success: false, document: error_doc, errors: processed.errors }
    }
  }

  action upload_image(title: text, description: text, project_id: text, image_file: file, current_user: User) {
    let img = db.create(Image, {
      title: title,
      description: description,
      file_path: image_file,
      project_id: project_id,
      uploaded_by_id: current_user.id,
      processing_status: "processing"
    })
    let processed = image.process(image_file, ["thumbnail", "medium", "webp_small"])
    if processed.success {
      let updated_img = db.update(Image, img.id, {
        thumbnail_path: processed.variants.thumbnail.path,
        medium_path: processed.variants.medium.path,
        webp_path: processed.variants.webp_small.path,
        image_metadata: json.stringify(processed.metadata),
        processing_status: "completed"
      })
      db.create(ProcessingLog, {
        file_type: "image",
        file_name: title,
        operation: "upload_and_process",
        status: "success"
      })
      return {
        success: true,
        image: updated_img,
        processing_info: {
          variants_created: system.len(processed.variants),
          original_format: processed.original_format
        }
      }
    } else {
      let error_img = db.update(Image, img.id, { processing_status: "failed" })
      db.create(ProcessingLog, {
        file_type: "image",
        file_name: title,
        operation: "upload_and_process",
        status: "failed",
        error_message: "Image processing failed"
      })
      return { success: false, image: error_img, errors: processed.errors }
    }
  }

  action search_documents(query: text, project_id: text, current_user: User) {
    let matching_docs = db.find(Document, {
      project_id: project_id,
      extracted_text__icontains: query
    })
    return {
      success: true,
      query: query,
      matching_documents: system.len(matching_docs),
      results: matching_docs
    }
  }

  action analyze_document_content(id: text, current_user: User) {
    let doc = db.find_one(Document, {id: id})
    if !doc {
      return {success: false, error: "Document not found"}
    }
    if doc.extracted_text {
      let analysis = document.analyze(doc.extracted_text)
      let keywords = document.extract_keywords(doc.extracted_text, 10)
      let summary = document.summarize(doc.extracted_text, 5)
      return {
        success: true,
        document_title: doc.title,
        analysis: {
          full_analysis: analysis,
          top_keywords: keywords.keywords,
          summary: summary.summary,
          word_count: analysis.word_count,
          readability_score: analysis.readability_score
        }
      }
    } else {
      return {success: false, error: "No extracted text available for analysis"}
    }
  }

  action create_image_variants(id: text, variants: array, current_user: User) {
    let img = db.find_one(Image, {id: id})
    if !img {
      return {success: false, error: "Image not found"}
    }
    let processed = image.process(img.file_path, variants)
    if processed.success {
      return {
        success: true,
        image_title: img.title,
        variants_created: processed.variants,
        processing_info: processed.metadata
      }
    } else {
      return {success: false, errors: processed.errors}
    }
  }

  action get_processing_stats(current_user: User) {
    let total_docs = db.count(Document, {})
    let total_images = db.count(Image, {})
    let completed_docs = db.count(Document, {processing_status: "completed"})
    let completed_images = db.count(Image, {processing_status: "completed"})
    let failed_processing = db.count(ProcessingLog, {status: "failed"})
    let success_rate_docs = 0.0
    if total_docs > 0 {
      success_rate_docs = (completed_docs / total_docs) * 100.0
    }
    let success_rate_images = 0.0
    if total_images > 0 {
      success_rate_images = (completed_images / total_images) * 100.0
    }
    return {
      success: true,
      statistics: {
        total_documents: total_docs,
        total_images: total_images,
        successfully_processed_documents: completed_docs,
        successfully_processed_images: completed_images,
        failed_operations: failed_processing,
        success_rate_documents: success_rate_docs,
        success_rate_images: success_rate_images
      }
    }
  }
}

# API endpoints for document and image management
api {
  # Document management endpoints
  POST "/api/documents/upload" -> action upload_document
  POST "/api/documents/search" -> action search_documents
  GET "/api/documents/:id/analyze" -> action analyze_document_content

  # Image management endpoints
  POST "/api/images/upload" -> action upload_image
  POST "/api/images/:id/variants" -> action create_image_variants

  # Statistics endpoint
  GET "/api/stats/processing" -> action get_processing_stats
}

# Real-time processing status updates
realtime {
  channel "processing_updates"
  on event "db:Document:update" -> broadcast("processing_updates", data.processing_status)
  on event "db:Image:update" -> broadcast("processing_updates", data.processing_status)
}

# Security permissions
permissions {
  allow User to ["create"] Project where (user) {
    return user.role == "admin" or user.role == "editor"
  }
  allow User to ["read", "update", "delete"] Project where (user, project) {
    if user.role == "admin" {
      return true
    }
    return user.id == project.owner_id
  }
  allow User to ["create", "read"] Document where (user) {
    return user.role == "admin" or user.role == "editor"
  }
  allow User to ["create", "read"] Image where (user) {
    return user.role == "admin" or user.role == "editor"
  }
  allow User to "read" ProcessingLog where (user) {
    return user.role == "admin"
  }
}
```
2025-08-24T19:17:05
https://www.reddit.com/r/LocalLLaMA/comments/1mz4o8q/what_does_this_language_remind_you_of/
Specific-Total8678
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz4o8q
false
null
t3_1mz4o8q
/r/LocalLLaMA/comments/1mz4o8q/what_does_this_language_remind_you_of/
false
false
self
0
null
Local llm inside vsc
0
I have downloaded models and got them to run inside LM Studio, but I am having problems getting those same models to run inside VSC extensions. I've tried Roo Code and Cline, and also Ollama. I think maybe I am skipping a step with commands inside a terminal? I am trying to run a local LLM free inside VSC, without restrictions or limits. Most of the guides on YouTube use API providers through online services; I was trying to go this other route because I recently hit the request cap on paid Gemini a few days ago. I know nothing about coding, so yes, a complete novice.
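If the goal is pointing Cline or Roo Code at LM Studio, the usually missed step is starting LM Studio's local server and then using its OpenAI-compatible endpoint in the extension's provider settings. The values below are the common defaults as I recall them, not verified against every version, so double-check them in your LM Studio install:

```
Provider:  OpenAI Compatible
Base URL:  http://localhost:1234/v1   (enable LM Studio's local server first)
API key:   any non-empty string       (it isn't checked for a local server)
Model ID:  the exact model name shown in LM Studio's loaded-model list
```

No terminal commands should be needed for this route; Ollama is a separate server with its own port and model names, so mixing the two setups is a common source of confusion.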
2025-08-24T19:13:49
https://www.reddit.com/r/LocalLLaMA/comments/1mz4l65/local_llm_inside_vsc/
kindkatz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz4l65
false
null
t3_1mz4l65
/r/LocalLLaMA/comments/1mz4l65/local_llm_inside_vsc/
false
false
self
0
null
All of the top 15 OS models on Design Arena come from China. The best non-Chinese model is GPT OSS 120B, ranked at 16th
491
China is not only the main competitor to the US in the overall AI race, but also dominant in the open-source landscape. Of the open-source models listed on [Design Arena](https://www.designarena.ai/) (a UI/UX and frontend benchmark for LLMs), Chinese models take all of the top 15 spots, with the first non-Chinese model making its appearance at #16: GPT OSS 120B, developed by OpenAI. It's really remarkable what DeepSeek, Zhipu, Kimi, and Qwen have been able to do while staying open source.
2025-08-24T19:10:09
https://www.reddit.com/gallery/1mz4hrg
Accomplished-Copy332
reddit.com
1970-01-01T00:00:00
0
{}
1mz4hrg
false
null
t3_1mz4hrg
/r/LocalLLaMA/comments/1mz4hrg/all_of_the_top_15_os_models_on_design_arena_come/
false
false
https://b.thumbs.redditm…DsHg7PwU2cQk.jpg
491
null
Crow new LLM weight format + runtime (alpha)
1
[removed]
2025-08-24T19:08:01
https://www.reddit.com/r/LocalLLaMA/comments/1mz4fp1/crow_new_llm_weight_format_runtime_alpha/
qorvuss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz4fp1
false
null
t3_1mz4fp1
/r/LocalLLaMA/comments/1mz4fp1/crow_new_llm_weight_format_runtime_alpha/
false
false
https://b.thumbs.redditm…3Dt03lpotl9U.jpg
1
null
Crow new LLM weight format + runtime (alpha)
1
[removed]
2025-08-24T19:03:08
https://www.reddit.com/r/LocalLLaMA/comments/1mz4avo/crow_new_llm_weight_format_runtime_alpha/
qorvuss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz4avo
false
null
t3_1mz4avo
/r/LocalLLaMA/comments/1mz4avo/crow_new_llm_weight_format_runtime_alpha/
false
false
https://b.thumbs.redditm…YWIgtxlJv9OQ.jpg
1
null
Qwen3-Coder-480B Q4_0 on 6x7900xtx
35
Running Qwen3-Coder-480B Q4\_0 on 6x7900xtx with 7 token/s output speed. Do you have any suggestions or ideas to speed it up? Maybe you know a way to smart-offload specific layers? I launch it with this command:

```
./lama-hip-0608/build/bin/llama-server --model ./480B-A35B_Q4_0/Qwen3-Coder-480B-A35B-Instruct-Q4_0-00001-of-00006.gguf --main-gpu 0 --temp 0.65 --top-k 20 --min-p 0.0 --top-p 0.95 --gpu-layers 48 --ctx-size 4000 --host 0.0.0.0 --port ${PORT} --parallel 1 --tensor-split 24,24,24,24,24,24 --jinja --mlock --flash-attn --cache-type-k q8_0 --cache-type-v q8_0 -ot ".ffn_(down)_exps.=CPU"
```

[old photo of this server for "flash attention"](https://preview.redd.it/nwqolw8kk0lf1.jpg?width=896&format=pjpg&auto=webp&s=fdec24f26fec58d2fcb55c1ed10988db1ea8c7d2)
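For the offloading question, one back-of-the-envelope approach can be sketched as follows: greedily keep whole layers' expert tensors on GPU until the VRAM budget runs out, and offload the rest to CPU (e.g. via llama.cpp's per-tensor overrides). All sizes below are made-up placeholders, not measured Qwen3-Coder numbers:

```python
# Toy budget calculator for deciding how many layers' expert tensors fit in VRAM.
# All per-layer sizes are hypothetical placeholders, not real model measurements.

def plan_offload(n_layers, gpu_budget_gib, base_gib_per_layer, exp_gib_per_layer):
    """Greedily keep whole layers' expert tensors on GPU until the budget runs out.
    Returns (number of layers whose experts stay on GPU, GiB used)."""
    used = n_layers * base_gib_per_layer  # attention + norms always stay on GPU
    on_gpu = 0
    for _ in range(n_layers):
        if used + exp_gib_per_layer > gpu_budget_gib:
            break
        used += exp_gib_per_layer
        on_gpu += 1
    return on_gpu, used

# 6 cards x 24 GiB, with made-up per-layer sizes:
layers_on_gpu, used = plan_offload(
    n_layers=62, gpu_budget_gib=6 * 24, base_gib_per_layer=0.6, exp_gib_per_layer=3.5
)
print(layers_on_gpu, round(used, 1))
```

The remaining layers would then get their expert tensors pinned to CPU with an `-ot` pattern matching just those block indices, rather than all `ffn_down` experts.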
2025-08-24T18:54:17
https://www.reddit.com/r/LocalLLaMA/comments/1mz42eu/qwen3coder480b_q4_0_on_6x7900xtx/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz42eu
false
null
t3_1mz42eu
/r/LocalLLaMA/comments/1mz42eu/qwen3coder480b_q4_0_on_6x7900xtx/
false
false
https://a.thumbs.redditm…fiexy6H6qaR8.jpg
35
null
I got early access to grok-4-coder. and its crazy.
0
I am not sure about it being better than opus-4, but for that speed it's f\*ing good, and I will replace sonnet with this any day. The UI design is really good too.
2025-08-24T18:34:31
https://www.reddit.com/r/LocalLLaMA/comments/1mz3jry/i_got_early_access_to_grok4coder_and_its_crazy/
kassandrrra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz3jry
false
null
t3_1mz3jry
/r/LocalLLaMA/comments/1mz3jry/i_got_early_access_to_grok4coder_and_its_crazy/
false
false
self
0
null
I tried fine-tuning Gemma-3-270m and prepared for deployments
37
Google recently released the **Gemma3-270M** model, which is one of the smallest open models out there. Model weights are available on Hugging Face, its size is \~550MB, and there has been some testing where it was used on phones. It's a perfect candidate for fine-tuning, so I put it to the test using the official Colab notebook and an NPC game dataset. I put everything together as a written guide in my newsletter and also as a small demo video while performing the steps. I have skipped the fine-tuning part in the guide because you can find the official notebook on the release blog to test using Hugging Face Transformers. I did the same locally on my notebook. Gemma3-270M is so small that fine-tuning and testing finished in just a few minutes (\~15). Then I used an open source tool called KitOps to package it together for secure production deployments. I was trying to see whether fine-tuning this small model is fast and efficient enough to be used in production environments. The steps I covered are mainly for devs looking for secure deployment of these small models for real apps (the example covered is very basic). Steps I took:

* Importing a Hugging Face model
* Fine-tuning the model
* Initializing the model with KitOps
* Packaging the model and related files after fine-tuning
* Pushing to a hub to get security scans done and container deployments

watch the demo video – [here](https://youtu.be/8SKV_m5XV6o) take a look at the guide – [here](https://mranand.substack.com/p/you-can-fine-tune-gemma3-270m-in)
2025-08-24T17:50:40
https://www.reddit.com/r/LocalLLaMA/comments/1mz2di5/i_tried_finetuning_gemma3270m_and_prepared_for/
codes_astro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz2di5
false
null
t3_1mz2di5
/r/LocalLLaMA/comments/1mz2di5/i_tried_finetuning_gemma3270m_and_prepared_for/
false
false
self
37
{'enabled': False, 'images': [{'id': 'Z8ZM8xZXrmjbmONpSV_aN035cI1gJ2qKvs3v3OwAjnI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Z8ZM8xZXrmjbmONpSV_aN035cI1gJ2qKvs3v3OwAjnI.jpeg?width=108&crop=smart&auto=webp&s=5887c47ca5805eca4d7d8ce4b04e5fd4fa40cf94', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Z8ZM8xZXrmjbmONpSV_aN035cI1gJ2qKvs3v3OwAjnI.jpeg?width=216&crop=smart&auto=webp&s=707a4d4dfe1a67088867e3b3935312643871b668', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Z8ZM8xZXrmjbmONpSV_aN035cI1gJ2qKvs3v3OwAjnI.jpeg?width=320&crop=smart&auto=webp&s=a17fdf218786f7bc1ab7651c38153eb6e27819d7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Z8ZM8xZXrmjbmONpSV_aN035cI1gJ2qKvs3v3OwAjnI.jpeg?auto=webp&s=faad81ffe4f22b4d5e9d224b152863096169e4f0', 'width': 480}, 'variants': {}}]}
Years of AI research wiped out overnight — no backups, no warning
0
I’m honestly shocked and frustrated by HuggingChat’s recent shutdown. My team at an AI company invested a huge amount of time and research into the platform — conversations that included corporate workflows, intellectual property, and critical AI experiments. Hugging Face announced a very brief “grace period” for exporting data, but it was far too short for enterprise users. Two weeks wasn't enough. To make matters worse, they have deleted all user data and internal backups. That means our work is gone forever — no recourse, no archive, nothing. From a professional standpoint, this is deeply unprofessional and damaging to user trust. Platforms that host user-generated work, especially research and IP, should never delete everything abruptly without providing adequate notice, export options, or backup retention. This isn’t just a minor inconvenience — this is a loss of critical corporate and research history. If you rely on HuggingChat or similar tools for anything serious, consider backing up your data immediately. Takeaway: Platforms must respect user data. A short grace period and total deletion is unacceptable for professional work.
2025-08-24T17:45:12
https://www.reddit.com/r/LocalLLaMA/comments/1mz284w/years_of_ai_research_wiped_out_overnight_no/
Informal-Concept6476
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz284w
false
null
t3_1mz284w
/r/LocalLLaMA/comments/1mz284w/years_of_ai_research_wiped_out_overnight_no/
false
false
self
0
null
Qwen-Image-Edit [M3 Ultra 512gb, comfyUI]
8
Prompt: Change the scene to a modern card game store. Replace the phone in his hands with a thick wad of cash (banknotes), add two short gold chains around his neck, and change his T-shirt to white with the word ‘ALPHAVILLE’ printed in clear green block capitals. Keep his face, pose, and lighting natural. Input: 622x618 Output: 1024x1016 Time: 9m41s
2025-08-24T17:13:27
https://www.reddit.com/gallery/1mz1ddg
Turbulent_Pin7635
reddit.com
1970-01-01T00:00:00
0
{}
1mz1ddg
false
null
t3_1mz1ddg
/r/LocalLLaMA/comments/1mz1ddg/qwenimageedit_m3_ultra_512gb_comfyui/
false
false
https://b.thumbs.redditm…FpuCAX9OqbpQ.jpg
8
null
MALM: A Modular Adapter-based Language Model (paper + Hugging Face link)
12
Hey everyone, I just finished writing a short paper about a new idea I call **MALM**, a *Modular Adapter-based Language Model*. The core idea is simple: instead of training giant multilingual LLMs, I propose keeping one small, sharp **Core Language Model** (reasoning in English), and delegating translation to lightweight, swappable **Specialized Translation Adapters (STAs)**. This means:

- Smaller, cheaper models
- Easy to add new languages
- Better for edge devices and low-resource settings

Example flow:

```
User: "Translate 'my name is Adam' into German."
CLM → <to:de> my name is Adam </to>
STA → "Mein Name ist Adam"
```

📄 Read the full paper here: [huggingface.co/TimesLast/MALM](https://huggingface.co/TimesLast/MALM)

Would love feedback, especially on how this could be extended beyond translation (math, code, multimodal adapters, etc.).
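The routing glue that flow implies can be sketched in a few lines, with the adapter replaced by a stub lookup table (everything here is hypothetical illustration, not the paper's actual code):

```python
import re

# Hypothetical glue for the MALM flow: the core model emits a tagged span,
# and a router hands it to the right translation adapter. The "adapters"
# here are stub dictionaries standing in for real STA models.

STAS = {
    "de": {"my name is Adam": "Mein Name ist Adam"},
}

TAG = re.compile(r"<to:(\w+)>\s*(.*?)\s*</to>", re.DOTALL)

def route(clm_output: str) -> str:
    """Find a <to:lang> span and translate it via the matching stub adapter."""
    m = TAG.search(clm_output)
    if not m:
        return clm_output  # nothing to translate; pass through
    lang, text = m.group(1), m.group(2)
    return STAS[lang].get(text, text)

print(route("<to:de> my name is Adam </to>"))  # → Mein Name ist Adam
```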
2025-08-24T16:53:04
https://www.reddit.com/r/LocalLLaMA/comments/1mz0tqc/malm_a_modular_adapterbased_language_model_paper/
TimesLast_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz0tqc
false
null
t3_1mz0tqc
/r/LocalLLaMA/comments/1mz0tqc/malm_a_modular_adapterbased_language_model_paper/
false
false
self
12
{'enabled': False, 'images': [{'id': 'OpuN4cD9r_lV3YSON5LyxJ6w0lQeW9KVSuq_MQxoI0o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OpuN4cD9r_lV3YSON5LyxJ6w0lQeW9KVSuq_MQxoI0o.png?width=108&crop=smart&auto=webp&s=f47e9d664a323a96572cbb291d64080eb0532202', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OpuN4cD9r_lV3YSON5LyxJ6w0lQeW9KVSuq_MQxoI0o.png?width=216&crop=smart&auto=webp&s=030a49bac27a8c6a30ea84fab81e8ce941156a17', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OpuN4cD9r_lV3YSON5LyxJ6w0lQeW9KVSuq_MQxoI0o.png?width=320&crop=smart&auto=webp&s=1284cf4d373f375c15f10a670d6feb5a79c1c02a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OpuN4cD9r_lV3YSON5LyxJ6w0lQeW9KVSuq_MQxoI0o.png?width=640&crop=smart&auto=webp&s=9499cf923b65552d08c4e96e78a901b347eba7b3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OpuN4cD9r_lV3YSON5LyxJ6w0lQeW9KVSuq_MQxoI0o.png?width=960&crop=smart&auto=webp&s=23d267da1a280656f813ff7dd90d99907c4a191b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OpuN4cD9r_lV3YSON5LyxJ6w0lQeW9KVSuq_MQxoI0o.png?width=1080&crop=smart&auto=webp&s=1100567356ba579b27f371fb79f8c5d52a548091', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OpuN4cD9r_lV3YSON5LyxJ6w0lQeW9KVSuq_MQxoI0o.png?auto=webp&s=d19aa7c11dd901c2ef5e6bc0b87fd3f65236905c', 'width': 1200}, 'variants': {}}]}
Help getting my downloaded Yi 34b Q5 running on my comp with CPU (no GPU)
0
I have tried getting it working with one-click webui, and original webui + ollama backend--so far no luck. I have the downloaded Yi 34b Q5 but just need to be able to run it. My computer is a Framework Laptop 13 Ryzen Edition: CPU-- AMD Ryzen AI 7 350 with Radeon 860M (16 cores) RAM-- 93 GiB usable (\~100 total) Disk-- 8 TB with 1TB expansion card, 28TB external hard drive arriving soon (hoping to make it headless) GPU-- no dedicated GPU currently; running on the integrated Radeon 860M OS-- Pop!\_OS (Linux-based, System76) AI Model-- hoping to use Yi-34B-Chat-Q5\_K\_M.gguf (24.3 GB quantized model) Local AI App-- now trying KoboldCPP (previously used WebUI but my model never showed up in the dropdown menu) Any help much needed and very much appreciated!
2025-08-24T16:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1mz0jvh/help_getting_my_downloaded_yi_34b_q5_running_on/
DocPT2021
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz0jvh
false
null
t3_1mz0jvh
/r/LocalLLaMA/comments/1mz0jvh/help_getting_my_downloaded_yi_34b_q5_running_on/
false
false
self
0
null
Best small local llm for coding
31
Hey! I am looking for a good small llm for coding. By small I mean somewhere around 10b parameters, like gemma3:12b or codegemma. I like them both, but the first one is not specifically a coding model and the second one is a year old. Does anyone have suggestions for other good models, or a place that benchmarks them? I am asking about these small models because I use them on a gpu with 12gb vram, or even a laptop with 8.
2025-08-24T16:28:21
https://www.reddit.com/r/LocalLLaMA/comments/1mz0640/best_small_local_llm_for_coding/
Low-Palpitation-4724
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mz0640
false
null
t3_1mz0640
/r/LocalLLaMA/comments/1mz0640/best_small_local_llm_for_coding/
false
false
self
31
null
GPT-OSS system prompt based reasoning effort doesn't work?
3
Was noticing reasoning effort not having much of an effect on gpt-oss-120b, so I dug into it. Officially you can set it in the system prompt but, at least in vllm, it turns out you can't... Unless I'm missing something? I asked the LLM the same question 99 times each for high and low, set via parameter and via system prompt. === Results === **system\_high** avg total\_tokens: 3330.74 avg completion\_tokens: **3179.74** (n=99, fails=0) **system\_low** avg total\_tokens: 2945.22 avg completion\_tokens: **2794.22** (n=99, fails=0) **param\_high** avg total\_tokens: 8176.96 avg completion\_tokens: **8033.96** (n=99, fails=0) **param\_low** avg total\_tokens: 1024.76 avg completion\_tokens: **881.76** (n=99, fails=0) Looks like both system prompt options actually run at medium with slightly more/less effort. Question: "Five people need to cross a bridge at night with one flashlight. At most two can cross at a time, and anyone crossing must carry the flashlight. Their times are 1, 2, 5, 10, and 15 minutes respectively; a pair walks at the slower person’s speed. What is the minimum total time for all to cross?" Code if anyone is interested:

```
import requests
import time
import random
import statistics
from concurrent.futures import ThreadPoolExecutor, as_completed
from threading import Lock

URL = "http://{ip address}:8000/v1/chat/completions"
MODEL = "openai/gpt-oss-120b"
CONCURRENCY = 50
MAX_RETRIES = 5
BACKOFF_BASE = 1.0  # seconds
BACKOFF_JITTER = 0.25
SEEDS = list(range(1, 100))  # seeds 1..99
KINDS = ["system_high", "system_low", "param_high", "param_low"]

USER_PROMPT = (
    "Five people need to cross a bridge at night with one flashlight. "
    "At most two can cross at a time, and anyone crossing must carry the flashlight. "
    "Their times are 1, 2, 5, 10, and 15 minutes respectively; a pair walks at the slower "
    "person’s speed. What is the minimum total time for all to cross?"
)

def make_payload(kind: str, seed: int) -> dict:
    base = {"model": MODEL, "temperature": 0, "seed": seed, "messages": []}
    if kind == "system_high":
        base["messages"] = [
            {"role": "system", "content": "Reasoning: high"},
            {"role": "user", "content": USER_PROMPT},
        ]
    elif kind == "system_low":
        base["messages"] = [
            {"role": "system", "content": "Reasoning: low"},
            {"role": "user", "content": USER_PROMPT},
        ]
    elif kind == "param_high":
        base["messages"] = [{"role": "user", "content": USER_PROMPT}]
        base["reasoning_effort"] = "high"
    elif kind == "param_low":
        base["messages"] = [{"role": "user", "content": USER_PROMPT}]
        base["reasoning_effort"] = "low"
    return base

def post_with_retries(payload: dict) -> dict:
    attempt = 0
    while attempt <= MAX_RETRIES:
        try:
            r = requests.post(URL, json=payload, timeout=600)
            if r.status_code == 200:
                return {"ok": True, "data": r.json()}
            # retry on *any* non-200
            attempt += 1
            if attempt <= MAX_RETRIES:
                sleep = BACKOFF_BASE * (2 ** (attempt - 1)) + random.uniform(0, BACKOFF_JITTER)
                time.sleep(sleep)
                continue
            return {"ok": False, "error": f"{r.status_code} {r.reason}", "body": r.text[:500]}
        except requests.exceptions.RequestException as e:
            attempt += 1
            if attempt <= MAX_RETRIES:
                sleep = BACKOFF_BASE * (2 ** (attempt - 1)) + random.uniform(0, BACKOFF_JITTER)
                time.sleep(sleep)
                continue
            return {"ok": False, "error": f"network: {repr(e)}"}

def run_single(kind: str, seed: int):
    payload = make_payload(kind, seed)
    result = post_with_retries(payload)
    if not result.get("ok"):
        return kind, None, None, result
    data = result["data"]
    usage = data.get("usage", {})
    return kind, usage.get("total_tokens"), usage.get("completion_tokens"), None

def main():
    totals = {k: [] for k in KINDS}
    completions = {k: [] for k in KINDS}
    failures = {k: 0 for k in KINDS}
    tasks = [(k, s) for k in KINDS for s in SEEDS]
    total_tasks = len(tasks)
    done = 0
    ok_count = 0
    fail_count = 0
    lock = Lock()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as executor:
        futures = {executor.submit(run_single, k, s): (k, s) for k, s in tasks}
        for f in as_completed(futures):
            k, _ = futures[f]
            kind, total, comp, err = f.result()
            with lock:
                done += 1
                if total is not None:
                    totals[k].append(total)
                    completions[k].append(comp)
                    ok_count += 1
                else:
                    failures[k] += 1
                    fail_count += 1
                print(f"{done}/{total_tasks} done (ok: {ok_count}, fail: {fail_count})", end="\r")
    print("\n\n=== Results ===")
    for k in KINDS:
        if totals[k]:
            print(f"{k:12s} avg total_tokens: {statistics.mean(totals[k]):.2f} "
                  f"avg completion_tokens: {statistics.mean(completions[k]):.2f} "
                  f"(n={len(totals[k])}, fails={failures[k]})")
        else:
            print(f"{k:12s} no valid results (fails={failures[k]})")

if __name__ == "__main__":
    main()
```
2025-08-24T16:02:27
https://www.reddit.com/r/LocalLLaMA/comments/1myzh2k/gptoss_system_prompt_based_reasoning_effort/
Conscious_Cut_6144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myzh2k
false
null
t3_1myzh2k
/r/LocalLLaMA/comments/1myzh2k/gptoss_system_prompt_based_reasoning_effort/
false
false
self
3
null
Fast CUDA DFloat11 decoding kernel
148
A few months ago, I came across the amazing work on [DFloat11](https://www.reddit.com/r/LocalLLaMA/comments/1k7o89n/we_compress_any_bf16_model_to_70_size_during/), which achieves lossless output while shrinking models to 70% of their original size by compressing the exponent bits of BF16. It's great work. However, I found a problem: it decompresses an entire tensor into VRAM and then performs the computations separately, which severely impacts the model's decoding speed. According to some [issues](https://github.com/LeanModels/DFloat11/issues/7) on GitHub, it only reaches about 1/3 of native BF16 speed. Furthermore, the author hasn't released the code for encoding the models, and the decoding kernel is provided in a nearly unreadable PTX format. So, I decided to write my own implementation. I used the Huffman coding and LUT-based decoding algorithms described in the paper, but I **fused the Huffman decoding process and the GEMV operation into a single kernel**. This avoids unnecessary memory bandwidth overhead and dramatically speeds up decoding. With a batch size of 1, my implementation can now reach about **90% of native BF16 speed** on regular GPUs. On some VRAM bandwidth-constrained GPUs, like the RTX 4060 Ti, it can even **surpass native BF16 speed**, because the compressed weights reduce the demand on VRAM bandwidth. Here's a simple benchmark for generating 256 tokens:

|Model|Device|Raw BF16 Time|Compressed BF16 Time|Raw / Compressed Size|
|:-|:-|:-|:-|:-|
|Qwen2.5 7B|RTX 4060Ti|14.98s|13.02s|14.19 / 10.99 GiB|
||RTX A6000|6.66s|7.23s||
|Qwen3 8B|RTX 4060Ti|OOM|14.11s|15.26 / 11.52 GiB|
||RTX A6000|7.75s|8.24s||

Of course, there are still areas for improvement. Due to the extra padding required by the CUDA kernel's layout, the current compression rate is slightly lower than the original DFloat11, achieving around 75%-80%. Additionally, support for uncommon tensor shapes and batch sizes greater than 1 is currently limited.
For more information, please visit my GitHub repository: [https://github.com/lszxb/bf16\_huffman\_infer](https://github.com/lszxb/bf16_huffman_infer)
2025-08-24T15:51:15
https://www.reddit.com/r/LocalLLaMA/comments/1myz6f1/fast_cuda_dfloat11_decoding_kernel/
No_Dimension41
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myz6f1
false
null
t3_1myz6f1
/r/LocalLLaMA/comments/1myz6f1/fast_cuda_dfloat11_decoding_kernel/
false
false
self
148
{'enabled': False, 'images': [{'id': '3evxa9eqnIvibka7xf258x1xaggd8_zW-LCrOcny2EQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3evxa9eqnIvibka7xf258x1xaggd8_zW-LCrOcny2EQ.png?width=108&crop=smart&auto=webp&s=d2688c713cf417ab724a5a0d7824adf5e316884a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3evxa9eqnIvibka7xf258x1xaggd8_zW-LCrOcny2EQ.png?width=216&crop=smart&auto=webp&s=34800998f2e91e32d26ff5b065661ae1de21ed5f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3evxa9eqnIvibka7xf258x1xaggd8_zW-LCrOcny2EQ.png?width=320&crop=smart&auto=webp&s=ae5cadac365a76b6d446f567d7fc6e14310ea41c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3evxa9eqnIvibka7xf258x1xaggd8_zW-LCrOcny2EQ.png?width=640&crop=smart&auto=webp&s=74e53c3ea9d766c3c6032784f0b735d790318ef1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3evxa9eqnIvibka7xf258x1xaggd8_zW-LCrOcny2EQ.png?width=960&crop=smart&auto=webp&s=31055c24d68e7cea760091a7188a434f1b93ffc7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3evxa9eqnIvibka7xf258x1xaggd8_zW-LCrOcny2EQ.png?width=1080&crop=smart&auto=webp&s=c055bff660d7708d27c135172b247c57bc20f14e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3evxa9eqnIvibka7xf258x1xaggd8_zW-LCrOcny2EQ.png?auto=webp&s=ce8196c5002d9cb4d0be0e318e72473015918ceb', 'width': 1200}, 'variants': {}}]}
Seed-OSS is insanely good
110
It took a day for me to get it running but *wow* this model is good. I had been leaning heavily on a 4bit 72B Deepseek R1 Distill but it had some regularly frustrating failure modes. I was prepping to finetune my own model to address my needs but now it's looking like I can remove refusals and run Seed-OSS.
2025-08-24T15:50:03
https://www.reddit.com/r/LocalLLaMA/comments/1myz59l/seedoss_is_insanely_good/
I-cant_even
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myz59l
false
null
t3_1myz59l
/r/LocalLLaMA/comments/1myz59l/seedoss_is_insanely_good/
false
false
self
110
null
Is nVidia 6090 coming in 7 months?
0
Articles suggest that Rubin is on track for release next year. # https://overclock3d.net/news/gpu-displays/nvidia-confirms-that-its-next-gen-rubin-chips-have-entered-trial-production/#:~:text=Currently%2C%20Nvidia%27s%20Rubin%20platform%20is%20expected%20to,Nvidia%20AI%20product%20to%20use%20HBM4%20memory. Quote: Currently, Nvidia’s Rubin platform is expected to be launched between 2026 and 2027. However, this timeline depends on how the chip’s trial production goes. # https://www.tweaktown.com/news/107021/nvidias-next-gen-rubin-ai-gpus-not-delayed-no-changes-to-fight-amd-instinct-mi450-chips/index.html Quote: ... Rubin is on track, which last we heard there will be 5.7 million Rubin AI GPUs shipped in 2026, each with next-generation HBM4 memory and up to 1800W of power per R100 AI chip. # https://x.com/dnystedt/status/1931867520740512121 Quote: Mass production is scheduled for early 2026. nVidia says they will give us more info towards the end of October at GDC event.
2025-08-24T15:23:44
https://www.reddit.com/r/LocalLLaMA/comments/1myygot/is_nvidia_6090_coming_in_7_months/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myygot
false
null
t3_1myygot
/r/LocalLLaMA/comments/1myygot/is_nvidia_6090_coming_in_7_months/
false
false
self
0
{'enabled': False, 'images': [{'id': 'buyipOXnXheeZUd4Q4I79AcVU8yh8HKwJFJNB96JtfM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/buyipOXnXheeZUd4Q4I79AcVU8yh8HKwJFJNB96JtfM.jpeg?width=108&crop=smart&auto=webp&s=c640a0132b70ba2e868556727157c7cfedde85d0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/buyipOXnXheeZUd4Q4I79AcVU8yh8HKwJFJNB96JtfM.jpeg?width=216&crop=smart&auto=webp&s=faad04b8e169ce1723054968c8080f4925040913', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/buyipOXnXheeZUd4Q4I79AcVU8yh8HKwJFJNB96JtfM.jpeg?width=320&crop=smart&auto=webp&s=ce4bd00bf733e34e609c02ce42c65db1b1134832', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/buyipOXnXheeZUd4Q4I79AcVU8yh8HKwJFJNB96JtfM.jpeg?width=640&crop=smart&auto=webp&s=512b731f68cd5c19f51452e26870a8e707ba7914', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/buyipOXnXheeZUd4Q4I79AcVU8yh8HKwJFJNB96JtfM.jpeg?width=960&crop=smart&auto=webp&s=793e2b1d72be0fef82fedb81c5c48d4f0018bc2c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/buyipOXnXheeZUd4Q4I79AcVU8yh8HKwJFJNB96JtfM.jpeg?width=1080&crop=smart&auto=webp&s=3b3959e63ccca1cd4a2a759bc2916a8841ddd6b4', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/buyipOXnXheeZUd4Q4I79AcVU8yh8HKwJFJNB96JtfM.jpeg?auto=webp&s=f1a4e33f0fff9df5b71ada63c26487744e3e49b9', 'width': 1200}, 'variants': {}}]}
Why can't GPT-OSS perform simultaneous function invocation?
2
It seems unable to perform simultaneous tool calls. Why is this the case? I have made a LiteLLM MCP client and tested various models, and GPT-OSS is the only current-gen model that cannot do parallel agentic actions. Even Llama 3.1 70b is capable of doing so, but GPT-OSS-120b cannot. Is this a limitation of Groq or of OSS itself? Groq works fine when I am using Llama, so I don't think that's the case.
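For reference, "parallel" invocation in the OpenAI-style chat format just means one assistant turn carrying several entries in `tool_calls`. Models that support it emit something like the following (IDs and function names here are made up for illustration):

```python
import json

# A single assistant turn with two tool calls — the shape a model must emit
# for parallel function invocation. Names, IDs, and arguments are made up.
assistant_turn = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})},
        },
        {
            "id": "call_2",
            "type": "function",
            "function": {"name": "get_time", "arguments": json.dumps({"tz": "Europe/Paris"})},
        },
    ],
}

print(len(assistant_turn["tool_calls"]))  # 2 calls in one turn
```

A model that only ever emits one entry here forces the client into sequential round-trips, regardless of what the serving stack supports.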
2025-08-24T15:16:04
https://www.reddit.com/r/LocalLLaMA/comments/1myy9dz/why_cant_gptoss_perform_simultaneous_function/
Common_Ad6166
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myy9dz
false
null
t3_1myy9dz
/r/LocalLLaMA/comments/1myy9dz/why_cant_gptoss_perform_simultaneous_function/
false
false
self
2
null
Suggestion - Subnotebooks (or sections) around a central topic
0
It would function like folders for sources, essentially. Primarily as an organization tool, but could also serve as an additional grounding sources constraint if you do not want one set of sources interfering. One example I can think of is researching the effects of pharmaceuticals, where you may want to ask about side effects without information from one type of drug affecting the answer you wanted regarding another type.
2025-08-24T15:13:16
https://www.reddit.com/r/LocalLLaMA/comments/1myy6pa/suggestion_subnotebooks_or_sections_around_a/
Aeonmoru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myy6pa
false
null
t3_1myy6pa
/r/LocalLLaMA/comments/1myy6pa/suggestion_subnotebooks_or_sections_around_a/
false
false
self
0
null
I was asking for ways to translate and synthesize anime voices on this subreddit and u guys answered me. So here is the final product(techinically not final but whatever).
2
[removed]
2025-08-24T15:02:30
https://v.redd.it/w0mcqyeafzkf1
mrpeace03
v.redd.it
1970-01-01T00:00:00
0
{}
1myxwfk
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/w0mcqyeafzkf1/DASHPlaylist.mpd?a=1758639764%2CMjZhM2M1ZjdhOTBjY2U2YWY1ZmI5NmExNjhjODIyNmIwNjQxOWU4Y2I5MzFkYzgyYWNiOGVjMWFhM2YwZjg1MQ%3D%3D&v=1&f=sd', 'duration': 174, 'fallback_url': 'https://v.redd.it/w0mcqyeafzkf1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 360, 'hls_url': 'https://v.redd.it/w0mcqyeafzkf1/HLSPlaylist.m3u8?a=1758639764%2CN2YzYzE0MTVhZGEwZjNlODg5NTE4MTQzNTc1OWZjZjEwZDNiNjc0NTJlMDM0ZGM4MjYwYjYzMTg0M2RjN2JlYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w0mcqyeafzkf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 640}}
t3_1myxwfk
/r/LocalLLaMA/comments/1myxwfk/i_was_asking_for_ways_to_translate_and_synthesize/
false
false
https://external-preview…94d321fafe7ecb64
2
{'enabled': False, 'images': [{'id': 'c3dsNW15ZWFmemtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c3dsNW15ZWFmemtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?width=108&crop=smart&format=pjpg&auto=webp&s=a82cbf3f38f3e95f68079e65d1f6673772648a3e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c3dsNW15ZWFmemtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?width=216&crop=smart&format=pjpg&auto=webp&s=7504c3b44505af16c46043c742087407a789f285', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c3dsNW15ZWFmemtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?width=320&crop=smart&format=pjpg&auto=webp&s=65632565ccbb3d9136357164799eb6a2004ab88c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c3dsNW15ZWFmemtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?width=640&crop=smart&format=pjpg&auto=webp&s=db9dccc80124e4ee3006ba257d20d64513ede554', 'width': 640}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/c3dsNW15ZWFmemtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?format=pjpg&auto=webp&s=9ee0d07846852867ab1190b905057950d0462300', 'width': 640}, 'variants': {}}]}
Free Preview of Qoder from Alibaba: The Future of Agentic Coding?
1
I took a deeper look at Qoder - the new Agentic Coding Platform from Alibaba. Check it out if you like: [https://youtu.be/4Zipfp4qdV4](https://youtu.be/4Zipfp4qdV4) What I liked: - It does what developers don't like to do, like writing detailed wikis and docs (Repo Wiki feature). - Before implementing any feature it writes a detailed spec about the feature, takes feedback from the developer, and updates the spec (just like devs use RFCs before implementing a feature). - It creates a semantic representation of the code to find the appropriate context to be used for context engineering. - Long-term memory that evolves based on developer preferences, coding styles, and past choices. What I didn't like: - It's only free during preview. Wish it were free forever (can't be greedy :-D ) - Couldn't get Quest mode to work. - Couldn't get the free web search to work. I really liked the Repo Wiki and Spec features in Quest Mode and I'll try to generate a wiki for all my projects during the free preview ;-) Did you try it? What are your impressions?
2025-08-24T14:49:32
https://www.reddit.com/r/LocalLLaMA/comments/1myxkd9/free_preview_of_qoder_from_alibaba_the_future_of/
NoobMLDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myxkd9
false
null
t3_1myxkd9
/r/LocalLLaMA/comments/1myxkd9/free_preview_of_qoder_from_alibaba_the_future_of/
false
false
self
1
null
Is this Local enough? Qwen3 4B Q4K_M on llama.cpp on Android (Snapdragon 8 gen1)
0
https://preview.redd.it/…3f0600c3c15
2025-08-24T14:46:20
https://www.reddit.com/r/LocalLLaMA/comments/1myxhda/is_this_local_enough_qwen3_4b_q4k_m_on_llamacpp/
Not4Fame
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myxhda
false
null
t3_1myxhda
/r/LocalLLaMA/comments/1myxhda/is_this_local_enough_qwen3_4b_q4k_m_on_llamacpp/
false
false
https://a.thumbs.redditm…vldLdU8pGwN8.jpg
0
null
Which local model are you currently using the most? What’s your main use case, and why do you find it good?
94
.
2025-08-24T14:32:16
https://www.reddit.com/r/LocalLLaMA/comments/1myx4l5/which_local_model_are_you_currently_using_the/
Namra_7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myx4l5
false
null
t3_1myx4l5
/r/LocalLLaMA/comments/1myx4l5/which_local_model_are_you_currently_using_the/
false
false
self
94
null
looking for lightweight open source llms with vision capability (<2b params)
2
Hello peeps, I'm trying to build a feature in my app where the llm receives a cropped image of a paragraph containing quotes from the app and extracts those quotes accurately from the paragraph. I need something very lightweight (under 2b parameters) so it can be hosted on a small server at low cost, preferably open source and with decent multimodal support. Any recommendations or links to such models on huggingface or elsewhere?
2025-08-24T14:22:11
https://www.reddit.com/r/LocalLLaMA/comments/1mywvv3/looking_for_lightweight_open_source_llms_with/
AgreeableVanilla7193
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mywvv3
false
null
t3_1mywvv3
/r/LocalLLaMA/comments/1mywvv3/looking_for_lightweight_open_source_llms_with/
false
false
self
2
null
LLM to create playlists based on criteria?
2
I was thinking this might be a good use for me. I usually ask "web apps" like chatgpt, deepseek, or gemini to recommend music based on a musician, for example, or to put together a historical "tour" of a musical form, the fugue, the sonata, or perhaps a specific instrument (what's a must-listen to the violin? What's rarer? And rarer still? And in this culture? And in that one?). For example, a few days ago I asked about Paganini. I've only heard his 24 caprices. What album can you recommend for further listening? And, fundamentally, which artists! (Because music apps always recommend teddy bear-like albums, or "relaxing music," albums with artists of perhaps dubious performance.) For example, right now I'm listening to Ysaÿe and I started by asking what would be a good tour of his work, and, fundamentally, which album/artists are renowned. I use Tidal, and it has a Tidal API for which I once wrote a script to create playlists. Could a local LLM (running on an 8GB VRAM + 32GB CPU machine) create playlists directly in Tidal based on a criterion? Or at least create a script that does this? (without having to debug the code every time) Because obviously it'll first have to be able to find out if the artist's album is on Tidal, etc. **TL;DR**: Suggest and create playlists in a music service based on a criterion.
2025-08-24T14:16:20
https://www.reddit.com/r/LocalLLaMA/comments/1mywqkp/llm_to_create_playlists_based_on_criteria/
9acca9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mywqkp
false
null
t3_1mywqkp
/r/LocalLLaMA/comments/1mywqkp/llm_to_create_playlists_based_on_criteria/
false
false
self
2
null
Recommendations for using a Ryzen 5 PRO 4650U for Code Assistant
1
Hi, I have some experience using Ollama and running models locally. Lately I've been experimenting with code agents and would like to run one on a secondary machine. I know it won't be fast, but I really wouldn't mind keeping it busy and checking what it did when it's finished. I have a Ryzen 5 PRO 4650U notebook with 32GB of RAM, and I'm comfortable using Linux if necessary. What software do you recommend to make the most of the processor's capabilities?
2025-08-24T14:13:43
https://www.reddit.com/r/LocalLLaMA/comments/1mywo8s/recommendations_for_using_a_ryzen_5_pro_4650u_for/
pablo2m
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mywo8s
false
null
t3_1mywo8s
/r/LocalLLaMA/comments/1mywo8s/recommendations_for_using_a_ryzen_5_pro_4650u_for/
false
false
self
1
null
Open-source experiment: LLM-Ripper
7
I've been working on a small tool that lets you surgically extract parts of attention heads, FFNs, and embeddings from a Transformer and connect them back together like LEGO.

- Want to test what a single head actually encodes? You can.
- Want to build a Frankenstein model from random heads? That's also possible.

This is still experimental, but the goal is to open up new ways to understand, recycle, and reuse a model's internal components.

Repository: [https://github.com/qrv0/LLM-Ripper](https://github.com/qrv0/LLM-Ripper)

I'd love to hear feedback, experiments, or contributions. If this sparks ideas, feel free to fork, test, or build on it.

https://preview.redd.it/szuwl6je0zkf1.png?width=1911&format=png&auto=webp&s=bc2c0621dc0e1814b75fa08b4bc1c81f634204a8
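For anyone wondering what "extracting a head" means mechanically: in most Transformer implementations the per-head weights live as contiguous row blocks of the fused projection matrices, so pulling one head out is an indexing exercise. An illustrative sketch of just that core idea (this is not LLM-Ripper's actual API; names are mine):

```python
def slice_head(weight: list[list[float]], head: int, head_dim: int) -> list[list[float]]:
    """Extract the rows of a (num_heads * head_dim, hidden) projection matrix
    that belong to a single attention head. Row blocks are contiguous:
    head h owns rows [h * head_dim, (h + 1) * head_dim)."""
    start = head * head_dim
    return weight[start:start + head_dim]
```

The interesting (hard) part a tool like this has to handle is everything around that slice: matching Q/K/V/O blocks, rotary embedding offsets, and re-wiring the output projection so a transplanted head produces sane activations in its new host.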
2025-08-24T13:37:59
https://www.reddit.com/r/LocalLLaMA/comments/1myvtia/opensource_experiment_llmripper/
-qrv0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myvtia
false
null
t3_1myvtia
/r/LocalLLaMA/comments/1myvtia/opensource_experiment_llmripper/
false
false
https://a.thumbs.redditm…y_1aL4BEh3Z0.jpg
7
null
Open Source Tool for Manga translation
24
There are some paid tools for manga translation, like INKR Studio, but they turn out to be pretty expensive. So our team at curify-ai built a custom manga translation tool and decided to open-source the prototype at: [https://huggingface.co/spaces/Curify/manga\_translation](https://huggingface.co/spaces/Curify/manga_translation)

The prototype features the following:

a. Horizontally cropping skinny manga images to improve their visibility.
b. Using PaddleOCR to detect text, and a polygon-based approach for inpainting. Both the OCR and the inpainting method still need improvement; Qwen might be a good candidate.
c. Translating with Microsoft Translator, with customization of the translated text allowed.
d. Rendering the translated image.

It's still a work in progress; welcome to use it and suggest improvements.
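Step (a) above, cutting a tall, skinny page into readable horizontal strips, boils down to computing crop boxes. A minimal sketch of that geometry (parameter names are my own, not the tool's; an `overlap` keeps speech bubbles from being bisected at strip boundaries):

```python
def strip_boxes(width: int, height: int, strip_height: int, overlap: int = 0) -> list[tuple]:
    """Compute (left, top, right, bottom) crop boxes that split a tall image
    into horizontal strips of at most strip_height pixels, with optional
    vertical overlap between consecutive strips."""
    boxes = []
    top = 0
    while top < height:
        bottom = min(top + strip_height, height)
        boxes.append((0, top, width, bottom))
        if bottom == height:
            break
        top = bottom - overlap
    return boxes
```

Each box can be passed straight to an image library's crop call, and the OCR stage then runs per strip instead of on one enormous canvas.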
2025-08-24T13:33:30
https://www.reddit.com/r/LocalLLaMA/comments/1myvpqv/open_source_tool_for_manga_translation/
New_Blueberry9858
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myvpqv
false
null
t3_1myvpqv
/r/LocalLLaMA/comments/1myvpqv/open_source_tool_for_manga_translation/
false
false
self
24
{'enabled': False, 'images': [{'id': 'FoWKdhLBCel83GYvy-xx8gK1gsvk9s7LBURVhk7UzVk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FoWKdhLBCel83GYvy-xx8gK1gsvk9s7LBURVhk7UzVk.png?width=108&crop=smart&auto=webp&s=a7ec7ce188c70ccb91064ffff0b1487348f58b34', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FoWKdhLBCel83GYvy-xx8gK1gsvk9s7LBURVhk7UzVk.png?width=216&crop=smart&auto=webp&s=63198ac7e6419549786bd3f8df4c230aff04455b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FoWKdhLBCel83GYvy-xx8gK1gsvk9s7LBURVhk7UzVk.png?width=320&crop=smart&auto=webp&s=1bb2d015b3794e7f14ea260aa512a629eed5b53d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FoWKdhLBCel83GYvy-xx8gK1gsvk9s7LBURVhk7UzVk.png?width=640&crop=smart&auto=webp&s=8fca263dbb1b356f0072f31a285413ab170f37e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FoWKdhLBCel83GYvy-xx8gK1gsvk9s7LBURVhk7UzVk.png?width=960&crop=smart&auto=webp&s=2f935f5bd09d58cf7536e365884ccc87e0e626f9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FoWKdhLBCel83GYvy-xx8gK1gsvk9s7LBURVhk7UzVk.png?width=1080&crop=smart&auto=webp&s=5083a5443ea51d0534e22272bfe77fabccbc22c2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FoWKdhLBCel83GYvy-xx8gK1gsvk9s7LBURVhk7UzVk.png?auto=webp&s=e02d1ee387fb031f433ab9802561688ecda216c9', 'width': 1200}, 'variants': {}}]}
Is it possible to run inference on an LLM using 2 different GPUs? For example a 3060 and a 3090
0
Thoughts?
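Short answer: yes, llama.cpp can split a model's layers across mismatched GPUs. A hedged sketch (flag spellings have shifted between versions, so check your build's `--help`):

```shell
# Split weights roughly 2:1 to match a 3090 (24 GB) + 3060 (12 GB) pair.
# --tensor-split takes proportions, not gigabytes, so "24,12" and "2,1"
# mean the same thing; device order follows CUDA enumeration.
./llama-server -m model.gguf -ngl 99 --tensor-split 24,12
```

The slower card caps per-layer speed for the layers it holds, but for fitting a model that doesn't fit in either card alone, this works fine.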
2025-08-24T13:17:11
https://www.reddit.com/r/LocalLLaMA/comments/1myvcap/is_it_possible_to_run_inference_on_an_llm_using_2/
Odd-Ordinary-5922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myvcap
false
null
t3_1myvcap
/r/LocalLLaMA/comments/1myvcap/is_it_possible_to_run_inference_on_an_llm_using_2/
false
false
self
0
null
Which local model are you currently using the most? What’s your main use case, and why do you find it good?
1
.
2025-08-24T13:11:15
https://www.reddit.com/r/LocalLLaMA/comments/1myv7a3/which_local_model_are_you_currently_using_the/
Namra_7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myv7a3
false
null
t3_1myv7a3
/r/LocalLLaMA/comments/1myv7a3/which_local_model_are_you_currently_using_the/
false
false
self
1
null
Llama.cpp Out of memory exception? Is there a way to completely bypass RAM and go straight to VRAM
0
I've been trying to tackle an annoying issue: if I set my Flow Z13 (32GB) to a 24GB VRAM / 8GB RAM split, I always hit an out-of-memory exception halfway through loading. This is particularly annoying because Qwen3 Coder 30B Q4 doesn't fit in 16GB of VRAM, so it ends up running on the CPU.

I've tried disabling mmap, and I've tried manually increasing the Windows virtual memory size, which are the suggestions I found in related questions here. Is there a way to decrease the batch size when you load into VRAM? I'm definitely missing something. I'm running the model in llama.cpp Vulkan and LM Studio (which I assume uses llama.cpp Vulkan under the hood).

Anytime I find a discussion about this issue, everyone just says "buy more RAM". That's concerning, because it would mean any split where VRAM is bigger than RAM is useless on these new AMD Strix Halo chips (the memory comes soldered).

If llama.cpp can't do this, what can? It might be a Windows issue, but from what I've seen from other people, this machine has too many quirks on Linux, so I'd prefer to stay on Windows until support gets more polished (I use Linux on my home PC anyway; I wanted Windows on this one). Please share any insights or suggestions I could try.
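On the batch-size question specifically: llama.cpp does expose knobs that shrink the buffers allocated at load/compute time. A hedged example (flag names vary a little by version, so verify against your build's `--help`; the model filename is a placeholder):

```shell
# -b / --batch-size: logical batch; -ub / --ubatch-size: physical batch.
# Smaller ubatch shrinks the compute buffers allocated up front.
# --no-mmap changes how the file is staged; with a tight host-RAM split
# it can reduce peak system memory during load.
./llama-server -m qwen3-coder-30b-q4.gguf -ngl 99 --no-mmap -b 512 -ub 128
```

It's worth lowering `-ub` first and remeasuring, since compute buffers (not just the weights) are often what pushes a near-fit load over the edge.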
2025-08-24T12:56:49
https://www.reddit.com/r/LocalLLaMA/comments/1myuvjo/llamacpp_out_of_memory_exception_is_there_a_way/
mmmohm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myuvjo
false
null
t3_1myuvjo
/r/LocalLLaMA/comments/1myuvjo/llamacpp_out_of_memory_exception_is_there_a_way/
false
false
self
0
null
Nexa adds support for new Qwen3 models on NPU
0
[https://sdk.nexa.ai/model](https://sdk.nexa.ai/model) Even the Android SDK is not compatible with the NPU. https://preview.redd.it/l2j6er8prykf1.png?width=1042&format=png&auto=webp&s=8b8346de59ed2ca42e5dfa7d22ddc52787c8a6f4
2025-08-24T12:52:47
https://www.reddit.com/r/LocalLLaMA/comments/1myusfs/nexa_adds_support_for_new_qwen3_models_un_npu/
Illustrious-Swim9663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myusfs
false
null
t3_1myusfs
/r/LocalLLaMA/comments/1myusfs/nexa_adds_support_for_new_qwen3_models_un_npu/
false
false
https://b.thumbs.redditm…B1caXZV7U5YE.jpg
0
null
local business payment by contractor
0
We completed work in March for a local contractor and claimed 100% of the payment, but were paid only 50%, three months ago. We've followed up on the remaining claim by email, WhatsApp message, and phone call, but there has been no response from the client, who keeps delaying our remaining 50%. How can we take legal action against this contractor?
2025-08-24T12:41:20
https://www.reddit.com/r/LocalLLaMA/comments/1myujmc/local_business_payment_by_contractor/
Sad_Coconut863
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myujmc
false
null
t3_1myujmc
/r/LocalLLaMA/comments/1myujmc/local_business_payment_by_contractor/
false
false
self
0
null
Has anyone succeeded in getting TTS working with RDNA3/ROCm?
3
I've tried ROCm forks of Coqui, XTTS, Zonos, and more at this point. I have the latest ROCm system packages, but I run all these applications in pyenv environments of the required Python version. Even after manually installing ROCm builds of torch, onnx, and such, I always seem to end up with some kind of pip dependency conflict. Can anyone offer some guidance? 7900 XTX, EndeavourOS.
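One pattern that helps with these conflicts: install the ROCm torch wheels into the pyenv environment first, then install the TTS package without letting pip resolve its dependencies (otherwise pip frequently drags in a CUDA torch build on top). A hedged sketch; adjust the `rocmX.Y` tag to match your installed ROCm, and `<tts-package>` is a placeholder:

```shell
# Pin torch to PyTorch's ROCm wheel index before anything else.
pip install torch torchaudio --index-url https://download.pytorch.org/whl/rocm6.0

# Then install the TTS project without deps, and add its remaining
# requirements by hand so nothing replaces the ROCm torch.
pip install <tts-package> --no-deps
```

It's tedious, but resolving the last few requirements manually is usually faster than untangling a full pip backtracking conflict across torch/onnx pins.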
2025-08-24T12:35:48
https://www.reddit.com/r/LocalLLaMA/comments/1myufgb/has_anyone_succeeded_in_getting_tts_working_with/
Jawzper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myufgb
false
null
t3_1myufgb
/r/LocalLLaMA/comments/1myufgb/has_anyone_succeeded_in_getting_tts_working_with/
false
false
self
3
null
Large(ish?) Document Recall
3
Hi LLaMAs,

I'm having some difficulty figuring out a good-enough (I won't use the word optimal) workflow for a project to help with my network engineering day job. I have the following documents I want to turn into a knowledge base:

- 1x 4000-page PDF 'admin guide' (AG)
- ~30x 200-page release notes (RN)
- ~100x 2-5 page 'transfer of information' documents (TOI)
- ~20x 5000-line router configs

The AG has the most detail on how to implement a feature, config examples, etc. The TOI documents are per feature and have a little more context about when/why you might want to use a specific feature. The RN have bugs (known & resolved), a brief list of new features, and compatibility information.

I have some old Dell R630s w/ 384GB RAM, and a workstation with a 7950X, 128GB RAM, and an RTX 3090, as platforms for a good proof of concept. Budget is maybe $10k for a production local system (it would have to run other LLM tasks too).

With that background set, let's detail what I would like it to do:

- Load new RN/TOI as they are released every couple of months.
- Answer strategic design questions: "Would feature X solve problem Y? Would that have a knock-on effect on any other features we are using?"
- Query known issues and their resolutions.
- Determine which release a feature was introduced in.
- Collaborate on building a design config, and the implementation steps to get there.
- Provide diagnostic information to assist in debugging.

Accuracy of recall is paramount, above speed, but I'd like at least 5 tok/s, especially in production. Is this feasible? What recommendations do you have for building the workflow? I have a basic understanding of RAG, but it doesn't seem like the right solution here, as there's potentially so much context to retrieve. Has anyone got a similar project I can take a look at? Recommendations for models to try this with?

If you suggest building my own training set: any guides on how to do this effectively? Thanks LLaMAs!
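Whatever retrieval approach you land on, tagging every chunk with its document type (AG/RN/TOI) and release at ingest time pays for itself: "which release introduced feature X" becomes a metadata filter before any similarity search, which directly serves the accuracy requirement. A minimal sketch of that ingest step (function and field names are mine, not from any framework):

```python
def chunk_with_metadata(text: str, doc_type: str, release: str,
                        chunk_chars: int = 2000, overlap: int = 200) -> list[dict]:
    """Split a document into overlapping character chunks, tagging each
    with its source type and release so retrieval can filter on metadata
    before (or instead of) embedding similarity."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_chars, len(text))
        chunks.append({
            "text": text[start:end],
            "doc_type": doc_type,   # "AG", "RN", or "TOI"
            "release": release,
        })
        if end == len(text):
            break
        start = end - overlap       # overlap avoids splitting facts in two
    return chunks
```

For the router configs, chunking by config stanza rather than character count would likely work better, since stanza boundaries carry meaning; the metadata-tagging idea stays the same.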
2025-08-24T12:33:40
https://www.reddit.com/r/LocalLLaMA/comments/1myudwp/largeish_document_recall/
netvyper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1myudwp
false
null
t3_1myudwp
/r/LocalLLaMA/comments/1myudwp/largeish_document_recall/
false
false
self
3
null
Apple M3 Ultra w/28-Core CPU, 60-Core GPU (256GB RAM) Running Deepseek-R1-UD-IQ1_S (140.23GB)
74
I've seen a lot of discussion recently about the performance of the Apple Studios with large models, so I thought I'd share actual data from about a month of usage in our household. This is mainly used by the non-me part of our household, so it sits nice and stable and just runs Deepseek 24/7, whereas my personal rig is constantly being swapped between different things I'm working on.

The Apple Studio replaced the 10xP100 rig I had previously built for this purpose, and I have to say, for what we're using it for, it's been a godsend. It's much, much faster, can load larger models, has a much lower power footprint, and it was just... so easy to get up and running. Honestly, it felt a bit like cheating after the hell the P100 rig put me through.

Anyway, actual numbers:

| Metric | Value |
|---|---|
| Total logged requests | 161 |
| Context average | 643.72 tokens |
| Average prompt eval speed | 64.73 tokens/second |
| Average tokens generated | 343.16 |
| Average generation speed | 13.97 tokens/second |

My personal opinion is that if all you're going to do is inferencing, it's a great option. I absolutely loathe the Mac GUI, and my constant attempts to Ctrl-C/Ctrl-V are infuriating, but other than that... NO RAGRETS.
2025-08-24T11:59:59
https://www.reddit.com/gallery/1mytpf1
Mass2018
reddit.com
1970-01-01T00:00:00
0
{}
1mytpf1
false
null
t3_1mytpf1
/r/LocalLLaMA/comments/1mytpf1/apple_m3_ultra_w28core_cpu_60core_gpu_256gb_ram/
false
false
https://b.thumbs.redditm…P8X8WNQAbYyk.jpg
74
null
Do you still use mikupad or is there a replacement?
19
Mikupad was my go-to tool for generating text with the option to show alternative tokens. This is especially useful for getting a feel of a model's preferences, writing stories, hacking context, or just working with non-conversational tasks in general. However, it has not been updated for a while, and although still fully functional, I actually had to revert to an earlier commit to make alternative tokens work, as the last commit broke the function, and the prospect of this function breaking again with no fix is not reassuring. Has anyone found a good alternative for mikupad, or is it still the best tool we have for now? In case this is not clear enough, by "alternative tokens" I mean the ability to see the top K options at each step of the generation, and in mikupad you can even click any of them and restart generation using the selected choice as the last input.
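For anyone building (or patching) such a tool: the backend side of "alternative tokens" is just the per-step token distribution; llama.cpp's server, for instance, can return per-token candidate probabilities alongside the completion (check your version's server docs for the exact request field). The UI-facing transform is tiny; a sketch with made-up logits:

```python
import math

def top_k_alternatives(logits: dict[str, float], k: int = 5) -> list[tuple[str, float]]:
    """Given per-token logits for one generation step, return the top-k
    candidate tokens with their softmax probabilities, i.e. the data an
    alternative-token UI renders and lets you click to branch from."""
    m = max(logits.values())                       # subtract max for stability
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    ranked = sorted(exps.items(), key=lambda kv: kv[1], reverse=True)
    return [(t, e / z) for t, e in ranked[:k]]
```

Branching on a clicked alternative then just means appending that token to the prompt and re-requesting generation from there, which is exactly what mikupad does.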
2025-08-24T11:52:58
https://www.reddit.com/r/LocalLLaMA/comments/1mytkpp/do_you_still_use_mikupad_or_is_there_a_replacement/
aeroumbria
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mytkpp
false
null
t3_1mytkpp
/r/LocalLLaMA/comments/1mytkpp/do_you_still_use_mikupad_or_is_there_a_replacement/
false
false
self
19
null
What are my best options for using Video Understanding Vision Language Models?
8
Hi Reddit, I am working on a project that uses VLMs to analyse high-fps tennis matches. I am currently using Google Gemini 2.5 Pro; however, it is limited to 1 fps for videos above 20MB, and I am not able to finetune it. I have been looking at benchmarks and have seen SALMONN 7B + PEFT (on top of Qwen2.5), and now there is VLM 4.5, which I tried via the online demo, but it didn't get good results; maybe it was confused by the FPS, etc. What is the current best strategy for using a VLM to understand video at high FPS (5-10 fps)?
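One thing that helps regardless of model choice is controlling frame sampling yourself instead of relying on each API's internal downsampling: decode the clip locally, pick frame indices at your target rate, and send those frames (with timestamps in the prompt) so the model can't be "confused by the FPS". The index math is simple; a sketch:

```python
def sample_frame_indices(total_frames: int, source_fps: float, target_fps: float) -> list[int]:
    """Pick which frame indices of a clip to feed a VLM so the effective
    rate is target_fps, regardless of the source's native fps."""
    step = source_fps / target_fps   # frames to skip between samples
    indices = []
    t = 0.0
    while round(t) < total_frames:
        indices.append(round(t))
        t += step
    return indices
```

For 5-10 fps over a whole rally that's a lot of frames per request, so pairing this with short clip windows (one shot or one point per request) usually works better than sending the full match.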
2025-08-24T11:49:47
https://www.reddit.com/r/LocalLLaMA/comments/1mytilm/what_are_my_best_options_for_using_video/
LivingMNML
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mytilm
false
null
t3_1mytilm
/r/LocalLLaMA/comments/1mytilm/what_are_my_best_options_for_using_video/
false
false
self
8
null