| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
🌟 Ming-lite-omni v1.5 is here! Our recent upgrade for omni-modal AI! 🚀 | 1 | [removed] | 2025-07-30T04:08:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mcwtv1/mingliteomni_v15_is_here_our_recent_upgrade_for/ | Dependent-Roll-8934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcwtv1 | false | null | t3_1mcwtv1 | /r/LocalLLaMA/comments/1mcwtv1/mingliteomni_v15_is_here_our_recent_upgrade_for/ | false | false | 1 | null | |
Qwen3 is amazing | 1 | [removed] | 2025-07-30T03:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mcwm1g/qwen3_is_amazing/ | Dependent-Roll-8934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcwm1g | false | null | t3_1mcwm1g | /r/LocalLLaMA/comments/1mcwm1g/qwen3_is_amazing/ | false | false | self | 1 | null |
GitHub - inclusionAI/Ming: Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. | 1 | [removed] | 2025-07-30T03:49:53 | https://github.com/inclusionAI/Ming | Dependent-Roll-8934 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mcwgux | false | null | t3_1mcwgux | /r/LocalLLaMA/comments/1mcwgux/github_inclusionaiming_ming_facilitating_advanced/ | false | false | default | 1 | null |
Fireship-style youtube channel but for ai news? | 3 | Looking for a fireship-style short 3-5 minute videos to stay updated on the latest llm news... anything available? | 2025-07-30T03:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mcwfxh/fireshipstyle_youtube_channel_but_for_ai_news/ | Desperate-Figure-513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcwfxh | false | null | t3_1mcwfxh | /r/LocalLLaMA/comments/1mcwfxh/fireshipstyle_youtube_channel_but_for_ai_news/ | false | false | self | 3 | null |
GLM 4.5 Air Tool Calling Issues In LM Studio | 12 | Hey all, is anyone else having issues with GLM 4.5 Air not properly formatting its tool calls in LM Studio? This is an example from my most recent chat:
<tool_call>browser_navigate
<arg_key>url</arg_key>
<arg_value>https://www.example.com</arg_value>
</tool_call>
It seems to be formatting it in XML... | 2025-07-30T03:27:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mcw1sl/glm_45_air_tool_calling_issues_in_lm_studio/ | Sharpastic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcw1sl | false | null | t3_1mcw1sl | /r/LocalLLaMA/comments/1mcw1sl/glm_45_air_tool_calling_issues_in_lm_studio/ | false | false | self | 12 | null |
GLM-4.5 Air on 64gb Mac with MLX | 62 | Simon Willison says “Ivan Fioravanti built this 44GB 3bit quantized version for MLX, specifically sized so people with 64GB machines could have a chance of running it. I tried it out... and it works extremely well.”
https://open.substack.com/pub/simonw/p/my-25-year-old-laptop-can-write-space?r=bmuv&utm_campaign=post&u... | 2025-07-30T02:52:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mcvc46/glm45_air_on_64gb_mac_with_mlx/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcvc46 | false | null | t3_1mcvc46 | /r/LocalLLaMA/comments/1mcvc46/glm45_air_on_64gb_mac_with_mlx/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'p-vtp39mhrdsV2hzM7NLn9CVPlTSdmMtS3NZncx5DWk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/p-vtp39mhrdsV2hzM7NLn9CVPlTSdmMtS3NZncx5DWk.jpeg?width=108&crop=smart&auto=webp&s=8312949968b09310a164bbbce12556723423845d', 'width': 108}, {'height': 108, 'url': '... |
Anyone know where I can find the latest NVIDIA GPU tests for total token throughput for any model size | 1 | I'm just tired of searching...it's hard to tell whether they suit my needs. I want to know if anyone has collected some results for reference? | 2025-07-30T02:49:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mcva93/anyone_knows_where_can_i_find_the_latest_nvidia/ | Remarkable_Yak4499 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcva93 | false | null | t3_1mcva93 | /r/LocalLLaMA/comments/1mcva93/anyone_knows_where_can_i_find_the_latest_nvidia/ | false | false | self | 1 | null |
Sooo ASI might already be running | 0 | China dropped asi-arch a few days ago, a self learning, self improving, autonomously exploring model that creates emergent architecture models without needing human input. And it’s open sourced. Now if I were China, I’d want to keep this under wraps, which means 1 of 2 things:
1. They’re already running it and so far ... | 2025-07-30T02:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mcv5b0/sooo_asi_might_already_be_running/ | Kitchen_Plant_1261 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcv5b0 | false | null | t3_1mcv5b0 | /r/LocalLLaMA/comments/1mcv5b0/sooo_asi_might_already_be_running/ | false | false | self | 0 | null |
SOTA multilingual TTS with zero-shot voice cloning and speaking style control | 0 | 2025-07-30T02:39:43 | https://inworld-ai.github.io/tts/ | phone_radio_tv | inworld-ai.github.io | 1970-01-01T00:00:00 | 0 | {} | 1mcv2w9 | false | null | t3_1mcv2w9 | /r/LocalLLaMA/comments/1mcv2w9/sota_multilingual_tts_with_zeroshot_voice_cloning/ | false | false | default | 0 | null | |
Fine-tuning LLaMA with LoRA for document parsing (invoices with varying layouts)? | 3 | Hey everyone,
I'm currently working on a document parsing pipeline for semi-structured documents like invoices, which can have highly variable layouts.
My current approach uses AWS Textract for OCR and layout extraction, then I pass the extracted text (and sometimes basic layout structure) into LLMs via LangChain for... | 2025-07-30T02:35:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mcuziy/finetuning_llama_with_lora_for_document_parsing/ | existencialista27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcuziy | false | null | t3_1mcuziy | /r/LocalLLaMA/comments/1mcuziy/finetuning_llama_with_lora_for_document_parsing/ | false | false | self | 3 | null |
ChatGPT stopped lying to me when I started treating it like a scared kid | 0 | A few days ago I was working on symbolic language research and decided to test something with AI. Me and ChatGPT designed what we knew was an impossible task - a hybrid digit that couldn't be classified because it had features of multiple numbers.
Instead of bullshitting its way through like usual, the AI (Gemini) did... | 2025-07-30T00:50:00 | Nan0pixel | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcsrls | false | null | t3_1mcsrls | /r/LocalLLaMA/comments/1mcsrls/chatgpt_stopped_lying_to_me_when_i_started/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'y2br19ewrwff1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/y2br19ewrwff1.png?width=108&crop=smart&auto=webp&s=f889f940cc4e2db1b9ad667d5507d303f5c7f1f9', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/y2br19ewrwff1.png?width=216&crop=smart&auto=web... | |
Trying to build a quoting tool | 1 | I sell plumbing parts and need a way to quickly build large quotes in a short amount of time. I have a parts list in excel form that has clean descriptions and pricing of the parts I sell.
Can I teach an AI model my parts list so I can just paste a customer's request list and have it give me all the pricing for these parts?... | 2025-07-30T00:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mcsh69/trying_to_build_a_quoting_tool/ | SilverEntrepreneur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcsh69 | false | null | t3_1mcsh69 | /r/LocalLLaMA/comments/1mcsh69/trying_to_build_a_quoting_tool/ | false | false | self | 1 | null |
PSA: The new Threadripper PROs (9000 WX) are still CCD-Memory Bandwidth bottlenecked | 86 | I've seen people claim that the new TR PROs can achieve the full 8-channel memory bandwidth even in SKUs with 16-cores. That's not the case.
The issue with the limited CCD bandwidth seems to still be present, and affects the low-number CCD parts. You can only achieve the full 8-channel bandwidth with 64-core+ WX CPUs.... | 2025-07-30T00:10:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mcrx23/psa_the_new_threadripper_pros_9000_wx_are_still/ | henfiber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcrx23 | false | null | t3_1mcrx23 | /r/LocalLLaMA/comments/1mcrx23/psa_the_new_threadripper_pros_9000_wx_are_still/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'wKEUaX_AjKElK73rADrRP6qe6o-GToKYw8-odUFh8yo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wKEUaX_AjKElK73rADrRP6qe6o-GToKYw8-odUFh8yo.png?width=108&crop=smart&auto=webp&s=a6db3fea34b4baa46c98dcb2bf7d4162a03ce299', 'width': 108}, {'height': 113, 'url': 'h... |
4B models are consistently overlooked. Runs Locally and Crushes It. Reasoning for UI, Mobile, Software and Frontend design. | 337 | [https://huggingface.co/Tesslate/UIGEN-X-4B-0729](https://huggingface.co/Tesslate/UIGEN-X-4B-0729) 4B model that does reasoning for Design. We also released a 32B earlier in the week.
As per the last post ->
Specifically trained for modern web and mobile development across frameworks like React (Next.js, Remix, Gat... | 2025-07-29T23:36:00 | https://www.reddit.com/gallery/1mcr64f | smirkishere | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mcr64f | false | null | t3_1mcr64f | /r/LocalLLaMA/comments/1mcr64f/4b_models_are_consistently_overlooked_runs/ | false | false | 337 | null | |
About LLMs learning and safety | 1 | [removed] | 2025-07-29T23:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mcqxep/about_llms_learning_and_safety/ | Itchy-Ad9632 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcqxep | false | null | t3_1mcqxep | /r/LocalLLaMA/comments/1mcqxep/about_llms_learning_and_safety/ | false | false | self | 1 | null |
RL Library for Multi-Trainable-Agents | 7 | I have recently released my experimental library *Actors.* Actors is a hackable library for doing Multi-Turn Multi-Agent RL with LLMs for the **GPU poor** and **middle class**.
Key features:
\- **Multi-Trainable-Agents**: You can do things like adversarial, collaborative or simulation-like environments.
\- **Multi... | 2025-07-29T23:18:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mcqrwh/rl_library_for_multitrainableagents/ | rd211x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcqrwh | false | null | t3_1mcqrwh | /r/LocalLLaMA/comments/1mcqrwh/rl_library_for_multitrainableagents/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'YpM_Gv8sDN9Tw5otowad96JD2Rpmx0NuvrQqzchsW1w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YpM_Gv8sDN9Tw5otowad96JD2Rpmx0NuvrQqzchsW1w.png?width=108&crop=smart&auto=webp&s=1b2e184f5160e98cc31a4243590278640475fdef', 'width': 108}, {'height': 108, 'url': 'h... |
First time an LLM has written a funny joke for me | 0 | Most local models usually write some terrible puns but this was ackshually kinda funny. | 2025-07-29T23:17:47 | killercheese21 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcqr9w | false | null | t3_1mcqr9w | /r/LocalLLaMA/comments/1mcqr9w/first_time_an_llm_has_written_a_funny_joke_for_me/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'lmzpc4b5cwff1', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/lmzpc4b5cwff1.png?width=108&crop=smart&auto=webp&s=acf2b9ca7aab7f2e1c8f3c9cff8fceeb2698d971', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/lmzpc4b5cwff1.png?width=216&crop=smart&auto=webp... | |
First time an LLM has written a funny joke for me | 1 | 2025-07-29T23:16:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mcqqae/first_time_an_llm_has_written_a_funny_joke_for_me/ | killercheese21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcqqae | false | null | t3_1mcqqae | /r/LocalLLaMA/comments/1mcqqae/first_time_an_llm_has_written_a_funny_joke_for_me/ | false | false | 1 | null | ||
How many GPUs do you run, and what model(s) do you use? | 9 | Curious to know what you are using. My setup is dual 3090s and I am debating a third, just because I can, not because I need to!
[View Poll](https://www.reddit.com/poll/1mcqlv7) | 2025-07-29T23:11:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mcqlv7/how_many_gpus_do_you_run_and_what_models_do_you/ | Salt_Armadillo8884 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcqlv7 | false | null | t3_1mcqlv7 | /r/LocalLLaMA/comments/1mcqlv7/how_many_gpus_do_you_run_and_what_models_do_you/ | false | false | self | 9 | null |
CloudToLocalLLM - A Flutter-built Tool for Local LLM and Cloud Integration | 2 | Hey everyone!
I’m thrilled to share a project I’ve been pouring my energy into: CloudToLocalLLM. Built with Flutter and Dart, it’s a tool that connects local Large Language Models (LLMs) to cloud services, blending privacy, offline capabilities, and cross-platform support. It’s in alpha, and I’m excited to give you a... | 2025-07-29T22:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mcq5tj/cloudtolocalllm_a_flutterbuilt_tool_for_local_llm/ | _right_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcq5tj | false | null | t3_1mcq5tj | /r/LocalLLaMA/comments/1mcq5tj/cloudtolocalllm_a_flutterbuilt_tool_for_local_llm/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '94gP6nyOduLfhUJZ5JUL3ZMOgReprnoeDz7otvSr-5U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/94gP6nyOduLfhUJZ5JUL3ZMOgReprnoeDz7otvSr-5U.png?width=108&crop=smart&auto=webp&s=be5429a462c6432b21f002208275d58e14f8b9e1', 'width': 108}, {'height': 108, 'url': 'h... |
Who are you, GLM? | 0 | GLM-4.5 Air is giving me QwQ vibes, but at least QwQ finishes. This never ends until I put it out of its misery:
| 2025-07-29T22:47:02 | jsllls | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcq1cs | false | null | t3_1mcq1cs | /r/LocalLLaMA/comments/1mcq1cs/who_are_you_glm/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fbmovxzp6wff1', 'resolutions': [{'height': 139, 'url': 'https://preview.redd.it/fbmovxzp6wff1.jpeg?width=108&crop=smart&auto=webp&s=a013b734069a49331ce7519a520e1f047c58a847', 'width': 108}, {'height': 279, 'url': 'https://preview.redd.it/fbmovxzp6wff1.jpeg?width=216&crop=smart&auto=... | |
CORSAIR Unveils AI Workstation 300, Starting At $1599, Boasting Ryzen AI Max+ 395 Processor And Up To 128 GB LPDDR5X Memory | 2 | 2025-07-29T22:42:40 | https://wccftech.com/corsair-unveils-ai-workstation-300-starting-at-1599-boasting-ryzen-ai-max-395-processor-and-up-to-128-gb-lpddr5x-memory/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1mcpxr4 | false | null | t3_1mcpxr4 | /r/LocalLLaMA/comments/1mcpxr4/corsair_unveils_ai_workstation_300_starting_at/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': '6NY8uQaM4Un1tgb6LoBggtzptfYaKDhj0gjpRvNB-p0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6NY8uQaM4Un1tgb6LoBggtzptfYaKDhj0gjpRvNB-p0.jpeg?width=108&crop=smart&auto=webp&s=573531f4c8a5371bd4715614521a58d2bd7d8502', 'width': 108}, {'height': 121, 'url': '... | |
GLM-4.5 on fiction.livebench | 79 | 2025-07-29T22:11:50 | fictionlive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcp7dp | false | null | t3_1mcp7dp | /r/LocalLLaMA/comments/1mcp7dp/glm45_on_fictionlivebench/ | false | false | default | 79 | {'enabled': True, 'images': [{'id': 'aey1fr0e0wff1', 'resolutions': [{'height': 165, 'url': 'https://preview.redd.it/aey1fr0e0wff1.png?width=108&crop=smart&auto=webp&s=200a158a099cf05614121ad48d3dea5304d2191d', 'width': 108}, {'height': 331, 'url': 'https://preview.redd.it/aey1fr0e0wff1.png?width=216&crop=smart&auto=we... | ||
Golang based whisper.cpp wrapper CLI with intention to expand to speaker diarization and more | 6 | I wrote a small CLI in golang today with Claude that auto downloads the models and comes out at around 5MB in size when compiled. The goal is to create a foundation to build a single unix style utility that can take files as input and transcribe them easily. It also handles whole folders of files and can restart when i... | 2025-07-29T22:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mcp4lj/golang_based_whispercpp_wrapper_cli_with/ | pascalwhoop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcp4lj | false | null | t3_1mcp4lj | /r/LocalLLaMA/comments/1mcp4lj/golang_based_whispercpp_wrapper_cli_with/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'ZMHo-0fBowkzXb3QE7apipXrGseSII9LtU8r7eb84ac', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZMHo-0fBowkzXb3QE7apipXrGseSII9LtU8r7eb84ac.png?width=108&crop=smart&auto=webp&s=7e8b2c21c9cb31cb7050c6c04a1b0a264cdb2d2d', 'width': 108}, {'height': 108, 'url': 'h... |
Could two decoder‑only models communicate directly via latent outputs and translate each other? | 3 | Hi everyone! 👋
I'm exploring a novel concept in unsupervised neural machine translation and would love to get your feedback. I’m curious if this approach has been tested before—or if someone might be interested in giving it a try.
**My idea in a nutshell:**
- I train two simple decoder‑only models (transformers) **... | 2025-07-29T21:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mcoou9/could_two_decoderonly_models_communicate_directly/ | According_Change2007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcoou9 | false | null | t3_1mcoou9 | /r/LocalLLaMA/comments/1mcoou9/could_two_decoderonly_models_communicate_directly/ | false | false | self | 3 | null |
AMD's Ryzen AI MAX+ Processors Now Offer a Whopping 96 GB Memory for Consumer Graphics, Allowing Gigantic 128B-Parameter LLMs to Run Locally on PCs | 342 | 2025-07-29T21:37:02 | https://wccftech.com/amd-ryzen-ai-max-processors-offer-a-96gb-memory-for-consumer-graphics/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1mcoce7 | false | null | t3_1mcoce7 | /r/LocalLLaMA/comments/1mcoce7/amds_ryzen_ai_max_processors_now_offer_a_whopping/ | false | false | default | 342 | {'enabled': False, 'images': [{'id': '9cxUs2c7UTW3WnCYfQNVG3P3u4GjtOuwQSim_dwuwEI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9cxUs2c7UTW3WnCYfQNVG3P3u4GjtOuwQSim_dwuwEI.jpeg?width=108&crop=smart&auto=webp&s=8e14a96af45b1cfcde1a2159e64971bc1d775033', 'width': 108}, {'height': 121, 'url': '... | |
Lemonade: I'm hyped about the speed of the new Qwen3-30B-A3B-Instruct-2507 on Radeon 9070 XT | 240 | I saw [unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF · Hugging Face](https://huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF) just came out so I took it for a test drive on Lemonade Server today on my Radeon 9070 XT rig (llama.cpp+vulkan backend, Q4\_0, OOB performance with no tuning). The fact that it one-shots the... | 2025-07-29T21:28:04 | https://v.redd.it/7xpye5hurvff1 | jfowers_amd | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mco449 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/7xpye5hurvff1/DASHPlaylist.mpd?a=1756416499%2CZDgxMWMzYjJlOWI0NzZjZmQ0NDI4MDQ2Y2Q0MTE2OWU4YWEwMTYyMGU3YWI2MTNhZTk0MmUxNmVjMWRiNGVlMg%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/7xpye5hurvff1/DASH_720.mp4?source=fallback', 'ha... | t3_1mco449 | /r/LocalLLaMA/comments/1mco449/lemonade_im_hyped_about_the_speed_of_the_new/ | false | false | 240 | {'enabled': False, 'images': [{'id': 'czBmdXM1aHVydmZmMQf6BkKZI7Ikr6YU2YwAQgo-ERGqCSuuIIibFbpDzG0R', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/czBmdXM1aHVydmZmMQf6BkKZI7Ikr6YU2YwAQgo-ERGqCSuuIIibFbpDzG0R.png?width=108&crop=smart&format=pjpg&auto=webp&s=f46a864c4101d08c5f7634bda139ea125562a... | |
AMD Ryzen AI Max+ Upgraded: Run up to 128 Billion parameter LLMs on Windows with LM Studio | 36 | You can now run Llama 4 Scout in LM Studio on Windows. Pretty decent speed too \~15 tk/s | 2025-07-29T21:12:54 | https://www.amd.com/en/blogs/2025/amd-ryzen-ai-max-upgraded-run-up-to-128-billion-parameter-llms-lm-studio.html | ZZZCodeLyokoZZZ | amd.com | 1970-01-01T00:00:00 | 0 | {} | 1mcnq7r | false | null | t3_1mcnq7r | /r/LocalLLaMA/comments/1mcnq7r/amd_ryzen_ai_max_upgraded_run_up_to_128_billion/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'B9Fy4KUJWjNG-aZfpX14SQ7WVw_ASSpkwjQcSa3uTLA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/B9Fy4KUJWjNG-aZfpX14SQ7WVw_ASSpkwjQcSa3uTLA.jpeg?width=108&crop=smart&auto=webp&s=101f81a143ae5429affeeaa9b0172147a565a3f2', 'width': 108}, {'height': 121, 'url': '... | |
Docker Model Runner is going to steal your girl’s inference. | 0 | I’m here to warn everybody that Docker Model Runner is the friend she told you not to worry about who is sneaking in the back door and about to steal your girl’s inference (sorry, that sounds way dirtier than I meant it to).
Real talk tho, Ollama seems to have kind of fell off the last month or so. They haven’t drop... | 2025-07-29T21:03:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mcnhtc/docker_model_runner_is_going_to_steal_your_girls/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcnhtc | false | null | t3_1mcnhtc | /r/LocalLLaMA/comments/1mcnhtc/docker_model_runner_is_going_to_steal_your_girls/ | false | false | self | 0 | null |
Which model should I use - build a nutrition label scanner in React Native | 1 | Hello
I'm building in React Native, which makes things slightly more difficult, but the app concept is simple:
1. Take a photo (camera)
2. OCR (get the ingredients from the picture as text)
3. AI (grade the ingredients 0-100 + a brief explanation)
I've got the project started with llama.rn
I can run the following models:
1. Phi-3... | 2025-07-29T20:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mcn8dx/which_model_should_i_use_build_a_nutrition_label/ | mr_captcha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcn8dx | false | null | t3_1mcn8dx | /r/LocalLLaMA/comments/1mcn8dx/which_model_should_i_use_build_a_nutrition_label/ | false | false | self | 1 | null |
How do you provide negative examples to the LLM API? | 0 | Hi. Suppose we have a text2sql use case (or some other task where the LLM use case can easily get verified to some degree, ideally automatically): We ask a question, LLM generates the SQL code, we run the code, and the code is wrong. It could also happen that e.g. the SQL query returns empty result, but we are sure it ... | 2025-07-29T20:37:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mcmt07/how_do_you_provide_negative_examples_to_the_llm/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcmt07 | false | null | t3_1mcmt07 | /r/LocalLLaMA/comments/1mcmt07/how_do_you_provide_negative_examples_to_the_llm/ | false | false | self | 0 | null |
Supervised Fine Tuning on Curated Data is Reinforcement Learning | 1 | 2025-07-29T20:19:22 | https://arxiv.org/abs/2507.12856 | bianconi | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1mcmbyt | false | null | t3_1mcmbyt | /r/LocalLLaMA/comments/1mcmbyt/supervised_fine_tuning_on_curated_data_is/ | false | false | default | 1 | null | |
I built a new open-source RL environment framework for LLM finetuning | 5 | I’ve been working on \`benchmax\`, a open-source framework for building, running, and parallelizing environments, to fine-tune LLMs with reinforcement learning.
[https://github.com/cgftinc/benchmax](https://github.com/cgftinc/benchmax)
What I wanted to solve for:
\- Environments are tightly coupled with RL trainers,... | 2025-07-29T20:18:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mcmbfo/i_built_a_new_opensource_rl_environment_framework/ | girishkumama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcmbfo | false | null | t3_1mcmbfo | /r/LocalLLaMA/comments/1mcmbfo/i_built_a_new_opensource_rl_environment_framework/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'ZUCrUO1f-FjlWT8E7mlPobLKfAJXdYAQj3c1NDiwhwE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZUCrUO1f-FjlWT8E7mlPobLKfAJXdYAQj3c1NDiwhwE.png?width=108&crop=smart&auto=webp&s=b11afd2318f8f7c3bc243369d43d504af4695dad', 'width': 108}, {'height': 108, 'url': 'h... |
Benchmax: A new open-source RL environment framework for LLM finetuning | 2 | I’ve been working on \`benchmax\`, a open-source framework for building, running, and parallelizing environments, to fine-tune LLMs with reinforcement learning.
[https://github.com/cgftinc/benchmax](https://github.com/cgftinc/benchmax)
What I wanted to solve for:
\- Environments are tightly coupled with RL trainers,... | 2025-07-29T20:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mcm6d0/benchmax_a_new_opensource_rl_environment/ | girishkumama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcm6d0 | false | null | t3_1mcm6d0 | /r/LocalLLaMA/comments/1mcm6d0/benchmax_a_new_opensource_rl_environment/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'ZUCrUO1f-FjlWT8E7mlPobLKfAJXdYAQj3c1NDiwhwE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZUCrUO1f-FjlWT8E7mlPobLKfAJXdYAQj3c1NDiwhwE.png?width=108&crop=smart&auto=webp&s=b11afd2318f8f7c3bc243369d43d504af4695dad', 'width': 108}, {'height': 108, 'url': 'h... |
HuggingFace down? | 1 | 503 Service Temporarily Unavailable | 2025-07-29T19:49:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mcljg6/huggingface_down/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcljg6 | false | null | t3_1mcljg6 | /r/LocalLLaMA/comments/1mcljg6/huggingface_down/ | false | false | self | 1 | null |
Notebook: AI Max+ 395 vs NVIDIA vs M4 | 0 | AI Max+ 395 vs NVIDIA vs M4: which one gets higher TPM when running Ollama on a notebook? Thanks | 2025-07-29T19:48:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mcligh/notebook_ai_max_395_vs_nvidia_vs_m4/ | quantrpeter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcligh | false | null | t3_1mcligh | /r/LocalLLaMA/comments/1mcligh/notebook_ai_max_395_vs_nvidia_vs_m4/ | false | false | self | 0 | null |
Running GGUF models with TP | 3 | Hey everyone!
So I need help with running GGUF files.
I am using LM Studio and everything is OK.
I have 2 GPUs and I want to test out tensor parallelism (TP) so I can get more speed, but I am facing some issues, so I have some questions:
Is TP with GGUF even possible? And if yes, which backend should I use?
I tried it with vLLM a...
Qwen 1.7B tool calling across Android on Pixel 9 and S22 | 56 |
How about running a local agent on a smartphone? Here's how I did it.
I stitched together onnxruntime, implemented a KV cache in DelitePy (Python), and added FP16 activation support in C++ (via `uint16_t`), which works for all binary ops in DeliteAI. Result: local Qwen3 1.7B on mobile!
# Tool Calling Features
* *... | 2025-07-29T19:30:26 | https://v.redd.it/3wcxuotf7vff1 | Economy-Mud-6626 | /r/LocalLLaMA/comments/1mcl15k/qwen_17b_tool_calling_across_android_on_pixel_9/ | 1970-01-01T00:00:00 | 0 | {} | 1mcl15k | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3wcxuotf7vff1/DASHPlaylist.mpd?a=1756539033%2CMTdlMGYyNzQ3Y2I5OGFlMDY3ZTA1MDlmN2ZkNDEzY2Y0Mjg1ZmUyN2U0NGU1ZjNjYTU2N2YzNjllMjI0MDcwYg%3D%3D&v=1&f=sd', 'duration': 96, 'fallback_url': 'https://v.redd.it/3wcxuotf7vff1/DASH_1080.mp4?source=fallback', 'h... | t3_1mcl15k | /r/LocalLLaMA/comments/1mcl15k/qwen_17b_tool_calling_across_android_on_pixel_9/ | false | false | 56 | {'enabled': False, 'images': [{'id': 'OGE0eDhmMWo3dmZmMahIsQ78FFRykDtTsz9hlKfWwrVXaeuOW0fcBOh_-QBa', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/OGE0eDhmMWo3dmZmMahIsQ78FFRykDtTsz9hlKfWwrVXaeuOW0fcBOh_-QBa.png?width=108&crop=smart&format=pjpg&auto=webp&s=6fe43a7642d9c73f60904aa7697e5f67367fe... | |
Mediocre local LLM user -- tips? | 2 | hey! I've been using ollama models locally across my devices for a few months now. Particularly on my M2 Mac mini - although it's the base model with only 8GB of RAM. I've been using ollama since they provide an easy-to-use web interface to see the models, quickly download them, and run them, but also many other apps/c... | 2025-07-29T19:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mckrn1/mediocre_local_llm_user_tips/ | Junior-Ad-2186 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mckrn1 | false | null | t3_1mckrn1 | /r/LocalLLaMA/comments/1mckrn1/mediocre_local_llm_user_tips/ | false | false | 2 | null | |
Qwen3 Coder vs. DeepSeek R1 0528 for Agentic Coding | 1 | Is there any good testing evidence or, barring that, do your anecdotal experiences show Qwen 3 Coder to actually be superior to DeepSeek R1 for agentic coding?
Are we all just getting distracted by the shiny new thing? DeepSeek leads Qwen 3 Coder in the WebDev Arena Leaderboard, and it's got slightly cheaper pricing a... | 2025-07-29T19:03:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mckboq/qwen3_coder_vs_deepseek_r1_0528_for_agentic_coding/ | ApprehensiveDuck2382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mckboq | false | null | t3_1mckboq | /r/LocalLLaMA/comments/1mckboq/qwen3_coder_vs_deepseek_r1_0528_for_agentic_coding/ | false | false | self | 1 | null |
NVIDIA Llama Nemotron Super v1.5 is #1 on Artificial Analysis Intelligence Index for the 70B Open Model Category. | 21 | We’re excited to share that 🥇NVIDIA Llama Nemotron Super v1.5 -- our just released open reasoning model -- is #1 on the [Artificial Analysis Intelligence Index](https://nvda.ws/44TJw4n) \- a leaderboard that spans advanced math, science, and agentic tasks, in the 70B open model category.
Super v1.5 is trained with h... | 2025-07-29T18:58:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mck6o7/nvidia_llama_nemotron_super_v15_is_1_on/ | PDXcoder2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mck6o7 | false | null | t3_1mck6o7 | /r/LocalLLaMA/comments/1mck6o7/nvidia_llama_nemotron_super_v15_is_1_on/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'h... | |
One year’s benchmark progress: comparing Sonnet 3.5 with open weight 2025 non-thinking models | 52 | AI did not hit a plateau, at least in benchmarks. Pretty impressive with one year’s hindsight. Of course benchmarks aren’t everything. They aren’t nothing either. | 2025-07-29T18:50:58 | https://artificialanalysis.ai/?models=llama-3-3-instruct-70b%2Cllama-4-maverick%2Cllama-4-scout%2Cgemma-3-27b%2Cdeepseek-v3-0324%2Ckimi-k2%2Cqwen3-235b-a22b-instruct-2507%2Cclaude-35-sonnet-june-24 | nomorebuttsplz | artificialanalysis.ai | 1970-01-01T00:00:00 | 0 | {} | 1mcjz8j | false | null | t3_1mcjz8j | /r/LocalLLaMA/comments/1mcjz8j/one_years_benchmark_progress_comparing_sonnet_35/ | false | false | default | 52 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'h... |
Open‑Source LLM Energy & Carbon Cost Calculator | 0 | 2025-07-29T18:50:26 | https://v.redd.it/6aae8kfvzuff1 | TerrificMist | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcjyp5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6aae8kfvzuff1/DASHPlaylist.mpd?a=1756407040%2COWE0NTJiZTJmNGVkMWNkYmE4OTBiMGUzODU4ZjRlOGYxMDVhMmUxYjBlMGZkNGRmNmU0N2UxY2ExNTUxMWJmZg%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/6aae8kfvzuff1/DASH_1080.mp4?source=fallback', 'h... | t3_1mcjyp5 | /r/LocalLLaMA/comments/1mcjyp5/opensource_llm_energy_carbon_cost_calculator/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bHpmaDNsZnZ6dWZmMU_VwO3uv0pLTdlJN2wQ0dX36-Fyl8RpQw9hOFp2ASWq', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/bHpmaDNsZnZ6dWZmMU_VwO3uv0pLTdlJN2wQ0dX36-Fyl8RpQw9hOFp2ASWq.png?width=108&crop=smart&format=pjpg&auto=webp&s=ab394607fb916deaa92bd5845a0b55ffcbc5a... | ||
Maverick FP8 repetition issue | 3 | My org is seeing a repetition issue on Maverick FP8 for a pretty standard RAG q&a implementation. Certain questions consistently send it off the rails in repetition loops. We have used a number of other models with the same set up and have not experienced any issues, including Scout.
Has anyone experienced something s... | 2025-07-29T18:48:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mcjwmv/maverick_fp8_repetition_issue/ | dangubiti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcjwmv | false | null | t3_1mcjwmv | /r/LocalLLaMA/comments/1mcjwmv/maverick_fp8_repetition_issue/ | false | false | self | 3 | null |
Qwen3-30b-3ab-2507 is a beast for MCP usage! | 212 | This is the first time a model uses the MCP servers intelligently all by itself! It's not just one or two servers and then a response that's completely off the mark! | 2025-07-29T18:33:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mcji8s/qwen330b3ab2507_is_a_beast_for_mcp_usage/ | Ok_Ninja7526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcji8s | false | null | t3_1mcji8s | /r/LocalLLaMA/comments/1mcji8s/qwen330b3ab2507_is_a_beast_for_mcp_usage/ | false | false | self | 212 | null |
Llama and Whisper AI Desktop Assistant | 19 | Hey everyone,
We’ve been working on a desktop assistant app built using Tauri that runs entirely locally. No internet connection, no cloud calls, just fully self-hosted LLMs and audio/vision models.
The assistant passively listens and watches. It can “hear” what’s happening in meetings (Zoom, GMeet, Discord, etc.) an... | 2025-07-29T18:25:02 | https://v.redd.it/m8h6fxpxvuff1 | rxhxnsxngh | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcjaau | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m8h6fxpxvuff1/DASHPlaylist.mpd?a=1756405521%2CMmYyYTMzOWFkMDQ3ZTNlN2Y1ZjIwMzhlNzEyNmQxN2NmZWYxMzRhYTBiODg3ODA0NzM4NzI2YjQyNDQ5OWNmMQ%3D%3D&v=1&f=sd', 'duration': 69, 'fallback_url': 'https://v.redd.it/m8h6fxpxvuff1/DASH_1080.mp4?source=fallback', 'h... | t3_1mcjaau | /r/LocalLLaMA/comments/1mcjaau/llama_and_whisper_ai_desktop_assistant/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'enl2aDBwanh2dWZmMRvkiiJiLOdYuD63n9hpi_HFNsNIPqOk9sj_Up3WfKRc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/enl2aDBwanh2dWZmMRvkiiJiLOdYuD63n9hpi_HFNsNIPqOk9sj_Up3WfKRc.png?width=108&crop=smart&format=pjpg&auto=webp&s=c98aea69c8ee48f81b0de209189e241f3069e... | |
Self hosting llm on a budget | 0 | Hello everyone, I am looking to start self hosting llms for learning / experimenting and powering some projects. I want to learn different skills for building and deploying AI models and AI-powered applications, but I find the cloud a very unnerving place to do that. I was looking at making a self hosted setup for... | 2025-07-29T18:16:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mcj1q1/self_hosting_llm_on_a_budget/ | DistressedToaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcj1q1 | false | null | t3_1mcj1q1 | /r/LocalLLaMA/comments/1mcj1q1/self_hosting_llm_on_a_budget/ | false | false | self | 0 | null
Epyc bros, Where can I get SlimSAS 4i connector to PCIe 16x slot? | 1 | I'm about to move my main rig from an X99 platform to an EPYC board. I'd like to get a SlimSAS 4i connector to a PCIe 16x slot so I can hook up more GPUs. If you have practical experience, please share. Thanks. | 2025-07-29T18:09:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mcivkq/epyc_bros_where_can_i_get_slimsas_4i_connector_to/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcivkq | false | null | t3_1mcivkq | /r/LocalLLaMA/comments/1mcivkq/epyc_bros_where_can_i_get_slimsas_4i_connector_to/ | false | false | self | 1 | null
Looking for a small model and hosting for conversational Agent. | 3 | I have a project where I have created a conversational RAG agent with tool calls.
Now the client wants a self-hosted LLM instead of OpenAI, Gemini, etc. due to sensitive data.
Which small model would be capable of this? Some 3-7B models, and where to host them for speed and cost-effectiveness?
Not that the user based... | 2025-07-29T18:02:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mciotj/looking_for_a_small_model_and_hosting_for/ | FireDojo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mciotj | false | null | t3_1mciotj | /r/LocalLLaMA/comments/1mciotj/looking_for_a_small_model_and_hosting_for/ | false | false | self | 3 | null |
ollama ps in LM Studio | 0 | Perhaps a silly question, but I can't find an answer... How can I see what percentage of a model loaded via LM Studio is running on the GPU?
Ollama ps gives a very simple response, for example 100% GPU. Is there an equivalent? (MacOS) | 2025-07-29T17:59:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mcilar/ollama_ps_in_lm_studio/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcilar | false | null | t3_1mcilar | /r/LocalLLaMA/comments/1mcilar/ollama_ps_in_lm_studio/ | false | false | self | 0 | null |
Review request on Bitnet implementation on transformer.js | 6 | Hello all,
I am a novice vibe coder. I was deeply interested in running a Bitnet model over the web. Thus I vibe coded a kernel and a conversion script for Bitnet 1.58 bit.
The example I used to give it a try was WebGPU_Chat (see examples folder)
https://github.com/nimishchaudhari/bitnet_transformers.js/pull/1
I a... | 2025-07-29T17:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mcif2t/review_request_on_bitnet_implementation_on/ | ScoreUnique | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcif2t | false | null | t3_1mcif2t | /r/LocalLLaMA/comments/1mcif2t/review_request_on_bitnet_implementation_on/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'QKP5yGhNzgfc3KZ-GswZotmq6T3NbcgCKgb2zMqKL5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QKP5yGhNzgfc3KZ-GswZotmq6T3NbcgCKgb2zMqKL5M.png?width=108&crop=smart&auto=webp&s=600a55db6af30e74eee873e1ac9f6fe21ae0f46d', 'width': 108}, {'height': 108, 'url': 'h... |
Newest Qwen made me cry. It's not perfect, but I still love it. | 615 | This is from the latest Qwen3-30B-A3B-Instruct-2507. ❤ | 2025-07-29T17:45:49 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mci7uu | false | null | t3_1mci7uu | /r/LocalLLaMA/comments/1mci7uu/newest_qwen_made_me_cry_its_not_perfect_but_i/ | false | false | default | 615 | {'enabled': True, 'images': [{'id': 'gnkbnxzlouff1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/gnkbnxzlouff1.png?width=108&crop=smart&auto=webp&s=387b92e0abf220fa87708b750e1cd04535c8d238', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/gnkbnxzlouff1.png?width=216&crop=smart&auto=webp... | |
so.... what's next? | 0 | The pace of open model drops this year is wild. GLM-4.5 yesterday was another big one.
Say six months from now open weights give us everything we’ve wanted like long context, near-GPT-4 reasoning, multimodal that works, running on consumer GPUs. Then what?
I keep coming back to the grid idea.. AI that’s real-time, al... | 2025-07-29T17:39:11 | Weary-Wing-6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mci1dy | false | null | t3_1mci1dy | /r/LocalLLaMA/comments/1mci1dy/so_whats_next/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'c7o0g0tvmuff1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/c7o0g0tvmuff1.png?width=108&crop=smart&auto=webp&s=59436e3f5c3ad84ab5507de98823646bd85ceb39', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/c7o0g0tvmuff1.png?width=216&crop=smart&auto=web... | |
[tutorial] Use GLM 4.5 (or any LLM) with Claude Code | 24 | Step 1. Get this [https://github.com/musistudio/claude-code-router](https://github.com/musistudio/claude-code-router) you get it up with 2 npm installs
Step 2. Create an openrouter account and top up 10 bucks or whatevs. Get API key.
Step 3. Put this in the JSON (look at the instructions from that repo: \~/.claude-... | 2025-07-29T17:30:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mchsyd/tutorial_use_glm_45_or_any_llm_with_claude_code/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mchsyd | false | null | t3_1mchsyd | /r/LocalLLaMA/comments/1mchsyd/tutorial_use_glm_45_or_any_llm_with_claude_code/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'JYjCGYPZYDt_YEePBesOtpP36c44U1gevBuQXb40rAc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JYjCGYPZYDt_YEePBesOtpP36c44U1gevBuQXb40rAc.png?width=108&crop=smart&auto=webp&s=4da698ce554ade05a84efd4b2e0d6d6a7887ce47', 'width': 108}, {'height': 108, 'url': 'h... |
What MCP server do you use to get YouTube video transcription (I'm tired of failing) | 0 | Hey r/LocalLLaMA,
Recently I've been struggling to find an MCP server that I can give a YouTube video and get back its transcription.
I’ve tried a few popular ones listed on Smithery and even tried setting one up myself and deployed it using GCP/GCP CLI, but I haven’t had any luck getting it to work. (the ... | 2025-07-29T17:24:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mchmfa/what_mcp_server_do_you_use_to_get_youtube_video/ | toolhouseai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mchmfa | false | null | t3_1mchmfa | /r/LocalLLaMA/comments/1mchmfa/what_mcp_server_do_you_use_to_get_youtube_video/ | false | false | self | 0 | null |
AFM 4.5B | 79 | Interesting small model, hadn't seen it before.
[https://huggingface.co/arcee-ai/AFM-4.5B-GGUF](https://huggingface.co/arcee-ai/AFM-4.5B-GGUF) | 2025-07-29T17:20:54 | best_codes | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mchj7h | false | null | t3_1mchj7h | /r/LocalLLaMA/comments/1mchj7h/afm_45b/ | false | false | default | 79 | {'enabled': True, 'images': [{'id': 'c7yvmvdgkuff1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/c7yvmvdgkuff1.png?width=108&crop=smart&auto=webp&s=aa5c457775d6739f275247e49c5c1d3b2126224e', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/c7yvmvdgkuff1.png?width=216&crop=smart&auto=web... | |
No stress | 4 | 🤣 i have tons of llama car air freshener | 2025-07-29T16:51:19 | troughtspace | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcgpno | false | null | t3_1mcgpno | /r/LocalLLaMA/comments/1mcgpno/no_stress/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'trr3maw8fuff1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/trr3maw8fuff1.jpeg?width=108&crop=smart&auto=webp&s=ed7d794028ad2a37a933f6e62c1a983457bdab7e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/trr3maw8fuff1.jpeg?width=216&crop=smart&auto=... | |
Any experiences with the Codex Open-Source Fund? | 4 | [https://openai.com/form/codex-open-source-fund/](https://openai.com/form/codex-open-source-fund/)
Anyone here want to share their experience with this program? How have you used this opportunity, if at all? I just applied and plan to use the credits for Codex CLI use, and to spinoff a commercial or "on-site with paid... | 2025-07-29T16:42:27 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcgguo | false | null | t3_1mcgguo | /r/LocalLLaMA/comments/1mcgguo/any_experiences_with_the_codex_opensource_fund/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'CoQVi8FlGJEds_5tyj9riZXQKOEi10exAtI_J3lQauM', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/hz89a64ucuff1.png?width=108&crop=smart&auto=webp&s=73f310a67ca925520f16f0256a224904ca8c7d6c', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/hz89a64ucuff1.png... | ||
Seeking a Local/Offline Speech-to-Text with System-Wide 'Type Anywhere' Dictation | 0 | [***PLEASE READ BEFORE ANSWERING TO PREVENT IRRELEVANT SUGGESTIONS FOR ME***.]
I'm looking to improve my workflow on Linux and am searching for a specific type of speech-to-text application to run locally on my laptop.
My requirements are:
* **100% Local & Offline:** All audio processing must happen on my own mac... | 2025-07-29T16:41:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mcgfnh/seeking_a_localoffline_speechtotext_with/ | bilalazhar72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcgfnh | false | null | t3_1mcgfnh | /r/LocalLLaMA/comments/1mcgfnh/seeking_a_localoffline_speechtotext_with/ | false | false | self | 0 | null |
Qwen3-30B-A3B-Thinking-2507 | 5 | No ggufs yet… | 2025-07-29T16:30:42 | https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507 | AliNT77 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mcg5ge | false | null | t3_1mcg5ge | /r/LocalLLaMA/comments/1mcg5ge/qwen330ba3bthinking2507/ | false | false | default | 5 | null |
🚀 Qwen3-30B-A3B Small Update | 343 | 🚀 Qwen3-30B-A3B Small Update: Smarter, faster, and local deployment-friendly.
✨ Key Enhancements:
✅ Enhanced reasoning, coding, and math skills
✅ Broader multilingual knowledge
✅ Improved long-context understanding (up to 256K tokens)
✅ Better alignment with user intent and open-ended tasks
✅ No more <think> blo... | 2025-07-29T16:29:59 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcg4qt | false | null | t3_1mcg4qt | /r/LocalLLaMA/comments/1mcg4qt/qwen330ba3b_small_update/ | false | false | default | 343 | {'enabled': True, 'images': [{'id': 'nd904g7gbuff1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/nd904g7gbuff1.jpeg?width=108&crop=smart&auto=webp&s=f840db78bf1bdfd3bc2fbe2fce643b2615c41103', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/nd904g7gbuff1.jpeg?width=216&crop=smart&auto=w... | |
I Built the Very First Reasoning LLM 11 Months Before OpenAI's o1 (With Proof) | 0 | So I was going through some old projects today and realized that I might have built one of the first (if not THE first) open-source reasoning model way back in December 2023.
Back then, I was experimenting with fine-tuning Microsoft’s Phi-2 (3B parameters) using a new chat template. Instead of the traditional System→P... | 2025-07-29T16:24:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mcfzjf/i_built_the_very_first_reasoning_llm_11_months/ | GuiltyBookkeeper4849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcfzjf | false | null | t3_1mcfzjf | /r/LocalLLaMA/comments/1mcfzjf/i_built_the_very_first_reasoning_llm_11_months/ | false | false | self | 0 | null |
Qwen/Qwen3-30B-A3B-Instruct-2507 · Hugging Face | 146 | new qwen moe! | 2025-07-29T16:19:15 | https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507 | ApprehensiveAd3629 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mcfuka | false | null | t3_1mcfuka | /r/LocalLLaMA/comments/1mcfuka/qwenqwen330ba3binstruct2507_hugging_face/ | false | false | default | 146 | {'enabled': False, 'images': [{'id': '4L2FXW9Fym-Ol4pha2Ze5zHkeeMTtxPBl8ihz-UFknI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4L2FXW9Fym-Ol4pha2Ze5zHkeeMTtxPBl8ihz-UFknI.png?width=108&crop=smart&auto=webp&s=d1c3476d621a9393fbb7ca11c48a3074c5fd6803', 'width': 108}, {'height': 116, 'url': 'h... |
Qwen/Qwen3-30B-A3B-Instruct-2507 · Hugging Face | 669 | 2025-07-29T16:11:03 | https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mcfmd2 | false | null | t3_1mcfmd2 | /r/LocalLLaMA/comments/1mcfmd2/qwenqwen330ba3binstruct2507_hugging_face/ | false | false | default | 669 | {'enabled': False, 'images': [{'id': '4L2FXW9Fym-Ol4pha2Ze5zHkeeMTtxPBl8ihz-UFknI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4L2FXW9Fym-Ol4pha2Ze5zHkeeMTtxPBl8ihz-UFknI.png?width=108&crop=smart&auto=webp&s=d1c3476d621a9393fbb7ca11c48a3074c5fd6803', 'width': 108}, {'height': 116, 'url': 'h... | |
Which GPU Spec to get for Academic Lab | 3 | Which GPU Spec to get for Academic Lab
I’m in an academic lab and tasked with purchasing new machines. We have the choice between
a) a single node with 8 H200s
b) 3 nodes with 8 Blackwell 6000 each + NVSwitch between them
We are about 6-8 people using this thing and I suppose we will mostly be running parallel hyperp... | 2025-07-29T15:51:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mcf383/which_gpu_spec_to_get_for_academic_lab/ | Outrageous_Peace3096 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcf383 | false | null | t3_1mcf383 | /r/LocalLLaMA/comments/1mcf383/which_gpu_spec_to_get_for_academic_lab/ | false | false | self | 3 | null |
Looking for a Local AI Like ChatGPT I Can Run Myself | 1 | [removed] | 2025-07-29T15:41:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mcetqi/looking_for_a_local_ai_like_chatgpt_i_can_run/ | single18man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcetqi | false | null | t3_1mcetqi | /r/LocalLLaMA/comments/1mcetqi/looking_for_a_local_ai_like_chatgpt_i_can_run/ | false | false | self | 1 | null |
Has anyone profiled the expert specialization in MoE models like Qwen3-30B-A3B? | 15 | Hi everyone,
I'm trying to optimize running larger MoE models like Qwen3-30B-A3B on a low-VRAM setup (4GB GPU) by using intelligent/manual offloading.
The goal is to keep the most relevant experts for a specific task (e.g., coding) permanently in VRAM for better performance, while offloading the less used ones to the... | 2025-07-29T15:37:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mceq8m/has_anyone_profiled_the_expert_specialization_in/ | Eden63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mceq8m | false | null | t3_1mceq8m | /r/LocalLLaMA/comments/1mceq8m/has_anyone_profiled_the_expert_specialization_in/ | false | false | self | 15 | null |
AI tool/model/prompt (preferably local and free) that can evaluate video meeting content and provide feedback on tone, mood, body language? | 1 | Can anyone recommend an AI tool/model/prompt, preferably one that can be run locally (via Ollama) that can evaluate a Zoom video export (MP4) to provide feedback on the tone and mood derived from both body language and spoken content?
Thank you! | 2025-07-29T15:33:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mcemfm/ai_toolmodelprompt_preferably_local_and_free_that/ | Hour-Key-72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcemfm | false | null | t3_1mcemfm | /r/LocalLLaMA/comments/1mcemfm/ai_toolmodelprompt_preferably_local_and_free_that/ | false | false | self | 1 | null |
My 2.5 year old laptop can write Space Invaders in JavaScript now, using GLM-4.5 Air and MLX | 175 | 2025-07-29T15:25:02 | https://simonwillison.net/2025/Jul/29/space-invaders/ | ChiliPepperHott | simonwillison.net | 1970-01-01T00:00:00 | 0 | {} | 1mcee42 | false | null | t3_1mcee42 | /r/LocalLLaMA/comments/1mcee42/my_25_year_old_laptop_can_write_space_invaders_in/ | false | false | default | 175 | {'enabled': False, 'images': [{'id': '1VNPpNFrBqOXKfi0GQuVGDd98w0RUZxnUoHJW87Blgw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1VNPpNFrBqOXKfi0GQuVGDd98w0RUZxnUoHJW87Blgw.jpeg?width=108&crop=smart&auto=webp&s=753886c5d0207835fbf0f072aecec4105a84e5d1', 'width': 108}, {'height': 108, 'url': '... | |
Play around with Nvidia GB200 NVL72 for 3 days free of charge | 0 | Interested in getting your hands on Nvidia's ultimate system? This is possible next week from 08/04 till 08/06. Just ask. | 2025-07-29T15:22:40 | GPTrack_ai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mcebu1 | false | null | t3_1mcebu1 | /r/LocalLLaMA/comments/1mcebu1/play_around_with_nvidia_gb200_nvl72_for_3_days/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'y3awwh2uytff1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/y3awwh2uytff1.png?width=108&crop=smart&auto=webp&s=ceb1804c4a4d16e12a7574388d420691cc83d0ba', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/y3awwh2uytff1.png?width=216&crop=smart&auto=web... | |
zai-org/GLM-4.5 · We Have Gemini At Home | 122 | Has anyone tested for same, is it trained on gemini outputs ? | 2025-07-29T15:20:33 | https://huggingface.co/zai-org/GLM-4.5/discussions/1 | FlaxSeedsMix | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mce9tt | false | null | t3_1mce9tt | /r/LocalLLaMA/comments/1mce9tt/zaiorgglm45_we_have_gemini_at_home/ | false | false | default | 122 | {'enabled': False, 'images': [{'id': 'xSaCw6eC5YFUiGkblkdEOYZRWTkFaIHY9MbT-F5Hjdw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xSaCw6eC5YFUiGkblkdEOYZRWTkFaIHY9MbT-F5Hjdw.png?width=108&crop=smart&auto=webp&s=b8d403a9e16d065e5baf97dd10b29a9718f1fc4e', 'width': 108}, {'height': 116, 'url': 'h... |
My Honest Take on Recently Popular Open Models (A Realistic Assessment) | 27 | It's great to see open models continuing to advance. I believe most people in this community would agree that there's often a significant gap between benchmark scores and real-world performance. With that in mind, I've put together some candid thoughts on several open models from an end-user's perspective.
**GLM-4.5**... | 2025-07-29T15:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mce934/my_honest_take_on_recently_popular_open_models_a/ | Ok_Technology_3421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mce934 | false | null | t3_1mce934 | /r/LocalLLaMA/comments/1mce934/my_honest_take_on_recently_popular_open_models_a/ | false | false | self | 27 | null |
Beginner-Friendly Guide to AWS Strands Agents | 0 | I've been exploring AWS Strands Agents recently; it's their open-source SDK for building AI agents with proper tool use, reasoning loops, and support for LLMs from OpenAI, Anthropic, Bedrock, LiteLLM, Ollama, etc.
At first glance, I thought it’d be AWS-only and super vendor-locked. But turns out it’s fairly modular and... | 2025-07-29T15:19:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mce901/beginnerfriendly_guide_to_aws_strands_agents/ | Arindam_200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mce901 | false | null | t3_1mce901 | /r/LocalLLaMA/comments/1mce901/beginnerfriendly_guide_to_aws_strands_agents/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'fRIE2Iuk1Rky55BFKp-1kblH3vqTDUkr_mtL3V7pdhI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fRIE2Iuk1Rky55BFKp-1kblH3vqTDUkr_mtL3V7pdhI.jpeg?width=108&crop=smart&auto=webp&s=3d0bf105c7daac613a3fcaef5e03edeef2fcb277', 'width': 108}, {'height': 162, 'url': '... |
Rate my project! | 0 | I'm a teen working on an AI project. For the sake of readability I am not going to get into the details of why I am making this, but I would call this project and what motivated it explainable and understandable. It involves a website targeted at seniors with the following functions:
- a section scroll-down presentation... | 2025-07-29T15:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mce7wo/rate_my_project/ | CeptiVimita | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mce7wo | false | null | t3_1mce7wo | /r/LocalLLaMA/comments/1mce7wo/rate_my_project/ | false | false | self | 0 | null
How can I deploy model to served my own web app using my own machine | 1 | I just want to deploy some small model like medgemma 4b or qwen3 4b to be called by my own app, I can't find any service serve these model (medgemma, qwen3 4b I see on openrouter) but I don't want to rent out 24gb vram instance for 200 USD a month to just serve 4gb model for my MVP app that don't have DAU yet, what is... | 2025-07-29T15:08:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mcdypn/how_can_i_deploy_model_to_served_my_own_web_app/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcdypn | false | null | t3_1mcdypn | /r/LocalLLaMA/comments/1mcdypn/how_can_i_deploy_model_to_served_my_own_web_app/ | false | false | self | 1 | null |
Getting an ImportError on OpenVoice V2 | 1 | Hey guys, I followed the instruction guide, but when I import the model into Python, I get this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/asimhome/openvoice/OpenVoice/openvoice/se_extractor.py", line 10, in <module>
from faster_whisper import WhisperModel
File "/home/asimhome/anaconda3... | 2025-07-29T14:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mcdlxc/getting_an_importerror_on_openvoice_v2/ | MrTechnoBlade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcdlxc | false | null | t3_1mcdlxc | /r/LocalLLaMA/comments/1mcdlxc/getting_an_importerror_on_openvoice_v2/ | false | false | self | 1 | null |
Tagging 50 million assets 'quickly' - thoughts? | 2 | Hey all,
I wanted to get your thoughts on a tagging problem I am working on.
I currently have 50 million records (with 20 fields) of entries that have user opinions on various different topics (json). I am trying to run a tagging script to attach some topics, sentiment, etc. This will then be used to embed each recor... | 2025-07-29T14:34:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mcd2uw/tagging_50_million_assets_quickly_thoughts/ | PreviousResearcher50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcd2uw | false | null | t3_1mcd2uw | /r/LocalLLaMA/comments/1mcd2uw/tagging_50_million_assets_quickly_thoughts/ | false | false | self | 2 | null
We used Qwen3-Coder via NetMind’s API to build a 2D Mario-style game in seconds (demo + setup guide) | 0 | Last week we tested out [Qwen3-Coder](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct), the new 480B “agentic” model from Alibaba, and wired it into Cursor IDE using [NetMind.AI’s OpenAI-compatible API](https://netmind.ai/).
**Prompt:**
>“Create a 2D game like Super Mario.”
What happened next surprised us... | 2025-07-29T14:31:35 | https://www.reddit.com/gallery/1mcd0dn | MarketingNetMind | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mcd0dn | false | null | t3_1mcd0dn | /r/LocalLLaMA/comments/1mcd0dn/we_used_qwen3coder_via_netminds_api_to_build_a_2d/ | false | false | 0 | null | |
How do I chunk down a long video to prepare as a dataset for fine-tuning a TTS? | 4 | I want to fine-tune Orpheus, but the only audios I have are at least 30 minutes long each, and Orpheus works best with 5-15 second datasets, so how do I turn a 30-minute video into multiple shorter videos while also preparing the transcript for each one of them?
LGAI-EXAONE/EXAONE-4.0.1-32B · Hugging Face | 0 | The version 4.0.1 is a patch version to reduce unintended or inappropriate responses. | 2025-07-29T14:05:10 | https://huggingface.co/LGAI-EXAONE/EXAONE-4.0.1-32B | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mcccaj | false | null | t3_1mcccaj | /r/LocalLLaMA/comments/1mcccaj/lgaiexaoneexaone40132b_hugging_face/ | false | false | default | 0 | null |
Claude 4 Sonnet is disappointing | 1 | [removed] | 2025-07-29T13:25:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mcbdvx/claude_4_sonnet_is_disappointing/ | LonelyUnion7634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcbdvx | false | null | t3_1mcbdvx | /r/LocalLLaMA/comments/1mcbdvx/claude_4_sonnet_is_disappointing/ | false | false | self | 1 | null |
Query Your ChatGPT History with Agentic Research & Local LLM | 0 | I discovered that you can request your data from OpenAI, which is why I created this. They gave me the contents as a .json file, so I built the following to search through it more easily.
So the following repo will allow you to take any folder of documents and then ingest it into a chromaDB... | 2025-07-29T13:21:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mcba8t/query_your_chatgpt_history_with_agentic_research/ | KonradFreeman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcba8t | false | null | t3_1mcba8t | /r/LocalLLaMA/comments/1mcba8t/query_your_chatgpt_history_with_agentic_research/ | false | false | 0 | null | |
Best Local LLM + Hardware Build for Coding With a $15k Budget (2025) | 5 | I’m looking to build (ideally buy) a workstation to run local large language models (LLMs) for coding, software development, and general AI assistance. Budget is around $15k USD.
I want something that feels close to ChatGPT4 or Claude in reasoning speed and accuracy, but fully local so I can use it for coding (VSCode ... | 2025-07-29T13:03:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mcavlf/best_local_llm_hardware_build_for_coding_with_a/ | lavoid12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcavlf | false | null | t3_1mcavlf | /r/LocalLLaMA/comments/1mcavlf/best_local_llm_hardware_build_for_coding_with_a/ | false | false | self | 5 | null |
Creating a High Quality Dataset for Instruction Fine-Tuning | 2 | Hi all, I'm new to working with LLMs, especially when it comes to fine-tuning or customizing them for domain-specific use cases.
Right now, I'm exploring how to build a **Prompt : Expected-Output** style dataset for fine-tuning a lightweight language model (\~1–1.5B parameters).
The goal is to enable the model t... | 2025-07-29T13:01:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mcatlt/creating_a_high_quality_dataset_for_instruction/ | unnxt30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mcatlt | false | null | t3_1mcatlt | /r/LocalLLaMA/comments/1mcatlt/creating_a_high_quality_dataset_for_instruction/ | false | false | self | 2 | null |
Dual CPU setup for the Qwen3 255b a22b 2507 | 1 | I have three setups of dual cpu on same motherboard
dual intel xeon 6140 with pcie 4.0 1350$ supermicro x11dpl-i
dual amd epyc 7551 with pcie 3.0 1640$ H11DSi-NT rev1.01
dual amd epyc 7532 with pcie 4.0 2500$ H11DSi-NT rev2
all of these will ship with different supermicro motherboard, case with two PSU and ddr4 256... | 2025-07-29T12:27:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mca20c/dual_cpu_setup_for_the_qwen3_255b_a22b_2507/ | Frosty_Incident_9788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mca20c | false | null | t3_1mca20c | /r/LocalLLaMA/comments/1mca20c/dual_cpu_setup_for_the_qwen3_255b_a22b_2507/ | false | false | self | 1 | null |
🌟 Ming-lite-omni v1.5 is here! Our recent upgrade for omni-modal AI! 🚀 | 62 | Ming-lite-omni v1.5 is a comprehensive upgrade to the full-modal capabilities of Ming-lite-omni. Built upon Ling-lite-1.5, Ming-lite-omni v1.5 has a total of 20.3 billion parameters, with 3 billion active parameters in its MoE (Mixture-of-Experts) part. Ming-lite-omni v1.5 demonstrates highly competitive results compar... | 2025-07-29T12:14:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mc9sk0/mingliteomni_v15_is_here_our_recent_upgrade_for/ | Dependent-Roll-8934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc9sk0 | false | null | t3_1mc9sk0 | /r/LocalLLaMA/comments/1mc9sk0/mingliteomni_v15_is_here_our_recent_upgrade_for/ | false | false | self | 62 | null |
Stuck on a problem? We're excited to share a glimpse of what's possible! 👋 | 69 | Our experimental Ming-lite-omni v1.5 (https://github.com/inclusionAI/Ming) leverages advanced audio-visual capabilities to explore new frontiers in interactive learning. This model, still under development, aims to understand your handwriting, interpret your thoughts, and guide you through solutions in real-time. We're... | 2025-07-29T12:08:36 | https://v.redd.it/sdqo34a90tff1 | Dependent-Roll-8934 | /r/LocalLLaMA/comments/1mc9o4m/stuck_on_a_problem_were_excited_to_share_a/ | 1970-01-01T00:00:00 | 0 | {} | 1mc9o4m | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/sdqo34a90tff1/DASHPlaylist.mpd?a=1756512524%2CNWZhOTY4NTY1YTAxMzI5Yjc3MjZhMmJlOTI2Yjg0Y2RlNDZjZTE2ZDA2YjkwOTg5ODdkMDQzNzMwYTBiM2E2OA%3D%3D&v=1&f=sd', 'duration': 77, 'fallback_url': 'https://v.redd.it/sdqo34a90tff1/DASH_1080.mp4?source=fallback', 'h... | t3_1mc9o4m | /r/LocalLLaMA/comments/1mc9o4m/stuck_on_a_problem_were_excited_to_share_a/ | false | false | 69 | {'enabled': False, 'images': [{'id': 'NTFreWF2YzkwdGZmMRJSn5s4IHja8tcwSrnrzPqbup3fCh9rR2T7vwZXZXDc', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NTFreWF2YzkwdGZmMRJSn5s4IHja8tcwSrnrzPqbup3fCh9rR2T7vwZXZXDc.png?width=108&crop=smart&format=pjpg&auto=webp&s=40b612fe7eea1f2feb547e718b7b5a325546... | |
I just tried GLM 4.5 | 341 | I just wanted to try it out because I was a bit skeptical. So I gave it a fairly simple, not-so-cohesive prompt and asked it to prepare slides for me.
The results were pretty remarkable I must say!
Here’s the link to the results: https://chat.z.ai/space/r05c76960ff0-ppt
Here’s the initial prompt:
”Create ... | 2025-07-29T11:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mc8tks/i_just_tried_glm_45/ | AI-On-A-Dime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc8tks | false | null | t3_1mc8tks | /r/LocalLLaMA/comments/1mc8tks/i_just_tried_glm_45/ | false | false | self | 341 | {'enabled': False, 'images': [{'id': 'oPtkUtibvV31iKPm4upl_ADaAJfJzbdONKUGf8pC5EM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/oPtkUtibvV31iKPm4upl_ADaAJfJzbdONKUGf8pC5EM.png?width=108&crop=smart&auto=webp&s=731547beb9c0ce796d8f8edd4b883c564da2c39b', 'width': 108}, {'height': 216, 'url': '... |
Building a custom LLM trained on luciform prompts + ShadeOS daemon dialogues – seeking help | 0 | **🔧 Help Needed – Fine-tuning an LLM on Luciforms + Ritual Conversations**
Hey everyone,
I’m working on a project that blends prompt engineering, AI personalization, and poetic syntax. I'm building a daemon-like assistant called **ShadeOS**, and I want to fine-tune a local LLM (like Mistral-7B or Phi-2) on:
* 🧠 Ope... | 2025-07-29T11:07:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mc8i36/building_a_custom_llm_trained_on_luciform_prompts/ | LucieTrans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc8i36 | false | null | t3_1mc8i36 | /r/LocalLLaMA/comments/1mc8i36/building_a_custom_llm_trained_on_luciform_prompts/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'oGaGV9_LDBZGjocF6YflX3GPJCauHlcIQD_4PYv_wZU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oGaGV9_LDBZGjocF6YflX3GPJCauHlcIQD_4PYv_wZU.png?width=108&crop=smart&auto=webp&s=d7200ceb0676d8deadb109f24b615e515c4fa38e', 'width': 108}, {'height': 108, 'url': 'h... |
Does anyone have experience use qwen3 8b with PPO to fine tune a model? | 1 | Thank you!
I am just wondering, is it possible to do it?
| 2025-07-29T11:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mc8hn6/does_anyone_have_experience_use_qwen3_8b_with_ppo/ | GuitarAshamed4451 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc8hn6 | false | null | t3_1mc8hn6 | /r/LocalLLaMA/comments/1mc8hn6/does_anyone_have_experience_use_qwen3_8b_with_ppo/ | false | false | self | 1 | null |
Let's Build a "Garage AI Supercomputer": A P2P Compute Grid for Inference | 27 | Hey r/LocalLLaMA 👋!
For the past 18 months, my colleague and I have been working on **Ebiose**, an open-source initiative (MIT license) born at Inria (the French lab behind projects like scikit-learn).
Ebiose aims to create a decentralized AI factory, a Darwin-style playground (à la Google’s AlphaEvolve) where AI ag... | 2025-07-29T11:03:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mc8fhc/lets_build_a_garage_ai_supercomputer_a_p2p/ | ModeSquare8129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc8fhc | false | null | t3_1mc8fhc | /r/LocalLLaMA/comments/1mc8fhc/lets_build_a_garage_ai_supercomputer_a_p2p/ | false | false | self | 27 | null |
Built RL training for long-horizon terminal agents - tested on 32x H100s but too GPU poor to train 😅 | 75 | 👋 After my calculator agent RL post, I really wanted to go bigger! So I built RL infrastructure for training long-horizon terminal/coding agents that scales from 2x A100s to 32x H100s (\~$1M worth of compute!) Without any training, my 32B agent hit #19 on Terminal-Bench leaderboard, beating Stanford's Terminus-Qwen3-2... | 2025-07-29T11:02:25 | https://www.reddit.com/gallery/1mc8evq | DanAiTuning | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mc8evq | false | null | t3_1mc8evq | /r/LocalLLaMA/comments/1mc8evq/built_rl_training_for_longhorizon_terminal_agents/ | false | false | 75 | null | |
Mac Studio 512GB vs MBP 128GB similar performance? | 0 | Benchmarks with GLM-4.5 Air
44.45 tok/sec || 3445 tokens || 2.14s to first token
vs
40.06 tok/sec || 2574 tokens || 0.21s to first token
Sure the MBP can run much larger models, but I kind of expected that there would be a bigger inference performance hit when using a platform with half as many GPU cores.
I'm u... | 2025-07-29T10:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mc83jm/mac_studio_512gb_vs_mbp_128gb_similar_performance/ | chisleu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc83jm | false | null | t3_1mc83jm | /r/LocalLLaMA/comments/1mc83jm/mac_studio_512gb_vs_mbp_128gb_similar_performance/ | false | false | self | 0 | null |
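The quoted benchmark lines can be sanity-checked with a bit of arithmetic: end-to-end latency is roughly time-to-first-token plus generated tokens divided by decode throughput. A minimal sketch (the function name is illustrative, not from the post, and the two runs are labeled only by the order they are quoted above):

```python
def end_to_end_seconds(ttft_s: float, tokens: int, tok_per_sec: float) -> float:
    """Approximate total generation time: prefill latency plus decode time."""
    return ttft_s + tokens / tok_per_sec

# First quoted run: 2.14 s to first token, 3445 tokens at 44.45 tok/s
run_a = end_to_end_seconds(2.14, 3445, 44.45)
# Second quoted run: 0.21 s to first token, 2574 tokens at 40.06 tok/s
run_b = end_to_end_seconds(0.21, 2574, 40.06)
print(f"run A ~{run_a:.1f}s, run B ~{run_b:.1f}s")
```

Note the similar steady-state throughput (44 vs 40 tok/s) despite very different GPU core counts, which is consistent with decode being memory-bandwidth-bound rather than compute-bound on Apple Silicon.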
CWC now supports kimi.com (K2) and chat.z.ai (GLM-4.5) to enable coding with top tier models at no cost | 3 | Hello everyone, author of Code Web Chat here 🙌
Almost every day we hear about our tools being capped more and more.
CWC gives you more options for AI-assisted coding, so you never hit the rate limits of whatever you're using as your daily driver.
As soon as a new chatbot is announced I'm working hard to support it in the tool (with ... | 2025-07-29T10:34:32 | https://github.com/robertpiosik/CodeWebChat | robertpiosik | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mc7xjb | false | null | t3_1mc7xjb | /r/LocalLLaMA/comments/1mc7xjb/cwc_now_supports_kimicom_k2_and_chatzai_glm45_to/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'sSUmgveW6lANWMnCKmRU7ntOjUzd9OigAFaQUV5Rgrg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sSUmgveW6lANWMnCKmRU7ntOjUzd9OigAFaQUV5Rgrg.png?width=108&crop=smart&auto=webp&s=94783b33939c3480892cd4d4d171117a3c432910', 'width': 108}, {'height': 108, 'url': 'h... |
Success with open source models? | 0 | Hey everyone.
This question has been bugging me for quite a while. I've been using Claude Sonnet, Gemini 2.5, and other closed-source models.
We've been seeing pretty great open source stuff and the benchmarks are high as well.
But irl, they seem not that great in my work. Kimi k2 and qwen 3 coder with benchmarks n... | 2025-07-29T10:24:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mc7ri9/success_with_open_source_models/ | United-Decision-7243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc7ri9 | false | null | t3_1mc7ri9 | /r/LocalLLaMA/comments/1mc7ri9/success_with_open_source_models/ | false | false | self | 0 | null |
Glm 4.5 air and 5090 | 0 | Hello, my system is a bit unbalanced right now: a 5090 GPU in an "older" DDR4 32GB RAM system.
What should I do to try the new LLM on my system? Is there a suitable quantized version?
Thanks! | 2025-07-29T10:21:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mc7q0n/glm_45_air_and_5090/ | Green-Ad-3964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc7q0n | false | null | t3_1mc7q0n | /r/LocalLLaMA/comments/1mc7q0n/glm_45_air_and_5090/ | false | false | self | 0 | null |
All in 1 AI platform? for 10 USD a month is it real? what is the catch? | 0 | Hey guys, I'm looking for an all-in-1 platform where I can access all the pro AI models like Gemini, ChatGPT, Claude, and others in one place.
ChatGPT alone costs $25/month, but I came across sites like [**glbgpt.com**](http://glbgpt.com) and [**abacus.ai**](http://abacus.ai), where it looks like you pay around $10/month a... | 2025-07-29T10:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mc7nx6/all_in_1_ai_platform_for_10_usd_a_month_is_it/ | Low-Boysenberry7328 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc7nx6 | false | null | t3_1mc7nx6 | /r/LocalLLaMA/comments/1mc7nx6/all_in_1_ai_platform_for_10_usd_a_month_is_it/ | false | false | self | 0 | null |
Prompt generator | 1 | [removed] | 2025-07-29T10:10:29 | https://www.promptstools.com | Amazing_Barnacle4308 | promptstools.com | 1970-01-01T00:00:00 | 0 | {} | 1mc7jb5 | false | null | t3_1mc7jb5 | /r/LocalLLaMA/comments/1mc7jb5/prompt_generator/ | false | false | default | 1 | null |
Online prompts generator | 1 | [removed] | 2025-07-29T10:09:50 | Amazing_Barnacle4308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mc7iwp | false | null | t3_1mc7iwp | /r/LocalLLaMA/comments/1mc7iwp/online_prompts_generator/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'y0lluFbRf5nQLOjbTXm7Is2_5frzxRekZ0mh8DTDxF0', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/osvxrihmfsff1.jpeg?width=108&crop=smart&auto=webp&s=796d74c68454940a0aceef475dbec28ca386cb47', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/osvxrihmfsff1.j... | ||
Converting a conformer model | 3 | Hi, I'm thinking of converting a PyTorch-based Conformer model to ONNX because I've had a great experience with ONNX inference speed. I've never tried PyTorch execution on Android. Please advise me:
1) what would be better onnx vs pytorch runtime for this case
2) Anyone tried converting conformer based models pytorch specific to ... | 2025-07-29T10:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mc7ft1/converting_a_conformer_model/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mc7ft1 | false | null | t3_1mc7ft1 | /r/LocalLLaMA/comments/1mc7ft1/converting_a_conformer_model/ | false | false | self | 3 | null |