| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What is an interesting question that an LLM failed to answer, in your experience? | 5 | Any interesting questions from your experience that you asked a Reasoning LLM and it failed to answer | 2025-08-03T12:38:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mgi7v8/what_is_an_interesting_question_that_an_llm/ | VR-Person | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgi7v8 | false | null | t3_1mgi7v8 | /r/LocalLLaMA/comments/1mgi7v8/what_is_an_interesting_question_that_an_llm/ | false | false | self | 5 | null |
qihoo360/Light-IF-32B | 92 | Yet another new model claiming to outperform larger ones:
**Instruction following** is a core ability of large language models (LLMs), but performance remains inconsistent, especially on complex tasks.
We identify **lazy reasoning** during the thinking stage as a key cause of poor instruction adherence.
To address... | 2025-08-03T12:24:28 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mghy1u | false | null | t3_1mghy1u | /r/LocalLLaMA/comments/1mghy1u/qihoo360lightif32b/ | false | false | default | 92 | {'enabled': True, 'images': [{'id': '6vaf0crhrsgf1', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/6vaf0crhrsgf1.png?width=108&crop=smart&auto=webp&s=081b3d7480032b122c209477d47263419358b811', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/6vaf0crhrsgf1.png?width=216&crop=smart&auto=web... | |
AI "devs" | 0 | 2025-08-03T12:21:45 | Dangerous-Camera3368 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mghw96 | false | null | t3_1mghw96 | /r/LocalLLaMA/comments/1mghw96/ai_devs/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '4vmx6ozprsgf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/4vmx6ozprsgf1.png?width=108&crop=smart&auto=webp&s=bd212714d7b2bcdcd6155bf3672e93951b80152b', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/4vmx6ozprsgf1.png?width=216&crop=smart&auto=we... | ||
MI50 w 32gb? Guys please | 0 | Hot take incoming:
This is a garbage card with garbage support, so quit talking about them like they're useful. As a matter of fact, quit talking about them at all.
You see it took me up until 4 weeks ago to convince my wife to finally let me upgrade my server, I picked up a d380 v9 with 128 gb and 7 1 tb d... | 2025-08-03T11:59:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mghhau/mi50_w_32gb_guys_please/ | Expensive_Mirror5247 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mghhau | false | null | t3_1mghhau | /r/LocalLLaMA/comments/1mghhau/mi50_w_32gb_guys_please/ | false | false | self | 0 | null |
GLM 4.5 Air local tool calling | 2 | Does anyone have any knowledge on how to correctly set up tool calling for GLM 4.5 Air in LM Studio? The model is great with the old school tool calling techniques that aider and cline use, but when I try with any tools that use modern tool calling (Qwen Code or OpenCode), the tool calling fails.
If anyone can give... | 2025-08-03T11:53:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mghd3l/glm_45_air_local_tool_calling/ | -dysangel- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mghd3l | false | null | t3_1mghd3l | /r/LocalLLaMA/comments/1mghd3l/glm_45_air_local_tool_calling/ | false | false | self | 2 | null |
Why Fortune 500 Wants to Fund Open Models | 0 | My career is in tech startup chaos. Bill Gurley is one of the few from that circle I can listen to while chewing food (as I am now and typing).
Companies like LG want to sell washing machines. They don't want their strategy to get disrupted without having a backup plan. They want to raise the floor so that nobody... | 2025-08-03T11:36:33 | https://youtu.be/fTqINzeudJ4?&t=1067 | Psionikus | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mgh2cb | false | {'oembed': {'author_name': 'Bg2 Pod', 'author_url': 'https://www.youtube.com/@Bg2Pod', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/fTqINzeudJ4?start=1067&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosc... | t3_1mgh2cb | /r/LocalLLaMA/comments/1mgh2cb/why_fortune_500_wants_to_fund_open_models/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'MK0EV9lEmIsFO7imdqO0m8VG-otbLYhxUaOOpUAB5kM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MK0EV9lEmIsFO7imdqO0m8VG-otbLYhxUaOOpUAB5kM.jpeg?width=108&crop=smart&auto=webp&s=dbe82e3d2ef0e784a0e62fb6596ee14119fb2c44', 'width': 108}, {'height': 162, 'url': '... |
I built a GitHub scanner that automatically discovers your AI tools using a new .awesome-ai.md standard I created | 1 | Hey,
I just launched something I think could change how we discover AI tools. Instead of manually submitting to directories or relying on outdated lists, I created the .awesome-ai.md standard.
How it works:
- Drop a .awesome-ai.md file in your repo root (template: https://github.com/teodorgross/awesome-ai)
- The... | 2025-08-03T11:34:49 | https://github.com/teodorgross/awesome-ai | r00tkit_ | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mgh19i | false | null | t3_1mgh19i | /r/LocalLLaMA/comments/1mgh19i/i_built_a_github_scanner_that_automatically/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': '3eKPASSIaokEljTMRxPAx8m9aBrDtxD5hJx7xRfgQU8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3eKPASSIaokEljTMRxPAx8m9aBrDtxD5hJx7xRfgQU8.png?width=108&crop=smart&auto=webp&s=dd85376252643c0cfc25bf58873d1308bbaa8b8c', 'width': 108}, {'height': 108, 'url': 'h... |
OSINT fingerprinting a stealth OpenRouter model - likely Llama-family, not OpenAI | 14 |
Personal note: This is just my opinion based on a very limited set of API-only probes—interpret with caution.
What I did (mini-ROC probes)
* JSON strictness vs. "bad schema" repair
* Tool-calling with an invalid enum + extra property
* Safety/refusal phrasing check
* Long-context end-marker recall
* Tokenizer/shor... | 2025-08-03T11:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mggsyb/osint_fingerprinting_a_stealth_openrouter_model/ | jv0010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mggsyb | false | null | t3_1mggsyb | /r/LocalLLaMA/comments/1mggsyb/osint_fingerprinting_a_stealth_openrouter_model/ | false | false | self | 14 | null |
When You Ask Claude to Optimize Your Ride | 0 | 2025-08-03T11:11:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mggnjy/when_you_ask_claude_to_optimize_your_ride/ | xVinGee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mggnjy | false | null | t3_1mggnjy | /r/LocalLLaMA/comments/1mggnjy/when_you_ask_claude_to_optimize_your_ride/ | false | false | 0 | null | ||
XBai-04 Is It Real? | 199 | WHAT THE DEVIL?
Another open model outperforms closed ones!
XBai o4 beats OpenAI o3-mini and *confidently* beats Anthropic's Claude Opus.
•Parameters: 32.8 B
•Training: Long-CoT RL + Process Reward Learning (SPRM)
•Benchmarks (high mode):
•AIME24: 86.5
•AIME25: 77.9
•LiveCodeBench v5: 67.2
•C-EVAL: 89.7
🔗Open s... | 2025-08-03T11:07:25 | https://www.reddit.com/gallery/1mggku0 | Ordinary_Mud7430 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mggku0 | false | null | t3_1mggku0 | /r/LocalLLaMA/comments/1mggku0/xbai04_is_it_real/ | false | false | 199 | null | |
🚀 Just set up a powerful local AI development workflow with Ollama + Cursor! | 2 | I accidentally discovered something amazing this weekend while working on my mac mini M4 16gb... 🤦
I was just trying to get Ollama working with Cursor IDE locally (you know, for those late-night coding sessions where API calls become the scary ghost).Well, let's just say things got a bit more "exposed" than I planne... | 2025-08-03T10:57:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mggetn/just_set_up_a_powerful_local_ai_development/ | Evening_Weight_4234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mggetn | false | null | t3_1mggetn | /r/LocalLLaMA/comments/1mggetn/just_set_up_a_powerful_local_ai_development/ | false | false | self | 2 | null |
NUMAI: A spreadsheet with LLM formula conversion | 1 | Hello,
I have developed a toy spreadsheet, where you can implement your formulas in English, which are then translated into \`javascript\` thanks to an LLM.
For instance, you can write: \`sum of the squared values\` and the LLM will translate this description into:
\`getValuesFromReferences(\['A1', 'A2', 'A3'\]).ma... | 2025-08-03T10:56:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mggdxz/numai_a_spreadsheet_with_llm_formula_conversion/ | Frere_de_la_Quote | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mggdxz | false | null | t3_1mggdxz | /r/LocalLLaMA/comments/1mggdxz/numai_a_spreadsheet_with_llm_formula_conversion/ | false | false | self | 1 | null |
Best Gemma 3 Quant? | 0 | I have decided to run Gemma 3 4B QAT on my Laptop for general use. I was wondering if i should be using some other quant other than the official QAT version by google? Like What would be the performance or quality increase as compared to the QAT version. It would be great if someone shared some benchmarks or other resu... | 2025-08-03T10:40:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mgg4ki/best_gemma_3_quant/ | R46H4V | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgg4ki | false | null | t3_1mgg4ki | /r/LocalLLaMA/comments/1mgg4ki/best_gemma_3_quant/ | false | false | self | 0 | null |
Successfully running INSTINCT MI50 on Win11 | 21 | Hey, poor GPU guys
A few days ago, I purchased the 32GB version of MI50 from Alibaba, and it arrived at my doorstep via UPS in just a few days, accompanied by a rather loud blower.
Some married guys might understand, but I’ve been using an m-ATX case I bought about 15 years ago, and there’s no room for the MI50 sin... | 2025-08-03T10:38:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mgg3mh/successfully_running_instinct_mi50_on_win11/ | Desperate-Sir-5088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgg3mh | false | null | t3_1mgg3mh | /r/LocalLLaMA/comments/1mgg3mh/successfully_running_instinct_mi50_on_win11/ | false | false | 21 | null | |
Still getting bad results with PDFs in AnythingLLM + Llama 3 – Am I doing something wrong, or is there a better setup? | 0 | Hey everyone,
I’ve been doing some research on setting up a local, privacy-friendly LLM assistant, ideally something that can help me write job applications using my previous resumes and cover letters as a base.
From everything I read, it sounded really promising to combine AnythingLLM with Llama 3 (I’m using the LLa... | 2025-08-03T10:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mgfuf3/still_getting_bad_results_with_pdfs_in/ | Lazy_Fig_6244 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgfuf3 | false | null | t3_1mgfuf3 | /r/LocalLLaMA/comments/1mgfuf3/still_getting_bad_results_with_pdfs_in/ | false | false | self | 0 | null |
Do you also get weird behavior from Qwen3-Coder-30B-A3B? | 15 | I was using this model as an assistant to modify code in a C++ file with roughly 800 lines of code. However, the model made a lot of mistakes, and it constantly corrected itself (in the same reply) in a way like:
>Here is the modification of the code:
>*\*code\**
But on a second thought, that was not a good ... | 2025-08-03T10:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mgfs7l/do_you_also_get_weird_behavior_from/ | Admirable-Star7088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgfs7l | false | null | t3_1mgfs7l | /r/LocalLLaMA/comments/1mgfs7l/do_you_also_get_weird_behavior_from/ | false | false | self | 15 | null |
LLM Observability - Any Suggestions? | 1 | I am looking for a way to control the usage of LLMs and to track which users (from my app) are sending how many requests, the prompts, etc.
Sure, I can do this via custom middleware in my app, but I am looking for something that is designed exactly for LLM Observability and would protect me from legal proceedings in ... | 2025-08-03T09:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mgfccn/llm_observability_any_suggestions/ | SuddenWerewolf7041 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgfccn | false | null | t3_1mgfccn | /r/LocalLLaMA/comments/1mgfccn/llm_observability_any_suggestions/ | false | false | self | 1 | null |
Can You Tell the Difference in Text-to-Speech Voices? | 1 | [removed] | 2025-08-03T09:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mgeytp/can_you_tell_the_difference_in_texttospeech_voices/ | sudofu_reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgeytp | false | null | t3_1mgeytp | /r/LocalLLaMA/comments/1mgeytp/can_you_tell_the_difference_in_texttospeech_voices/ | false | false | self | 1 | null |
Can You Tell the Difference in Text-to-Speech Voices? | 1 | [removed] | 2025-08-03T09:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mgexps/can_you_tell_the_difference_in_texttospeech_voices/ | sudofu_reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgexps | false | null | t3_1mgexps | /r/LocalLLaMA/comments/1mgexps/can_you_tell_the_difference_in_texttospeech_voices/ | false | false | self | 1 | null |
Qwen3-30B-A3B-Instruct-2507-Q4_K_S.gguf + LM Studio 0.3.21 (Build 3): Assistant ignores questions, stuck in loop | 12 | Testing with Qwen3-30B-A3B-Instruct-2507-Q4\_K\_S.gguf +LM Studio 0.3.21 (Build 3).
After initial folder and file read (`app/main.go`, `configs.json`, etc.), it keeps replying:
*"I'm ready to assist with your project in /srv/testproject..."*
It ignores direct inputs like:
* "What does this application do?"
* "Exp... | 2025-08-03T08:47:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mgeerv/qwen330ba3binstruct2507q4_k_sgguf_lm_studio_0321/ | Eden63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgeerv | false | null | t3_1mgeerv | /r/LocalLLaMA/comments/1mgeerv/qwen330ba3binstruct2507q4_k_sgguf_lm_studio_0321/ | false | false | self | 12 | null |
Best Medical Embedding Model Released | 44 | Just dropped a new medical embedding model that's crushing the competition: [https://huggingface.co/lokeshch19/ModernPubMedBERT](https://huggingface.co/lokeshch19/ModernPubMedBERT)
TL;DR: This model understands medical concepts better than existing solutions and has much fewer false positives.
The model is based ... | 2025-08-03T08:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mgdypr/best_medical_embedding_model_released/ | DataNebula | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgdypr | false | null | t3_1mgdypr | /r/LocalLLaMA/comments/1mgdypr/best_medical_embedding_model_released/ | false | false | self | 44 | {'enabled': False, 'images': [{'id': 'I68Y-9T7cCc1Z97M_N8-PadQJ0sUFhqpKPe5VKRwYHI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/I68Y-9T7cCc1Z97M_N8-PadQJ0sUFhqpKPe5VKRwYHI.png?width=108&crop=smart&auto=webp&s=c4c4ca0fb1848d33d5ab1a908c384ed031d1d90a', 'width': 108}, {'height': 116, 'url': 'h... |
🧠 ICM+DPO: Used Qwen3's coherent understanding to improve Gemma3 at math - cross-model capability transfer with zero supervision | 22 | Hey r/LocalLLaMA!
Just released something that extends the recent [ICM paper](https://arxiv.org/abs/2506.10139) in a big way - using one model's coherent understanding to improve a completely different model.
# Background: What is ICM?
The original ["Unsupervised Elicitation of Language Models"](https://arxiv.org/a... | 2025-08-03T08:10:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mgdur5/icmdpo_used_qwen3s_coherent_understanding_to/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgdur5 | false | null | t3_1mgdur5 | /r/LocalLLaMA/comments/1mgdur5/icmdpo_used_qwen3s_coherent_understanding_to/ | false | false | self | 22 | null |
Audio-in LLM | 4 | Are there any llms that can describe a song? if not, what would it take to build one if you know | 2025-08-03T08:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mgdrws/audioin_llm/ | Nearby_Direction2438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgdrws | false | null | t3_1mgdrws | /r/LocalLLaMA/comments/1mgdrws/audioin_llm/ | false | false | self | 4 | null |
Chinese LLMs talk freely about Tiananmen massacre and Taiwan | 0 | English questions remain unanswered, but if you switch to e.g. German, the alignment doesn't strike (example with Kimi K2):
>On June 3-4, 1989, after weeks of peaceful protests in Beijing and other cities, the Chinese People's Liberation Army advanced on **Tiananmen Square** with tanks and armed units and used live am... | 2025-08-03T07:50:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mgdjkx/chinese_llms_talk_freely_about_tiananmen_massacre/ | 7pot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgdjkx | false | null | t3_1mgdjkx | /r/LocalLLaMA/comments/1mgdjkx/chinese_llms_talk_freely_about_tiananmen_massacre/ | false | false | self | 0 | null |
Chinese LLMs talk freely about Tiananmen massacre and Taiwan | 0 | 2025-08-03T07:46:05 | https://datanizing.com/2025/08/02/chinese-llm-tiananmen-taiwan.html | 7pot | datanizing.com | 1970-01-01T00:00:00 | 0 | {} | 1mgdhdq | false | null | t3_1mgdhdq | /r/LocalLLaMA/comments/1mgdhdq/chinese_llms_talk_freely_about_tiananmen_massacre/ | false | false | default | 0 | null | |
is the P102-100 still a viable option for LLM? | 10 | I have seen thousands of posts of people asking what card to buy and there is two points of view. One is buy expensive 3090, or even more expensive 5000 series or, buy cheap and try it. This post will cover why the P102-100 is still relevant and why it is simply the best budget card to get at 60 dollars.
If you are ju... | 2025-08-03T07:45:44 | Boricua-vet | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mgdh6r | false | null | t3_1mgdh6r | /r/LocalLLaMA/comments/1mgdh6r/is_the_p102100_still_a_viable_option_for_llm/ | false | false | 10 | {'enabled': True, 'images': [{'id': 'tb8jcmr9JwUFdjGqCEioOHi2smnbOEbTGPoqKGaYDxE', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/oy25ru8gergf1.png?width=108&crop=smart&auto=webp&s=6ead78ede2dee21c50ed7920d88cdc0f039342ea', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/oy25ru8gergf1.pn... | ||
HRM model was trained on test? | 7 | 2025-08-03T07:21:00 | ILoveMy2Balls | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mgd3lh | false | null | t3_1mgd3lh | /r/LocalLLaMA/comments/1mgd3lh/hrm_model_was_trained_on_test/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'nwQrGJrBDCDo-K7wxSfT8eAh3-H9AN45BOnvDCwUQPo', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/e4op2j02argf1.png?width=108&crop=smart&auto=webp&s=3719305d4458c68cccd4a2b97547cb6bc4be5251', 'width': 108}, {'height': 54, 'url': 'https://preview.redd.it/e4op2j02argf1.png?... | |||
ByteDance drops Seed-Prover | 178 | **ByteDance Seed-Prover proves math the way mathematicians do, not just explanations, but full formal proofs that a computer can verify using Lean.**
It writes Lean 4 code (a formal proof language), solves problems from competitions like IMO and Putnam, and gets the proof *checked* by a compiler.
The key innovations... | 2025-08-03T06:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mgccyc/bytedance_drops_seedprover/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgccyc | false | null | t3_1mgccyc | /r/LocalLLaMA/comments/1mgccyc/bytedance_drops_seedprover/ | false | false | self | 178 | {'enabled': False, 'images': [{'id': 'cnGM2lbzAhFfeq_Wsd7iu0T7rDhd6OEpKKquGX7_eJM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/cnGM2lbzAhFfeq_Wsd7iu0T7rDhd6OEpKKquGX7_eJM.jpeg?width=108&crop=smart&auto=webp&s=a2880b55faa98583237421c5c4a72d31d6cee167', 'width': 108}, {'height': 162, 'url': '... |
I created an app to run local AI as if it were the App Store | 97 | Hey guys!
**I got tired of installing AI tools the hard way.**
Every time I wanted to try something like Stable Diffusion, RVC or a local LLM, it was the same nightmare:
**terminal commands, missing dependencies, broken CUDA, slow setup, frustration.**
So I built **Dione** — a desktop app that makes running local A... | 2025-08-03T06:12:39 | https://www.reddit.com/gallery/1mgc0v0 | Deivih-4774 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mgc0v0 | false | null | t3_1mgc0v0 | /r/LocalLLaMA/comments/1mgc0v0/i_created_an_app_to_run_local_ai_as_if_it_were/ | false | false | 97 | null | |
Modded RTX3080 20GBs for $360? | 1 | Where I am right now I have access to SXM2 V100 32GBs for the same price ($360 USD) as modded RTX3080 20GBs, or two SXM2 V100 16GBs with a 300G nvlink bridge for slightly cheaper. Are any of these good options for throwing into my server to run big LLM models? | 2025-08-03T06:03:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mgbven/modded_rtx3080_20gbs_for_360/ | AssociationAdept4052 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgbven | false | null | t3_1mgbven | /r/LocalLLaMA/comments/1mgbven/modded_rtx3080_20gbs_for_360/ | false | false | self | 1 | null |
We enabled Multi-GPU training in Unsloth AI — a feature that’s usually paid — using just 2 Copilot prompts! | 159 | [https://github.com/oevortex/unsloth](https://github.com/oevortex/unsloth) | 2025-08-03T05:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mgbs6r/we_enabled_multigpu_training_in_unsloth_ai_a/ | Quiet-Moment-338 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgbs6r | false | null | t3_1mgbs6r | /r/LocalLLaMA/comments/1mgbs6r/we_enabled_multigpu_training_in_unsloth_ai_a/ | false | false | self | 159 | {'enabled': False, 'images': [{'id': 'WSMWdS1tBsndSKly_Q084vHJ9V-M2sB-dsq1Yua6ZTY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WSMWdS1tBsndSKly_Q084vHJ9V-M2sB-dsq1Yua6ZTY.png?width=108&crop=smart&auto=webp&s=b3fa1e4a31ebf9a6d864f530fdd10814f6d32e3a', 'width': 108}, {'height': 108, 'url': 'h... |
SmallThinker-21B-A3B-Instruct-QAT version | 78 | The larger SmallThinker MoE has been through a quantization-aware training process. It was uploaded to the same gguf repo a bit later.
- https://huggingface.co/PowerInfer/SmallThinker-21BA3B-Instruct-GGUF/blob/main/SmallThinker-21B-A3B-Instruct-QAT.Q4_0.gguf
In llama.cpp m2 air 16gb, with the `sudo sysctl iogpu.wired_... | 2025-08-03T05:53:55 | https://huggingface.co/PowerInfer/SmallThinker-21BA3B-Instruct-GGUF/blob/main/SmallThinker-21B-A3B-Instruct-QAT.Q4_0.gguf | Aaaaaaaaaeeeee | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mgbprh | false | null | t3_1mgbprh | /r/LocalLLaMA/comments/1mgbprh/smallthinker21ba3binstructqat_version/ | false | false | default | 78 | {'enabled': False, 'images': [{'id': '45xh02HDqqpsqpvhy9Hshpnf7PYGTT5vaQkpbv1dIDU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/45xh02HDqqpsqpvhy9Hshpnf7PYGTT5vaQkpbv1dIDU.png?width=108&crop=smart&auto=webp&s=97ed6aedc23ef593aadb95c2316196f396bc8e65', 'width': 108}, {'height': 116, 'url': 'h... |
Running LLMs locally and flawlessly like copilot or Claude chat or cline. | 0 | If I want to run qwen3 coder or any other AI model that rivals Claude 4 Sonnet locally, what are the ideal system requirements to run it flawlessly? How much RAM? Which motherboard? Recommended GPU and CPU.
If someone has experience running the LLMs locally, please share.
Thanks.
PS: My current system specs are:
... | 2025-08-03T05:44:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mgbk2y/running_llms_locally_and_flawlessly_like_copilot/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgbk2y | false | null | t3_1mgbk2y | /r/LocalLLaMA/comments/1mgbk2y/running_llms_locally_and_flawlessly_like_copilot/ | false | false | self | 0 | null |
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning | 5 | *The exponential growth in demand for GPU computing resources has created an urgent need for automated CUDA optimization strategies. While recent advances in LLMs show promise for code generation, current SOTA models achieve low success rates in improving CUDA speed. In this paper, we introduce CUDA-L1, an automated re... | 2025-08-03T05:00:24 | https://arxiv.org/abs/2507.14111v4 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1mgatd6 | false | null | t3_1mgatd6 | /r/LocalLLaMA/comments/1mgatd6/cudal1_improving_cuda_optimization_via/ | false | false | default | 5 | null |
Seeking a way to implement Low-Maintenance, Fully Local RAG Stack for a 16GB VRAM Setup (36k Arabic epub Docs) | 3 | Hey everyone,
I'm looking for advice on building a robust, self-hosted RAG system with a strong emphasis on **long-term, low-maintenance operation**. My goal is to create a powerful knowledge engine that I can "set and forget" as much as possible, without needing constant daily troubleshooting.
The entire system must... | 2025-08-03T04:35:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mgadmz/seeking_a_way_to_implement_lowmaintenance_fully/ | rfiraz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mgadmz | false | null | t3_1mgadmz | /r/LocalLLaMA/comments/1mgadmz/seeking_a_way_to_implement_lowmaintenance_fully/ | false | false | self | 3 | null |
I made a prebuilt windows binary for ik_llama.cpp | 37 | [https://huggingface.co/X5R/ik\_llama.cpp](https://huggingface.co/X5R/ik_llama.cpp) | 2025-08-03T04:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mga3ox/i_made_a_prebuilt_windows_binary_for_ik_llamacpp/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mga3ox | false | null | t3_1mga3ox | /r/LocalLLaMA/comments/1mga3ox/i_made_a_prebuilt_windows_binary_for_ik_llamacpp/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': '3NAxKw7OcROuID1oQUkk1MnHTpmZCBizjeUqjivHzO0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3NAxKw7OcROuID1oQUkk1MnHTpmZCBizjeUqjivHzO0.png?width=108&crop=smart&auto=webp&s=dbe1684043115eb539138a0a7146d28825949011', 'width': 108}, {'height': 116, 'url': 'h... |
Recent Qwen Models More Pro-Liberally Aligned? | 0 | If that's the case, this is sad news indeed. I hope Qwen will reconsider their approach in the future.
I don't care either way, but when I ask the AI to summarize an article, I don't want it to preach to me / offer thoughts on how 'balanced' or 'trustworthy' the piece is.
I just want a straightforward summary of the... | 2025-08-03T03:22:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mg916l/recent_qwen_models_more_proliberally_aligned/ | Southern_Sun_2106 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg916l | false | null | t3_1mg916l | /r/LocalLLaMA/comments/1mg916l/recent_qwen_models_more_proliberally_aligned/ | false | false | self | 0 | null |
Best Vibe Code tools that are free and use your own local LLM as of August 2025? | 0 | I've seen Cursor and how it works, and it looks pretty cool, but I rather use my own local hosted LLMs and not pay a usage fee to a 3rd party company, especially tools that integrate with ollama's API.
Does anybody know of any good Vibe Coding (for Windows) tools, as good or better than Cursor, that run on your own lo... | 2025-08-03T02:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mg8f1r/best_vibe_code_tools_that_are_free_and_use_your/ | StartupTim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg8f1r | false | null | t3_1mg8f1r | /r/LocalLLaMA/comments/1mg8f1r/best_vibe_code_tools_that_are_free_and_use_your/ | false | false | self | 0 | null |
OpenAI OSS models appeared on HF for a wee moment apparently. | 1 | There's some follow up chitchat on HN.
https://news.ycombinator.com/item?id=44758511 | 2025-08-03T02:32:02 | https://x.com/main_horse/status/1951201925778776530 | Suspicious_Young8152 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mg832v | false | null | t3_1mg832v | /r/LocalLLaMA/comments/1mg832v/openai_oss_models_appeared_on_hf_for_a_wee_moment/ | false | false | default | 1 | null |
ccproxy - Route Claude Code requests to any LLM while keeping your MAX plan | 5 | I've been using Claude Code with my MAX plan and kept running into situations where I wanted to route specific requests to different models without changing my whole setup.
Large context requests would hit Claude's limits, and running compaction so often and having Claude lose important context was a frustrating experi... | 2025-08-03T02:31:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mg82el/ccproxy_route_claude_code_requests_to_any_llm/ | _kintsu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg82el | false | null | t3_1mg82el | /r/LocalLLaMA/comments/1mg82el/ccproxy_route_claude_code_requests_to_any_llm/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '2bOBf4OmqfxCEzVm6KZBaTd34lmAdlTU4rqJVR2YILo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2bOBf4OmqfxCEzVm6KZBaTd34lmAdlTU4rqJVR2YILo.png?width=108&crop=smart&auto=webp&s=450f2c799f15b29ec21139264bc6b6bcb57973e4', 'width': 108}, {'height': 108, 'url': 'h... |
Announcing Olla - LLM Load Balancer, Proxy & Model Unifier for Ollama / LM Studio & OpenAI Compatible backends | 66 | We've been working on an LLM proxy, balancer & model unifier based on a few other projects we've created in the past (scout, sherpa) to enable us to run several ollama / lmstudio backends and serve traffic for local-ai.
This was primarily after running into the same issues across several organisations - managing mult... | 2025-08-03T02:14:39 | https://www.reddit.com/gallery/1mg7qpa | 2shanigans | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mg7qpa | false | null | t3_1mg7qpa | /r/LocalLLaMA/comments/1mg7qpa/announcing_olla_llm_load_balancer_proxy_model/ | false | false | 66 | null | |
Mac + Blackwell 👀 | 161 | It's a WIP, but it's looking like it may be possible to pair Macs with NVIDIA soon!
Tweet: [https://x.com/anemll/status/1951307167417639101](https://x.com/anemll/status/1951307167417639101)
Repo: [https://github.com/anemll/anemll](https://github.com/anemll/anemll)
| 2025-08-03T01:51:15 | Accomplished_Ad9530 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mg7abc | false | null | t3_1mg7abc | /r/LocalLLaMA/comments/1mg7abc/mac_blackwell/ | false | false | default | 161 | {'enabled': True, 'images': [{'id': 'u2mr83o6npgf1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/u2mr83o6npgf1.png?width=108&crop=smart&auto=webp&s=d9ff42619aaccace2d4ed7a908fc57868515ac74', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/u2mr83o6npgf1.png?width=216&crop=smart&auto=web... | |
Mac + Blackwell 👀 | 1 | It's a WIP, but it's looking like it may be possible to pair Macs with NVIDIA soon!
Repo: [https://github.com/anemll/anemll](https://github.com/anemll/anemll) | 2025-08-03T01:47:59 | https://x.com/anemll/status/1951307167417639101 | Accomplished_Ad9530 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mg784s | false | null | t3_1mg784s | /r/LocalLLaMA/comments/1mg784s/mac_blackwell/ | false | false | default | 1 | null |
Any news on updated Qwen3-8B/14B versions? | 39 | Since Qwen3-235B-A22B and Qwen3-30B-A3B have been updated, is there any word on similar updates for Qwen3-8B or Qwen3-14B? | 2025-08-03T01:32:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mg6xia/any_news_on_updated_qwen38b14b_versions/ | zyxwvu54321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg6xia | false | null | t3_1mg6xia | /r/LocalLLaMA/comments/1mg6xia/any_news_on_updated_qwen38b14b_versions/ | false | false | self | 39 | null |
Any news on updated Qwen3-8B/14B versions? | 1 | Since
Qwen3-235B-A22B and
Qwen3-30B-A3B have been updated, is there any word on similar updates for Qwen3-8B or Qwen3-14B?
OpenAI would easily have the best open source if the Horizon beta/Lobster models from OpenRouter and WebDev Arena were theirs | 1 | Horizon beta is currently available on OpenRouter and seems to be the newest checkpoint of the model, being simply more baked and intelligent than the one they released a day ago called Horizon alpha. It outperforms Kimi K2 and other top-performing base models. Lobster was a reasoning model from webdev about a we...
Quasar, Horizon, "Singularity?", Diseases & Theory | 0 | **I'll be blunt:**
I have multiple pathologies (five chronic illnesses, one of which is persistent and severe autoimmune). The only two models that have successfully helped me find avenues for improvement are Deepseek R1 and Perplexity Pro. Personally, during my interactions with Horizon Beta, I felt both borderline u... | 2025-08-03T00:44:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mg5zcx/quasar_horizon_singularity_diseases_theory/ | Ok_Ninja7526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg5zcx | false | null | t3_1mg5zcx | /r/LocalLLaMA/comments/1mg5zcx/quasar_horizon_singularity_diseases_theory/ | false | false | 0 | null | |
I created a persistent memory for an AI assistant I'm developing, and am releasing the memory system | 288 | 🚀 I just open-sourced a fully working persistent memory system for AI assistants!
🧠 Features:
- Real-time memory capture across apps (LM Studio, VS Code, etc.)
- Semantic search via vector embeddings
- Tool call logging for AI self-reflection
- Cross-platform and fully tested
- Open source and modular
... | 2025-08-03T00:41:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mg5xlb/i_created_a_persistent_memory_for_an_ai_assistant/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg5xlb | false | null | t3_1mg5xlb | /r/LocalLLaMA/comments/1mg5xlb/i_created_a_persistent_memory_for_an_ai_assistant/ | false | false | self | 288 | {'enabled': False, 'images': [{'id': '6rpXOo6FUHMg7nBmpxnDt1W_rbBXLhU2vtCgMgpecfE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6rpXOo6FUHMg7nBmpxnDt1W_rbBXLhU2vtCgMgpecfE.png?width=108&crop=smart&auto=webp&s=b41ef9face9075e71937a4cd3b28923245b39a29', 'width': 108}, {'height': 108, 'url': 'h... |
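For context on the "semantic search via vector embeddings" feature above, here is a minimal sketch of the idea — cosine-similarity retrieval over stored embeddings. This is an illustration, not the project's actual code; the hard-coded 2-D embeddings stand in for real model embeddings.

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class MemoryStore:
    """Toy persistent-memory index: stores (text, embedding) pairs and
    returns the most similar memories for a query embedding."""
    def __init__(self):
        self.items = []

    def add(self, text, embedding):
        self.items.append((text, embedding))

    def search(self, query_embedding, top_k=3):
        scored = [(cosine(query_embedding, emb), text) for text, emb in self.items]
        scored.sort(reverse=True)
        return [text for _, text in scored[:top_k]]

store = MemoryStore()
store.add("user prefers dark mode", [1.0, 0.0])
store.add("user works in VS Code", [0.0, 1.0])
print(store.search([0.9, 0.1], top_k=1))  # → ['user prefers dark mode']
```

A real system would persist `items` to disk and use an embedding model to produce the vectors; the retrieval logic stays the same.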
Thinking or Instruct? | 7 | I honestly don't know which one is better suited for things like medical, philosophical, historical topics, or text interpretation...
It's something I've never been clear about.
For example, when I've used Deepseek, sometimes I feel that putting it into "thinking" mode doesn't add much, but I haven't noticed a clea... | 2025-08-03T00:34:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mg5scj/thinking_or_instruct/ | 9acca9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg5scj | false | null | t3_1mg5scj | /r/LocalLLaMA/comments/1mg5scj/thinking_or_instruct/ | false | false | self | 7 | null |
Quasar, Horizon ,"Singularity?", Disease & Theory | 0 | 2025-08-03T00:32:04 | Ok_Ninja7526 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mg5qti | false | null | t3_1mg5qti | /r/LocalLLaMA/comments/1mg5qti/quasar_horizon_singularity_disease_theory/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '0ycvg1y6vogf1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/0ycvg1y6vogf1.png?width=108&crop=smart&auto=webp&s=e5c4330135e846fbf948c39a509f2422fc002155', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/0ycvg1y6vogf1.png?width=216&crop=smart&auto=we... | ||
How do I get Qwen 3 to stop asking terrible questions? | 16 | Working with Qwen3-235B-A22B-Instruct-2507, I am repeatedly running into what appears to be a cluster of similar issues on a fairly regular basis.
If I do anything which requires the model to ask clarifying questions, it frequently generates horrible questions, and the bad ones are almost always of the either/or variety.
... | 2025-08-02T23:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mg4lxw/how_do_i_get_qwen_3_to_stop_asking_terrible/ | TastesLikeOwlbear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg4lxw | false | null | t3_1mg4lxw | /r/LocalLLaMA/comments/1mg4lxw/how_do_i_get_qwen_3_to_stop_asking_terrible/ | false | false | self | 16 | null |
Local Coding Models hardware suggestions | 1 | [removed] | 2025-08-02T23:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mg4e3n/local_coding_models_hardware_suggestions/ | Recent-Success-1520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg4e3n | false | null | t3_1mg4e3n | /r/LocalLLaMA/comments/1mg4e3n/local_coding_models_hardware_suggestions/ | false | false | self | 1 | null |
64GB M1 Max, which GLM-4.5-Air? | 2 | So many versions! I saw something about how the DWQ versions are best, but then obviously MLX *seems* like it would be best? And what quantization version? | 2025-08-02T23:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mg44ya/64gb_m1_max_which_glm45air/ | maxiedaniels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg44ya | false | null | t3_1mg44ya | /r/LocalLLaMA/comments/1mg44ya/64gb_m1_max_which_glm45air/ | false | false | self | 2 | null
Is this setup sufficient? | 1 | Non-techie, so forgive my ignorance.
Looking to get a local LLM and learn Python.
Is this setup optimal for the purpose, or is it overkill?
- Apple m4 pro chip
- 14 core CPU, 20 core GPU
- 48GB unified memory.
- One TB SSD storage
Eventually would like to advance to training my own LLM on a Linux with Nvidia... | 2025-08-02T23:09:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mg40u1/is_this_set_up_sufficient/ | Wild-Muffin9190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg40u1 | false | null | t3_1mg40u1 | /r/LocalLLaMA/comments/1mg40u1/is_this_set_up_sufficient/ | false | false | self | 1 | null |
HRM solved thinking more than current "thinking" models (this needs more hype) | 330 | Article: https://medium.com/@causalwizard/why-im-excited-about-the-hierarchical-reasoning-model-8fc04851ea7e
Context:
This insane new paper got 40% on ARC-AGI with an absolutely tiny model (27M params). It's seriously a revolutionary new paper that got way less attention than it deserved.
https://arxiv.org/abs/2506.... | 2025-08-02T22:44:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mg3i48/hrm_solved_thinking_more_than_current_thinking/ | Charuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg3i48 | false | null | t3_1mg3i48 | /r/LocalLLaMA/comments/1mg3i48/hrm_solved_thinking_more_than_current_thinking/ | false | false | self | 330 | {'enabled': False, 'images': [{'id': 'okA4gpxX7B_Xt3c3UIV2XiEN4mivP8AjVhN1Fsvjoo0', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/okA4gpxX7B_Xt3c3UIV2XiEN4mivP8AjVhN1Fsvjoo0.png?width=108&crop=smart&auto=webp&s=a32ed96622f166ae158780da40b7af8d982e72f9', 'width': 108}, {'height': 132, 'url': 'h... |
Easily installable GUI for ML-powered audio transcription on AMD GPU ? | 1 | Hi,
Every app I found for locally transcribing audio with ML is either too hard to install or only supports NVIDIA GPUs.
Here's what I looked into : noScribe, aTrain, vibe, mystiq, whisper-gui, biniou.
Know any others?
Thanks | 2025-08-02T22:42:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mg3g2e/easily_installable_gui_for_mlpowered_audio/ | KaKi_87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg3g2e | false | null | t3_1mg3g2e | /r/LocalLLaMA/comments/1mg3g2e/easily_installable_gui_for_mlpowered_audio/ | false | false | self | 1 | null |
Note to the Qwen team re. the new 30B A3B Coder and Instruct versions: Coder is lobotomized when compared to Instruct | 60 | My own testing results are backed up by the private tests run on dubesor.de. Coder is significantly worse in coding related knowledge than Instruct. If Coder is fine tuned from Instruct, I can only surmise that the additional training on a plethora of programming languages and agentic abilities has resulted in a good d... | 2025-08-02T22:38:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mg3d62/note_to_the_qwen_team_re_the_new_30b_a3b_coder/ | jackdareel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg3d62 | false | null | t3_1mg3d62 | /r/LocalLLaMA/comments/1mg3d62/note_to_the_qwen_team_re_the_new_30b_a3b_coder/ | false | false | self | 60 | null |
How are people running an MLX-compatible OpenAI API server locally? | 3 | I'm curious how folks are setting up an OpenAI-compatible API server locally that uses MLX models? I don't see an official way and don't want to use LM Studio. What options do I have here?
============================
Second, currently, every time I try to download a model, I get prompted to acknowledge Hugging Face... | 2025-08-02T21:44:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mg26g0/how_are_people_running_an_mlxcompatible_openai/ | discoveringnature12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg26g0 | false | null | t3_1mg26g0 | /r/LocalLLaMA/comments/1mg26g0/how_are_people_running_an_mlxcompatible_openai/ | false | false | self | 3 | null |
Closest Local Version of OpenAI's Agent Mode? | 5 | I've tried looking for an application where you can ask it to search/do something and see it actually do it (a GUI showing the browser as it goes through things) just like chatgpt's agent mode, but haven't found anything similar for local yet. Is it too early for that or does anyone know of any projects like that curre... | 2025-08-02T21:42:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mg24nd/closest_local_version_of_openais_agent_mode/ | RabbitEater2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg24nd | false | null | t3_1mg24nd | /r/LocalLLaMA/comments/1mg24nd/closest_local_version_of_openais_agent_mode/ | false | false | self | 5 | null |
GNOME AI Virtual Assistant "Newelle" Reaches Version 1.0 Milestone | 22 | 2025-08-02T21:11:21 | https://www.phoronix.com/news/GNOME-AI-Assistant-1.0 | FastDecode1 | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1mg1evr | false | null | t3_1mg1evr | /r/LocalLLaMA/comments/1mg1evr/gnome_ai_virtual_assistant_newelle_reaches/ | false | false | default | 22 | null | |
Any news about the open source models that OpenAI promised to release? | 33 | Sam Altman promised imminent release of open source models. It seems we haven’t heard anything new in the past few weeks, have we?
Guide for GPU Purchase for Local LLM? | 6 | Trying to balance cost, model size, and token throughput on a PC build (Linux Mint).
Aiming to keep my gpu cost as close to 1k (or lower) as possible - which would you recommend?
16 GB (go for fast enough, low power, cheaper, Nvidia 5060 series, runs 500).
16 GB (go for speed, Nvidia 5070 series - new runs around 70... | 2025-08-02T21:08:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mg1cg5/guide_for_gpu_purchase_for_local_llm/ | DanManPanther | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg1cg5 | false | null | t3_1mg1cg5 | /r/LocalLLaMA/comments/1mg1cg5/guide_for_gpu_purchase_for_local_llm/ | false | false | self | 6 | null |
Convert your ChatGPT exported conversations to something that Open-WebUI can import | 24 | In the spirit of local AI, I prefer to migrate all of my existing ChatGPT conversations to Open-WebUI. Unfortunately, the Open-WebUI import function doesn't quite process them correctly.
This is a simple python script that attempts to reformat your ChatGPT exported conversations into a format that Open-WebU... | 2025-08-02T20:56:15 | https://github.com/scubanarc/chatgpt-to-open-webui/ | scubanarc | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mg12k4 | false | null | t3_1mg12k4 | /r/LocalLLaMA/comments/1mg12k4/convert_your_chatgtp_exported_conversations_to/ | false | false | default | 24 | {'enabled': False, 'images': [{'id': 'HUDVH48IUY-iR1kZ-2wF0SfxzexjRfgIHB_lQ45z_o0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HUDVH48IUY-iR1kZ-2wF0SfxzexjRfgIHB_lQ45z_o0.png?width=108&crop=smart&auto=webp&s=ed1ab2121b7badab4580a484ba08cc844d5e16e5', 'width': 108}, {'height': 108, 'url': 'h... |
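The core of such a conversion is walking the `mapping` tree inside ChatGPT's exported `conversations.json`, where each node carries a `message` plus `parent`/`children` links. A minimal sketch of that traversal (this is not the linked script, and the flat `{"role", "content"}` output is a generic stand-in rather than Open-WebUI's exact import schema):

```python
def flatten_conversation(conv):
    """Walk a ChatGPT-export conversation's `mapping` tree from the
    root down the first-child chain and return a flat message list."""
    mapping = conv["mapping"]
    # the root node is the one without a parent
    node_id = next(k for k, v in mapping.items() if v.get("parent") is None)
    messages = []
    while node_id is not None:
        node = mapping[node_id]
        msg = node.get("message")
        if msg and msg.get("content", {}).get("parts"):
            part = msg["content"]["parts"][0]
            if isinstance(part, str) and part.strip():
                messages.append({"role": msg["author"]["role"], "content": part})
        children = node.get("children") or []
        node_id = children[0] if children else None
    return messages

# tiny synthetic export fragment for illustration
demo = {"mapping": {
    "root": {"parent": None, "message": None, "children": ["m1"]},
    "m1": {"parent": "root", "children": ["m2"],
           "message": {"author": {"role": "user"},
                       "content": {"parts": ["hi"]}}},
    "m2": {"parent": "m1", "children": [],
           "message": {"author": {"role": "assistant"},
                       "content": {"parts": ["hello"]}}},
}}
print(flatten_conversation(demo))
```

Following only the first child drops regenerated branches; a fuller converter would pick the branch ending at the conversation's `current_node`.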
I have built my own, poor mans Lovable - testing out Cerebras AI | 9 | I decided to test Cerebras and their speed is indeed impressive: 2.5 sec to generate a real-world app with tailwind frontend. I use Docker to containerize the apps built. It is a naive MVP but I need your feedback guys! | 2025-08-02T20:47:07 | https://github.com/restyler/poor-mans-lovable | superjet1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mg0uw8 | false | null | t3_1mg0uw8 | /r/LocalLLaMA/comments/1mg0uw8/i_have_built_my_own_poor_mans_lovable_testing_out/ | false | false | default | 9 | {'enabled': False, 'images': [{'id': '2Ld6AawMyp37grKrz1lk2nxgwwwZ3ZxUmD2VwjK2SlE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2Ld6AawMyp37grKrz1lk2nxgwwwZ3ZxUmD2VwjK2SlE.png?width=108&crop=smart&auto=webp&s=2cf2bdf2998419d6d274f237ffbcc2b5ad34d663', 'width': 108}, {'height': 108, 'url': 'h... |
How the best image generation models work from the inside ? | 3 | So i was reading the latest paper from FoundationVision (the holders of the best paper award in NeurIPs2024 and the authors of the VAR paper) named [UniTok](https://arxiv.org/abs/2502.20321) and it talks about a new MultiCodebook design for the VQ-VAE tokenizer of images and how it provides better results in understand... | 2025-08-02T20:46:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mg0ur7/how_the_best_image_generation_models_work_from/ | Severe-Awareness829 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg0ur7 | false | null | t3_1mg0ur7 | /r/LocalLLaMA/comments/1mg0ur7/how_the_best_image_generation_models_work_from/ | false | false | self | 3 | null |
Alibaba not doing too bad at coding according to lmarena | 7 | |Rank|Model|Score|95% CI|Votes|Company|License|
|:-|:-|:-|:-|:-|:-|:-|
|1|gemini 2.5 pro|1474|±8|7,178|Goog||
|1|qwen3 235b a22b instruct 2507|1464|±18|1,089|Alibaba|Apache|
|2|o3 2025 04 16|1445|±7|9,877|Closed AI||
|2|grok 4 2502|1442|±10|4,063|xAI||
|2|qwen3 235b a22b thinking 2507|1442|±20|917|Alibaba|Ap... | 2025-08-02T20:44:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mg0sbe/alibaba_not_doing_to_bad_at_coding_according_to/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg0sbe | false | null | t3_1mg0sbe | /r/LocalLLaMA/comments/1mg0sbe/alibaba_not_doing_to_bad_at_coding_according_to/ | false | false | self | 7 | null
Another reason to love local: Main online services are down | 1 | [deleted] | 2025-08-02T20:34:23 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1mg0kdm | false | null | t3_1mg0kdm | /r/LocalLLaMA/comments/1mg0kdm/another_reason_to_love_local_main_online_services/ | false | false | default | 1 | null | ||
Alibaba not doing too bad at coding according to lmarena | 1 | Style control removed:
|Rank|Model|Score|95% CI|Votes|Company|License|
|1|gemini 2.5 pro|1474|±8|7,178|Goog||
|1|qwen3 235b a22b instruct 2507|1464|±18|1,089|Alibaba|Apache|
|2|o3 2025 04 16|1445|±7|9,877|Closed AI||
|2|grok 4 2502|1442|±10|4,063|xAI||
|2|qwen3 235b a22b thinking 2507|1442|±20|917|Alibaba|Apache|
|2|... | 2025-08-02T20:29:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mg0g2p/alibaba_not_doing_to_bad_at_coding_according_to/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg0g2p | false | null | t3_1mg0g2p | /r/LocalLLaMA/comments/1mg0g2p/alibaba_not_doing_to_bad_at_coding_according_to/ | false | false | self | 1 | null |
Alibaba not doing too bad at coding according to lmarena | 1 | Style control removed.
|Rank|Model|Score|95% CI|Votes|Company|License|
|1|gemini 2.5 pro|1474|±8|7,178|Goog||
|1|qwen3 235b a22b instruct 2507|1464|±18|1,089|Alibaba|Apache|
|2|o3 2025 04 16|1445|±7|9,877|Closed AI||
|2|grok 4 2502|1442|±10|4,063|xAI||
|2|qwen3 235b a22b thinking 2507|1442|±20|917|Alibaba|Apache... | 2025-08-02T20:19:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mg085b/alibaba_not_doing_to_bad_at_coding_according_to/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg085b | false | null | t3_1mg085b | /r/LocalLLaMA/comments/1mg085b/alibaba_not_doing_to_bad_at_coding_according_to/ | false | false | self | 1 | null |
Alibaba not doing too bad at coding according to lmarena | 1 | Style control removed.
|Rank|Model|Score|95% CI|Votes|Company|License|
|1|[gemini 2.5 pro](http://aistudio.google.com/app/prompts/new_chat?model=gemini-2.5-pro)|1474|±8|7,178|Goog||
|1|[qwen3 235b a22b instruct 2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507)|1464|±18|1,089|Alibaba|Apache|
|2|[o3 2025 ... | 2025-08-02T20:16:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mg05rd/alibaba_not_doing_to_bad_at_coding_according_to/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mg05rd | false | null | t3_1mg05rd | /r/LocalLLaMA/comments/1mg05rd/alibaba_not_doing_to_bad_at_coding_according_to/ | false | false | self | 1 | null |
Experience with GLM-4.5-Air + claude code? | 12 | Hi guys,
I am actually running the model with vLLM (4x3090), and even if it's quite early, I'm quite impressed: the model isn't "lost" and can handle some of my tasks through cc (python code modifications). There are some errors during execution and the model needs to retry, but I need to do more tests to better unde...
Alibaba not doing too bad at coding according to lmarena | 1 |
|Rank|Model|Score|95% CI|Votes|Company|License|
|1|[gemini 2.5 pro](http://aistudio.google.com/app/prompts/new_chat?model=gemini-2.5-pro)|1474|±8|7,178|Goog||
|1|[qwen3 235b a22b instruct 2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507)|1464|±18|1,089|Alibaba|Apache|
|2|[o3 2025 04 16](https://ope... | 2025-08-02T20:07:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mfzy3l/alibaba_not_doing_too_bad_at_coding_according_to/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfzy3l | false | null | t3_1mfzy3l | /r/LocalLLaMA/comments/1mfzy3l/alibaba_not_doing_too_bad_at_coding_according_to/ | false | false | self | 1 | null |
How I Built Medical AI by Solving the Radiation Dose Problem | 5 |
When our radiology department rejected another batch of low-resolution X-rays because they couldn't see critical bone fractures, I watched $15,000 worth of re-imaging appointments get scheduled for the next week. Each patient would get 16x more radiation exposure just to see what should have been visible the first ti... | 2025-08-02T20:02:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mfzu3d/how_i_built_medical_ai_by_solving_the_radiation/ | No-Perception-9919 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfzu3d | false | null | t3_1mfzu3d | /r/LocalLLaMA/comments/1mfzu3d/how_i_built_medical_ai_by_solving_the_radiation/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'OVuemuVygw_3HoiDF0PR0GpK77a-Bg-aRvKWskLNmyM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OVuemuVygw_3HoiDF0PR0GpK77a-Bg-aRvKWskLNmyM.jpeg?width=108&crop=smart&auto=webp&s=ecfe2ffa2015cbfbedc8dfb6cbcd4cf0d32f0bee', 'width': 108}, {'height': 162, 'url': '... |
Open-Source Project for Distributed Inference Management | 4 | I just launched GridLLM (https://www.producthunt.com/products/gridllm?launch=gridllm), an open-source orchestration layer for distributing inference requests across your existing Ollama instances! This project spawned from a need to manage three different inference servers at work, and the headache that resulted from t... | 2025-08-02T19:46:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mfzg8h/opensource_project_for_distributed_inference/ | Choice_Nature9658 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfzg8h | false | null | t3_1mfzg8h | /r/LocalLLaMA/comments/1mfzg8h/opensource_project_for_distributed_inference/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'QH_yXcAnNLx7g0BAAKELuFQ9gaGsbXx2ckRASk4Jj0c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QH_yXcAnNLx7g0BAAKELuFQ9gaGsbXx2ckRASk4Jj0c.png?width=108&crop=smart&auto=webp&s=a422ad10d8ba96d4fcb866b6ade60210133fea91', 'width': 108}, {'height': 108, 'url': 'h... |
OpenAI RAG API (File Search): an experimental study | 0 | This set of experiments was conducted about half a year ago, and we were encouraged to share it with the community. Summary of the experiments:
(1) Lihua world dataset: conversation data, all texts
(2) In previous studies, Graph RAG (and variants) showed advantages over "naïve" RAG.
(3) Using OpenAI RAG API (File Searc... | 2025-08-02T19:44:35 | https://www.reddit.com/gallery/1mfzezz | DueKitchen3102 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mfzezz | false | null | t3_1mfzezz | /r/LocalLLaMA/comments/1mfzezz/openai_rag_api_file_search_an_experimental_study/ | false | false | 0 | null | |
Chatterbox TTS on AMD | 1 | Is it possible to run Chatterbox TTS on an AMD 9070 XT? I tried running it the other day, but it would crash immediately before I could even get the UI open, and I was wondering if it's just my system. | 2025-08-02T19:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mfz1k2/chatterbox_tts_on_amd/ | StrangeMan060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfz1k2 | false | null | t3_1mfz1k2 | /r/LocalLLaMA/comments/1mfz1k2/chatterbox_tts_on_amd/ | false | false | self | 1 | null
Using Jan (first time) vs LM Studio | 0 | I'm just having this problem:
I'm trying to run a big model (Qwen3-30b-A3B-instruct-2507-Q8_0) in Jan; that model works in LM Studio by default.
I know that it's big for my PC: 8 GB of VRAM and 32 GB of RAM.
I'm getting this error:
Failed to load llama-server: llamacpp error: Error: ggml_backend_cuda_buffer_type_allo... | 2025-08-02T19:19:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mfyu90/using_jan_first_time_vs_lmstudio/ | 9acca9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfyu90 | false | null | t3_1mfyu90 | /r/LocalLLaMA/comments/1mfyu90/using_jan_first_time_vs_lmstudio/ | false | false | self | 0 | null |
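For rough intuition about why a load like this fails on 8 GB of VRAM, a back-of-envelope footprint check helps (Q8_0 stores roughly 8.5 bits per weight; the KV cache and runtime overhead come on top of this):

```python
def approx_weight_gib(n_params_billion, bits_per_weight):
    """Approximate quantized weight footprint in GiB: params × bits / 8.
    Ignores KV cache, context, and runtime overhead."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 2**30

q8 = approx_weight_gib(30, 8.5)   # Q8_0 stores ~8.5 bits/weight
print(f"A 30B model at Q8_0 needs ~{q8:.0f} GiB just for weights")
```

That is roughly 30 GiB — far more than 8 GB of VRAM, so either most layers must be offloaded to system RAM or a smaller quant must be used; LM Studio likely "works by default" because it splits the model across RAM and VRAM automatically.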
LocalLLM for movies | 0 | Are local llms fast and powerful enough to do analysis on movies in real time?
Say you can tell the LLM to skip scenes with certain actors, and then the LLM does scene analysis to skip those parts?
If not today, then when will it be possible to do that? | 2025-08-02T18:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mfy924/localllm_for_movies/ | ImaginaryRea1ity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfy924 | false | null | t3_1mfy924 | /r/LocalLLaMA/comments/1mfy924/localllm_for_movies/ | false | false | self | 0 | null |
How do I get this information into an AI to make a video? | 0 | I'll need to use free tools. I am looking to make a video with this content. How do I do that? What tools should I use? How do I format this information to be processed by an AI?
\[Begin\]
The Globe wants you to believe everything opposite of physics:
1) Heliocentrism teaches large bodies of liquid water curves... | 2025-08-02T18:52:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mfy6vo/how_do_i_get_this_information_into_an_ai_to_make/ | DivergentDroid1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfy6vo | false | null | t3_1mfy6vo | /r/LocalLLaMA/comments/1mfy6vo/how_do_i_get_this_information_into_an_ai_to_make/ | false | false | self | 0 | null |
Dutch LLM | 0 | Hi, I'm developing a product that uses AI, but it's entirely in Dutch. Which AI model would you guys recommend for Dutch language tasks specifically? | 2025-08-02T18:50:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mfy5qs/dutch_llm/ | Sudden-Bath-7378 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfy5qs | false | null | t3_1mfy5qs | /r/LocalLLaMA/comments/1mfy5qs/dutch_llm/ | false | false | self | 0 | null |
Are there any limits on Deep Research mode on Qwen Chat? | 0 | Or is it unlimited on [chat.qwen.ai](http://chat.qwen.ai)? | 2025-08-02T18:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mfxjd5/is_there_any_limits_on_deep_research_mode_on_qwen/ | Sostrene_Blue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfxjd5 | false | null | t3_1mfxjd5 | /r/LocalLLaMA/comments/1mfxjd5/is_there_any_limits_on_deep_research_mode_on_qwen/ | false | false | self | 0 | null
Are there any open-source LLMs better than the free tier of ChatGPT (4o and 4o mini)? | 0 | I just bought a new PC; it's not primarily for AI, but I wanna try out LLMs. I'm not too familiar with the different models, so I'd appreciate it if someone could provide recommendations.
Pc specs: 5070 Ti 16gb + i7 14700 32 gb ddr5 6000 MHz. | 2025-08-02T18:17:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mfxdlg/are_there_any_open_source_llms_better_than_free/ | Ok-Championship7986 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfxdlg | false | null | t3_1mfxdlg | /r/LocalLLaMA/comments/1mfxdlg/are_there_any_open_source_llms_better_than_free/ | false | false | self | 0 | null |
Qwen MoE in C | 63 | Just shipped something I'm really excited about! 🚀
I was scrolling through my feed and saw Sebastian Raschka, PhD's incredible Qwen3 MoE implementation in PyTorch. The educational clarity of his code just blew me away - especially how he broke down the Mixture of Experts architecture in his LLMs-from-scratch repo.
Th... | 2025-08-02T18:14:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mfxas1/qwen_moe_in_c/ | 1Hesham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfxas1 | false | null | t3_1mfxas1 | /r/LocalLLaMA/comments/1mfxas1/qwen_moe_in_c/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'WSjvG1bkXS5RFYUXSqQk_Sce1jxvdesMgCPwzgoYYNg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WSjvG1bkXS5RFYUXSqQk_Sce1jxvdesMgCPwzgoYYNg.png?width=108&crop=smart&auto=webp&s=dbd8f00d966699e72ff3a93f578256c7537d2135', 'width': 108}, {'height': 108, 'url': 'h... |
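The MoE forward pass being ported here boils down to softmax routing plus a top-k expert mix. A hedged Python sketch of that idea (not the linked C or PyTorch code; the toy scalar "experts" and fixed router logits are purely illustrative):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of logits
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, router, experts, top_k=2):
    """Route input x to the top_k experts by router probability and mix
    their outputs, re-normalising the selected gates to sum to 1."""
    probs = softmax(router(x))
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    return sum(probs[i] / norm * experts[i](x) for i in top)

# toy 1-D demo: three "experts" that scale the input, router prefers expert 0
experts = [lambda x: 2 * x, lambda x: 10 * x, lambda x: -x]
router = lambda x: [3.0, 1.0, 0.0]   # fixed logits for illustration
y = moe_forward(1.0, router, experts, top_k=2)
```

In a real model `x` is a hidden-state vector, the router is a learned linear layer, and each expert is a feed-forward block — but the gating arithmetic is exactly this.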
Four Models, One Prompt: Who Writes the Best Instructions for AI? | 2 | This is a quick blog post I put together today briefly comparing Kimi K2, Gemini 2.5 Pro, ChatGPT's throttled free-tier, and Claude 4 Sonnet | 2025-08-02T17:36:54 | https://selfenrichment.hashnode.dev/four-models-one-prompt-who-writes-the-best-instructions-for-ai | robertotomas | selfenrichment.hashnode.dev | 1970-01-01T00:00:00 | 0 | {} | 1mfwec7 | false | null | t3_1mfwec7 | /r/LocalLLaMA/comments/1mfwec7/four_models_one_prompt_who_writes_the_best/ | false | false | default | 2 | null |
100+ AI Benchmarks list | 51 | I've created an Awesome AI Benchmarks GitHub repository with 100+ benchmarks already added for different domains.
I already had a Google Sheets document with those benchmarks and their details and thought it would be great to not waste that and create an [Awesome list](https://github.com/sindresorhus/awesome).
To hav... | 2025-08-02T17:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mfwckf/100_ai_benchmarks_list/ | panilyaU | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfwckf | false | null | t3_1mfwckf | /r/LocalLLaMA/comments/1mfwckf/100_ai_benchmarks_list/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'ASQhfiygebxNf8DoDi3MvBbcoqvCP-ZjCV6b0G2_Bwg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ASQhfiygebxNf8DoDi3MvBbcoqvCP-ZjCV6b0G2_Bwg.png?width=108&crop=smart&auto=webp&s=93761f053553e9b1cfb16372cdf0e7dac51f1f5e', 'width': 108}, {'height': 108, 'url': 'h... |
WebGPU enables local LLM in the browser. Demo site with AI chat | 0 | 2025-08-02T17:19:20 | https://andreinwald.github.io/browser-llm/ | andreinwald | andreinwald.github.io | 1970-01-01T00:00:00 | 0 | {} | 1mfvzai | false | null | t3_1mfvzai | /r/LocalLLaMA/comments/1mfvzai/webgpu_enables_local_llm_in_the_browser_demo_site/ | false | false | default | 0 | null | |
WebGPU enables local LLM in the browser. Demo site with AI chat | 1 | - No need to use your OPENAI_API_KEY - it's a local model that runs on your device
- No network requests to any API
- No need to install any program
- No need to download files on your device (model is cached in browser)
- Site will ask before downloading large files (llm model) to browser cache
- ... | 2025-08-02T17:18:07 | https://andreinwald.github.io/browser-llm/ | andreinwald | andreinwald.github.io | 1970-01-01T00:00:00 | 0 | {} | 1mfvy9t | false | null | t3_1mfvy9t | /r/LocalLLaMA/comments/1mfvy9t/webgpu_enables_local_llm_in_the_browser_demo_site/ | false | false | default | 1 | null |
What would it take to support Multi-Token-Prediction (MTP) in llama.cpp? feat. GLM 4.5 | 84 | [A new PR](https://github.com/ggml-org/llama.cpp/pull/15026) was created to support GLM 4.5's models in llama.cpp, as the original, highly anticipated [\#14939](https://github.com/ggml-org/llama.cpp/pull/14939) seemed to get stuck. The new PR description reads: "**this PR will NOT attempt to implement MTP**", with grea... | 2025-08-02T17:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mfvxdo/what_would_it_take_to_support/ | Karim_acing_it | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfvxdo | false | null | t3_1mfvxdo | /r/LocalLLaMA/comments/1mfvxdo/what_would_it_take_to_support/ | false | false | self | 84 | {'enabled': False, 'images': [{'id': 'nu0lYx9MvX_F3y_OrRWNrMtDpRRlG8lzyxoJbWZ3NRg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nu0lYx9MvX_F3y_OrRWNrMtDpRRlG8lzyxoJbWZ3NRg.png?width=108&crop=smart&auto=webp&s=192b46649336ccde82b3df88264f14e8c3af5057', 'width': 108}, {'height': 108, 'url': 'h... |
Buy 1 A100 GPU or similar to rent it out for investment purposes. No technical knowledge | 1 | [removed] | 2025-08-02T17:15:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mfvw9v/buy_1_a100_gpu_or_similar_to_rent_it_out_for/ | Regular_Folk_2530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfvw9v | false | null | t3_1mfvw9v | /r/LocalLLaMA/comments/1mfvw9v/buy_1_a100_gpu_or_similar_to_rent_it_out_for/ | false | false | self | 1 | null |
UK-based: Can I colocate and rent out a single A100 GPU as a solo investor? | 1 | [removed] | 2025-08-02T17:13:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mfvu5v/ukbased_can_i_colocate_and_rent_out_a_single_a100/ | Regular_Folk_2530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfvu5v | false | null | t3_1mfvu5v | /r/LocalLLaMA/comments/1mfvu5v/ukbased_can_i_colocate_and_rent_out_a_single_a100/ | false | false | self | 1 | null |
Buy 1 A100 GPU or similar to rent it out for investment purposes. No technical knowledge | 0 | Hi all - Based in London.
I was thinking, as an investment opportunity, of buying maybe 1 GPU (like an A100) and then renting it out via vastai or similar. I do not have the time, knowledge or space to do it myself, so I was wondering if there are places where I can co-locate it (like small datacentres or similar) where they ... | 2025-08-02T17:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mfvns9/buy_1_a100_gpu_or_similar_to_rent_it_out_for/ | Regular_Folk_2530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfvns9 | false | null | t3_1mfvns9 | /r/LocalLLaMA/comments/1mfvns9/buy_1_a100_gpu_or_similar_to_rent_it_out_for/ | false | false | self | 0 | null |
Chatterbox TTS in cloud? | 0 | Hi All,
I'm quite new to local AI models, and started today by playing with Chatterbox TTS on my Mac Studio M4 (using the apple silicon version on Hugging Face). Also, hopefully this is the right reddit - I see other posts regarding Chatterbox here, so I guess it is!
It's actually working very nicely indeed, doing a ... | 2025-08-02T17:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mfvk5h/chatterbox_tts_in_cloud/ | BrotherBrutha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfvk5h | false | null | t3_1mfvk5h | /r/LocalLLaMA/comments/1mfvk5h/chatterbox_tts_in_cloud/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'PkgpYMq-5_74dXyko9Zh9FOppnVlxdQc6WIclap5mWY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PkgpYMq-5_74dXyko9Zh9FOppnVlxdQc6WIclap5mWY.jpeg?width=108&crop=smart&auto=webp&s=d2ec29473ad9a43f57f6de38e719603168628711', 'width': 108}, {'height': 113, 'url': '... |
💡 Just Finished Building a Tool That Combines Green Investing with Carbon Impact Tracking – Thoughts? | 1 | [removed] | 2025-08-02T16:57:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mfvgfo/just_finished_building_a_tool_that_combines_green/ | Round-Bluebird685 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfvgfo | false | null | t3_1mfvgfo | /r/LocalLLaMA/comments/1mfvgfo/just_finished_building_a_tool_that_combines_green/ | false | false | self | 1 | null |
AGI Could Be Our Era's Perpetual Motion Machine - Forever Out of Reach, Though Current AI Already Amazes | 0 | To be frank, AGI doesn't particularly interest or thrill me. Given current technological frameworks, I believe AGI won't arrive anytime soon without some breakthrough discovery. The models we have today would have seemed absolutely magical just five years ago.
Can anyone share something about AGI that excites you? | 2025-08-02T16:54:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mfve4v/agi_could_be_our_eras_perpetual_motion_machine/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfve4v | false | null | t3_1mfve4v | /r/LocalLLaMA/comments/1mfve4v/agi_could_be_our_eras_perpetual_motion_machine/ | false | false | self | 0 | null |
Best local LLM that fits with 12GB VRAM? | 8 | I’m using Qwen 3 14B right now but haven’t checked out any of the new Gemma models nor Phi. Would running the 30B MoE Qwen3 model be advised (I have enough system memory but not enough VRAM)? | 2025-08-02T16:41:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mfv3b0/best_local_llm_that_fits_with_12gb_vram/ | tthane50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfv3b0 | false | null | t3_1mfv3b0 | /r/LocalLLaMA/comments/1mfv3b0/best_local_llm_that_fits_with_12gb_vram/ | false | false | self | 8 | null |
Need Help: Building a University Assistant RAGbot | 1 |
Hey everyone!
I’m working on a Final Year Project where I plan to build a **chatbot for my university** using Retrieval-Augmented Generation (RAG) and possibly agentic AI features.
The goal is to create a system that can help students by answering questions and assisting with common university-related tasks.
🔍 *... | 2025-08-02T16:37:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mfuz5w/need_help_building_a_university_assistant_ragbot/ | jarrarhaidery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfuz5w | false | null | t3_1mfuz5w | /r/LocalLLaMA/comments/1mfuz5w/need_help_building_a_university_assistant_ragbot/ | false | false | self | 1 | null |
Gateway/Proxy for Claude-Code to OpenAI API compatible. | 6 | Sharing an OpenAI proxy solution for Claude-Code
[https://github.com/ziozzang/claude2openai-proxy](https://github.com/ziozzang/claude2openai-proxy)
While using Claude Code, I wanted to connect to a local model. There were tools like claude-code-router and other systems, but I couldn’t find a solid solution that worke... | 2025-08-02T16:31:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mfuu40/gatewayproxy_for_claudecode_to_openai_api/ | ziozzang0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfuu40 | false | null | t3_1mfuu40 | /r/LocalLLaMA/comments/1mfuu40/gatewayproxy_for_claudecode_to_openai_api/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'SSww-5zCchd4jUh-0DWh-Zc0KpQfP7v2XwPYRGw1fwE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SSww-5zCchd4jUh-0DWh-Zc0KpQfP7v2XwPYRGw1fwE.png?width=108&crop=smart&auto=webp&s=e43bc658beb09686e12af7368ba5a1c7c7d40071', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen Code + Qwen Coder 30b 3A is insane | 231 | This is just a little remark that if you haven't you definitely should try qwen code [https://github.com/QwenLM/qwen-code](https://github.com/QwenLM/qwen-code)
I use qwen coder and qwen 3 30b thinking while the latter still needs some copy and pasting. I'm working on and refining a script for syncing my koreader met... | 2025-08-02T16:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mfuiri/qwen_code_qwen_coder_30b_3a_is_insane/ | Flashy_Management962 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfuiri | false | null | t3_1mfuiri | /r/LocalLLaMA/comments/1mfuiri/qwen_code_qwen_coder_30b_3a_is_insane/ | false | false | self | 231 | {'enabled': False, 'images': [{'id': 'NjYLWOWVN8-HDnLfB2W4w77Codi5wABh6mT-TjOuaZ8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjYLWOWVN8-HDnLfB2W4w77Codi5wABh6mT-TjOuaZ8.png?width=108&crop=smart&auto=webp&s=d2382140736e501c0e5f725eb9004d88daaf4ddc', 'width': 108}, {'height': 108, 'url': 'h... |
Best model to use as agentic AI for RTX 4090? | 0 | I am currently doing the mcp course from huggingface, and I am planning to roll my own local agentic AI. Any idea what the BEST model I should use for an RTX 4090? I know "best" is subjective, so I am looking for two models: one for general purpose, and the other for coding. I will be building simple tools for personal use. ... | 2025-08-02T16:10:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mfubwt/best_model_to_use_as_agentic_ai_for_rtx_4090/ | Flat_Chard_3763 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfubwt | false | null | t3_1mfubwt | /r/LocalLLaMA/comments/1mfubwt/best_model_to_use_as_agentic_ai_for_rtx_4090/ | false | false | self | 0 | null |
Learn GPU AI | 0 | Hi guys, I'm quite new to this topic. Do you know where I can find info for starters who don't have a tech background? And what kind of companies are the best out there? | 2025-08-02T16:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mfu5ll/learn_gpu_ai/ | AutomaticAbility2008 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfu5ll | false | null | t3_1mfu5ll | /r/LocalLLaMA/comments/1mfu5ll/learn_gpu_ai/ | false | false | self | 0 | null |