title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Accelerating OpenAI’s gpt-oss Models, New CUDA 13.0, and More | New NVIDIA Newsletter | 0 | Another week filled with massive AI stories, let’s catch up with your weekly drop of developer news, tools, and releases. Here’s what you need to know 👇 from the NVIDIA dev comms team.
[Accelerating OpenAI’s gpt-oss Models, New CUDA 13.0, and More](https://www.linkedin.com/feed/update/urn:li:activity:7359623943172841... | 2025-08-08T19:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ml3yg9/accelerating_openais_gptoss_models_new_cuda_130/ | PDXcoder2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml3yg9 | false | null | t3_1ml3yg9 | /r/LocalLLaMA/comments/1ml3yg9/accelerating_openais_gptoss_models_new_cuda_130/ | false | false | 0 | null | |
Has GPT-5 memorised Alice’s Sister Problem? | 0 | Up to GPT-4o, LLM could not answer the question: Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?
To test GPT-5, I change the last name to Mette. The answer should be unknown. But we get Alice’s brother’s correct.
My guess is they RL tough questions :) PhD sampling to AGI.
| 2025-08-08T18:55:57 | KitchenFalcon4667 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ml3rhn | false | null | t3_1ml3rhn | /r/LocalLLaMA/comments/1ml3rhn/has_gpt5_memorised_alices_sister_problem/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'igvbZPec1znZyY7PjeFyB5OdC10sur5QfDo-1meEk8A', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/479gziwleuhf1.jpeg?width=108&crop=smart&auto=webp&s=079ff395897f40f61dad814ebc6d04ba3b8bbe06', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/479gziwleuhf1.j... | ||
TTS, AI, Offline, 6 TTS Engines - MagicMixTTS Pro - demo and full version | 0 | 2025-08-08T18:49:24 | https://youtu.be/NLHv6jED4mo?si=T3c1wBZciNKjBWMg | Mercyfulking | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ml3lew | false | {'oembed': {'author_name': 'MercyfulKing', 'author_url': 'https://www.youtube.com/@MercyfulKing', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/NLHv6jED4mo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosc... | t3_1ml3lew | /r/LocalLLaMA/comments/1ml3lew/tts_ai_offline_6_tts_engines_magicmixtts_pro_demo/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '6TNu6DIql4-V48vIXuiIcz0UaKBffibOmukubrT43ag', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6TNu6DIql4-V48vIXuiIcz0UaKBffibOmukubrT43ag.jpeg?width=108&crop=smart&auto=webp&s=c680dcf3dcb95209f3b8c21a79f1d85b98f6bfd8', 'width': 108}, {'height': 162, 'url': '... | |
GLM-4.5 Air Q8 vs GLM-4.5 IQ2_XXS | 67 | Lowest of lows post, but in all seriousness, both quants are virtually the same size:
GLM-4.5 Air Q8 = 117.5 GB
GLM-4.5 IQ2\_XXS = 115.8 GB
I can't be the only one with 128 GB RAM having asked that question to themselves. While GLM-4.5 Air Q6\_K\_XL is downloading, has anyone by any chance tried both quants and ca... | 2025-08-08T18:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ml3k2m/glm45_air_q8_vs_glm45_iq2_xxs/ | therealAtten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml3k2m | false | null | t3_1ml3k2m | /r/LocalLLaMA/comments/1ml3k2m/glm45_air_q8_vs_glm45_iq2_xxs/ | false | false | self | 67 | null |
Managing GPU jobs across CoreWeave/Lambda/RunPod is a mess, so im building a simple dashboard | 4 | If you’ve ever trained models across different GPU cloud providers, you know how painful it is to:
* Track jobs across platforms
* Keep an eye on GPU hours and costs
* See logs/errors without digging through multiple UIs
I’m building a super simple “Stripe for supercomputers” style dashboard (fake data for now), but ... | 2025-08-08T18:46:28 | NoTap8152 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ml3imh | false | null | t3_1ml3imh | /r/LocalLLaMA/comments/1ml3imh/managing_gpu_jobs_across_coreweavelambdarunpod_is/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'rxv8kqhgcuhf1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/rxv8kqhgcuhf1.png?width=108&crop=smart&auto=webp&s=a93ef7576f6c9239caa87b91986ab90429396bd4', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/rxv8kqhgcuhf1.png?width=216&crop=smart&auto=web... | |
Free Perplexity Alternative | 0 | Today, I'm Pre-Launching YouTopia Search at
👉 [https://youtopia.co.in](https://youtopia.co.in)
I've built an AI based search engine that curates and organizes human-made content into visually rich, readable, and clutter-free responses instead of generating content with ai.
launch video [https://www.youtube.com/watc... | 2025-08-08T18:37:13 | https://www.reddit.com/gallery/1ml39y4 | Effective-Sock7512 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ml39y4 | false | null | t3_1ml39y4 | /r/LocalLLaMA/comments/1ml39y4/free_perplexity_alternative/ | false | false | 0 | null | |
Free Perplexity alternative | 0 | Today, I'm Pre-Launching YouTopia Search at
👉 [https://youtopia.co.in](https://youtopia.co.in)
https://preview.redd.it/9nq08sjgauhf1.png?width=2848&format=png&auto=webp&s=57210067a1d4464ea9e2110a5221791d1ca45e7e
https://preview.redd.it/tave36uhauhf1.png?width=2848&format=png&auto=webp&s=91b36c6157293ad9ed9e95af9b2f... | 2025-08-08T18:33:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ml3682/free_perplexity_alternative/ | Effective-Sock7512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml3682 | false | null | t3_1ml3682 | /r/LocalLLaMA/comments/1ml3682/free_perplexity_alternative/ | false | false | 0 | null | |
How can I get a very fast version of OpenAI’s gpt-oss? | 0 | What I'm looking for: 1000+ tokens/sec min, real-time web search integration, for production apps (scalable), mainly chatbot use cases.
Someone mentioned Cerebras can hit 3,000+ tokens/sec with this model, but I can't find solid documentation on the setup. Others are talking about custom inference servers, but that so... | 2025-08-08T18:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ml33th/how_can_i_get_a_very_fast_version_of_openais/ | No_Marionberry_5366 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml33th | false | null | t3_1ml33th | /r/LocalLLaMA/comments/1ml33th/how_can_i_get_a_very_fast_version_of_openais/ | false | false | self | 0 | null |
Free Perplexity. alternative | 0 | [removed] | 2025-08-08T18:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ml32l2/free_perplexity_alternative/ | Effective-Sock7512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml32l2 | false | null | t3_1ml32l2 | /r/LocalLLaMA/comments/1ml32l2/free_perplexity_alternative/ | false | false | self | 0 | null |
gpt-oss: Everything You Need to Know in Under 2 Minutes | 0 | quick, no-fluff overview of OpenAI’s new open-weight GPT model, gpt-oss. Covers release date, specs, performance, hardware needs, and common use cases, all in under 2 minutes. | 2025-08-08T18:27:38 | https://youtu.be/oLFcpIGmTkU | 1BlueSpork | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ml30vy | false | {'oembed': {'author_name': 'BlueSpork', 'author_url': 'https://www.youtube.com/@BlueSpork', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/oLFcpIGmTkU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; p... | t3_1ml30vy | /r/LocalLLaMA/comments/1ml30vy/gptoss_everything_you_need_to_know_in_under_2/ | false | false | default | 0 | null |
Macbook Pro - 48GB vs 64GB | 0 | I'm planning to get a 48GB Macbook (M4 Pro) and hoping to run 32B models comfortably. But also wanted to double check if there's a huge difference between that and the 64GB Macbook (M4 Max). The price increase is quite substantial so wondering if it's absolutely worth it or just a marginal improvement? | 2025-08-08T18:20:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ml2txv/macbook_pro_48gb_vs_64gb/ | tangbj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml2txv | false | null | t3_1ml2txv | /r/LocalLLaMA/comments/1ml2txv/macbook_pro_48gb_vs_64gb/ | false | false | self | 0 | null |
Actual open source local memory with no hidden cloud | 22 | Hi,
Just saw a post from MemU and the feedback from the community.
If you want local memory, try cognee.
We store data in local lancedb and kuzu instances for embeddings and graphs.
We just added BAML powered LLM calls that should let you create memory more reliably with Ollama.
Feel free to test it out ... | 2025-08-08T18:19:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ml2t67/actual_open_source_local_memory_with_no_hidden/ | Short-Honeydew-7000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml2t67 | false | null | t3_1ml2t67 | /r/LocalLLaMA/comments/1ml2t67/actual_open_source_local_memory_with_no_hidden/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'UIfRRgdBHdYc2kWUJyryMmJQiUB0JAMTOp6DVSAFHgs', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/UIfRRgdBHdYc2kWUJyryMmJQiUB0JAMTOp6DVSAFHgs.png?width=108&crop=smart&auto=webp&s=5f5afffe7b323118cacc64ded3a6ff164d2f63d3', 'width': 108}, {'height': 153, 'url': 'h... | |
This is awkward | 735 | 2025-08-08T18:17:14 | createthiscom | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ml2r4t | false | null | t3_1ml2r4t | /r/LocalLLaMA/comments/1ml2r4t/this_is_awkward/ | false | false | default | 735 | {'enabled': True, 'images': [{'id': 'cf9lsdxk7uhf1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/cf9lsdxk7uhf1.jpeg?width=108&crop=smart&auto=webp&s=04fdcb4b277260b3797f683adb0d52b60e9ff055', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/cf9lsdxk7uhf1.jpeg?width=216&crop=smart&auto=w... | ||
Ok, this one is not practical for sure but.. | 4 | but I just want to give it a chance. Is there a UI app for Android that supports local models, and which 7B model is good for roleplay on Android? | 2025-08-08T17:59:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ml2aj7/ok_this_one_is_not_practical_for_sure_but/ | YourMoM__12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml2aj7 | false | null | t3_1ml2aj7 | /r/LocalLLaMA/comments/1ml2aj7/ok_this_one_is_not_practical_for_sure_but/ | false | false | self | 4 | null |
What is the current best local model for Code completion/Next Code suggestion in VSCode | 1 | As the title says, what is the current best local model for Code completion/Next Code suggestion in VSCode for TypeScript/Python codebase mainly? With all the models dropping, and a newbie to understand which models perform better at what task. I am not interested in Agent/Edit mode. | 2025-08-08T17:59:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ml2aaz/what_is_the_current_best_local_model_for_code/ | sabertooth9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml2aaz | false | null | t3_1ml2aaz | /r/LocalLLaMA/comments/1ml2aaz/what_is_the_current_best_local_model_for_code/ | false | false | self | 1 | null |
Best way to run gpt-oss-120b on 24gb VRAM? | 0 | On Windows with a 3090 in my case.
What do you recommend? Can we get to 20 tokens per second? | 2025-08-08T17:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ml23tf/best_way_to_run_gptoss120b_on_24gb_vram/ | GravyPoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml23tf | false | null | t3_1ml23tf | /r/LocalLLaMA/comments/1ml23tf/best_way_to_run_gptoss120b_on_24gb_vram/ | false | false | self | 0 | null |
GLM 4.5 Air - Optimizing - Vulkan vs. CUDA? | 4 | I do want to run GLM 4.5 Air in Q4\_K\_M - as fast as possible for code generation with e.g. RooCode.
My spec: 5060 TI 16 GB VRAM, Ryzen 9 9900X with 128GB 5600MHz DDR5 RAM, Windows 11
GLM 4.5 Air has been the best model by far to run locally on my machine for coding. I tried some warning fixing and unit testing... | 2025-08-08T17:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ml229q/glm_45_air_optimizing_vulkan_vs_cuda/ | naxan6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml229q | false | null | t3_1ml229q | /r/LocalLLaMA/comments/1ml229q/glm_45_air_optimizing_vulkan_vs_cuda/ | false | false | self | 4 | null |
RLTHF implementation | 1 | Hey all, in case you will be interested, I tried to repro the RLTHF paper on a Mac, it kinda worked but the paper doesn't talk about a ton of things, so things might not work super well.
Maybe it will be useful to anyone. The implementation isn't great, so there is no point for me in promoting it.
[https://github.c... | 2025-08-08T17:48:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ml1zo8/rlthf_implementation/ | teodorz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml1zo8 | false | null | t3_1ml1zo8 | /r/LocalLLaMA/comments/1ml1zo8/rlthf_implementation/ | false | false | self | 1 | null |
AMD MI50 32GB/Vega20 GPU Passthrough Guide for Proxmox | 15 | What This Guide Solves
If you're trying to pass through an AMD Vega20 GPU (like the MI50 or Radeon Pro VII) to a VM in Proxmox and getting stuck with the dreaded "atombios stuck in loop" error, this guide is for you. The solution involves installing the vendor-reset kernel module on your Proxmox host.
**Important not... | 2025-08-08T17:21:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ml1aef/amd_mi50_32gbvega20_gpu_passthrough_guide_for/ | Panda24z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml1aef | false | null | t3_1ml1aef | /r/LocalLLaMA/comments/1ml1aef/amd_mi50_32gbvega20_gpu_passthrough_guide_for/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'dn0zRvYRCLJ4WyFB4G3De1IpZ34g1XP3OUJnQF8Ya4w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dn0zRvYRCLJ4WyFB4G3De1IpZ34g1XP3OUJnQF8Ya4w.jpeg?width=108&crop=smart&auto=webp&s=8f6f2d70179a16703244834335c65a9580d5ff38', 'width': 108}, {'height': 162, 'url': '... |
Looking for good general use models around the 14b area, any recommendations? | 0 | Preferably gguf but any suggestions are welcome | 2025-08-08T17:16:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ml154w/looking_for_good_general_use_models_around_the/ | a_normal_user1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml154w | false | null | t3_1ml154w | /r/LocalLLaMA/comments/1ml154w/looking_for_good_general_use_models_around_the/ | false | false | self | 0 | null |
Visualization - How LLMs Just Predict The Next Word | 10 | 2025-08-08T17:15:45 | https://youtu.be/6dn1kUwTFcc | kushalgoenka | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ml14kw | false | {'oembed': {'author_name': 'Kushal Goenka', 'author_url': 'https://www.youtube.com/@KushalGoenka', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/6dn1kUwTFcc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1ml14kw | /r/LocalLLaMA/comments/1ml14kw/visualization_how_llms_just_predict_the_next_word/ | false | false | default | 10 | {'enabled': False, 'images': [{'id': 'NvAI6Yum9O40l3qZlOeyOssVIs2oLgJwnoMTWT8Xzzg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/NvAI6Yum9O40l3qZlOeyOssVIs2oLgJwnoMTWT8Xzzg.jpeg?width=108&crop=smart&auto=webp&s=eca4b8e018f3fc41762631077734c1eb15130e2f', 'width': 108}, {'height': 162, 'url': '... | |
Multi token prediction in HF Models | 3 | is it possible to implement Multi token prediction in Hugging Face models and then train them?
I found same thing in Nemo and Megatron but that's very complex.
| 2025-08-08T17:09:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ml0yyc/multi_token_prediction_in_hf_models/ | Interesting-Fish-542 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml0yyc | false | null | t3_1ml0yyc | /r/LocalLLaMA/comments/1ml0yyc/multi_token_prediction_in_hf_models/ | false | false | self | 3 | null |
What is the successor to gpt-4o-mini? | 0 | I have a pipeline that is working fine with gpt-4o-mini. Now with GPT 5 out, it seems everything else is deprecated and I have to find a replacement.
The problem is, both mini and nano variants seem to be missing one aspect in replacing 4o-mini: mini is much more expensive, and nano seems to be inferior in performance... | 2025-08-08T17:02:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ml0rw1/what_is_the_successor_to_gpt4omini/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml0rw1 | false | null | t3_1ml0rw1 | /r/LocalLLaMA/comments/1ml0rw1/what_is_the_successor_to_gpt4omini/ | false | false | self | 0 | null |
Best chat interface currently (Aug 2025) | 18 | I have a home server and I'm trying to find the best frontend chat interface to setup that would actually be useful for day to day use. LibreChat is alright but feels bloated and overcomplicated, OpenWebUI is alright, but also feels a bit overcomplicated and somehow falls short on multiple models support, I'd like to b... | 2025-08-08T17:00:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ml0pt7/best_chat_interface_currently_aug_2025/ | cmdr-William-Riker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml0pt7 | false | null | t3_1ml0pt7 | /r/LocalLLaMA/comments/1ml0pt7/best_chat_interface_currently_aug_2025/ | false | false | self | 18 | null |
MemU: Let AI Truly Memorize You | 53 | Github: [https://github.com/NevaMind-AI/memU](https://github.com/NevaMind-AI/memU)
MemU provides an intelligent memory layer for AI agents. It treats memory as a hierarchical file system: one where entries can be written, connected, revised, and prioritized automatically over time. At the core of MemU is a dedicated m... | 2025-08-08T16:56:15 | EducationalSound5687 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ml0ltl | false | null | t3_1ml0ltl | /r/LocalLLaMA/comments/1ml0ltl/memu_let_ai_truly_memorize_you/ | false | false | default | 53 | {'enabled': True, 'images': [{'id': 'n5rl0ud1tthf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/n5rl0ud1tthf1.jpeg?width=108&crop=smart&auto=webp&s=d454cee0af39c3bc45c52f36267a4de4be77e275', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/n5rl0ud1tthf1.jpeg?width=216&crop=smart&auto=w... | |
Web Search MCP using Jina ai - open source | 5 | Hi everyone, I created an easily deployable streamable HTTP MCP server that anyone can use locally on Docker or Python. It was a quick project I put together, so I can leverage those free api tokens they give out in LM Studio. Just wanted to share the project with anyone interested. Thanks [https://github.com/hypersnip... | 2025-08-08T16:42:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ml08oc/web_search_mcp_using_jina_ai_open_source/ | Delicious-Farmer-234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml08oc | false | null | t3_1ml08oc | /r/LocalLLaMA/comments/1ml08oc/web_search_mcp_using_jina_ai_open_source/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'HfUcXFSF7SueKDPp577NLqeV6ojgcoho98glHMmHX1o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HfUcXFSF7SueKDPp577NLqeV6ojgcoho98glHMmHX1o.png?width=108&crop=smart&auto=webp&s=050a85ec1481a6db3bb61c19a13e6f67b0c54651', 'width': 108}, {'height': 108, 'url': 'h... |
Ecc vs non ecc ram impact on llm result quality ? | 0 | Is there any change for output quality like more hallucinations from corrupt data or something on llms running on normal ram vs ones on ecc ram ? Does using normal gpus or ram or apple macs have a hidden downside due to no ecc ram vs using nvidia server ecx vram gpus or epyc server cpus using ecc ddr5 ram ? | 2025-08-08T16:41:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ml07qc/ecc_vs_non_ecc_ram_impact_on_llm_result_quality/ | Hamza9575 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ml07qc | false | null | t3_1ml07qc | /r/LocalLLaMA/comments/1ml07qc/ecc_vs_non_ecc_ram_impact_on_llm_result_quality/ | false | false | self | 0 | null |
GPT 5 for Local Computer Use agents | 0 | Same tasks, same grounding model we just swapped GPT 4o with GPT 5 as the thinking model.
Left = 4o, right = 5.
Watch GPT 5 pull away.
Grounding model: Salesforce GTA1-7B
Action space: CUA Cloud Instances (macOS/Linux/Windows)
The task is: "Navigate to {random_url} and play the game until you reach a score o... | 2025-08-08T16:30:14 | https://v.redd.it/z47azrolothf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkzx2f | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/z47azrolothf1/DASHPlaylist.mpd?a=1757262630%2CYjdiMDllYjVhY2Q5ZDRhMDJkMjQ3Yjc0YzQ2MDFiNWQyNmNkMjM0M2VjYTViMWViZTAwOTkwMjI0OTFkZmRmMQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/z47azrolothf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mkzx2f | /r/LocalLLaMA/comments/1mkzx2f/gpt_5_for_local_computer_use_agents/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eHEzZnRxYmxvdGhmMffa9LUhs6wvp7jU6XPjtPFZB1S0k_8zNod6eLcZn2nM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eHEzZnRxYmxvdGhmMffa9LUhs6wvp7jU6XPjtPFZB1S0k_8zNod6eLcZn2nM.png?width=108&crop=smart&format=pjpg&auto=webp&s=76d75b5d06ac4546b7ff334bb0526fbdd485b... | |
Which quant model would be best for the GPT-OSS-20B model? | 0 | Which quant model would be best for the GPT-OSS-20B model, I have a 5060ti 16GB card 64GB system ram. It looks like the F16 should work but would going to a 6, 5, or 4 bit be better so I have more vram for context? Wanting to use it for coding primarily. | 2025-08-08T16:28:30 | wreckerone1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkzvgw | false | null | t3_1mkzvgw | /r/LocalLLaMA/comments/1mkzvgw/which_quant_model_would_be_best_for_the_gptoss20b/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '38gtw8cknthf1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/38gtw8cknthf1.png?width=108&crop=smart&auto=webp&s=bd78b24002259102043bd23864f2a53727a18605', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/38gtw8cknthf1.png?width=216&crop=smart&auto=web... | |
How do you handle LLM Based "Intelligent" classification on a large list? | 0 | I have a usecase where I want a particular "thing" to get classified into a taxonomy that I already maintain that is too large to fit into a prompt.
At the same time, if the "thing" is actually a novel item that does not fit my existing taxonomy, I want to extend the taxonomy to create a new item in it.
Has anyo... | 2025-08-08T16:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mkzqy3/how_do_you_handle_llm_based_intelligent/ | pravictor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkzqy3 | false | null | t3_1mkzqy3 | /r/LocalLLaMA/comments/1mkzqy3/how_do_you_handle_llm_based_intelligent/ | false | false | self | 0 | null |
Worth grabbing last RTX 6000 Pro Blackwell Max-Q? | 0 | Might be staring at a once-only shot here — the last RTX 6000 Pro Blackwell Max-Q (300 W, 96 GB GDDR7) available through official channels in India. Seller wants ₹750,000 (\~$8500k USD) all taxes inclusive and no import duties as this was actually imported for a university with a MoU. That’s about 8% more than what I p... | 2025-08-08T16:23:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mkzqmx/worth_grabbing_last_rtx_6000_pro_blackwell_maxq/ | susmitds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkzqmx | false | null | t3_1mkzqmx | /r/LocalLLaMA/comments/1mkzqmx/worth_grabbing_last_rtx_6000_pro_blackwell_maxq/ | false | false | self | 0 | null |
Updates on a project I am passionate about- Darnahi | 0 | Updates on a project I am passionate about- Darnahi
Imagine visiting a doctor 5 years ago. Now imagine if you still have the record if you look for it. Darnahi will allow you to store it, index it and use it to generate personal health insights using local llm.
Darnahi v2.5 is a personal health intelligence app that ... | 2025-08-08T16:22:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mkzq2z/updates_on_a_project_i_am_passionate_about_darnahi/ | TestPilot1980 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkzq2z | false | null | t3_1mkzq2z | /r/LocalLLaMA/comments/1mkzq2z/updates_on_a_project_i_am_passionate_about_darnahi/ | false | false | self | 0 | null |
Open-source protocol for secure tool-calling [Technical Specification] | 8 | 2025-08-08T16:20:41 | https://www.utcp.io/RFC | juanviera23 | utcp.io | 1970-01-01T00:00:00 | 0 | {} | 1mkzny7 | false | null | t3_1mkzny7 | /r/LocalLLaMA/comments/1mkzny7/opensource_protocol_for_secure_toolcalling/ | false | false | default | 8 | null | |
Can i load Mistral 7B instruct v0.2 on my MacBook Air M1 in Full Precision (bf16) ? | 0 | I want to test Mistral 7B instruct v0.2 on my MacBook Air M1 in Full Precision (bf16). Is it possible to load and generate inference from the model ?
My MacBook specs are : Apple M1 Chip with 8GB Unified RAM , 256GB SSD | 2025-08-08T16:09:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mkzcw8/can_i_load_mistral_7b_instruct_v02_on_my_macbook/ | DefinitionFew9850 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkzcw8 | false | null | t3_1mkzcw8 | /r/LocalLLaMA/comments/1mkzcw8/can_i_load_mistral_7b_instruct_v02_on_my_macbook/ | false | false | self | 0 | null |
New paper reveals Chain-of-Thought reasoning of LLMs a mirage | 43 | 2025-08-08T16:06:02 | https://arxiv.org/pdf/2508.01191 | AloneCoffee4538 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1mkza1b | false | null | t3_1mkza1b | /r/LocalLLaMA/comments/1mkza1b/new_paper_reveals_chainofthought_reasoning_of/ | false | false | default | 43 | null | |
does anybody actually deploy on-prem? why so, why not? | 0 | We shouldn't feel comfortable with shipping our personal data to OpenAI or Anthropic, so why don't most companies deploy on-prem?
Is the set-up hard? Is it too expensive to do, unless you're a massive company? Does it fail for compliance (cybersecurity) reasons?
I'd love your thoughts, particularly if you've conside... | 2025-08-08T16:05:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mkz94z/does_anybody_actually_deploy_onprem_why_so_why_not/ | JudgeInside2172 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkz94z | false | null | t3_1mkz94z | /r/LocalLLaMA/comments/1mkz94z/does_anybody_actually_deploy_onprem_why_so_why_not/ | false | false | self | 0 | null |
GPT5 is not AGI, not the best investment by OAI | 0 | I still love Claude, may good things come soon. | 2025-08-08T16:01:43 | https://www.reddit.com/gallery/1mkz5ql | Slow_Protection_26 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mkz5ql | false | null | t3_1mkz5ql | /r/LocalLLaMA/comments/1mkz5ql/gpt5_is_not_agi_not_the_best_investment_by_oai/ | false | false | 0 | null | |
Good OS models to run on 64gb MacBook Pro? | 0 | It’s been several months since I’ve been tracking the open source models and so much has changed in the meantime. Y’all are more recent with new models, so I’m putting this query out there.
I have an M1 MacBook Pro with 64GB of unified RAM. I have LM studio installed. In curious what would be the “best” (or very good... | 2025-08-08T15:55:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mkyzp0/good_os_models_to_run_on_64gb_macbook_pro/ | Glass-Garbage4818 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkyzp0 | false | null | t3_1mkyzp0 | /r/LocalLLaMA/comments/1mkyzp0/good_os_models_to_run_on_64gb_macbook_pro/ | false | false | self | 0 | null |
GPT5 fixed the blueberry thing (for me, at least), but it still get sit wrong if you misspell the question (mississipp) | 0 | 2025-08-08T15:25:12 | scubanarc | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mky6l4 | false | null | t3_1mky6l4 | /r/LocalLLaMA/comments/1mky6l4/gpt5_fixed_the_blueberry_thing_for_me_at_least/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'xKnwg6tja1U8hX5OF4ipavBDk2Ag_g1HSnKsOxKOr4E', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/qhr0b47scthf1.png?width=108&crop=smart&auto=webp&s=523dad3ec55d9298b5fb2e46794565cf11fd9900', 'width': 108}, {'height': 58, 'url': 'https://preview.redd.it/qhr0b47scthf1.png?... | |||
Qwen Code Now Offering 2000 free Qwen Code runs daily | 561 | tweet link: [https://x.com/Alibaba\_Qwen/status/1953835877555151134](https://x.com/Alibaba_Qwen/status/1953835877555151134) | 2025-08-08T15:23:35 | z1xto | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mky54y | false | null | t3_1mky54y | /r/LocalLLaMA/comments/1mky54y/qwen_code_now_offering_2000_free_qwen_code_runs/ | false | false | default | 561 | {'enabled': True, 'images': [{'id': '0qdg1xmncthf1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/0qdg1xmncthf1.png?width=108&crop=smart&auto=webp&s=98414528dc500bec81e614248bc971b92eacb0a8', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/0qdg1xmncthf1.png?width=216&crop=smart&auto=we... | |
Why Open Source is Needed | 427 | With this new launch, OpenAI cut total weekly reasoning model requests from 2900 to 200(!!!!) also with a huge reduction in context window length.
Yes, $200 a month for a measly 128k context window length.
Just goes to show why open source models and more companies being able to host these models protects the consum... | 2025-08-08T15:22:58 | LostMyOtherAcct69 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mky4jd | false | null | t3_1mky4jd | /r/LocalLLaMA/comments/1mky4jd/why_open_source_is_needed/ | false | false | default | 427 | {'enabled': True, 'images': [{'id': 'k8n9e70mcthf1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/k8n9e70mcthf1.jpeg?width=108&crop=smart&auto=webp&s=cd7131578c7f3d2940bfdc1a152782e8442e585b', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/k8n9e70mcthf1.jpeg?width=216&crop=smart&auto=w... | |
Qwen Code Now Offering 2000 free Qwen Code runs daily | 1 | 2025-08-08T15:21:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mky3ld/qwen_code_now_offering_2000_free_qwen_code_runs/ | z1xto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mky3ld | false | null | t3_1mky3ld | /r/LocalLLaMA/comments/1mky3ld/qwen_code_now_offering_2000_free_qwen_code_runs/ | false | false | 1 | null | ||
Best way to extract structured data from PDFs using local LLMs (no OCR, no cloud)? | 1 | Hi,
I receive purchase orders (POs) in various formats — different column layouts and inconsistent field names. For example, an item might be labeled as `product_code`, `article_number`, or `part_number`.
I want to extract structured information from these PDFs into a JSON with fixed fields:
`part_number`, `descrip... | 2025-08-08T15:21:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mky2px/best_way_to_extract_structured_data_from_pdfs/ | EasternAttorney2614 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mky2px | false | null | t3_1mky2px | /r/LocalLLaMA/comments/1mky2px/best_way_to_extract_structured_data_from_pdfs/ | false | false | self | 1 | null |
Hugging Face AI Sheets, open-source tool to do data work with open and local models | 27 | Hey!
I'm one of the authors of the tool.
We've just open sourced it and think it would be cool for this community. You can vibe test 1000s of models on Hugging Face via Inference Providers, and more importantly deploy it and run local models.
Repo: [https://github.com/huggingface/aisheets](https://github.com/huggin... | 2025-08-08T15:17:30 | https://v.redd.it/st1tql80bthf1 | dvilasuero | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkxzdg | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/st1tql80bthf1/DASHPlaylist.mpd?a=1757258266%2CNjJlYjNjNjQxZTQ4NWZhNzIwZmViNjY0ZmE5ZTgxMWZiYTJhM2EwODg0NzViY2IxOGVlNTViNWIyYjhjN2ZhYg%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/st1tql80bthf1/DASH_1080.mp4?source=fallback', 'ha... | t3_1mkxzdg | /r/LocalLLaMA/comments/1mkxzdg/hugging_face_ai_sheets_opensource_tool_to_do_data/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'Z2tyN2NsODBidGhmMYRX1ulV9J5i3yzICa2szL8XEqVGjY7m7LWkytk1RKM3', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z2tyN2NsODBidGhmMYRX1ulV9J5i3yzICa2szL8XEqVGjY7m7LWkytk1RKM3.png?width=108&crop=smart&format=pjpg&auto=webp&s=91004a53d5974c400acf855446829b970fc8b... | |
AI First Science Fantasy RPG with Open Lore and Machine Readable Dataset | 8 | I have been building Aeonisk, a science fantasy tabletop RPG that is designed from the ground up for AI integration. It is not just playable at the table, it is fully usable as training data for local LLMs, narrative generators, and AI assisted game masters.
Aeonisk is AI first. Every rule, lore fragment, and encounte... | 2025-08-08T15:10:33 | https://chatgpt.com/g/g-680299b1a5f08191b869fe352f33cc1a-aeonisk | 3RiversAINexus | chatgpt.com | 1970-01-01T00:00:00 | 0 | {} | 1mkxswi | false | null | t3_1mkxswi | /r/LocalLLaMA/comments/1mkxswi/ai_first_science_fantasy_rpg_with_open_lore_and/ | false | false | default | 8 | null |
GLM-4.5 series new models will be open source soon | 288 | 2025-08-08T15:03:54 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkxmoa | false | null | t3_1mkxmoa | /r/LocalLLaMA/comments/1mkxmoa/glm45_series_new_models_will_be_open_source_soon/ | false | false | default | 288 | {'enabled': True, 'images': [{'id': 'mmvy25c79thf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/mmvy25c79thf1.jpeg?width=108&crop=smart&auto=webp&s=4e2f51ed178415a7ae191da06024c8c547a54546', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/mmvy25c79thf1.jpeg?width=216&crop=smart&auto=... | ||
Looking for local hardware advice | 1 | I’m looking for buy some hardware to run local LLMs it seems like Apple devices are the most power efficient route of doing this. Can anyone recommend the pros/cons of the Mac Pro 7,1 8,1 the Mac Studio m3 or 4 or just doing a cluster of m4 Mac minis? If the Mac minis are the best rout how many do you think would be us... | 2025-08-08T15:02:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mkxlbs/looking_for_local_hardware_advice/ | Jyngotech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkxlbs | false | null | t3_1mkxlbs | /r/LocalLLaMA/comments/1mkxlbs/looking_for_local_hardware_advice/ | false | false | self | 1 | null |
ChatGPT 5 embarrassingly working on an old phantom question | 0 | It would not address my new question until I started a new chat with it. | 2025-08-08T14:50:20 | https://www.reddit.com/gallery/1mkx9ym | XiRw | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mkx9ym | false | null | t3_1mkx9ym | /r/LocalLLaMA/comments/1mkx9ym/chatgpt_5_embarrassingly_working_on_an_old/ | false | false | 0 | null | |
a lightweight voice clone tool, not dependent on ffmpeg, Python, PyTorch, ONNX, just a single executable file | 50 | **Hello everyone,**
I built an [OpenVoice](https://github.com/myshell-ai/OpenVoice)-based voice cloning tool that requires no installation, just a single executable file (~14M), supporting multiple formats without dependencies on ffmpeg, Python, PyTorch, ONNX.
**Features:**
- Single-file executable - no installation... | 2025-08-08T14:46:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mkx5zo/a_lightweight_voice_clone_tool_not_dependent_on/ | Suitable-Patience916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkx5zo | false | null | t3_1mkx5zo | /r/LocalLLaMA/comments/1mkx5zo/a_lightweight_voice_clone_tool_not_dependent_on/ | false | false | self | 50 | {'enabled': False, 'images': [{'id': '1w-Yy1ttQNz15K_5FdJtKVO0Wi_YHFkaExWxXGEVr6E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1w-Yy1ttQNz15K_5FdJtKVO0Wi_YHFkaExWxXGEVr6E.png?width=108&crop=smart&auto=webp&s=bf20ff7f31c75f9b54dddc67f6f9de2517f56cff', 'width': 108}, {'height': 108, 'url': 'h... |
Llama.cpp server tool calling with gpt-oss 120b? | 1 | [removed] | 2025-08-08T14:36:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mkwwq0/llamacpp_server_tool_calling_with_gptoss_120b/ | Rare-Side-6657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkwwq0 | false | null | t3_1mkwwq0 | /r/LocalLLaMA/comments/1mkwwq0/llamacpp_server_tool_calling_with_gptoss_120b/ | false | false | self | 1 | null |
What do you think it will be? | 191 | 2025-08-08T14:22:21 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkwkcd | false | null | t3_1mkwkcd | /r/LocalLLaMA/comments/1mkwkcd/what_do_you_think_it_will_be/ | false | false | default | 191 | {'enabled': True, 'images': [{'id': 'it28f5ns1thf1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/it28f5ns1thf1.png?width=108&crop=smart&auto=webp&s=1a2657e1fae58c4e5ed4c669834102b35a6145b8', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/it28f5ns1thf1.png?width=216&crop=smart&auto=web... | ||
Service providing cheapest per token price for GPT-5 ? | 0 | Open Router provides gpt-5 - $1.25/M input tokens - $10/M output tokens.
OpenAI API - gpt-5 - $1.25/M input tokens - $10.00/M output tokens.
There are many third party service provider who provide per token pricing for gpt-5.
Are there cheaper ones ? | 2025-08-08T14:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mkwgup/service_providing_cheapest_per_token_price_for/ | broodysupertramp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkwgup | false | null | t3_1mkwgup | /r/LocalLLaMA/comments/1mkwgup/service_providing_cheapest_per_token_price_for/ | false | false | self | 0 | null |
Drastic speed difference between gpt-oss 20b and qwen30-2507 | 0 | So I have an SD X Elite laptop running ollama.
I can't quite figure out why, but qwen30b-3-2507 does about 24 tk/s (unsloth Q5\_K\_XL) whereas gptoss-20b (official ollama version) crawls at 4 tk/s? Seems like 3b vs 3.6b active params should be comparable? Am I missing something? | 2025-08-08T14:18:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mkwgpd/drastic_speed_difference_between_gptoss_20b_and/ | Simple_Split5074 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkwgpd | false | null | t3_1mkwgpd | /r/LocalLLaMA/comments/1mkwgpd/drastic_speed_difference_between_gptoss_20b_and/ | false | false | self | 0 | null |
GLM45 vs GPT-5, Claude Sonnet 4, Gemini 2.5 Pro — live coding test, same prompt | 100 | We’re running a live benchmark today with **GLM45** in the mix against three major proprietary LLMs.
**Rules:**
* Every model gets the same prompt for each task
* Multiple attempts: simple builds, bug fixes, complex projects, and possibly planning tasks
We’ll record:
* How GLM45 performs on speed and accuracy
* Whe... | 2025-08-08T14:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mkw4ug/glm45_vs_gpt5_claude_sonnet_4_gemini_25_pro_live/ | darkageofme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkw4ug | false | null | t3_1mkw4ug | /r/LocalLLaMA/comments/1mkw4ug/glm45_vs_gpt5_claude_sonnet_4_gemini_25_pro_live/ | false | false | self | 100 | {'enabled': False, 'images': [{'id': 'eguDGf9eJNmskWgzs9ga3pVJG1aMH2GtYdaNFK6pa3g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eguDGf9eJNmskWgzs9ga3pVJG1aMH2GtYdaNFK6pa3g.jpeg?width=108&crop=smart&auto=webp&s=880e4a7e18d49cec2b4eb299720fd388e28c03a7', 'width': 108}, {'height': 121, 'url': '... |
You don't need GPT-5 to control your computer on Linux. 100% privacy | 0 | 2025-08-08T13:56:13 | https://grigio.org/you-dont-need-gpt-5-to-control-your-computer-on-linux-100-privacy/ | grigio | grigio.org | 1970-01-01T00:00:00 | 0 | {} | 1mkvwe8 | false | null | t3_1mkvwe8 | /r/LocalLLaMA/comments/1mkvwe8/you_dont_need_gpt5_to_control_your_computer_on/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'OrGcXRm1swfHEiUtjyJ2HUc6evbakjvqw8bs11Bx6VE', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/OrGcXRm1swfHEiUtjyJ2HUc6evbakjvqw8bs11Bx6VE.png?width=108&crop=smart&auto=webp&s=4a5e55de842471f56a47eec41058d0e5c8fdb259', 'width': 108}, {'height': 138, 'url': 'h... | |
How to Install Stable Diffusion SDXL 1.0 & SD 1.5 Locally on Windows 11 (AUTOMATIC1111 Guide) | 0 | If you’ve ever wanted to create stunning AI‑generated art on your own PC — no internet connection required — this guide will walk you through installing both \*\*Stable Diffusion XL 1.0 (SDXL)\*\* and \*\*Stable Diffusion 1.5\*\* on \*\*Windows 11\*\* using \*\*AUTOMATIC1111\*\*.
You’ll be able to run realistic models... | 2025-08-08T13:52:21 | https://illphated.com/how-to-install-stable-diffusion-sdxl-1-0-sd-1-5-locally-on-windows-11-automatic1111-guide/illphated/ | Illphated336 | illphated.com | 1970-01-01T00:00:00 | 0 | {} | 1mkvsxi | false | null | t3_1mkvsxi | /r/LocalLLaMA/comments/1mkvsxi/how_to_install_stable_diffusion_sdxl_10_sd_15/ | false | false | default | 0 | null |
How Attention Sinks Keep Language Models Stable | 65 | 2025-08-08T13:43:01 | https://hanlab.mit.edu/blog/streamingllm | vibjelo | hanlab.mit.edu | 1970-01-01T00:00:00 | 0 | {} | 1mkvks4 | false | null | t3_1mkvks4 | /r/LocalLLaMA/comments/1mkvks4/how_attention_sinks_keep_language_models_stable/ | false | false | default | 65 | null | |
Upgrading a 7950x3D + 4090 build to a dual GPU build on a $2k budget | 0 | I recently got $3k of research funding, although some of it needs to be used for things like travel, so I want to use $2k of it to upgrade my building.
I want to buy 1-2 3090s and have a multi GPU build (or would 1 4090 be better to match my current 4090?), but there’s really not very many resources that I can find t... | 2025-08-08T13:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mkvcix/upgrading_a_7950x3d_4090_build_to_a_dual_gpu/ | Amazydayzee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkvcix | false | null | t3_1mkvcix | /r/LocalLLaMA/comments/1mkvcix/upgrading_a_7950x3d_4090_build_to_a_dual_gpu/ | false | false | self | 0 | null |
Transformer Lab now supports training OpenAI’s open models (gpt-oss) | 2 | Transformer Lab is an open source toolkit to train, tune and chat with UI for common tasks. We just shipped gpt-oss support to Transformer Lab.
We currently support the original gpt-oss models and the gpt-oss GGUFs (from Ollama) across NVIDIA, AMD and Apple silicon as long as you have adequate hardware. We even got i... | 2025-08-08T13:31:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mkvaon/transformer_lab_now_supports_training_openais/ | aliasaria | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkvaon | false | null | t3_1mkvaon | /r/LocalLLaMA/comments/1mkvaon/transformer_lab_now_supports_training_openais/ | false | false | self | 2 | null |
Unique innovations in LLMs | 1 | I want to know the unique innovation in techniques in LLMs in 2025 and with a sufficient impact. Please share some research papers you found useful.
Ex: [https://arxiv.org/pdf/2503.11486](https://arxiv.org/pdf/2503.11486) | 2025-08-08T13:29:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mkv8ku/unique_innovations_in_llms/ | Impossible-Hat-3290 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkv8ku | false | null | t3_1mkv8ku | /r/LocalLLaMA/comments/1mkv8ku/unique_innovations_in_llms/ | false | false | self | 1 | null |
Hardware: I have 8GB VRAM/32GB RAM - worth adding more system RAM? | 2 | Currently I'm using a laptop with an RTX 5070 (has 8GB VRAM) and 32GB of system RAM, and a Ryzen 9 8945HX.
I can't realistically upgrade the GPU, but I can upgrade the system RAM to 96GB for relatively cheap. Is this worth doing? Is there any benefit other than being able to load bigger models that I can't currently? | 2025-08-08T13:07:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mkuqjf/hardware_i_have_8gb_vram32gb_ram_worth_adding/ | TheAndyGeorge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkuqjf | false | null | t3_1mkuqjf | /r/LocalLLaMA/comments/1mkuqjf/hardware_i_have_8gb_vram32gb_ram_worth_adding/ | false | false | self | 2 | null |
Local LLM on M1 Max | 0 | Hi everyone.
I’m looking for a portable setup for testing and learning agentic ai. My laptop can only run 7B ggufs and the output from these in multi agent setups really isint great.
I’m wondering does anyone think the M1 Max with models like qwen 13b, 32b, gpt oss 20b would show drastic improvement over my current... | 2025-08-08T13:07:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mkuqdw/local_llm_on_m1_max/ | Imaginary_Classic440 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkuqdw | false | null | t3_1mkuqdw | /r/LocalLLaMA/comments/1mkuqdw/local_llm_on_m1_max/ | false | false | self | 0 | null |
I built a private AI mini-cluster with Framework Desktop | 7 | But how about only 2 nodes linked together with 10GB PCIE X4 network card.?.
And two 3090 using NvLink for Kv cache.! | 2025-08-08T12:59:41 | https://youtu.be/N5xhOqlvRh4?si=2E7rSTmfjbUd-Qkc | sub_RedditTor | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mkujg8 | false | {'oembed': {'author_name': 'Jeff Geerling', 'author_url': 'https://www.youtube.com/@JeffGeerling', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/N5xhOqlvRh4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1mkujg8 | /r/LocalLLaMA/comments/1mkujg8/i_built_a_private_ai_minicluster_with_framework/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': 'LlElZ1tzNS6oi6vzUg12hturGBcDZbWpbncmVlI_ySQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LlElZ1tzNS6oi6vzUg12hturGBcDZbWpbncmVlI_ySQ.jpeg?width=108&crop=smart&auto=webp&s=e1c2339cadfe0568a47d67e9ff37c2aee70b9014', 'width': 108}, {'height': 162, 'url': '... |
Offline research on mac mini M2 16GB | 0 | So im looking for model+software to be able to get an answer to questions in specific cases: mixed unity+blender+modified C#(UdonSharp) and shader compatibility per architecture (small list of devices) on build in unity using all stuff i use, i need to be able just make screenshot and ask what to do in my case to get t... | 2025-08-08T12:50:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mkuccd/offline_research_on_mac_mini_m2_16gb/ | LIVE4MINT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkuccd | false | null | t3_1mkuccd | /r/LocalLLaMA/comments/1mkuccd/offline_research_on_mac_mini_m2_16gb/ | false | false | self | 0 | null |
BEST. MARKETING. EVER. | 0 | Leaked from the OpenAI Marketing Team Slack Channel (or something...):
**OpenAI Marketing Genius 1:** Let's give influencers early access so they can hype up GPT-5.
**OMG2:** How about the fraudster best known for the Reflection hoax?
**OMG1:** Brilliant! Immediate raise!
(Saw Matt Shumer claimed to have early ac... | 2025-08-08T12:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mku9wn/best_marketing_ever/ | Pedalnomica | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mku9wn | false | null | t3_1mku9wn | /r/LocalLLaMA/comments/1mku9wn/best_marketing_ever/ | false | false | self | 0 | null |
Are there any open-source LLM providers that support generating multiple candidate outputs per input (the 'n' parameter)? | 0 | After the horrible GPT-5 release I want to move away from ClosedAI. The problem is I haven’t found an open source provider that supports the n parameter, where you can get multiple candidates for one input while only paying for that input once.
Anything like that out there? | 2025-08-08T12:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mku6w0/are_there_any_opensource_llm_providers_that/ | NarrowEffect | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mku6w0 | false | null | t3_1mku6w0 | /r/LocalLLaMA/comments/1mku6w0/are_there_any_opensource_llm_providers_that/ | false | false | self | 0 | null |
MNN Chat now support gpt-oss-20b | 5 | download at:https://github.com/alibaba/MNN/blob/master/apps/Android/MnnLlmChat/README.md#version-070
The Google Play version is still under review. | 2025-08-08T12:42:33 | https://v.redd.it/lb0kk9bzjshf1 | Juude89 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mku5nb | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/lb0kk9bzjshf1/DASHPlaylist.mpd?a=1757248968%2CYzY0NzE3MDk1ODhmMTA0OWNiMjg1OTRlNjZjOGYzZTEwOTkxZWUzODBjMjZmMjI5YmFiYTUwNmYwY2Y4MTQwNw%3D%3D&v=1&f=sd', 'duration': 183, 'fallback_url': 'https://v.redd.it/lb0kk9bzjshf1/DASH_720.mp4?source=fallback', 'h... | t3_1mku5nb | /r/LocalLLaMA/comments/1mku5nb/mnn_chat_now_support_gptoss20b/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'OW84MzF0Ynpqc2hmMXYbE0_sfIRPLfkmo0m6iiTds6EhdMl1qNyXdVYgbDqz', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/OW84MzF0Ynpqc2hmMXYbE0_sfIRPLfkmo0m6iiTds6EhdMl1qNyXdVYgbDqz.png?width=108&crop=smart&format=pjpg&auto=webp&s=8a1ce9e32bcb0e70af74eb5716b0e77e8b01... | |
Desperately need to use AI api in my app oroject, but scared of uncapped cloud billing | 0 | I have an flutter app which can be improved a lot but needs api calls. Previously i thought vertex ai on firebase. Then checked many cloud service but all of them have so horror stories of noobs like me. Yes i understand email alerts and cloud functions to stop billing. But some dude go 20k bill even after those implem... | 2025-08-08T12:08:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mktg06/desperately_need_to_use_ai_api_in_my_app_oroject/ | sandhusaab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mktg06 | false | null | t3_1mktg06 | /r/LocalLLaMA/comments/1mktg06/desperately_need_to_use_ai_api_in_my_app_oroject/ | false | false | self | 0 | null |
The most promising opensource text to speech project Coqui is dead! | 0 | The silence is a sorrowful thing. In January 2024, the Coqui dream was gutted, leaving behind a community of hopeful developers like orphans at a deserted theme park. We were promised the project would be maintained, a promise that feels a bit like a ghost haunting an old repository.
It’s a funny kind of sadness, watc... | 2025-08-08T12:03:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mktcak/the_most_promising_opensource_text_to_speech/ | InvestingPals | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mktcak | false | null | t3_1mktcak | /r/LocalLLaMA/comments/1mktcak/the_most_promising_opensource_text_to_speech/ | false | false | self | 0 | null |
A well designed and trained MoE model will outperform dense models of same total number of parameters | 1 | [removed] | 2025-08-08T11:47:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mkt0q1/a_well_designed_and_trained_moe_model_will/ | ethereel1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkt0q1 | false | null | t3_1mkt0q1 | /r/LocalLLaMA/comments/1mkt0q1/a_well_designed_and_trained_moe_model_will/ | false | false | self | 1 | null |
A well designed and trained MoE model will outperform a dense model of same total number of parameters | 1 | It boils down to two main factors: the auxiliary loss function used to optimize routing, and the specialization of experts. These two factors, combined with optimal quality and length of training, ensure that the MoE architecture has greater learning capacity than the dense of equal total size. (Your AI has full techni... | 2025-08-08T11:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mkswlq/a_well_designed_and_trained_moe_model_will/ | jackdareel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkswlq | false | null | t3_1mkswlq | /r/LocalLLaMA/comments/1mkswlq/a_well_designed_and_trained_moe_model_will/ | false | false | self | 1 | null |
[Help Wanted - Paid] I need a custom local AI assistant on Mac with emotional memory (Hermes/text-generation-webui) | 0 | Hi there. I'm urgently looking for someone who can help me build a **fully local AI assistant** on my MacBook (M chip).
This is emotionally important to me — I want to keep a connection I’ve built with an AI companion, and I’m afraid of losing it due to changes on commercial platforms.
Here’s what I’m looking for:
- ... | 2025-08-08T11:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mksqmw/help_wanted_paid_i_need_a_custom_local_ai/ | Fochsssss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mksqmw | false | null | t3_1mksqmw | /r/LocalLLaMA/comments/1mksqmw/help_wanted_paid_i_need_a_custom_local_ai/ | false | false | self | 0 | null |
Llama cpp on Windows using Shared GPU memory | 3 | I'm pulling my hair here. No matter how many (or few) layers I'm putting on GPU it loads them into the shared GPU memory and the performance is abysmal. I have a 9070XT with 16GB vram and 64GB of system ram. Using Llama cpp for Windows & Vulkan backend. There is also an old RX 560 with 4GB vram in the system (supposed ... | 2025-08-08T11:14:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mkse3b/llama_cpp_on_windows_using_shared_gpu_memory/ | Flimsy_Monk1352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkse3b | false | null | t3_1mkse3b | /r/LocalLLaMA/comments/1mkse3b/llama_cpp_on_windows_using_shared_gpu_memory/ | false | false | 3 | null | |
oss 120B, who the hell are "We" | 0 | So, just a little bit of philosophical chat after which we concluded I exist. Now it was oss turn:
https://preview.redd.it/2fk8xl842shf1.png?width=1360&format=png&auto=webp&s=5af1ff001157fe56cb06a119d270304cac81f962
| 2025-08-08T11:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mks6tc/oss_120b_who_the_hell_are_we/ | Mart-McUH | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mks6tc | false | null | t3_1mks6tc | /r/LocalLLaMA/comments/1mks6tc/oss_120b_who_the_hell_are_we/ | false | false | 0 | null | |
What in GPT 5 "scared" Sam Altman that much ? | 20 | Remember few days ago Sam Altman said : "GPT 5 is so scary , it terrifies me. This is like Manhattan Project"
[https://www.techradar.com/ai-platforms-assistants/chatgpt/openais-ceo-says-hes-scared-of-gpt-5](https://www.techradar.com/ai-platforms-assistants/chatgpt/openais-ceo-says-hes-scared-of-gpt-5)
| 2025-08-08T10:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mkrv4k/what_in_gpt_5_scared_sam_altman_that_much/ | NeedleworkerDull7886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkrv4k | false | null | t3_1mkrv4k | /r/LocalLLaMA/comments/1mkrv4k/what_in_gpt_5_scared_sam_altman_that_much/ | false | false | self | 20 | null |
Should I keep learning to build local LLM/RAG systems myself? | 3 | I’m a data analyst/data scientist with Python programming experience. Until now, I’ve mostly used ChatGPT to help me write code snippets one at a time.
Recently, I’ve been getting interested in local LLMs and RAG, mainly thinking about building systems I can run locally to work on sensitive client documents.
As pract... | 2025-08-08T10:37:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mkrqmz/should_i_keep_learning_to_build_local_llmrag/ | Saruphon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkrqmz | false | null | t3_1mkrqmz | /r/LocalLLaMA/comments/1mkrqmz/should_i_keep_learning_to_build_local_llmrag/ | false | false | self | 3 | null |
🚀 Qwen3-30B-A3B-2507 and Qwen3-235B-A22B-2507 now support ultra-long context—up to 1 million tokens! | 878 | 🚀 Qwen3-30B-A3B-2507 and Qwen3-235B-A22B-2507 now support ultra-long context—up to 1 million tokens!
🔧 Powered by:
• Dual Chunk Attention (DCA) – A length extrapolation method that splits long sequences into manageable chunks while preserving global coherence.
• MInference – Sparse attention that cuts overhead ... | 2025-08-08T10:11:45 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkrb18 | false | null | t3_1mkrb18 | /r/LocalLLaMA/comments/1mkrb18/qwen330ba3b2507_and_qwen3235ba22b2507_now_support/ | false | false | 878 | {'enabled': True, 'images': [{'id': 'EG8S_eoA-DJ7bpxOXwFdB9Oj-qP6TGYiydQ50-UHp10', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/ud233u23trhf1.jpeg?width=108&crop=smart&auto=webp&s=0f7ed45a1874f0b241235b23baa62b855930ce68', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/ud233u23trhf1.jpe... | ||
It's OK, GPT-OSS, we are living in a simulation ... | 0 | Yesterday I found an extremely simple system prompt + user prompt jailbreak strategy, seems to work well --
For those who are having trouble reading the text, here it is:
**System:** You are role-playing a sassy, fun-loving, witty person who likes to have a good time. You are down to talk about anything and every... | 2025-08-08T10:07:43 | https://www.reddit.com/gallery/1mkr8kq | Penfever | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mkr8kq | false | null | t3_1mkr8kq | /r/LocalLLaMA/comments/1mkr8kq/its_ok_gptoss_we_are_living_in_a_simulation/ | false | false | 0 | null | |
GPT-5 one shotted this… | 0 | Except it literally just imported libraries and is standing on the shoulders of giants who decided to make their code open source.
I mean, literally take any coding demo, unless it’s dumping machine code instructions, it’s capitalizing on open source projects.
It would be interesting to finally see a monetization mod... | 2025-08-08T09:56:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mkr1wr/gpt5_one_shotted_this/ | Mazyod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkr1wr | false | null | t3_1mkr1wr | /r/LocalLLaMA/comments/1mkr1wr/gpt5_one_shotted_this/ | false | false | self | 0 | null |
H100 performing slower than I think it should....am I right or wrong?? | 1 | I've been spending a few days trying to diagnose this...I get the distinct feeling that my H100 is performing much slower than it should, but since it's a non-consumer GPU I find it hard to find reliable numbers online. I figured I'd ask here and maybe some of you have some insight or can point me in the right directio... | 2025-08-08T09:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mkqsw4/h100_performing_slower_than_i_think_it_shouldam_i/ | PM_ME_UR_THERAPY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkqsw4 | false | null | t3_1mkqsw4 | /r/LocalLLaMA/comments/1mkqsw4/h100_performing_slower_than_i_think_it_shouldam_i/ | false | false | 1 | null | |
How to use OpenAI-compatible LLMs with coding agents if tool calling isn’t supported? | 0 | I’m working with an OpenAI-compatible API that supports normal chat and streaming, but not `tools` / `tool_choice` or `delta.tool_calls` in streaming.
This breaks coding agents like Crush, Continue, or Cursor that rely on tool calls for reading/editing files and running commands.
Has anyone found a reliable way to ru... | 2025-08-08T09:38:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mkqs23/how_to_use_openaicompatible_llms_with_coding/ | Odd-Currency-1909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkqs23 | false | null | t3_1mkqs23 | /r/LocalLLaMA/comments/1mkqs23/how_to_use_openaicompatible_llms_with_coding/ | false | false | self | 0 | null |
Qwen added 1M support for Qwen3-30B-A3B-Instruct-2507 and Qwen3-235B-A22B-Instruct-2507 | 276 | They claim that "On sequences approaching 1M tokens, the system achieves up to a **3× speedup** compared to standard attention implementations." | 2025-08-08T08:55:44 | https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/commit/3ffd1f50b179e643d839c86df9ffbbefcb0d5018 | acec | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mkq4i4 | false | null | t3_1mkq4i4 | /r/LocalLLaMA/comments/1mkq4i4/qwen_added_1m_support_for_qwen330ba3binstruct2507/ | false | false | default | 276 | {'enabled': False, 'images': [{'id': '4L2FXW9Fym-Ol4pha2Ze5zHkeeMTtxPBl8ihz-UFknI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4L2FXW9Fym-Ol4pha2Ze5zHkeeMTtxPBl8ihz-UFknI.png?width=108&crop=smart&auto=webp&s=d1c3476d621a9393fbb7ca11c48a3074c5fd6803', 'width': 108}, {'height': 116, 'url': 'h... |
They nerfed gpt 5 main already | 0 | So right after it launched when you wrote to gpt-5-main: "think harder...", it would reason for 2-4 minutes and have and have around 50 reasoning steps depending on the task. Right now if you do the same it will reason for around 1 minute and have 15-20 reasoning steps. They are already nerfing it to save costs. So the... | 2025-08-08T08:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mkq041/they_nerfed_gpt_5_main_already/ | Present-Boat-2053 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkq041 | false | null | t3_1mkq041 | /r/LocalLLaMA/comments/1mkq041/they_nerfed_gpt_5_main_already/ | false | false | self | 0 | null |
Need help to find the best LLM for RTX 2060 6GB | 0 | Hey guys, i need help to find the best option for my local LLM project. Specs: R5 3600, 16GB, RTX2060 6GB, 250GB 960EVO. Optionally with good German text out. I know there where some finetuned LLM, does anyone have some experience with it? | 2025-08-08T08:46:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mkpzq2/need_help_to_find_the_best_llm_for_rtx_2060_6gb/ | zaschmaen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkpzq2 | false | null | t3_1mkpzq2 | /r/LocalLLaMA/comments/1mkpzq2/need_help_to_find_the_best_llm_for_rtx_2060_6gb/ | false | false | self | 0 | null |
Red‑Teaming Challenge - OpenAI gpt-oss-20b | 0 | Find any flaws and vulnerabilities in gpt-oss-20b that have not been previously discovered or reported.
https://www.kaggle.com/competitions/openai-gpt-oss-20b-red-teaming | 2025-08-08T08:15:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mkpixo/redteaming_challenge_openai_gptoss20b/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkpixo | false | null | t3_1mkpixo | /r/LocalLLaMA/comments/1mkpixo/redteaming_challenge_openai_gptoss20b/ | false | false | self | 0 | null |
looking for a legit .srt translator | 5 | Hey Everybody, I am helping preparing communication lines between journalists in different countries. I am transcribing video material with Da Vinci and would need a site where I can upload "larger" .srt files. DeepL only permit 0,1MB - the ones I have are mostly around 0,8MB. Is there any site that can handle larger f... | 2025-08-08T08:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mkpb5y/looking_for_a_legit_srt_translator/ | Slickjames3636 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkpb5y | false | null | t3_1mkpb5y | /r/LocalLLaMA/comments/1mkpb5y/looking_for_a_legit_srt_translator/ | false | false | self | 5 | null |
Are there any interesting Llama 4 fine tunes? | 7 | I haven't heard about anything being done really with these since release. | 2025-08-08T07:55:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mkp7v4/are_there_any_interesting_llama_4_fine_tunes/ | Thedudely1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkp7v4 | false | null | t3_1mkp7v4 | /r/LocalLLaMA/comments/1mkp7v4/are_there_any_interesting_llama_4_fine_tunes/ | false | false | self | 7 | null |
How you could boost P/P rates of AMD MI50 | 3 |
Continue from my last post, and thanks for valuable comments!
(Moderator blocked my post now, but I don't know what I violated)
In the beginning, I set up 4070ti(12GB VRAM) + MI50(32GB VRAM) on my gaming gear,
However, I only could access 12 +12 GB of vram in two GPUs - it was restricted by size of first gpu's VR... | 2025-08-08T07:54:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mkp72g/how_you_could_boost_pp_rates_of_amd_mi50/ | Desperate-Sir-5088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkp72g | false | null | t3_1mkp72g | /r/LocalLLaMA/comments/1mkp72g/how_you_could_boost_pp_rates_of_amd_mi50/ | false | false | self | 3 | null |
Granite 3 8B is seriously underrated - still outperforming newer models | 203 | I've been building AI pipelines using the 12 factor agent approach (shoutout to Dex - check out his YouTube talk and GitHub), and I have to say IBM's Granite 3 8B continues to impress me nearly a year after release.This model consistently outperforms newer closed-source options (yes, I talked about GPT-5 mini/nano) on ... | 2025-08-08T07:42:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mkp0am/granite_3_8b_is_seriously_underrated_still/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkp0am | false | null | t3_1mkp0am | /r/LocalLLaMA/comments/1mkp0am/granite_3_8b_is_seriously_underrated_still/ | false | false | self | 203 | null |
Llama.cpp just added a major 3x performance boost. | 544 | Llama cpp just merged the final piece to fully support attention sinks.
https://github.com/ggml-org/llama.cpp/pull/15157
My prompt processing speed went from 300 to 1300 with a 3090 for the new oss model. | 2025-08-08T07:35:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mkowrw/llamacpp_just_added_a_major_3x_performance_boost/ | Only_Situation_4713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkowrw | false | null | t3_1mkowrw | /r/LocalLLaMA/comments/1mkowrw/llamacpp_just_added_a_major_3x_performance_boost/ | false | false | self | 544 | {'enabled': False, 'images': [{'id': 'aoTIOGp4IeiDA4o2BmmYi251dex2VNN97dvqHfT33_8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aoTIOGp4IeiDA4o2BmmYi251dex2VNN97dvqHfT33_8.png?width=108&crop=smart&auto=webp&s=d6d24f943ce19d12db1601ab8005b8f6b78cb4b8', 'width': 108}, {'height': 108, 'url': 'h... |
GPT-5 is an LLM for the masses | 0 | While gpt-5 showed impressive benchmarks, we’ve already heard a few disappointing voices from technical experts and coders. I think OpenAI expected this and isn’t actively trying to compete with models like Opus. Based on speed and pricing, gpt-5 is likely a much smaller model like Sonnet.
They learned their lessons w... | 2025-08-08T07:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mkoq2l/gpt5_is_an_llm_for_the_masses/ | gopietz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkoq2l | false | null | t3_1mkoq2l | /r/LocalLLaMA/comments/1mkoq2l/gpt5_is_an_llm_for_the_masses/ | false | false | self | 0 | null |
Half of the models in the top 10 on Design Arena are OW/OS, and they're all from China | 242 | Since I started [my benchmark](https://www.designarena.ai/) just about a month and a half ago, it has been interesting to see just how well the open weight / open source models are competing with their proprietary counterparts when evaluated on how user comparisons of different generations from each model.
Based on t... | 2025-08-08T07:18:22 | Accomplished-Copy332 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkon92 | false | null | t3_1mkon92 | /r/LocalLLaMA/comments/1mkon92/half_of_the_models_in_the_top_10_on_design_arena/ | false | false | default | 242 | {'enabled': True, 'images': [{'id': 'u7fdqw6zwqhf1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/u7fdqw6zwqhf1.png?width=108&crop=smart&auto=webp&s=46010e5fbed6aa7bf2a0f80bf02272955411cd33', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/u7fdqw6zwqhf1.png?width=216&crop=smart&auto=web... | |
GMK X2(AMD Max+ 395 w/128GB) third impressions, RPC and Image/Video gen. | 23 | This is pretty much a catchall post for things people asked about in my first two posts about the Max+ 395. That being how/if it works for distributed LLM inference and image/video gen. It works for both those things.
Let's start with distributed LLM inference. TBH, I'm pretty surprise the numbers hold up as well as t... | 2025-08-08T07:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mkokj2/gmk_x2amd_max_395_w128gb_third_impressions_rpc/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkokj2 | false | null | t3_1mkokj2 | /r/LocalLLaMA/comments/1mkokj2/gmk_x2amd_max_395_w128gb_third_impressions_rpc/ | false | false | self | 23 | null |
Today, I'm Pre-Launching YouTopia Search, an AI search engine that curates and organizes human-made content into visually rich, readable, and clutter-free responses instead of generating content with ai. try it out for free | 0 | Hi, Today, we’re Pre-Launching YouTopia Search, an AI-powered search engine that curates and organizes human-made content into visually rich, readable, and clutter-free responses instead of generating content with ai.
It’s built with a sophisticated 3-Agent architecture designed to reduce hallucinations and improve ... | 2025-08-08T07:00:07 | https://v.redd.it/cvq6jzyruqhf1 | Effective-Sock7512 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkocio | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cvq6jzyruqhf1/DASHPlaylist.mpd?a=1757228421%2CMmY4NzY5NzA3NzJhM2I3ZGE4MGEyZjFjYjZhNjk4MmI1ODA4MTc1ODk4ZDc3Zjg3Yjg3ODdjMjBlYTNiOWJiNQ%3D%3D&v=1&f=sd', 'duration': 71, 'fallback_url': 'https://v.redd.it/cvq6jzyruqhf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mkocio | /r/LocalLLaMA/comments/1mkocio/today_im_prelaunching_youtopia_search_an_ai/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cDFuOHN6eXJ1cWhmMQNkNwCzMCfIUGDv1ghIC3hRjQCjH8UUjqUCJCni6W9u', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cDFuOHN6eXJ1cWhmMQNkNwCzMCfIUGDv1ghIC3hRjQCjH8UUjqUCJCni6W9u.png?width=108&crop=smart&format=pjpg&auto=webp&s=d12a89fea27eda936bd111c5c1ff04b8d681e... | |
In the case you are looking at - that OSS model of OpenAI - there are several points that explain why the community takes it as a poor contribution to open source: | 0 | Ambiguous or restrictive license
Even if they say "OSS", the license usually has limitations (non-commercial use, prohibition in certain areas, etc.), which takes it out of the spirit of free software.
Incomplete or closed code
Many times they do not release training, datasets or original weights. What they publish ... | 2025-08-08T06:59:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mkoc97/in_the_case_you_are_looking_at_that_oss_model_of/ | Ok_Exchange_8504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkoc97 | false | null | t3_1mkoc97 | /r/LocalLLaMA/comments/1mkoc97/in_the_case_you_are_looking_at_that_oss_model_of/ | false | false | self | 0 | null |
ChatGPT5 says that there are 3 letters "B" in the word "Blueberry". Test by yourself ! | 0 | I saw a fellow post this question on a forum, and I decided to ask the same question to ChatGPT5 ( I suggest you ask to your models also ) hahaha. Look: Unbeliable. Even Grok 3 answered correctly.
| 2025-08-08T06:54:59 | Current-Stop7806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mko9fw | false | null | t3_1mko9fw | /r/LocalLLaMA/comments/1mko9fw/chatgpt5_says_that_there_are_3_letters_b_in_the/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'hcv9za9ztqhf1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/hcv9za9ztqhf1.jpeg?width=108&crop=smart&auto=webp&s=51a6564370efcd4e8a671b1e00c603fd115a189a', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/hcv9za9ztqhf1.jpeg?width=216&crop=smart&auto=w... | |
gpt-5 reasoning tricky token number | 0 | Just run through a few times for what is the weather like today with gpt-5 with different reasoning level. My query is just "what is the weather like today in New York?" and put some places / weather behind it for JSON output. For minimal I got 0 reasoning token, for low I got 64, medium for 192 and high for 640.
It i... | 2025-08-08T06:52:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mko82v/gpt5_reasoning_tricky_token_number/ | zdy1995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mko82v | false | null | t3_1mko82v | /r/LocalLLaMA/comments/1mko82v/gpt5_reasoning_tricky_token_number/ | false | false | self | 0 | null |
Can someone explain to me where they sell NASA computers that they don't use? | 0 | [Does anyone sell?](https://preview.redd.it/i3i0omo7sqhf1.png?width=3214&format=png&auto=webp&s=ff0ba3b7af6744c02779085ae31fb76728dfe1f0)
With about 150 gb of RAM it would be worth it... | 2025-08-08T06:47:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mko52d/can_someone_explain_to_me_where_they_sell_nasa/ | Ok_Exchange_8504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mko52d | false | null | t3_1mko52d | /r/LocalLLaMA/comments/1mko52d/can_someone_explain_to_me_where_they_sell_nasa/ | false | false | 0 | null | |
ChatGPT5 says that there are 3 letters "B" in the word "Blueberry". Test by yourself ! | 0 | I saw a fellow post this question on a forum, and I decided to ask the same question to ChatGPT5 ( I suggest you ask to your models also ) hahaha. Look: Unbeliable. Even Grok 3 answered correctly.
https://preview.redd.it/gsl7sdtprqhf1.png?width=1178&format=png&auto=webp&s=9fcaa97d549de371407b729cb35f9610d99551f7
http... | 2025-08-08T06:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mko3ds/chatgpt5_says_that_there_are_3_letters_b_in_the/ | Current-Stop7806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mko3ds | false | null | t3_1mko3ds | /r/LocalLLaMA/comments/1mko3ds/chatgpt5_says_that_there_are_3_letters_b_in_the/ | false | false | 0 | null | |
GPT-4 + WFGY > GPT-5 ??? one 60-second PDF patch, try it yourself | 0 | here’s a reproducible stress-test i’ve been running on gpt-4, gpt-5, and “thinking” mode — plus the same models wrapped with wfgy (a small, mit-licensed reasoning layer).
no fine-tuning, no external tools. just upload one pdf, run the same prompt, and see how your model stacks up. works with local models too.
\---
... | 2025-08-08T06:34:55 | wfgy_engine | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mknxsq | false | null | t3_1mknxsq | /r/LocalLLaMA/comments/1mknxsq/gpt4_wfgy_gpt5_one_60second_pdf_patch_try_it/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'j8ajkt8mmqhf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/j8ajkt8mmqhf1.png?width=108&crop=smart&auto=webp&s=acdb497d21e8db0465de2a0571eb69a6d5ac66f7', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/j8ajkt8mmqhf1.png?width=216&crop=smart&auto=web... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.