| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Need advice on a VPS to host a Docker RAG engine with a vector DB | 2 | Hello everyone,
I'm a student with some programming experience and I'm generally comfortable with IT, but I don't have much experience with VPS or server hosting.
I've built a web app with tools to help medical students study, which uses an LLM with a RAG system. For the RAG system, I'm currently using RAGFlow, an op... | 2025-08-01T03:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/1melltk/need_advice_on_a_vps_to_host_a_docker_rag_engine/ | aliihsan01100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1melltk | false | null | t3_1melltk | /r/LocalLLaMA/comments/1melltk/need_advice_on_a_vps_to_host_a_docker_rag_engine/ | false | false | self | 2 | null |
Some Questions (Curiosity) Regarding Ollama, llama.cpp and LM Studio for a complete beginner | 3 | 1. Why is llama.cpp needed? What does it actually do? If a model's weights are available, then loading the architecture and the weights should be enough, right? Is that the work it does?
2. How does llama.cpp make inference faster? Also, could it have been written in something other than C++ (like C or any othe... | 2025-08-01T02:58:49 | https://www.reddit.com/r/LocalLLaMA/comments/1melcsm/some_questions_curiosity_regarding_ollama/ | Rukelele_Dixit21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1melcsm | false | null | t3_1melcsm | /r/LocalLLaMA/comments/1melcsm/some_questions_curiosity_regarding_ollama/ | false | false | self | 3 | null |
🎭 I've been working on a pack of prompts that convert local AI (like llama.cpp) into characters with roles. | 1 | [removed] | 2025-08-01T02:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mel6u4/ive_been_working_on_a_pack_of_prompts_that/ | Ok_Exchange_8504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mel6u4 | false | null | t3_1mel6u4 | /r/LocalLLaMA/comments/1mel6u4/ive_been_working_on_a_pack_of_prompts_that/ | false | false | self | 1 | null |
Dingo 1.9.0 released: Open-source data quality evaluation with enhanced hallucination detection | 1 | Just released **Dingo 1.9.0** with major upgrades for RAG-era data quality assessment.
# Key Updates:
**🔍 Enhanced Hallucination Detection** Dingo 1.9.0 integrates two powerful hallucination detection approaches:
* **HHEM-2.1-Open local model** (recommended) - runs locally without API costs
* **GPT-based cloud dete... | 2025-08-01T02:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mel6r0/dingo_190_released_opensource_data_quality/ | chupei0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mel6r0 | false | null | t3_1mel6r0 | /r/LocalLLaMA/comments/1mel6r0/dingo_190_released_opensource_data_quality/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uB_z0Yt9CjXqTRQp8TUA_R28UmUMbBJFEGl6IdJ7inY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uB_z0Yt9CjXqTRQp8TUA_R28UmUMbBJFEGl6IdJ7inY.png?width=108&crop=smart&auto=webp&s=dbdf50a8b88735ddab8ff93474d2b6933c6f40b4', 'width': 108}, {'height': 108, 'url': 'h... |
Best open source LLM for long context RAG? | 0 | I’m developing an agentic RAG application and need your advice on which open source LLM to use. In your experience, which LLM has the best citation grounding? (i.e., claims it makes with citations should actually be supported by the cited content)
I need near perfect grounding accuracy, and don’t want... | 2025-08-01T02:50:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mel6ma/best_open_source_llm_for_long_context_rag/ | ButterscotchVast2948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mel6ma | false | null | t3_1mel6ma | /r/LocalLLaMA/comments/1mel6ma/best_open_source_llm_for_long_context_rag/ | false | false | self | 0 | null |
genmo is great for storyboards and concept videos | 0 | genmo lets you build short story scenes with text prompts. not great for subtle emotion yet, but good for sci-fi or fantasy previews. | 2025-08-01T02:40:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mekzk3/genmo_is_great_for_storyboards_and_concept_videos/ | Neat_Chapter_9055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mekzk3 | false | null | t3_1mekzk3 | /r/LocalLLaMA/comments/1mekzk3/genmo_is_great_for_storyboards_and_concept_videos/ | false | false | self | 0 | null |
What kind of system do I need to run Qwen3-Coder locally like Cursor AI? Is my setup enough? | 3 | Hey everyone,
I want to run Qwen3-Coder-30B-A3B-Instruct locally and get fast code suggestions similar to Cursor AI. Here is my current system:
* CPU: 8-core, 16-thread Intel i7-12700K
* GPU: NVIDIA RTX 3070 or 4070 with 12 to 16 GB VRAM
* RAM: 64 GB DDR4 or DDR5
* Storage: 1 TB NVMe SSD
* Operating System: Windows 1... | 2025-08-01T02:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mekuwo/what_kind_of_system_do_i_need_to_run_qwen3coder/ | Medical_Path2953 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mekuwo | false | null | t3_1mekuwo | /r/LocalLLaMA/comments/1mekuwo/what_kind_of_system_do_i_need_to_run_qwen3coder/ | false | false | self | 3 | null |
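For rough context on whether a setup like that is enough, here is a back-of-envelope sizing sketch; the parameter count and bytes-per-weight below are approximations, not measurements:

```python
# Approximate GGUF footprint of Qwen3-Coder-30B-A3B at Q4_K_M quantization.
# Assumption: ~30.5B total parameters at roughly 4.5 bits/weight on average.
params = 30.5e9
bytes_per_param = 4.5 / 8
model_gb = params * bytes_per_param / 1e9       # ~17 GB of weights
vram_gb = 12                                    # e.g. an RTX 3070-class card
offload_gb = max(0.0, model_gb - vram_gb)
print(f"weights ~{model_gb:.0f} GB; ~{offload_gb:.0f} GB must sit in system RAM")
```

Since only ~3B of the 30B parameters are active per token, offloading the MoE expert weights to system RAM usually still gives usable generation speeds on a 12-16 GB card.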
Agentic email workflow inside of OpenWebUI | 0 |
This is my first time really playing with tool calling, so bear with me...
Anytime I get GLM to do a multi-step workflow, I get these awkward, repeating explanations.
Is that the model choosing to say that, or is it an OpenWebUI thing? Or a setting I have wrong?
Seems like everything in green should be... | 2025-08-01T02:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mekoy8/agentic_email_workflow_inside_of_openwebui/ | Conscious_Cut_6144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mekoy8 | false | null | t3_1mekoy8 | /r/LocalLLaMA/comments/1mekoy8/agentic_email_workflow_inside_of_openwebui/ | false | false | 0 | null | |
Looking to buy/build a killer LLM/AI/ML/Deep Learning workstation | 0 | Hello guys.
I’ve been holding off on doing this for a while.
I work in IT and I’ve been in computer science for many years, but I am a complete novice on LLMs. I want to be able to run the best and baddest models that I see everyone talking about here and I was hoping for some advice that might be useful to other pe... | 2025-08-01T02:24:20 | https://www.reddit.com/r/LocalLLaMA/comments/1meknnb/looking_to_buybuild_a_killer_llmaimldeep_learning/ | Particular_Cancel947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meknnb | false | null | t3_1meknnb | /r/LocalLLaMA/comments/1meknnb/looking_to_buybuild_a_killer_llmaimldeep_learning/ | false | false | self | 0 | null |
How can I set the context length for external API models in Open WebUI? | 0 | The title says it all: how can I set the context length for external API models in Open WebUI? Thanks in advance for any help. 🙏💥 | 2025-08-01T02:22:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mekm4p/how_can_i_set_the_context_length_for_api_external/ | Current-Stop7806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mekm4p | false | null | t3_1mekm4p | /r/LocalLLaMA/comments/1mekm4p/how_can_i_set_the_context_length_for_api_external/ | false | false | self | 0 | null |
Forkable logic system for local autonomy after collapse — Vox–Codex | 1 | [removed] | 2025-08-01T02:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mekjb0/forkable_logic_system_for_local_autonomy_after/ | Rare-Result3181 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mekjb0 | false | null | t3_1mekjb0 | /r/LocalLLaMA/comments/1mekjb0/forkable_logic_system_for_local_autonomy_after/ | false | false | self | 1 | null |
tool calling support was merged into ik_llama last week | 8 | i didn't see anyone post about it here so i decided to make a post. i know that i avoided using it for coding related stuff because of that but i've been using it since the pull request was merged and it works great!
https://github.com/ikawrakow/ik_llama.cpp/pull/643 | 2025-08-01T02:04:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mek98n/tool_calling_support_was_merged_into_ik_llama/ | jwpbe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mek98n | false | null | t3_1mek98n | /r/LocalLLaMA/comments/1mek98n/tool_calling_support_was_merged_into_ik_llama/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'lv81lYLbq_i2v4fj-LmT1r0FDkcO3TmV3X7sDcFmAB0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lv81lYLbq_i2v4fj-LmT1r0FDkcO3TmV3X7sDcFmAB0.png?width=108&crop=smart&auto=webp&s=50ce203e6f450a389e8b59ba25e4cb82cefd2822', 'width': 108}, {'height': 108, 'url': 'h... |
Best TTS model right now that I can self host? | 0 | Looking for a TTS model that is human like that I can self host.
Preferably it would generate a response quickly and have human emotion capability (laughing, sighing, etc.) | 2025-08-01T01:46:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mejvkn/best_tts_model_right_now_that_i_can_self_host/ | iKontact | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mejvkn | false | null | t3_1mejvkn | /r/LocalLLaMA/comments/1mejvkn/best_tts_model_right_now_that_i_can_self_host/ | false | false | self | 0 | null |
Anyone desperate for a little more compute, may as well try | 0 | https://www.reddit.com/r/pcmasterrace/s/HrKBc1chMw | 2025-08-01T01:44:25 | https://www.reddit.com/r/LocalLLaMA/comments/1meju07/anyone_desperate_for_a_little_more_compute_may_as/ | ArchdukeofHyperbole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meju07 | false | null | t3_1meju07 | /r/LocalLLaMA/comments/1meju07/anyone_desperate_for_a_little_more_compute_may_as/ | false | false | self | 0 | null |
Running Local RAG on Thousands of OCR’d PDFs — Need Advice for Efficient Long-Doc Processing | 5 | Hi everyone,
I'm beginning my journey into working with LLMs, RAG pipelines, and local inference — and I’m facing a real-world challenge right off the bat.
I have a large corpus of documents (thousands of them), mostly in PDF format, some exceeding 10,000 pages each. All files have already gone through OCR, so the te... | 2025-08-01T01:39:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mejq45/running_local_rag_on_thousands_of_ocrd_pdfs_need/ | NaturalInitial1025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mejq45 | false | null | t3_1mejq45 | /r/LocalLLaMA/comments/1mejq45/running_local_rag_on_thousands_of_ocrd_pdfs_need/ | false | false | self | 5 | null |
How are people running GLM-4.5-Air in int4 on a 4090 or even laptops with 64GB of RAM? I get Out of Memory errors. | 14 | ^
Medium article claim
I just get instant OOMs. Here is the command I use in VLLM with [https://huggingface.co/cpatonn/GLM-4.5-Air-AWQ](https://huggingface.co/cpatonn/GLM-4.5-Air-AWQ)
❯ vllm serve /home/nomadictuba2005/models/glm45air-awq \
--quantization compressed-tensors \
--dtype float16 \
\--kv-c... | 2025-08-01T01:36:52 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mejoef | false | null | t3_1mejoef | /r/LocalLLaMA/comments/1mejoef/how_are_people_running_glm45air_in_int4_on_a_4090/ | false | false | 14 | {'enabled': True, 'images': [{'id': 'Ux4xrL_cOzrqZtnZgjw8b4MKDUdhGBhtV-cnMt2d4YI', 'resolutions': [{'height': 19, 'url': 'https://preview.redd.it/ob4424fkabgf1.png?width=108&crop=smart&auto=webp&s=b4e638541ae9fe39a61d5cebca812548c1426f0b', 'width': 108}, {'height': 39, 'url': 'https://preview.redd.it/ob4424fkabgf1.png?... | ||
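A back-of-envelope check makes the instant OOM unsurprising; this sketch assumes ~106B total parameters for GLM-4.5-Air and ~0.5 bytes/param for an int4 AWQ checkpoint:

```python
# Rough weight-memory estimate for GLM-4.5-Air int4 (illustrative assumptions).
params = 106e9                      # assumed total parameter count
weight_gb = params * 0.5 / 1e9      # int4 ~= 0.5 bytes/param -> ~53 GB
vram_gb = 24                        # a single RTX 4090
print(f"weights ~{weight_gb:.0f} GB vs {vram_gb} GB VRAM "
      f"-> fits: {weight_gb < vram_gb}")
```

vLLM keeps all weights (plus KV cache) in GPU memory by default, so the 4090 and 64GB-RAM laptop reports are most likely llama.cpp/GGUF setups that offload most layers to system RAM, not vLLM.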
[P] Tri-70B-preview-SFT: New 70B Model (Research Preview, SFT-only) | 56 | Hey r/LocalLLaMA,
We're a scrappy startup at Trillion Labs and just released [Tri-70B-preview-SFT](https://huggingface.co/trillionlabs/Tri-70B-preview-SFT), our largest language model yet (70B params!), trained from scratch on \~1.5T tokens. We unexpectedly ran short on compute, so this is a pure supervised fine-tunin... | 2025-08-01T01:31:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mejkcu/p_tri70bpreviewsft_new_70b_model_research_preview/ | jshin49 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mejkcu | false | null | t3_1mejkcu | /r/LocalLLaMA/comments/1mejkcu/p_tri70bpreviewsft_new_70b_model_research_preview/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': '54LcYt31V5699aK96P6r3bJQs24PiOVpBBLMv2INZiw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/54LcYt31V5699aK96P6r3bJQs24PiOVpBBLMv2INZiw.png?width=108&crop=smart&auto=webp&s=b7a80c31c557591f18bda1f387961a8fe38f053e', 'width': 108}, {'height': 116, 'url': 'h... |
first time local llm and facing issues | 0 | 2025-08-01T01:00:32 | https://www.reddit.com/r/LocalLLaMA/comments/1meiwzu/first_time_local_llm_and_facing_issues/ | Fit_Bit_9845 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meiwzu | false | null | t3_1meiwzu | /r/LocalLLaMA/comments/1meiwzu/first_time_local_llm_and_facing_issues/ | false | false | 0 | null | ||
Releasing Open Weights for FLUX.1 Krea | 24 | Yes, it's an image model and not a language model, but this blog post is really interesting, especially the parts that discuss the Pdata.
https://www.krea.ai/blog/flux-krea-open-source-release
**I am not affiliated with Black Forest, Flux, or any of these companies, I'm just sharing the link.** | 2025-08-01T00:41:54 | https://www.reddit.com/r/LocalLLaMA/comments/1meiizp/releasing_open_weights_for_flux1_krea/ | CtrlAltDelve | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meiizp | false | null | t3_1meiizp | /r/LocalLLaMA/comments/1meiizp/releasing_open_weights_for_flux1_krea/ | false | false | self | 24 | null |
Speech-to-text for long audio files | 1 | Hi everyone, does someone have recommendations for a speech-to-text model that would be able to handle long audio files (~1 hour)? What would be the best way to go about this?
| 2025-08-01T00:29:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mei9yg/speechtotext_for_long_audio_files/ | Noxchi095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mei9yg | false | null | t3_1mei9yg | /r/LocalLLaMA/comments/1mei9yg/speechtotext_for_long_audio_files/ | false | false | self | 1 | null |
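One commonly suggested option is faster-whisper, whose VAD-based chunking copes with hour-long recordings without manual splitting; a minimal sketch (the model size, device, and file name are assumptions):

```python
from faster_whisper import WhisperModel

# Transcribe a long recording; vad_filter skips silence so hour-long files
# stream through as segments instead of being decoded as one giant chunk.
model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, info = model.transcribe("meeting.mp3", vad_filter=True)
for seg in segments:
    print(f"[{seg.start:8.1f}s -> {seg.end:8.1f}s] {seg.text}")
```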
Cline + Qwen 3 Coder A3B won't call tools | 0 | ./build/bin/llama-server --model ~/Documents/Programming/LLM_models/qwen3-coder-30b-a3b-instruct-q4_k_m.gguf --n-gpu-layers 100 --host 0.0.0.0 --port 8080 --jinja --chat-template-file ~/Documents/Programming/LLM_models/tokenizer_config.json
./build/bin/llama-server --model ~/Documents/Programm
... | 2025-08-01T00:29:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mei9pu/cline_qwen_3_coder_a3b_wont_call_tools/ | fractalcrust | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mei9pu | false | null | t3_1mei9pu | /r/LocalLLaMA/comments/1mei9pu/cline_qwen_3_coder_a3b_wont_call_tools/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '... |
GGUF for GLM-4.5 Air when? | 0 | Looks like the Mac community is loving GLM-4.5 Air. But I’m over here on Ollama, and I’m wondering when we’ll see a GGUF of GLM-4.5 Air. Normally Unsloth is on this, but anyone will do. | 2025-08-01T00:28:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mei8io/gguf_for_glm45_air_when/ | JTN02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mei8io | false | null | t3_1mei8io | /r/LocalLLaMA/comments/1mei8io/gguf_for_glm45_air_when/ | false | false | self | 0 | null |
How many times do you sample, and why not more? | 0 | If you read most of the technical release papers, they sample plenty: 5, 8, 10, 25, 100 times! Some of those scores we are seeing come after that much sampling. Fair enough, I don't think an LLM should be judged by one sample, but definitely by a few. Yet it seems folks are not sampling plenty of times when doing one s... | 2025-08-01T00:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mei5ya/how_many_times_do_you_sample_and_why_not_more/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mei5ya | false | null | t3_1mei5ya | /r/LocalLLaMA/comments/1mei5ya/how_many_times_do_you_sample_and_why_not_more/ | false | false | self | 0 | null |
Open source TTS w/voice cloning and multilingual translation? | 1 | I'm getting totally lost and overwhelmed in the research and possible options, always changing and hard to keep up with.
Looking for free or open-source tools that can do two things:
1. **Voice cloning with text-to-speech** – found [this post](https://www.reddit.com/r/LocalLLaMA/comments/1f0awd6/best_local_open... | 2025-08-01T00:02:14 | https://www.reddit.com/r/LocalLLaMA/comments/1meho6b/open_source_tts_wvoice_cloning_and_multilingual/ | smoreofnothing22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meho6b | false | null | t3_1meho6b | /r/LocalLLaMA/comments/1meho6b/open_source_tts_wvoice_cloning_and_multilingual/ | false | false | self | 1 | null |
Best RAM configuration for Llama with Stable Diffusion | 0 | Hello, I plan to run Llama 4 Scout and some kind of Stable Diffusion model locally via SillyTavern and Oobabooga. What I want to know is how to configure these two models to run best given my RAM/VRAM: should both models fit in VRAM, or should I use larger models that need to overflow ... | 2025-07-31T23:55:38 | c2btw | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mehiqe | false | null | t3_1mehiqe | /r/LocalLLaMA/comments/1mehiqe/best_ram_configuration_for_llama_with_stable/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ivkha3srsagf1', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/ivkha3srsagf1.png?width=108&crop=smart&auto=webp&s=753336a0a1bc8b11bba7a9dc57050e9e7052762e', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/ivkha3srsagf1.png?width=216&crop=smart&auto=web...
100 E-books in 15 min | vLLM, A6000, around 1k output tokens/s with 100 concurrent requests Qwen3-30B-A3B-Instruct-2507 | 7 | ============================================================
BENCHMARK SUMMARY
============================================================
Total runs: 100
Successful runs: 99
Success rate: 99.0%
Total benchmark duration: 836.54s
Average time per request (wall clock): 8.37s
Overall Performance:
Average total time p... | 2025-07-31T23:45:39 | secopsml | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mehark | false | null | t3_1mehark | /r/LocalLLaMA/comments/1mehark/100_ebooks_in_15_min_vllm_a6000_around_1k_output/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'ld339ymaqagf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/ld339ymaqagf1.png?width=108&crop=smart&auto=webp&s=36ccfa946681a8390063a2e5cf969069b903d880', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/ld339ymaqagf1.png?width=216&crop=smart&auto=web... | |
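The headline numbers are self-consistent; a quick sanity check of the arithmetic:

```python
# 100 requests (99 successful) finished in 836.54 s of wall-clock time.
runs_total, runs_ok, wall_s = 100, 99, 836.54
print(f"total: {wall_s / 60:.1f} min")                         # ~13.9 min ("15 min")
print(f"wall clock per request: {wall_s / runs_total:.2f} s")  # 8.37 s, matches
print(f"throughput: {runs_ok / (wall_s / 60):.1f} books/min")  # ~7.1 books/min
# Note: with 100 concurrent requests, each individual request takes far
# longer than 8.37 s; that figure is total wall clock divided by run count.
```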
Why is "hf download" such a PITA? | 3 | * Why does it get stuck at 100%?
* Why does it kill my router?!
* Why does it slow the internet to a crawl when it's *resuming* a download?!
* Why does it take half an hour to know from where it shall resume a download?!
* Why does it receive "incomplete message" and has to sleep two dozen times during download?!
* Why t... | 2025-07-31T23:30:08 | https://www.reddit.com/r/LocalLLaMA/comments/1megyc6/why_is_hf_download_such_a_pita/ | ParaboloidalCrest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1megyc6 | false | null | t3_1megyc6 | /r/LocalLLaMA/comments/1megyc6/why_is_hf_download_such_a_pita/ | false | false | self | 3 | null |
"Horizon Alpha" hides its thinking | 58 | It's definitely OpenAI's upcoming "open-source" model. | 2025-07-31T23:18:52 | ICYPhoenix7 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1megpco | false | null | t3_1megpco | /r/LocalLLaMA/comments/1megpco/horizon_alpha_hides_its_thinking/ | false | false | default | 58 | {'enabled': True, 'images': [{'id': 'ewdetoz7magf1', 'resolutions': [{'height': 183, 'url': 'https://preview.redd.it/ewdetoz7magf1.jpeg?width=108&crop=smart&auto=webp&s=1fa5a86d9494123d5af599968bcf9f3c2ab876ea', 'width': 108}, {'height': 367, 'url': 'https://preview.redd.it/ewdetoz7magf1.jpeg?width=216&crop=smart&auto=... | |
There’s been some confusion floating around about the Maxsun Intel Arc Pro B60 Dual Turbo (48GB) — mostly about its price. | 1 | [removed] | 2025-07-31T23:09:32 | https://www.hydratechbuilds.com/product-page/intel-arc-pro-b60-dual-48g-turbo | Appropriate_Gate4055 | hydratechbuilds.com | 1970-01-01T00:00:00 | 0 | {} | 1meghn5 | false | null | t3_1meghn5 | /r/LocalLLaMA/comments/1meghn5/theres_been_some_confusion_floating_around_about/ | true | false | spoiler | 1 | null |
There’s been some confusion floating around about the Maxsun Intel Arc Pro B60 Dual Turbo (48GB) — mostly about its price. | 1 | [removed] | 2025-07-31T23:06:17 | https://www.hydratechbuilds.com/product-page/intel-arc-pro-b60-dual-48g-turbo | Appropriate_Gate4055 | hydratechbuilds.com | 1970-01-01T00:00:00 | 0 | {} | 1megey1 | false | null | t3_1megey1 | /r/LocalLLaMA/comments/1megey1/theres_been_some_confusion_floating_around_about/ | true | false | spoiler | 1 | null |
Bytedance Seed Diffusion Preview | 11 | https://seed.bytedance.com/en/seed_diffusion
"A large scale language model based on discrete-state diffusion, specializing in code generation, achieves an inference speed of 2,146 token/s, a 5.4x improvement over autoregressive models of comparable size." | 2025-07-31T23:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1megdy9/bytedance_seed_diffusion_preview/ | Beautiful_Box_7153 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1megdy9 | false | null | t3_1megdy9 | /r/LocalLLaMA/comments/1megdy9/bytedance_seed_diffusion_preview/ | false | false | self | 11 | null |
GLM is way more open about the chinese government than other chinese models. | 6 | 2025-07-31T22:59:56 | https://www.reddit.com/gallery/1meg9k5 | Pro-editor-1105 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1meg9k5 | false | null | t3_1meg9k5 | /r/LocalLLaMA/comments/1meg9k5/glm_is_way_more_open_about_the_chinese_government/ | false | false | 6 | null | ||
DIY AI MAX 395+ ITX board? | 5 | Just saw this being announced:
Direct link: https://en.sixunited.com/ZB_deatail/334.html
Do people think it will materialise? It would be a cheaper and more appropriate option than Framework's for those who prefer to build their own hardware, such as upgrading their ITX NAS. | 2025-07-31T22:59:41 | https://www.tomshardware.com/pc-components/motherboards/amd-strix-halo-mini-itx-motherboard-flaunts-128gb-lpddr5x-add-a-cpu-cooler-boot-drive-and-power-supply-for-a-slim-gaming-or-ai-rig | mitchins-au | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1meg9cq | false | null | t3_1meg9cq | /r/LocalLLaMA/comments/1meg9cq/diy_ai_max_395_itx_board/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'oqIT-G4_oveD59Zww0lHhnIZJCilyHatyz6EbH7XnhU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/oqIT-G4_oveD59Zww0lHhnIZJCilyHatyz6EbH7XnhU.jpeg?width=108&crop=smart&auto=webp&s=d2fe3ef3c6034fb4d65cf6c03eb2a579f12191a2', 'width': 108}, {'height': 121, 'url': '...
An Ollama wrapper for IRC/Slack/Discord, you want to run your own AI for chat? Here ya go. | 0 | If you want to share your `ollama` instance with your friends on Discord, or IRC like me, there aren't many options. I got this working today, so now I can have a trusted local AI on a machine that I can ask questions and it responds in the channel or in private messages. (It's also markdown in Discord/Slack, so it's p... | 2025-07-31T22:25:25 | https://github.com/jjasghar/ai-irc-slack-discord-ollama-bot | jjasghar | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mefgt2 | false | null | t3_1mefgt2 | /r/LocalLLaMA/comments/1mefgt2/an_ollama_wrapper_for_ircslackdiscord_you_want_to/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'g6vxT0esOljml79fXZH6UeNZ4VDz-9uDcCn3t81Ue44', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g6vxT0esOljml79fXZH6UeNZ4VDz-9uDcCn3t81Ue44.png?width=108&crop=smart&auto=webp&s=3ddb4af410578d36d2885aabe3c69324a178412f', 'width': 108}, {'height': 108, 'url': 'h... | |
Ollama's new GUI is closed source? | 279 | Brothers and sisters, we're being taken for fools.
https://preview.redd.it/d1iudzju8agf1.png?width=922&format=png&auto=webp&s=c7d5d1e1b891425817fab581afae0149aec26b6b
Did anyone check if it's phoning home? | 2025-07-31T22:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1meeyee/ollamas_new_gui_is_closed_source/ | Sea_Night_2572 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meeyee | false | null | t3_1meeyee | /r/LocalLLaMA/comments/1meeyee/ollamas_new_gui_is_closed_source/ | false | false | 279 | null | |
The Great Deception of "Low Prices" in LLM APIs | 133 | ( Or... The adventures of a newbie )
Today I learned something really important — and honestly, I had no idea how using API-hosted LLMs can quietly become a black hole for your wallet.💸💰
At first glance, the pricing seems super appealing. You see those spicy “low” prices from big US companies — something like $0.0... | 2025-07-31T21:54:06 | Current-Stop7806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1meep6o | false | null | t3_1meep6o | /r/LocalLLaMA/comments/1meep6o/the_great_deception_of_low_prices_in_llm_apis/ | false | false | default | 133 | {'enabled': True, 'images': [{'id': 'f8vv4t837agf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/f8vv4t837agf1.png?width=108&crop=smart&auto=webp&s=4a158041f9882499a65c08f11adace0fa76a0f40', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/f8vv4t837agf1.png?width=216&crop=smart&auto=we... | |
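The mechanism behind the "black hole" is easy to miss: chat APIs are stateless, so every turn resends the whole history as input tokens. A toy cost model (the prices and token counts below are hypothetical, purely to show the shape of the curve):

```python
# Hypothetical rates: $0.50 / 1M input tokens, $1.50 / 1M output tokens.
price_in, price_out = 0.50e-6, 1.50e-6
user_tokens, reply_tokens = 500, 500     # tokens added per turn (assumed)

cost, history = 0.0, 0
for turn in range(40):                   # a 40-turn conversation
    history += user_tokens               # prior turns are resent every call
    cost += history * price_in + reply_tokens * price_out
    history += reply_tokens
print(f"total: ${cost:.2f}")  # grows ~quadratically with conversation length
```

The per-token price stays "low" the whole time; it is the quadratic growth of resent context that quietly multiplies the bill.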
Vibe coding in prod by Anthropic | 0 | 2025-07-31T21:51:40 | https://youtu.be/fHWFF_pnqDk?si=0b2cr3QYxR4x0Ups | siddhantparadox | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1meen33 | false | {'oembed': {'author_name': 'Anthropic', 'author_url': 'https://www.youtube.com/@anthropic-ai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/fHWFF_pnqDk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope... | t3_1meen33 | /r/LocalLLaMA/comments/1meen33/vibe_coding_in_prod_by_anthropic/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'u-k4ljvDNY76UM3dX9Yb7qhEVOQzhmoT-rA7taL17-U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/u-k4ljvDNY76UM3dX9Yb7qhEVOQzhmoT-rA7taL17-U.jpeg?width=108&crop=smart&auto=webp&s=439096583487c64bed2d7aa47fb7467544b98f7e', 'width': 108}, {'height': 162, 'url': '... | |
Vibe coding in prod by Snthropic | 1 | 2025-07-31T21:51:03 | https://youtu.be/fHWFF_pnqDk?si=0b2cr3QYxR4x0Ups | siddhantparadox | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1meemjn | false | {'oembed': {'author_name': 'Anthropic', 'author_url': 'https://www.youtube.com/@anthropic-ai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/fHWFF_pnqDk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope... | t3_1meemjn | /r/LocalLLaMA/comments/1meemjn/vibe_coding_in_prod_by_snthropic/ | false | false | default | 1 | null | |
Here's cogito-v2-109B MoE coding Space Invaders in 1 minute on Strix Halo using Lemonade (unedited video) | 52 | Is this the best week ever for new models? I can't believe what we're getting. Huge shoutout to u/danielhanchen and the Unsloth team for getting the GGUFs out so fast!
LLM Server is Lemonade, GitHub: [https://github.com/lemonade-sdk/lemonade](https://github.com/lemonade-sdk/lemonade)
Discord [https://discord.gg/Sf8cf... | 2025-07-31T21:36:16 | https://v.redd.it/39k2gtxw2agf1 | jfowers_amd | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mee99g | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/39k2gtxw2agf1/DASHPlaylist.mpd?a=1756589791%2CMTEzNWI2NDdjNzhlODU4ZTdlYWE2ZjIxMDYyOGFiZTMyODY4MmQ1YjA1ZjA5NDFlMGMxZTlkOGVjNTI3YjlkMg%3D%3D&v=1&f=sd', 'duration': 68, 'fallback_url': 'https://v.redd.it/39k2gtxw2agf1/DASH_720.mp4?source=fallback', 'ha... | t3_1mee99g | /r/LocalLLaMA/comments/1mee99g/heres_cogitov2109b_moe_coding_space_invaders_in_1/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'M3ppMGd2eHcyYWdmMStZsfyV8F4GV6vQy52E9vmc2CGAsEmxg4Z4VgmDbWvl', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/M3ppMGd2eHcyYWdmMStZsfyV8F4GV6vQy52E9vmc2CGAsEmxg4Z4VgmDbWvl.png?width=108&crop=smart&format=pjpg&auto=webp&s=703b4735f6cd2310092b4fc6927b9b41ee451... | |
Claude Code alternative for local | 5 | Hey, I'm looking for some recommendations on models similar to Claude, and maybe some CLIs too.
I've been checking out OpenCode.ai and playing with stuff like GLM-4.5, but haven't seen anyone try it with what we're doing. Wondering if it's worth switching everything over from Claude Code to test it out.
Anyone got ... | 2025-07-31T20:56:52 | https://www.reddit.com/r/LocalLLaMA/comments/1med9hx/claude_code_alternative_for_local/ | filipemendespi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1med9hx | false | null | t3_1med9hx | /r/LocalLLaMA/comments/1med9hx/claude_code_alternative_for_local/ | false | false | self | 5 | null |
Let your CLI coding agent interact with CLI applications, like Playwright/Puppeteer but for the terminal | 5 | 2025-07-31T20:48:27 | SatoshiNotMe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1med1tt | false | null | t3_1med1tt | /r/LocalLLaMA/comments/1med1tt/let_your_cli_coding_agent_interact_with_cli/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'f8en11qav9gf1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/f8en11qav9gf1.gif?width=108&crop=smart&format=png8&s=0b2b2442428e24d0aad559e879877fb00994dda4', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/f8en11qav9gf1.gif?width=216&crop=smart&format... | ||
I built a python script to auto-generate full AI character sets (SFW/NSFW) with LoRA, WebUI API, metadata + folder structure | 1 | Hey folks 👋
I've been working on a Python script that automates the full creation of structured character image sets using the **Stable Diffusion WebUI API (AUTOMATIC1111)**.
# 🔧 What the tool does:
* Handles **LoRA switching and weights**
* Sends full prompt batches via API (**SFW/NSFW separated**)
* Auto-generat... | 2025-07-31T20:47:44 | https://www.reddit.com/r/LocalLLaMA/comments/1med15k/i_built_a_python_script_to_autogenerate_full_ai/ | Appropriate-Sand-934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1med15k | false | null | t3_1med15k | /r/LocalLLaMA/comments/1med15k/i_built_a_python_script_to_autogenerate_full_ai/ | false | false | nsfw | 1 | null |
ChatGPT hallucinated about music app Soundslice so often, the founder made the lie come true | TechCrunch | 1 | 2025-07-31T20:43:37 | https://techcrunch.com/2025/07/09/chatgpt-hallucinated-about-music-app-soundslice-so-often-the-founder-made-the-lie-come-true/ | ChiliPepperHott | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1mecx9y | false | null | t3_1mecx9y | /r/LocalLLaMA/comments/1mecx9y/chatgpt_hallucinated_about_music_app_soundslice/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'nR2pVxehfwN8QdQGeVdApJpYjIuKWfs-SywPfJUFEE4', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/nR2pVxehfwN8QdQGeVdApJpYjIuKWfs-SywPfJUFEE4.png?width=108&crop=smart&auto=webp&s=2cc737374080ce997847ca2205c773d6e8bd7613', 'width': 108}, {'height': 127, 'url': 'h... | |
Built a full stack web app builder that runs locally and gives you full control | 48 | I never really liked the idea of web based app builders like lovable or replit. They make it really easy to get started, but with that ease comes compromise. Such as being locked in to their ecosystem, being charged for every little thing such as running your project on their VM, hosting, or just to even get access to ... | 2025-07-31T20:41:44 | https://v.redd.it/2pk8172np9gf1 | james-jiang | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mecvig | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2pk8172np9gf1/DASHPlaylist.mpd?a=1756586520%2CZWM3MGIzOTc5NWJhNTNjMzQxYzllM2UwZWIzNDAwODg4ZjQzODE1NDEyNDU5MDMxYTJmZGYzOWVkMDZiYjUwYw%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/2pk8172np9gf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mecvig | /r/LocalLLaMA/comments/1mecvig/built_a_full_stack_web_app_builder_that_runs/ | false | false | 48 | {'enabled': False, 'images': [{'id': 'dGY1N3k2Mm5wOWdmMcTvezTVKiOZTS0zxb0uRi8qlT1iQxY6oFymR1E5yEoz', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGY1N3k2Mm5wOWdmMcTvezTVKiOZTS0zxb0uRi8qlT1iQxY6oFymR1E5yEoz.png?width=108&crop=smart&format=pjpg&auto=webp&s=9799b5383228653c23823b190eb25cb5825f6... | |
Finding it hard to part with QwQ:32b, convince me there is something better that I should be using for production RAG tasks. | 5 | I’ve been using QwQ for production RAG tasks for quite a while now, mainly because it absolutely kills it with providing good citations (when instructed to explicitly do so). It’s also great at formatting answers in markdown, and is just a solid all around performer for me. I was eager to step up to the original Qwen3:... | 2025-07-31T20:09:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mec14w/finding_it_hard_to_part_with_qwq32b_convince_me/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mec14w | false | null | t3_1mec14w | /r/LocalLLaMA/comments/1mec14w/finding_it_hard_to_part_with_qwq32b_convince_me/ | false | false | self | 5 | null |
What are some good sites to buy an LLM-capable desktop | 0 | Heya r/LocalLLaMA, I read a lot of posts of people describing the components they are using to build custom desktops, but I am not interested in manually building a PC.
Is there a site/company that sells LLM-capable desktops that will run Qwen/DeepSeek/etc., where we can just buy one? | 2025-07-31T20:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mebzvo/what_are_some_good_sites_to_buy_a_llm_capable/ | ecret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mebzvo | false | null | t3_1mebzvo | /r/LocalLLaMA/comments/1mebzvo/what_are_some_good_sites_to_buy_a_llm_capable/ | false | false | self | 0 | null |
Laptop Recommendations? | 0 | I'm interested in buying a new laptop and would like it to be capable of running reasonably strong LLMs. I was considering a fully-upgraded Razer Blade 16, but someone else recommended the MacBook Pro M4 Max. I'd say my max budget is $8,000, but I'd still prefer to be cheaper than that. What would you guys suggest for ... | 2025-07-31T19:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mebifn/laptop_recommendations/ | PlasticSoldier2018 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mebifn | false | null | t3_1mebifn | /r/LocalLLaMA/comments/1mebifn/laptop_recommendations/ | false | false | self | 0 | null |
Qwen3-Coder-30B-A3B is a BEAST | 1 | vLLM on 2x RTX 4090 + 2x RTX 3090 =
1. Non-quanted weights;
2. GPU KV cache size: **364,752 tokens** (1 concurrent session);
3. Avg generation throughput: 41.3 tokens/s
4. High quality flappy bird out of the box from the first attempt. | 2025-07-31T19:35:40 | https://www.reddit.com/r/LocalLLaMA/comments/1meb5lf/qwen3coder30ba3b_is_a_beast/ | ahtolllka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meb5lf | false | null | t3_1meb5lf | /r/LocalLLaMA/comments/1meb5lf/qwen3coder30ba3b_is_a_beast/ | false | false | self | 1 | null |
$60K+ from building enterprise RAG systems at scale: Complete technical guide on how to build them | 1 | [removed] | 2025-07-31T19:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/1meb2wc/60k_from_building_enterprise_rag_systems_at_scale/ | Low_Acanthisitta7686 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meb2wc | false | null | t3_1meb2wc | /r/LocalLLaMA/comments/1meb2wc/60k_from_building_enterprise_rag_systems_at_scale/ | false | false | self | 1 | null |
New Portable AI Rig Announced (Marketed As A Gaming Laptop) | 8 | [Src: https://videocardz.com/newz/emdoor-unveils-ryzen-ai-max-300-gaming-laptop](https://videocardz.com/newz/emdoor-unveils-ryzen-ai-max-300-gaming-laptop)
| Specification | Details |
|---------------|---------|
| Processor | 16-core Ryzen MAX+ 395, 12-core MAX 390, or 8-core MAX 385 |
| Display | 16-inch, 2560x1... | 2025-07-31T19:29:07 | https://videocardz.com/newz/emdoor-unveils-ryzen-ai-max-300-gaming-laptop | false79 | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1meazh1 | false | null | t3_1meazh1 | /r/LocalLLaMA/comments/1meazh1/new_portable_ai_rig_announced_marketed_as_a/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'hJGIF8KiepOKz80XX69XMy0ts3UGcwumYePG0rWwhGM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hJGIF8KiepOKz80XX69XMy0ts3UGcwumYePG0rWwhGM.jpeg?width=108&crop=smart&auto=webp&s=2d2ff4ab3b731e40926b69413bc0185aa9859792', 'width': 108}, {'height': 112, 'url': '... | |
Let your CLI coding agent interact with CLI applications, like Playwright/Puppeteer but for the terminal | 1 | [removed] | 2025-07-31T19:24:42 | https://www.reddit.com/r/LocalLLaMA/comments/1meavdc/let_your_cli_coding_agent_interact_with_cli/ | SatoshiNotMe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meavdc | false | null | t3_1meavdc | /r/LocalLLaMA/comments/1meavdc/let_your_cli_coding_agent_interact_with_cli/ | false | false | self | 1 | null |
An attempt to explain LLM Transformers without math | 7 | I tried to create a little intuitive explanation of what's happening "under the hood" of the transformer architecture without any math... it glosses over a lot but I think starting to talk about it in this way at least dispels some of the myths of how they work. | 2025-07-31T19:20:34 | https://youtu.be/VlbBgj2lBls | nimishg | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mearht | false | {'oembed': {'author_name': 'Nimish Gåtam', 'author_url': 'https://www.youtube.com/@nimishgatam8901', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/VlbBgj2lBls?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyr... | t3_1mearht | /r/LocalLLaMA/comments/1mearht/an_attempt_to_explain_llm_transformers_without/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': 'mK7kA7DOTqxkSxj8ISMxYvYyNAvwQ_CTmMB_afvnSp8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/mK7kA7DOTqxkSxj8ISMxYvYyNAvwQ_CTmMB_afvnSp8.jpeg?width=108&crop=smart&auto=webp&s=09ff60921ab0b3dfbcd14be0956687fb43b50ffc', 'width': 108}, {'height': 162, 'url': '... |
Let your CLI coding agent control the terminal, like Playwright/Puppeteer but for the terminal | 1 | [removed] | 2025-07-31T19:14:05 | SatoshiNotMe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mealby | false | null | t3_1mealby | /r/LocalLLaMA/comments/1mealby/let_your_cli_coding_agent_control_the_terminal/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'kbazsgxyd9gf1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/kbazsgxyd9gf1.gif?width=108&crop=smart&format=png8&s=657a21a46e3f9103f971679cc4eba694d8aea3e4', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/kbazsgxyd9gf1.gif?width=216&crop=smart&format... | |
Suggest models for a local computer-use agent | 0 | I got my hands on an M1 Max MacBook Pro with 64GB RAM and a 1TB SSD.
Can someone suggest how I should proceed? | 2025-07-31T19:06:06 | https://www.reddit.com/r/LocalLLaMA/comments/1meadtx/suggest_models_for_local_computer_use_agent/ | Haunting_Stomach8967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1meadtx | false | null | t3_1meadtx | /r/LocalLLaMA/comments/1meadtx/suggest_models_for_local_computer_use_agent/ | false | false | self | 0 | null |
Horizon Alpha on OpenRouter | 8 | Anyone catch Horizon Alpha the new cloaked model up on OR? Blazing fast. It sure has an OpenAI vibe but I’m not betting on it. Anyone have any guesses or know what it is? Sorry if this has been talked about already but if so, I haven’t seen it. | 2025-07-31T18:54:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mea2gf/horizon_alpha_on_openrouter/ | Background_Put_4978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mea2gf | false | null | t3_1mea2gf | /r/LocalLLaMA/comments/1mea2gf/horizon_alpha_on_openrouter/ | false | false | self | 8 | null |
Comparison I did - Claude Sonnet / local Qwen3-30B / local Qwen3-235B-thinking | 4 | First of all, it is starting to get interesting. Looks like we can run models locally that compete with online models!
I know - there are thousands of comparison, yet, maybe someone will find this one interesting.
What I did - my prompt: "Write a complete system to control heating / cooling system at home. Note, I'm... | 2025-07-31T18:50:19 | https://www.reddit.com/r/LocalLLaMA/comments/1me9yqh/comparison_i_did_claude_sonnet_local_qwen330b/ | stachumann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me9yqh | false | null | t3_1me9yqh | /r/LocalLLaMA/comments/1me9yqh/comparison_i_did_claude_sonnet_local_qwen330b/ | false | false | self | 4 | null |
Horizon Alpha outputs a refusal when trying to make it think - something curious going on? | 0 | I tried to coax Horizon Alpha into reasoning by using this system prompt:
`You are a large reasoning model. include your thoughts in <think> thoughts </think> tags. Then provide a the final answer leading with **final answer**`
Interestingly it will always output a refusal. There is something curious going on. Does ... | 2025-07-31T18:48:37 | https://www.reddit.com/r/LocalLLaMA/comments/1me9x4m/horizon_alpha_outputs_a_refusal_why_trying_to/ | cpldcpu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me9x4m | false | null | t3_1me9x4m | /r/LocalLLaMA/comments/1me9x4m/horizon_alpha_outputs_a_refusal_why_trying_to/ | false | false | | 0 | null |
Code to do your *own* quantization? | 0 | It's fairly easy to use a built-in quantization of an LLM. But I want to do my own quantization scheme. Is there any starter code on how to do that efficiently?
TIA | 2025-07-31T18:41:30 | https://www.reddit.com/r/LocalLLaMA/comments/1me9qiz/code_to_do_your_own_quantification/ | Ok_Atmosphere3601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me9qiz | false | null | t3_1me9qiz | /r/LocalLLaMA/comments/1me9qiz/code_to_do_your_own_quantification/ | false | false | self | 0 | null |
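As a starting point, the simplest scheme is per-tensor symmetric round-to-nearest int8; everything fancier (group-wise scales, k-quants, AWQ/GPTQ) builds on this idea. A minimal PyTorch sketch, not any particular library's API:

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Per-tensor symmetric quantization: w ~= scale * q, q in [-127, 127]."""
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.round(w / scale).clamp(-127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)
q, scale = quantize_int8(w)
err = (dequantize(q, scale) - w).abs().mean()
print(f"mean abs error: {err:.5f}")
```

To cut error for almost no extra storage, compute one scale per output channel (or per group of 32-128 weights) with `w.abs().amax(dim=1, keepdim=True)` instead of a single per-tensor scale.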
GPT-5 might already be on OpenRouter? | 0 | A new, hidden model called **horizon-alpha** recently appeared on the platform.
https://preview.redd.it/fq01fq9g89gf1.png?width=1329&format=png&auto=webp&s=aab356b109e267230944df68664a42ee571ef8f1
https://preview.redd.it/3zau72vd89gf1.png?width=1175&format=png&auto=webp&s=081dafbec86b085cfc79a4efb7f157f57c73df72
Af... | 2025-07-31T18:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/1me9pro/gpt5_might_already_be_on_openrouter/ | Dr_Karminski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me9pro | false | null | t3_1me9pro | /r/LocalLLaMA/comments/1me9pro/gpt5_might_already_be_on_openrouter/ | false | false | 0 | null | |
Suitable model for Summarization | 1 | Can anyone tell me which is the best suitable model for LoRA fine tuning on summarisation task , specially for a specific domain and long documents ? I mainly worked on Encoder-Decoder models like T5. Suggest some other transformer models which can be fine tuned. I have 1xA100 GPU (80GB). | 2025-07-31T18:31:56 | https://www.reddit.com/r/LocalLLaMA/comments/1me9hhl/suitable_model_for_summarization/ | DefinitionFew9850 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me9hhl | false | null | t3_1me9hhl | /r/LocalLLaMA/comments/1me9hhl/suitable_model_for_summarization/ | false | false | self | 1 | null |
Ollama Troubles | 0 | I got the new Qwen model running on my laptop. The tokens/sec is great, but the time to first token is super long. This is because the agent has 6-7 tools (strict Zod schemas).
Is there a way to create a Modelfile where you can provide all the tools, or just some way to do prompt caching, so ... | 2025-07-31T18:11:43 | https://www.reddit.com/r/LocalLLaMA/comments/1me8ym2/ollama_troubles/ | Tasty_Yesterday6280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me8ym2 | false | null | t3_1me8ym2 | /r/LocalLLaMA/comments/1me8ym2/ollama_troubles/ | false | false | self | 0 | null |
They all tried | 0 | 2025-07-31T18:09:55 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me8wz6 | false | null | t3_1me8wz6 | /r/LocalLLaMA/comments/1me8wz6/they_all_tried/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'VQdKQE4GuITCrpJO5eYsZnDMQW1foygkPxkB8i0a97Y', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/21h8x40239gf1.png?width=108&crop=smart&auto=webp&s=beeb0d1ffa51e77d6e6ece8db2c7966e243de3c2', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/21h8x40239gf1.png... | |||
Powering Claude Code with a locally running GPT-4.5-Air on [a patched version] of mlx_lm.server <-> claude-code-router, gist in comment if you want to try it also -- also funny to see it speaking Chinese :] | 4 | 2025-07-31T18:04:31 | Alarming-Presence-93 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me8rxj | false | null | t3_1me8rxj | /r/LocalLLaMA/comments/1me8rxj/powering_claude_code_with_a_locally_running/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'q38i73u129gf1', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/q38i73u129gf1.png?width=108&crop=smart&auto=webp&s=2352aaa1e344f7829965ae581344fe3993ba58f4', 'width': 108}, {'height': 204, 'url': 'https://preview.redd.it/q38i73u129gf1.png?width=216&crop=smart&auto=we... | ||
Are Radeon MI60 32GB GPUs still any good? | 7 | I'm shopping for a second-hand GPU that has 32GB of VRAM. I found the Radeon MI50 and MI60 with 32GB of VRAM. They're kinda old; are they any good for inference? I will use it for LLMs for text generation, image2image generation (like FLUX.1 Kontext), as an agent, or for my camera surveillance for object and person d... | 2025-07-31T17:58:41 | https://www.reddit.com/r/LocalLLaMA/comments/1me8m73/are_radeon_mi60_32gb_gpus_still_any_good/ | redblood252 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me8m73 | false | null | t3_1me8m73 | /r/LocalLLaMA/comments/1me8m73/are_radeon_mi60_32gb_gpus_still_any_good/ | false | false | self | 7 | null |
Why does HF not show total size for directories? | 16 | Pretty much the title. unsloth is really good about listing how large their quants are in gb, but anytime I look at a safetensors directory I'm left wondering how large the directory is. Do I have enough space to download it? Who knows! It seems like such a trivial thing to list total directory size on the web ui. Why ... | 2025-07-31T17:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1me8dgy/why_does_hf_not_show_total_size_for_directories/ | createthiscom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me8dgy | false | null | t3_1me8dgy | /r/LocalLLaMA/comments/1me8dgy/why_does_hf_not_show_total_size_for_directories/ | false | false | self | 16 | null |
HELP PLEASE - I'm lost, nothing is working: my RP chats all just loop or repeat the same message as before | 0 | I'm new to this whole local LLM thing, and my RP chats either just repeat the same stuff as before or go on forever without stopping. I tried getting help from ChatGPT to tweak the settings, but the same thing keeps happening.
I’m running SillyTavern linked to Oobabooga (aka `textgen-portable-3.8-windows-cuda12.4`),... | 2025-07-31T17:44:12 | -Fibon4cci | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me88j7 | false | null | t3_1me88j7 | /r/LocalLLaMA/comments/1me88j7/help_please_im_all_lost_nothing_working_my_rp/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '8bf4hz0ay8gf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/8bf4hz0ay8gf1.png?width=108&crop=smart&auto=webp&s=65e9d7361282368523321772013aa1a7b4d37f17', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/8bf4hz0ay8gf1.png?width=216&crop=smart&auto=web... | |
Qwen 30B A3B 2507 having an identity crisis... | 0 | Distillation gone wild?
https://preview.redd.it/2h1jhkaus8gf1.png?width=773&format=png&auto=webp&s=c9a3a3ada4dd0cebac3e8da699929492c1f90db5
https://preview.redd.it/vlzbk8aks8gf1.png?width=580&format=png&auto=webp&s=03da1d860c9373a91cb38b2e1b2ee36e5e6c31ad
https://preview.redd.it/c0ngzeq5s8gf1.png?width=487&format=pn... | 2025-07-31T17:34:32 | https://www.reddit.com/r/LocalLLaMA/comments/1me7z6b/qwen_30b_a3b_2507_having_an_identity_crisis/ | randomqhacker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me7z6b | false | null | t3_1me7z6b | /r/LocalLLaMA/comments/1me7z6b/qwen_30b_a3b_2507_having_an_identity_crisis/ | false | false | 0 | null | |
I built a local alternative to Grammarly that runs 100% offline | 589 | It uses the Gemma 3n E4B model and requires less than 500MB of memory for grammar checking, dropping to 300MB while idle.
It's still in the early stages, but I’d love to hear your feedback!
You can try it out here: [https://refine.sh](https://refine.sh) | 2025-07-31T17:33:53 | https://v.redd.it/pxb4pfgaw8gf1 | Runjuu | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me7yia | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/pxb4pfgaw8gf1/DASHPlaylist.mpd?a=1756575249%2CMzBjNjRhZDIyNDEyZGY5Njk4M2VkZDlkMjUwNzBhMGRkZWY0NTRmNjgwYjk3Yzk3NmE2ZGRjYTNjMTFlY2Y5Mw%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/pxb4pfgaw8gf1/DASH_720.mp4?source=fallback', 'has... | t3_1me7yia | /r/LocalLLaMA/comments/1me7yia/i_built_a_local_alternative_to_grammarly_that/ | false | false | 589 | {'enabled': False, 'images': [{'id': 'NjBpa2hmZ2F3OGdmMVJtVbsvP1gN8nG91LVDgC8po1e9pFdftwF79YNc_pfg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NjBpa2hmZ2F3OGdmMVJtVbsvP1gN8nG91LVDgC8po1e9pFdftwF79YNc_pfg.png?width=108&crop=smart&format=pjpg&auto=webp&s=6b4cddd1185554555c627441a65a67ea0e821... | |
And people say DeepSeek is censored... | 0 | Does DeepSeek censor this prompt too? | 2025-07-31T17:33:19 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me7xyj | false | null | t3_1me7xyj | /r/LocalLLaMA/comments/1me7xyj/and_people_say_deepseek_is_censored/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '9cd4zwokw8gf1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/9cd4zwokw8gf1.jpeg?width=108&crop=smart&auto=webp&s=8642f6b94f8bafc6c8ef58f74632a44bab583805', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/9cd4zwokw8gf1.jpeg?width=216&crop=smart&auto=... | |
Started a Slack group for AI agent/automation side project builders — free to join | 7 | Hey folks — I’m working on a side project around LLM agents and realized I didn’t have a good place to share experiments or talk to other builders doing similar stuff.
So I started a Slack community for people working on agent-based tools, backend automations, and AI-native side projects. Think LangChain, AutoGen, pro... | 2025-07-31T17:32:08 | https://www.reddit.com/r/LocalLLaMA/comments/1me7wuj/started_a_slack_group_for_ai_agentautomation_side/ | Embarrassed-Radio319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me7wuj | false | null | t3_1me7wuj | /r/LocalLLaMA/comments/1me7wuj/started_a_slack_group_for_ai_agentautomation_side/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'I-xX7rjCusgUcmQ53FmR-IUZEMbenSZYrO30pYAyMIA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/I-xX7rjCusgUcmQ53FmR-IUZEMbenSZYrO30pYAyMIA.png?width=108&crop=smart&auto=webp&s=a899259f1a624211bfb51c3c4a875767313d7934', 'width': 108}, {'height': 113, 'url': 'h... |
Try some models | 1 | Hi, I'm willing to try some of the models you guys talk about, but I don't know how yet.
Can you point me to a tutorial or guide, and tell me which model I can try?
My HW is a dual Xeon e2695, a 4060 GPU, 64GB RAM, 1TB NVMe, 2TB SSD.
Thx in advance! | 2025-07-31T17:22:16 | https://www.reddit.com/r/LocalLLaMA/comments/1me7nbq/try_some_models/ | Brilliant-Lie2367 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me7nbq | false | null | t3_1me7nbq | /r/LocalLLaMA/comments/1me7nbq/try_some_models/ | false | false | self | 1 | null |
Which way modern man? | 2 | 2025-07-31T17:18:08 | BoJackHorseMan53 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me7jed | false | null | t3_1me7jed | /r/LocalLLaMA/comments/1me7jed/which_way_modern_man/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'p_YkYPdEBKwqTMAiAoKFv6AZbymjEYUHIgDwFF8enJ8', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/pfai2rzst8gf1.png?width=108&crop=smart&auto=webp&s=ddde2863d577a8a0993b6ad7626065a6df783440', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/pfai2rzst8gf1.png... | |||
15+ templates to build agents that are production tested - please give feedback? | 0 | hey r/LocalLLaMA
I've been building [julep.ai](http://julep.ai) for AI workflows, and saw most users struggle with workflow templates, structure, and prompt templates.
So we created a bunch of templates, which are already live in production with 15+ more templates coming next week.
These are plug-and-play, so ... | 2025-07-31T17:16:53 | https://www.reddit.com/r/LocalLLaMA/comments/1me7i6l/15_templates_to_build_agents_that_are_production/ | Samantha-2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me7i6l | false | null | t3_1me7i6l | /r/LocalLLaMA/comments/1me7i6l/15_templates_to_build_agents_that_are_production/ | false | false | 0 | null | |
Which way modern man? | 0 | 2025-07-31T17:14:56 | BoJackHorseMan53 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me7ga1 | false | null | t3_1me7ga1 | /r/LocalLLaMA/comments/1me7ga1/which_way_modern_man/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'gm4x9pe4t8gf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/gm4x9pe4t8gf1.png?width=108&crop=smart&auto=webp&s=52c35e78490785e10433b0cf1098889bdc4b1267', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/gm4x9pe4t8gf1.png?width=216&crop=smart&auto=web... | ||
Has anyone else seen LLMs lose context after a tool call in OpenWebUI? (Using Qwen 30B) | 2 | I’m running OpenWebUI with an Ollama-backed instance of `Qwen3-30B-A3B-Thinking-2507` (just tried the Instruct as well, and ran into the same issue) and running into a frustrating behavior I’m hoping others have seen (or solved).
Here’s the pattern:
1. I give the model a clear task, for example: “Plan a fun day in Wa... | 2025-07-31T16:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/1me713k/has_anyone_else_seen_llms_lose_context_after_a/ | DVoltaire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me713k | false | null | t3_1me713k | /r/LocalLLaMA/comments/1me713k/has_anyone_else_seen_llms_lose_context_after_a/ | false | false | self | 2 | null |
AI for making photos come alive | 0 | Hi guys. There is a trend on the internet now of making old photos come alive. Can you recommend a free AI for this? | 2025-07-31T16:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1me6yfh/ai_for_making_photo_alive/ | Akriosss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me6yfh | false | null | t3_1me6yfh | /r/LocalLLaMA/comments/1me6yfh/ai_for_making_photo_alive/ | false | false | self | 0 | null |
HELP ... FINE-TUNING LED MODEL !!!! | 0 | I want to fine-tune LED. I tried freezing its encoder and applied LoRA to the model's decoder, but I got the following error. How can I solve this?
/usr/local/lib/python3.11/dist-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None warnings.warn(
-... | 2025-07-31T16:50:25 | https://www.reddit.com/r/LocalLLaMA/comments/1me6sxd/help_fine_tuning_led_model/ | DefinitionFew9850 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me6sxd | false | null | t3_1me6sxd | /r/LocalLLaMA/comments/1me6sxd/help_fine_tuning_led_model/ | false | false | self | 0 | null |
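That warning is the classic symptom of gradient checkpointing combined with a frozen encoder: the first checkpointed activations carry `requires_grad=False`, so backward yields `None`. A hedged sketch of the usual workaround with transformers + peft (the LED checkpoint name and LoRA hyperparameters here are illustrative, not the poster's exact setup):

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384")

# Freeze the encoder, as in the original setup.
for p in model.get_encoder().parameters():
    p.requires_grad = False

# Make embedding outputs require grad so checkpointed segments still receive
# gradients even though the early weights are frozen.
model.enable_input_require_grads()
# Alternative: model.gradient_checkpointing_enable(
#     gradient_checkpointing_kwargs={"use_reentrant": False})

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=r".*decoder.*\.(q_proj|v_proj)",  # regex: decoder attn only
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```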
Masking LLM API keys | 0 | Worried about rotating master LLM API keys across apps, teams, and other prospects?
Saving keys in AWS Secrets or other vaults and then finding it difficult to share them with teammates?
Now easily mask your LLM API keys at maskllm.com and securely rotate them across apps & teams.
Masked keys come with built-in rate limiting a... | 2025-07-31T16:42:25 | https://maskllm.com | orumcorum | maskllm.com | 1970-01-01T00:00:00 | 0 | {} | 1me6lic | false | null | t3_1me6lic | /r/LocalLLaMA/comments/1me6lic/masking_llm_api_keys/ | false | false | default | 0 | null |
How can Groq host Kimi-K2 but refuses to host DeepSeek-R1-0528 or V3-0324??? | 22 | Kimi-K2 goes for 1T params with 32b active and Deepseek models go for 671B with 37b active at once.
They've hosted the 400b dense variant of Llama at one point and still host Maverick and scout which are significantly worse than other models in similar or smaller weight class.
They don't even host the qwen3-235b-a22... | 2025-07-31T16:39:55 | https://www.reddit.com/gallery/1me6j2v | True_Requirement_891 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1me6j2v | false | null | t3_1me6j2v | /r/LocalLLaMA/comments/1me6j2v/how_can_groq_host_kimik2_but_refuses_to_host/ | false | false | 22 | null | |
Core count or clock speed for Stable Diffusion, Lora training & AI videos? | 0 | I have so many options for my server: I can go for 4 cores and 8 threads at around 3.6 GHz (4.1 GHz turbo), or 22 cores and 44 threads at 2.2 GHz (2.6 GHz turbo), with many options in between.
For LoRA training, AI video and stable diffusion/FLUX with ComfyUI what would be best to aim for? Core cou... | 2025-07-31T15:47:07 | https://www.reddit.com/r/LocalLLaMA/comments/1me54cn/core_count_or_clock_speed_for_stable_diffusion/ | Rotunda0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me54cn | false | null | t3_1me54cn | /r/LocalLLaMA/comments/1me54cn/core_count_or_clock_speed_for_stable_diffusion/ | false | false | self | 0 | null |
the last MCP server you'll ever need | 11 | Hi peeps,
UTCP was very well received here last time for providing a FOSS, no wrapper alternative to MCP for tool calling.
Now you can call any endpoint you want from your existing MCP Clients (LMStudio, Jan Desktop etc.) using only one server
no middlemen, no extra security infra
If you want to learn more:
UTCP P... | 2025-07-31T15:33:31 | juanviera23 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me4riw | false | null | t3_1me4riw | /r/LocalLLaMA/comments/1me4riw/the_last_mcp_server_youll_ever_need/ | false | false | 11 | {'enabled': True, 'images': [{'id': 'Th3KIvbKb2Eaz_bVabBAIPL8LyEq7N4kqbXLE3Fqb4A', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/esdfh4o5b8gf1.png?width=108&crop=smart&auto=webp&s=2a6c88f6987a560f196e5be91a469f795b888c5a', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/esdfh4o5b8gf1.png?... | ||
I made a comparison chart for Qwen3-Coder-30B-A3B vs. Qwen3-Coder-480B-A35B | 314 | As you can see from the radar chart, the scores on the left for the two Agent capability tests, mind2web and BFCL-v3, are very close. This suggests that the Agent capabilities of Qwen3-Coder-Flash should be quite strong.
However, there is still a significant gap in the Aider-Polyglot and SWE Multilingual tests, w... | 2025-07-31T15:23:42 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me4i2h | false | null | t3_1me4i2h | /r/LocalLLaMA/comments/1me4i2h/i_made_a_comparison_chart_for_qwen3coder30ba3b_vs/ | false | false | default | 314 | {'enabled': True, 'images': [{'id': 'l6547uel88gf1', 'resolutions': [{'height': 170, 'url': 'https://preview.redd.it/l6547uel88gf1.png?width=108&crop=smart&auto=webp&s=fc731d339acbcf88a184b51662dd456eefd477c5', 'width': 108}, {'height': 340, 'url': 'https://preview.redd.it/l6547uel88gf1.png?width=216&crop=smart&auto=we... | |
Help! How to access the full 96GB VRAM on AMD Strix Halo (Ryzen AI Max+ 395) with PyTorch in Ubuntu 24.04? | 1 | Hey everyone,
I’ve got an AMD Strix Halo (Ryzen AI Max+ 395) running Ubuntu 24.04, and I’ve installed ROCm based on the official documentation. To keep things streamlined, I also went ahead and installed PyTorch via Docker, as recommended by the official docs.
However, when I run `import torch` and check for VRAM, I... | 2025-07-31T15:19:22 | https://www.reddit.com/r/LocalLLaMA/comments/1me4e22/help_how_to_access_the_full_96gb_vram_on_amd/ | ashwin3005 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me4e22 | false | null | t3_1me4e22 | /r/LocalLLaMA/comments/1me4e22/help_how_to_access_the_full_96gb_vram_on_amd/ | false | false | self | 1 | null |
AMD MI50/MI60 owners club | 1 | [removed] | 2025-07-31T15:15:09 | https://www.reddit.com/r/LocalLLaMA/comments/1me4a3j/amd_mi50mi60_owners_club/ | FriendlyRetriver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me4a3j | false | null | t3_1me4a3j | /r/LocalLLaMA/comments/1me4a3j/amd_mi50mi60_owners_club/ | false | false | self | 1 | null |
Is there a good Computer use workflow / model? | 1 | Is there a good computer use workflow or model yet that people like these days?
Which one is people's favorite on the market today? Preferably a locally run model but totally fine if it needs to reach out to a service like OpenAI/Claude/Manus etc. | 2025-07-31T15:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/1me467z/is_there_a_good_computer_use_workflow_model/ | BluCreator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me467z | false | null | t3_1me467z | /r/LocalLLaMA/comments/1me467z/is_there_a_good_computer_use_workflow_model/ | false | false | self | 1 | null |
Is it worth buying a 3090 over a P40 in my case? | 3 | So I'm trying to decide between buying a p40 (maybe two) or an rtx 3090. My main purpose is reasoning/coding. I'm on a tighter budget right now since i have to buy the whole pc rig, and the P40 is just so much cheaper than the 3090.
Basically would a P40 really suffice for heavy reasoning and coding or should i ju... | 2025-07-31T15:09:28 | https://www.reddit.com/r/LocalLLaMA/comments/1me44qm/is_it_worth_buying_a_3090_over_a_p40_in_my_case/ | More_Indication_3439 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me44qm | false | null | t3_1me44qm | /r/LocalLLaMA/comments/1me44qm/is_it_worth_buying_a_3090_over_a_p40_in_my_case/ | false | false | self | 3 | null |
Space Invaders on first try with Qwen3 Coder 30b-a3b (Unsloth Q6_K) | 122 | First try from the most minimalistic prompt possible:
> Write an HTML and JavaScript page implementing space invaders
8% -> 33.3% on Aider polyglot | 60 | I just checked the Aider polyglot score of the [Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) model, it seems they are showing the score of ***diff*** **Edit Format**
And a quick comparison against the last local qwen coder model, shows a huge jump in performance:
8% -> 33.3... | 2025-07-31T15:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/1me3vpe/8_333_on_aider_polyglot/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me3vpe | false | null | t3_1me3vpe | /r/LocalLLaMA/comments/1me3vpe/8_333_on_aider_polyglot/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': 'IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=108&crop=smart&auto=webp&s=fbf0440b72bf3c599b24d782f0bddf00251537cf', 'width': 108}, {'height': 116, 'url': 'h... |
LLM forgets recent user messages – only responds from memory DB, not conversation context (Llama-3 base, local setup) | 3 | **Setup:**
Running a local Llama-3 8B base model via llama.cpp on a high-spec laptop (64 GB RAM, RTX 4060). Using a custom AI assistant that includes user memory, profile blocks, and a memory graph. Model runs fine technically—fast inference, no crashes.
**The problem:**
Even in short chats, the assistant forgets what... | 2025-07-31T14:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1me3qyu/llm_forgets_recent_user_messages_only_responds/ | 10jaqk192 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me3qyu | false | null | t3_1me3qyu | /r/LocalLLaMA/comments/1me3qyu/llm_forgets_recent_user_messages_only_responds/ | false | false | self | 3 | null |
Instruct embedding models | 2 | When using embedding model for example Qwen 3 8B - is ti beneficial to pass instruction when doing chunk/document embeddings as Instruct: [custom instruction]\nText: [document text] ? In all the tutorials and documentations as I see it only passed with queries | 2025-07-31T14:47:16 | https://www.reddit.com/r/LocalLLaMA/comments/1me3k8h/instruct_embedding_models/ | SadInitial9305 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me3k8h | false | null | t3_1me3k8h | /r/LocalLLaMA/comments/1me3k8h/instruct_embedding_models/ | false | false | self | 2 | null |
Dario's (stupid) take on open source | 15 | Wtf is this guy talking about
https://youtu.be/mYDSSRS-B5U&t=36m43s | 2025-07-31T14:44:51 | https://www.reddit.com/r/LocalLLaMA/comments/1me3hy7/darios_stupid_take_on_open_source/ | Conscious_Nobody9571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me3hy7 | false | null | t3_1me3hy7 | /r/LocalLLaMA/comments/1me3hy7/darios_stupid_take_on_open_source/ | false | false | self | 15 | null |
MistralAI releases Codestral 25.08 (via API only tho) | 27 | Apparent improvements:
* Improved Performance: +30% increase in accepted completions, +10% more retained code, and 50% fewer runaway generations
* Enhanced Chat Mode: +5% improvement in instruction following and code abilities
* Flexible Deployment: Supports cloud, VPC, or on-prem environments
Only usable via API (mo... | 2025-07-31T14:40:55 | https://www.reddit.com/r/LocalLLaMA/comments/1me3e7w/mistralai_releases_codestral_2508_via_api_only_tho/ | juanviera23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1me3e7w | false | null | t3_1me3e7w | /r/LocalLLaMA/comments/1me3e7w/mistralai_releases_codestral_2508_via_api_only_tho/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'fZa1k6EgCLeOgsay3muaKQJpugercNmOrJnutzU_xCM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/fZa1k6EgCLeOgsay3muaKQJpugercNmOrJnutzU_xCM.jpeg?width=108&crop=smart&auto=webp&s=762f3b17a77a1a724579ae4672a2a2916540a708', 'width': 108}, {'height': 121, 'url': '... |
Sam Altman After New Models Qwen3 | 0 | 2025-07-31T14:37:39 | https://v.redd.it/gyxnol9818gf1 | Ok_Ninja7526 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me3b7o | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gyxnol9818gf1/DASHPlaylist.mpd?a=1756564674%2CNjM1YjZiMmE2MWEyNTViNjY5ZWE3OGU5Y2ExYjdiODE2ZGI1YjlkZDIzZTNlMzExZmFiMjVjNWExMzRkYTA1YQ%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/gyxnol9818gf1/DASH_1080.mp4?source=fallback', 'ha... | t3_1me3b7o | /r/LocalLLaMA/comments/1me3b7o/sam_altman_after_new_models_qwen3/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZWMyajduYjgxOGdmMQQDwhtBgWT4b0TciBpwhNMNACEI6fUOpQa-dsmd4yJC', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/ZWMyajduYjgxOGdmMQQDwhtBgWT4b0TciBpwhNMNACEI6fUOpQa-dsmd4yJC.png?width=108&crop=smart&format=pjpg&auto=webp&s=925bc9d5701177aa14bf6d30f222030eae74... | ||
Qwen3-Coder-Flash / Qwen3-Coder-30B-A3B-Instruct-FP8 are here! | 188 | 2025-07-31T14:29:15 | zRevengee | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me33jj | false | null | t3_1me33jj | /r/LocalLLaMA/comments/1me33jj/qwen3coderflash_qwen3coder30ba3binstructfp8_are/ | false | false | default | 188 | {'enabled': True, 'images': [{'id': '3dn8agzjz7gf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/3dn8agzjz7gf1.jpeg?width=108&crop=smart&auto=webp&s=bf353fe46bbeb57399d0faf6bcb05041173ab4ba', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/3dn8agzjz7gf1.jpeg?width=216&crop=smart&auto=w... | ||
Qwen/Qwen3-Coder-30B-A3B-Instruct · Hugging Face | 103 | **Qwen3-Coder** is available in multiple sizes. Today, we're excited to introduce **Qwen3-Coder-30B-A3B-Instruct**. This streamlined model maintains impressive performance and efficiency, featuring the following key enhancements:
* **Significant Performance** among open models on **Agentic Coding**, **Agentic Browser-... | 2025-07-31T14:27:41 | https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1me324b | false | null | t3_1me324b | /r/LocalLLaMA/comments/1me324b/qwenqwen3coder30ba3binstruct_hugging_face/ | false | false | default | 103 | {'enabled': False, 'images': [{'id': 'IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=108&crop=smart&auto=webp&s=fbf0440b72bf3c599b24d782f0bddf00251537cf', 'width': 108}, {'height': 116, 'url': 'h... |
🚀 Qwen3-Coder-Flash released! | 1,536 | 🦥 Qwen3-Coder-Flash: Qwen3-Coder-30B-A3B-Instruct
💚 Just lightning-fast, accurate code generation.
✅ Native 256K context (supports up to 1M tokens with YaRN)
✅ Optimized for platforms like Qwen Code, Cline, Roo Code, Kilo Code, etc.
✅ Seamless function calling & agent workflows
💬 Chat: https://chat.qwen.ai/
🤗... | 2025-07-31T14:26:52 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1me31d8 | false | null | t3_1me31d8 | /r/LocalLLaMA/comments/1me31d8/qwen3coderflash_released/ | false | false | default | 1,536 | {'enabled': True, 'images': [{'id': 'p7fpia2bz7gf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/p7fpia2bz7gf1.jpeg?width=108&crop=smart&auto=webp&s=95c4825b11a345671147c3d4e0b79207480f3ca0', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/p7fpia2bz7gf1.jpeg?width=216&crop=smart&auto=w... | |
Qwen3-Coder-3B-A3B-Instruct | 0 | 2025-07-31T14:25:36 | https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct | _bachrc | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1me305i | false | null | t3_1me305i | /r/LocalLLaMA/comments/1me305i/qwen3coder3ba3binstruct/ | false | false | default | 0 | null | |
Qwen3-Coder-30B-A3B released! | 531 | 2025-07-31T14:24:40 | https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct | glowcialist | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1me2zc6 | false | null | t3_1me2zc6 | /r/LocalLLaMA/comments/1me2zc6/qwen3coder30ba3b_released/ | false | false | default | 531 | {'enabled': False, 'images': [{'id': 'IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=108&crop=smart&auto=webp&s=fbf0440b72bf3c599b24d782f0bddf00251537cf', 'width': 108}, {'height': 116, 'url': 'h... | |
Towards Locally Deployable Fine-Tuned Causal LLMs for Mode Choice Behaviour | 0 | 2025-07-31T14:24:34 | https://arxiv.org/html/2507.21432v1 | juanviera23 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1me2z8b | false | null | t3_1me2z8b | /r/LocalLLaMA/comments/1me2z8b/towards_locally_deployable_finetuned_causal_llms/ | false | false | default | 0 | null | |
FLUX.1 Krea [dev] - a new state-of-the-art open-weights FLUX model, built for photorealism. | 56 | [https://x.com/bfl\_ml/status/1950920537741336801](https://x.com/bfl_ml/status/1950920537741336801) | 2025-07-31T14:22:02 | https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev | ApprehensiveAd3629 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1me2wxx | false | null | t3_1me2wxx | /r/LocalLLaMA/comments/1me2wxx/flux1_krea_dev_a_new_stateoftheart_openweights/ | false | false | 56 | {'enabled': False, 'images': [{'id': 'umBIFB2q0PLAR4i8_IGsGxcKPqvKt-H27oJu9PzZu6Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/umBIFB2q0PLAR4i8_IGsGxcKPqvKt-H27oJu9PzZu6Y.png?width=108&crop=smart&auto=webp&s=0653c84335d6638557b27eaa1db7b3d010a5cdb6', 'width': 108}, {'height': 116, 'url': 'h... |