| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Qwen3-Coder-30B-A3B in a laptop - Apple or NVIDIA (RTX 4080/5080)? | 2 | Hi everyone,
I have a $2,500 budget for a new laptop, and I would like to know about your experience running small models (around 30B) on these machines.
My options:
\- MacBook Pro M1 Max w/ 64GB RAM
\- MacBook Pro M4 w/36 or 48GB RAM
\- RTX 4080 Mobile 12GB + 64GB RAM
\- RTX 5080 Mobile 16GB + 64GB RAM
... | 2025-08-21T20:37:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mwmi2n/qwen3coder30ba3b_in_a_laptop_apple_or_nvidia_rtx/ | Icaruszin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwmi2n | false | null | t3_1mwmi2n | /r/LocalLLaMA/comments/1mwmi2n/qwen3coder30ba3b_in_a_laptop_apple_or_nvidia_rtx/ | false | false | self | 2 | null |
Pewdiepie’s monstrous 160GB Vram build | 662 | He was talking about running llama 3 70B on half of the gpus. so we might be getting a pewdiepie local llm arc. | 2025-08-21T20:32:55 | https://youtu.be/2JzOe1Hs26Q?si=9Ck53vK9hja3BZD7 | joseph_the_69th | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mwme5c | false | {'oembed': {'author_name': 'PewDiePie', 'author_url': 'https://www.youtube.com/@PewDiePie', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/2JzOe1Hs26Q?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; p... | t3_1mwme5c | /r/LocalLLaMA/comments/1mwme5c/pewdiepies_monstrous_160gb_vram_build/ | false | false | default | 662 | {'enabled': False, 'images': [{'id': 'zQgZCeoj46IUkydlNZy5fsyhmsqrk550dmk1a_cyvRo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zQgZCeoj46IUkydlNZy5fsyhmsqrk550dmk1a_cyvRo.jpeg?width=108&crop=smart&auto=webp&s=27a2ec9bfdfaf941c0c588b124d42a89f7be3d9e', 'width': 108}, {'height': 162, 'url': '... |
Qwen 14b on a 3060 Vllm | 3 | Hello everyone, I want to run the qwen 14b model on my 3060 12gb vllm server. It needs to have fp8 compression and 32k context and kv cache. Does anyone know how to do this? Can I fully offload everything to cpu and just keep the model weights on the gpu? Thank You | 2025-08-21T20:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mwm9hx/qwen_14b_on_a_3060_vllm/ | Vllm-user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwm9hx | false | null | t3_1mwm9hx | /r/LocalLLaMA/comments/1mwm9hx/qwen_14b_on_a_3060_vllm/ | false | false | self | 3 | null |
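A minimal sketch of one way such a launch could look, assuming a recent vLLM build; the model tag, offload size, and FP8 KV-cache support on a 12 GB RTX 3060 are illustrative assumptions, not a verified recipe:

```python
# Minimal sketch, not a verified 12 GB recipe: the model tag, offload size,
# and FP8 support on this GPU generation are assumptions for illustration.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-14B-Instruct",  # stand-in for "qwen 14b"
    kv_cache_dtype="fp8",               # compressed KV cache
    max_model_len=32768,                # 32k context
    gpu_memory_utilization=0.95,
    cpu_offload_gb=8,                   # spill part of the weights to system RAM
)

outputs = llm.generate(["Write a haiku about GPUs."], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```

A 14B model at FP8 is roughly 14 to 15 GB of weights, so some offloading is unavoidable on a 12 GB card; how much spills to system RAM largely determines the speed.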
Kimi K2 locally, my results and appreciation post | 38 | Hi,
I've just run Kimi K2 locally and I'm amazed that I can run it completely locally. I'm fucking loving K2.
I'm just a script kiddie; until now I was using Ollama, so any suggestions are very welcome.
My setup:
AMD Ryzen Threadripper PRO 3945WX
Asrock wrx80 creator 2.0 mobo
512 GB DDR4 3200 MHz (8 64gb sticks... | 2025-08-21T20:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mwlxo6/kimi_k2_locally_my_results_and_appreciation_post/ | koibKop4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwlxo6 | false | null | t3_1mwlxo6 | /r/LocalLLaMA/comments/1mwlxo6/kimi_k2_locally_my_results_and_appreciation_post/ | false | false | self | 38 | null |
What's the best platform right now for iOS and Android streaming Speech To Text? | 1 | I tried ExecuTorch and the speed wasn't great. GPU acceleration is tricky.
WhisperKit works great on iOS but Android is lagging at the moment. However they will support Android and Parakeet later this year which is fantastic! It's pricey for the Pro version, though.
Haven't tried Whisper.cpp or the others yet.
Anyon... | 2025-08-21T20:08:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mwlqn3/whats_the_best_platform_right_now_for_ios_and/ | rockstar107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwlqn3 | false | null | t3_1mwlqn3 | /r/LocalLLaMA/comments/1mwlqn3/whats_the_best_platform_right_now_for_ios_and/ | false | false | self | 1 | null |
The €6k AI Dilemma: Build an EPYC Server, keep my 5090 and dual it , or just buy a MacBook and rent GPUs if needed? | 1 | Hi all,
Originally, I was planning a dual RTX 5090 build; I have one at MSRP. I only have an old laptop and it crashes on me during work, so I need something for that too, as I travel more and more for my job. I have around €6k saved for now. I spent the last 4 days and nights and can't make a decision as it's bi... | 2025-08-21T20:05:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mwlo17/the_6k_ai_dilemma_build_an_epyc_server_keep_my/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 2025-08-21T20:10:55 | 0 | {} | 1mwlo17 | false | null | t3_1mwlo17 | /r/LocalLLaMA/comments/1mwlo17/the_6k_ai_dilemma_build_an_epyc_server_keep_my/ | false | false | self | 1 | null |
Setup for AI for now EPYC server vs dual GPU PCIE5 or just renting GPU. | 1 | [deleted] | 2025-08-21T20:03:28 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1mwlm7l | false | null | t3_1mwlm7l | /r/LocalLLaMA/comments/1mwlm7l/setup_for_ai_for_now_epyc_server_vs_dual_gpu/ | false | false | default | 1 | null | ||
Which coding model can I run on Nvidia 3050 Laptop? | 0 | My laptop has 32GB RAM
Nvidia 3050 4GB GPU
Ryzen 5
Which model can I run on my laptop for coding with tools like cline? I would like the results to be similar to Gemini 2.5 pro or qwen3-coder, is it possible somehow? | 2025-08-21T20:02:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mwlkqo/which_coding_model_can_i_run_on_nvidia_3050_laptop/ | pakkedheeth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwlkqo | false | null | t3_1mwlkqo | /r/LocalLLaMA/comments/1mwlkqo/which_coding_model_can_i_run_on_nvidia_3050_laptop/ | false | false | self | 0 | null |
[Model Release] Deca 3 Alpha Ultra 4.6T! Parameters | 120 | **Note:** No commercial use without a commercial license.
[https://huggingface.co/deca-ai/3-alpha-ultra](https://huggingface.co/deca-ai/3-alpha-ultra)
Deca 3 Alpha Ultra is a large-scale language model built on a **DynAMoE (Dynamically Activated Mixture of Experts)** architecture, differing from traditional MoE syst... | 2025-08-21T19:50:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mwla9s/model_release_deca_3_alpha_ultra_46t_parameters/ | MohamedTrfhgx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwla9s | false | null | t3_1mwla9s | /r/LocalLLaMA/comments/1mwla9s/model_release_deca_3_alpha_ultra_46t_parameters/ | false | false | self | 120 | {'enabled': False, 'images': [{'id': 'YC_2SQ4VeqUgjTgrsP3pUNJY0zHnnzWckbdF91XSY9Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YC_2SQ4VeqUgjTgrsP3pUNJY0zHnnzWckbdF91XSY9Y.png?width=108&crop=smart&auto=webp&s=5093b5be8fccb73b629d51a073ed032e973d8b0c', 'width': 108}, {'height': 116, 'url': 'h... |
GitHub - karpathy/rendergit: Render any git repo into a single static HTML page for humans or LLMs | 52 | Karpathy's at it again!
Simple, one file python script to flatten git repos into a single HTML file | 2025-08-21T19:40:24 | https://github.com/karpathy/rendergit | FullstackSensei | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mwl09x | false | null | t3_1mwl09x | /r/LocalLLaMA/comments/1mwl09x/github_karpathyrendergit_render_any_git_repo_into/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'Z_OzWvTnBaBBexR3EpJ1SvbmUAtHerUflPeHuklK1G8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z_OzWvTnBaBBexR3EpJ1SvbmUAtHerUflPeHuklK1G8.png?width=108&crop=smart&auto=webp&s=23aeb9d6360372a9fb12bd5a004eb768fb398deb', 'width': 108}, {'height': 108, 'url': 'h... | |
Quantization API .. Feedback Appreciated | 0 | I think I have found a way to quantize models from fp32 to less than 1-bit. It is a kind of form of vector based quantization but the codebook wouldn’t be too large.
I am thinking of exposing it as an API where you send the fp32 model and you get the quantized model and the codebook. Would it be something that you wou... | 2025-08-21T19:39:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mwkzo4/quantization_api_feedback_appreciated/ | textclf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwkzo4 | false | null | t3_1mwkzo4 | /r/LocalLLaMA/comments/1mwkzo4/quantization_api_feedback_appreciated/ | false | false | self | 0 | null |
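The sub-1-bit claim is easiest to picture with a toy codebook: group weights into short vectors and store only an index into a small codebook, so bits per weight is roughly log2(K) / group_size. A rough illustration with random data (not the poster's actual method):

```python
# Toy vector quantization: a 16-entry codebook over groups of 8 weights gives
# log2(16) / 8 = 0.5 bits per weight, plus the (small) codebook itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
weights = rng.standard_normal(4096).astype(np.float32)

group, k = 8, 16
vectors = weights.reshape(-1, group)               # (512, 8) weight groups
km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(vectors)

indices = km.predict(vectors)                      # 4-bit index per group of 8
codebook = km.cluster_centers_                     # (16, 8) fp32 lookup table

reconstructed = codebook[indices].reshape(-1)
print("bits per weight:", np.log2(k) / group)
print("reconstruction MSE:", float(np.mean((weights - reconstructed) ** 2)))
```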
A digital butler for your phone (clicks, swipes, and types so you don’t have to) | 9 | This video is not sped up.
I am making this **Open Source project** which lets you **plug an LLM into your Android phone and let it take charge of your phone.**
All the repetitive tasks like sending greeting message to new connection on linkedin, or removing spam messages from the Gmail. All the automation just with your v... | 2025-08-21T19:21:33 | https://v.redd.it/rzs8a94r7fkf1 | Salty-Bodybuilder179 | /r/LocalLLaMA/comments/1mwkimj/a_digital_butler_for_your_phone_clicks_swipes_and/ | 1970-01-01T00:00:00 | 0 | {} | 1mwkimj | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rzs8a94r7fkf1/DASHPlaylist.mpd?a=1758525701%2CNjVhNDE1NzcxN2RhYmM4OGVhMWIzZjQxMGQ0ZmY0YmJiMGFhZjRjM2RiOTlmZTg4ZDE5ZTYxMGI4NmMxNzczYQ%3D%3D&v=1&f=sd', 'duration': 200, 'fallback_url': 'https://v.redd.it/rzs8a94r7fkf1/DASH_720.mp4?source=fallback', 'h... | t3_1mwkimj | /r/LocalLLaMA/comments/1mwkimj/a_digital_butler_for_your_phone_clicks_swipes_and/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'M2ZxeDd3MnI3ZmtmMWqCscfNpDCTEkJlWxgZTKFJNV1h2UlRZy-Zmgfugd86', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M2ZxeDd3MnI3ZmtmMWqCscfNpDCTEkJlWxgZTKFJNV1h2UlRZy-Zmgfugd86.png?width=108&crop=smart&format=pjpg&auto=webp&s=79c3efc43c89e1bf0fec0c922ec492088a3c7... | |
GPT-OSS-20b on Ollama is generating gibberish whenever I run it locally | 0 | Because the internet is slow at home, I downloaded Unsloth's .gguf file of GPT-OSS-20b at work before copying the file to my home computer.
I created a Modelfile with just a \`FROM\` directive and ran the model.
The problem is that no matter the system prompt I add, the model always generates non-sense. It even rare... | 2025-08-21T19:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mwkasj/gptoss20b_on_ollama_is_generating_gibberish/ | fromtunis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwkasj | false | null | t3_1mwkasj | /r/LocalLLaMA/comments/1mwkasj/gptoss20b_on_ollama_is_generating_gibberish/ | false | false | self | 0 | null |
Gemma 3 0.27b: What is this model used for? | 0 | Interested to know what you use it in. | 2025-08-21T18:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mwjdku/gemma_3_027b_what_is_this_model_used_for/ | Ok-Internal9317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwjdku | false | null | t3_1mwjdku | /r/LocalLLaMA/comments/1mwjdku/gemma_3_027b_what_is_this_model_used_for/ | false | false | self | 0 | null |
Only AI coding tools guide you need to read in 2025 | 0 | I tried every AI IDE, or "vibe coding" tool as they say, out there so you don't have to, and here is what works best.
Best AI IDEs:
1. Kiro: Best for strategic and spec-based development; just create tasks for your requirements and execute them one by one. Great, isn't it? And for $20 you get fair usage (not cheap ... | 2025-08-21T18:31:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mwj6d6/only_ai_coding_tools_guide_you_need_to_read_in/ | mradulp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwj6d6 | false | null | t3_1mwj6d6 | /r/LocalLLaMA/comments/1mwj6d6/only_ai_coding_tools_guide_you_need_to_read_in/ | false | false | self | 0 | null |
Recommended Settings JSON | 1 | Hi guys,
found these today: [https://huggingface.co/Quant-Cartel/Recommended-Settings/tree/main](https://huggingface.co/Quant-Cartel/Recommended-Settings/tree/main)
How can i use those? Which backend does support importing these jsons? | 2025-08-21T18:22:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mwixan/recommended_settings_json/ | MrCatberry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwixan | false | null | t3_1mwixan | /r/LocalLLaMA/comments/1mwixan/recommended_settings_json/ | false | false | self | 1 | null |
How to get gguf’s running on cloud hosting? | 1 | Llama.cpp/llama-cpp-python literally does not work on any of the cloud hosting services i’ve used with free gpu hours for some reason?
It goes like this:
1. Failed to build the wheel
2. Something fails when building the CUDA library.
I use chatgpt or gemini to guide me through setting it up e... | 2025-08-21T18:16:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mwircz/how_to_get_ggufs_running_on_cloud_hosting/ | LongjumpingAd6657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwircz | false | null | t3_1mwircz | /r/LocalLLaMA/comments/1mwircz/how_to_get_ggufs_running_on_cloud_hosting/ | false | false | self | 1 | null |
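A common culprit for the wheel failure described above is that the default PyPI wheel is CPU-only and the CUDA backend has to be compiled on the instance. A rough sketch of the usual workaround, assuming the VM actually has the CUDA toolkit and nvcc on PATH (the cmake flag name has changed across llama-cpp-python releases):

```python
# Build step (shell), shown as a comment because it runs outside Python:
#   CMAKE_ARGS="-DGGML_CUDA=on" pip install --no-cache-dir llama-cpp-python
# (older releases used -DLLAMA_CUBLAS=on instead)
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",  # hypothetical path to the downloaded GGUF
    n_gpu_layers=-1,          # offload all layers if VRAM allows
    n_ctx=8192,
)
print(llm("Say hello in one sentence.", max_tokens=32)["choices"][0]["text"])
```

If the pip build still fails, the usual suspects are a missing CUDA toolkit (nvcc), a cmake that is too old, or a runtime image whose driver does not match the toolkit version.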
Most uncensored model for local machine | 5 | Hi, I want the most uncensored LLM for coding and NSFW stuff.
i appreciate anyone could help | 2025-08-21T18:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mwiiqr/most_uncensored_model_for_local_machine/ | Business_Caramel_688 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwiiqr | false | null | t3_1mwiiqr | /r/LocalLLaMA/comments/1mwiiqr/most_uncensored_model_for_local_machine/ | false | false | self | 5 | null |
Pewdiepie builds a 140GB VRAM workstation | Guide? | 0 | Not really AI oriented, more of a hardware side, but I found it interesting nonetheless. Even loads LLama3-70B at 21:36. | 2025-08-21T18:06:35 | https://youtu.be/2JzOe1Hs26Q | Ok_Top9254 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mwii8l | false | {'oembed': {'author_name': 'PewDiePie', 'author_url': 'https://www.youtube.com/@PewDiePie', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/2JzOe1Hs26Q?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; p... | t3_1mwii8l | /r/LocalLLaMA/comments/1mwii8l/pewdiepie_builds_a_140gb_vram_workstation_guide/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'zQgZCeoj46IUkydlNZy5fsyhmsqrk550dmk1a_cyvRo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zQgZCeoj46IUkydlNZy5fsyhmsqrk550dmk1a_cyvRo.jpeg?width=108&crop=smart&auto=webp&s=27a2ec9bfdfaf941c0c588b124d42a89f7be3d9e', 'width': 108}, {'height': 162, 'url': '... |
Deepseek + Claude Code Working Flawlessly! 🤯 (haven't experience error like other proxy project yet) | 42 | 2025-08-21T18:05:04 | GTHell | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mwigpz | false | null | t3_1mwigpz | /r/LocalLLaMA/comments/1mwigpz/deepseek_claude_code_working_flawlessly_havent/ | false | false | default | 42 | {'enabled': True, 'images': [{'id': 'r9a6y15twekf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/r9a6y15twekf1.png?width=108&crop=smart&auto=webp&s=c0ab23fd7f86c3955066974a851f3248a92146db', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/r9a6y15twekf1.png?width=216&crop=smart&auto=web... | ||
Can LLMs Explain Their Reasoning? - Lecture Clip | 6 | 2025-08-21T17:49:00 | https://youtu.be/u2uNPzzZ45k | kushalgoenka | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mwi15i | false | {'oembed': {'author_name': 'Kushal Goenka', 'author_url': 'https://www.youtube.com/@KushalGoenka', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/u2uNPzzZ45k?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1mwi15i | /r/LocalLLaMA/comments/1mwi15i/can_llms_explain_their_reasoning_lecture_clip/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'TEUGnG3_498JAZ2_OVIe6SduCtLk60U1nCPoQNdAxiw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/TEUGnG3_498JAZ2_OVIe6SduCtLk60U1nCPoQNdAxiw.jpeg?width=108&crop=smart&auto=webp&s=a64dcf9329374822a649343eb355f65f7731b627', 'width': 108}, {'height': 162, 'url': '... | |
Some legend finally posted working quants of GLM-4.5 Air for Ollama | 0 | Merged and packaged quants that work on Ollama are now available for GLM-4.5 Air thanks to this kind soul:
https://ollama.com/MichelRosselli/GLM-4.5-Air
Note: chat template is provisional and doesn’t support tool calling or disabling thinking yet, but everything else seems to work fine for me. | 2025-08-21T17:42:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mwhvas/some_legend_finally_posted_working_quants_of/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwhvas | false | null | t3_1mwhvas | /r/LocalLLaMA/comments/1mwhvas/some_legend_finally_posted_working_quants_of/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
deepseek-v3.1 thinking worse than non-thinking? | 0 | Just noticed that non-thinking performs significantly better than thinking on SVGBench (https://github.com/johnbean393/SVGBench) anyone have similar findings on vibe checks and personal evals? Non-thinking is a lot cheaper given the new API pricing structure, so this would be cool if true. | 2025-08-21T17:39:35 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mwhs9d | false | null | t3_1mwhs9d | /r/LocalLLaMA/comments/1mwhs9d/deepseekv31_thinking_worse_than_nonthinking/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'bjlwjnytsekf1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/bjlwjnytsekf1.jpeg?width=108&crop=smart&auto=webp&s=51e2702575b05068ef5c96a10276ebdf0b3046dd', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/bjlwjnytsekf1.jpeg?width=216&crop=smart&auto=w... | |
Drummer's Behemoth R1 123B v2 - A reasoning Largestral 2411 - Absolute Cinema! | 129 | 2025-08-21T17:27:02 | https://huggingface.co/TheDrummer/Behemoth-R1-123B-v2 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mwhfw9 | false | null | t3_1mwhfw9 | /r/LocalLLaMA/comments/1mwhfw9/drummers_behemoth_r1_123b_v2_a_reasoning/ | false | false | default | 129 | {'enabled': False, 'images': [{'id': 'n2_ZPSiTWH6gNdzv-Fy_QAw-KkziAVj94v7pvMBPE_M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hsj8k1X0C_q74iyN-NqkYYL7JWSy6JJ56ZytmJNIMLY.jpg?width=108&crop=smart&auto=webp&s=ae34106a04e1980ad28041f9c33d43adb8d6b833', 'width': 108}, {'height': 116, 'url': 'h... | |
PSA: OpenAI GPT-OSS running slow? Do not set top-k to 0! | 41 | I was having issues with GPT-OSS 20b running very slowly on my hardware. At first I suspected that I was using shared RAM, but even at much lower context, and thus memory, I still had horrible speeds. Turns out I had followed the directions of Unsloth in their GPT-OSS guide and set the Top\_K to 0. This slows down llam... | 2025-08-21T17:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mwhal0/psa_openai_gptoss_running_slow_do_not_set_topk_to/ | and_human | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwhal0 | false | null | t3_1mwhal0 | /r/LocalLLaMA/comments/1mwhal0/psa_openai_gptoss_running_slow_do_not_set_topk_to/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'PRdd_kkrrPsaTY8Vspk1ipbBBBdrqjST7GMU-yfmU64', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PRdd_kkrrPsaTY8Vspk1ipbBBBdrqjST7GMU-yfmU64.png?width=108&crop=smart&auto=webp&s=6eab21c5ac3e571b87c5f0556a5e5729e795a46c', 'width': 108}, {'height': 108, 'url': 'h... |
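For reference, the fix is a single sampler setting; a minimal llama-cpp-python sketch with an explicit cutoff (the model path and the value 40 are illustrative, and the exact speed impact of top_k=0 depends on the build):

```python
# Per the post above, top_k=0 disables the top-k filter, so later sampling
# stages work over the full vocabulary; an explicit cutoff keeps it small.
from llama_cpp import Llama

llm = Llama(model_path="gpt-oss-20b.gguf", n_gpu_layers=-1)  # hypothetical path
out = llm("Explain the KV cache in one sentence.",
          max_tokens=64, temperature=1.0, top_k=40, top_p=1.0)
print(out["choices"][0]["text"])
```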
RL infrastructure and Agentic AI meetup | 2 | Welcome to join us in San Francisco
https://lu.ma/bl21t8q4
This event is cohosted by verl, SGLang, Zilliz and Creao AI and organized by Monolith. Together, we’ll explore the latest advances in RL, RL infrastructure, Reasoning, and Agentic AI.
We’ll open with several presentations and dig into:
verl – Reinforcement... | 2025-08-21T17:15:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mwh4q0/rl_infrastructure_and_agentic_ai_meetup/ | dalton_lovegood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwh4q0 | false | null | t3_1mwh4q0 | /r/LocalLLaMA/comments/1mwh4q0/rl_infrastructure_and_agentic_ai_meetup/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'IdkNqfOFtn_CE2jYbUdFttZe-NKFR97WK4TxYzSrKS4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IdkNqfOFtn_CE2jYbUdFttZe-NKFR97WK4TxYzSrKS4.jpeg?width=108&crop=smart&auto=webp&s=fc7152b5fd7ebdefc692d361adbabe4930f3ca06', 'width': 108}, {'height': 113, 'url': '... |
What’s a good model to run at 32k context on a 3060 on VLLM? | 0 | Title | 2025-08-21T17:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mwh2h6/whats_a_good_model_to_run_at_32k_context_on_a/ | Vllm-user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwh2h6 | false | null | t3_1mwh2h6 | /r/LocalLLaMA/comments/1mwh2h6/whats_a_good_model_to_run_at_32k_context_on_a/ | false | false | self | 0 | null |
VSCpde extension with support of llm on local network | 0 | So I have my home server with a pretty decent CPU. I'm looking for a VS Code extension that supports Ollama on a local network with a dedicated local API from Ollama. The problem with Continue is that it only picks up the localhost API of Ollama on my PC, and the same goes for CodeGPT. I simply can't set them up to lis... | 2025-08-21T17:11:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mwh0b4/vscpde_extension_with_support_of_llm_on_local/ | You_Dayn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwh0b4 | false | null | t3_1mwh0b4 | /r/LocalLLaMA/comments/1mwh0b4/vscpde_extension_with_support_of_llm_on_local/ | false | false | self | 0 | null |
[WTF!? News/iOS] Open sourced kokoro + llama.cpp + tool calling demo for iOS | 0 | Hello all!
\[Skippable blurb/link to my shipping app\]
I made a post a long back with my RSS Reader + Local LLM agents, [https://apps.apple.com/us/app/what-the-fluff/id6741672065](https://apps.apple.com/us/app/what-the-fluff/id6741672065), which can be downloaded there. It has an in app purchase, but like 90% of the ... | 2025-08-21T17:05:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mwgu6o/wtf_newsios_open_sourced_kokoro_llamacpp_tool/ | clockentyne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwgu6o | false | null | t3_1mwgu6o | /r/LocalLLaMA/comments/1mwgu6o/wtf_newsios_open_sourced_kokoro_llamacpp_tool/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'YWJ4enSkkQc2OmiLX0y08em-aRVAjnN48B3J57-S0r0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YWJ4enSkkQc2OmiLX0y08em-aRVAjnN48B3J57-S0r0.png?width=108&crop=smart&auto=webp&s=0d1780e1580027b353e6483b9ae6f69ce4f07b63', 'width': 108}, {'height': 113, 'url': 'h... |
PACT: a new head-to-head negotiation benchmark for LLMs | 17 | >!GPT-5 leads. GPT-OSS-120B is the top open weights model.!< | 2025-08-21T17:03:05 | https://github.com/lechmazur/pact/ | zero0_one1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mwgs6w | false | null | t3_1mwgs6w | /r/LocalLLaMA/comments/1mwgs6w/pact_a_new_headtohead_negotiation_benchmark_for/ | false | false | default | 17 | {'enabled': False, 'images': [{'id': 'kiN9MzMfAK5bpfItm0YrlZ4AidBiuSPXUaXIrrpbCs8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kiN9MzMfAK5bpfItm0YrlZ4AidBiuSPXUaXIrrpbCs8.png?width=108&crop=smart&auto=webp&s=6f5f25ca89e937ebf993182ecce7d683046cbebf', 'width': 108}, {'height': 108, 'url': 'h... |
DeepSeek will support Chinese chips | 0 | Translate:
UE8M0 FP8 is designed for the upcoming next-generation domestic chip.
https://preview.redd.it/zw2x5ug9mekf1.jpg?width=1080&format=pjpg&auto=webp&s=49937a7d8d06c84a8dc314462b234eb7dff492fb
| 2025-08-21T17:02:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mwgs3d/deepseek_will_support_chinese_chips/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwgs3d | false | null | t3_1mwgs3d | /r/LocalLLaMA/comments/1mwgs3d/deepseek_will_support_chinese_chips/ | false | false | 0 | null | |
Small language model doesn't like acronym. Use full word if possible!!! | 2 | Been experimenting with Falcon3 7B (yeah, 2024 models are "old" now in AI time lol) for classifying research paper abstracts into categories like RCTs vs meta-analyses.
Initially used a JSON format like `{'class': 'rct'}` in my system prompt - worked perfectly with GPT-5-mini. But with Falcon3, my app start throwing J... | 2025-08-21T17:00:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mwgpu7/small_language_model_doesnt_like_acronym_use_full/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwgpu7 | false | null | t3_1mwgpu7 | /r/LocalLLaMA/comments/1mwgpu7/small_language_model_doesnt_like_acronym_use_full/ | false | false | self | 2 | null |
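The workaround the post is describing, spelling labels out instead of using acronyms, can be sketched as a tiny prompt-and-parse round trip; the label names and validation below are illustrative stand-ins rather than the poster's actual pipeline:

```python
# Small models emit valid JSON more reliably when labels are plain words;
# terse acronyms like "rct" invite malformed or invented variants.
import json

LABELS = ["randomized_controlled_trial", "meta_analysis", "observational_study"]

SYSTEM_PROMPT = (
    "Classify the abstract. Respond with JSON only, exactly "
    '{"class": "<label>"} where <label> is one of: ' + ", ".join(LABELS)
)

def parse_label(raw_reply: str):
    """Return the label, or None if the reply is not valid JSON or is off-label."""
    try:
        data = json.loads(raw_reply)
    except json.JSONDecodeError:
        return None
    label = data.get("class") if isinstance(data, dict) else None
    return label if label in LABELS else None

print(parse_label('{"class": "meta_analysis"}'))  # meta_analysis
print(parse_label('{"class": "RCT!"}'))           # None
```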
Starting with selfhosted LocalLLM and LocalAI | 1 | I want to get into LLM abd AI but I wish to run stuff selfhosted locally.
I prefer to virtualize everything with Proxmox, but I'm also open to any suggestions.
I am a novice when it comes to LLM and AI, pretty much shooting in the dark over here...What should i try to run ??
I have the following hardware laying aro... | 2025-08-21T16:49:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mwgeqr/starting_with_selfhosted_localllm_and_localai/ | mitrako | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwgeqr | false | null | t3_1mwgeqr | /r/LocalLLaMA/comments/1mwgeqr/starting_with_selfhosted_localllm_and_localai/ | false | false | self | 1 | null |
I tried hundreds of prompts… here’s when “nano-banana” keeps showing up 🍌 | 0 | So I’ve been running a ton of image generation tests — literally **hundreds of prompts** in order to get **nano-banana** to do the editing
After analyzing my results, I noticed a pattern:
* **nano-banana kicked in when the request was “\[subject\]editing only.”** Example prompts: “Do not alter, resize, or change the ... | 2025-08-21T16:33:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mwfyqw/i_tried_hundreds_of_prompts_heres_when_nanobanana/ | mmarco_08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwfyqw | false | null | t3_1mwfyqw | /r/LocalLLaMA/comments/1mwfyqw/i_tried_hundreds_of_prompts_heres_when_nanobanana/ | false | false | self | 0 | null |
I ran qwen4b non thinking via LM Studio on Ubuntu with RTX3090 and 32 Gigs of RAM and a 14700KF processor, and it broke my heart. | 4 | All the agents like Cline and KiloCode want larger context window and max I could set was 90K-ish it didn't work and that was super slow. My PC fans were screaming when a request would go. RooCode was able to work with 32K window but that was also super slow and super inaccurate at its task because it would have to com... | 2025-08-21T16:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mwfq3z/i_ran_qwen4b_non_thinking_via_lm_studio_on_ubuntu/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwfq3z | false | null | t3_1mwfq3z | /r/LocalLLaMA/comments/1mwfq3z/i_ran_qwen4b_non_thinking_via_lm_studio_on_ubuntu/ | false | false | self | 4 | null |
Ollama prompt_eval_count < num_ctx | 1 | [removed] | 2025-08-21T16:22:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mwfogi/ollama_prompt_eval_count_num_ctx/ | NihilisticAssHat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwfogi | false | null | t3_1mwfogi | /r/LocalLLaMA/comments/1mwfogi/ollama_prompt_eval_count_num_ctx/ | false | false | self | 1 | null |
DeepSeek V3.1 (Thinking) aggregated benchmarks (vs. gpt-oss-120b) | 197 | I was personally interested in comparing with gpt-oss-120b on intelligence vs. speed, tabulating those numbers below for reference:
||DeepSeek 3.1 (Thinking)|gpt-oss-120b (High)|
|:-|:-|:-|
|Total parameters|671B|120B|
|Active parameters|37B|5.1B|
|Context|128K|131K|
|Intelligence Index|60|61|
|Coding Index|59|50|
|Ma... | 2025-08-21T15:56:14 | https://www.reddit.com/gallery/1mwexgd | entsnack | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mwexgd | false | null | t3_1mwexgd | /r/LocalLLaMA/comments/1mwexgd/deepseek_v31_thinking_aggregated_benchmarks_vs/ | false | false | 197 | null | |
Why low-bit models aren't totally braindead: A guide from 1-bit meme to FP16 research | 543 | Alright, it's not exactly the same picture, but the core idea is quite similar. This post will explain how, by breaking down LLM quantization into varying levels of precision, starting from a 1-bit meme, then a 2-bit TL;DR, 4-bit overview, 8-bit further reading, and lastly the highest precision FP16 research itself.
#... | 2025-08-21T15:54:35 | Small-Fall-6500 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mwevt4 | false | null | t3_1mwevt4 | /r/LocalLLaMA/comments/1mwevt4/why_lowbit_models_arent_totally_braindead_a_guide/ | false | false | default | 543 | {'enabled': True, 'images': [{'id': '5t58iz5u9ekf1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/5t58iz5u9ekf1.jpeg?width=108&crop=smart&auto=webp&s=9e32e6e5b689ff142502d199f8ae062f33cf904b', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/5t58iz5u9ekf1.jpeg?width=216&crop=smart&auto=... | |
Anyone have gpt-oss-120b single GGUF abliterated? | 0 | For the life of me I can't get gguf-split --merge to work. | 2025-08-21T15:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mwehl8/anyone_have_gptoss120b_single_gguf_abliterated/ | sunkendreams333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwehl8 | false | null | t3_1mwehl8 | /r/LocalLLaMA/comments/1mwehl8/anyone_have_gptoss120b_single_gguf_abliterated/ | false | false | self | 0 | null |
New "Sonic" Stealth Model (Grok-4-Code/4.5) + Cursor Makes 300 Tool Calls for a Single Prompt | 20 | Wanted to test out a new stealth model, Sonic, last night after Claude/Qwen-3 struggled to solve a problem. [Sonic is rumored to be Grok](https://x.com/mark_k/status/1958437678933844028) (It's obviously Grok). The prompt was about integrating GLSL into Manim, ManimCE's OpenGL logic is a mess so it's a really solid codi... | 2025-08-21T15:37:13 | https://v.redd.it/cxo5slf22ekf1 | Longjumping-Solid563 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mweeod | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/cxo5slf22ekf1/DASHPlaylist.mpd?a=1758382647%2CYjUxN2MzZGJmZjEyMmRkZjU1MDYyNTRkOTE3YmUyNzliMTFiZGExYzZjYzk2YjVkMWJjNmJhZDJiYzQ4MDU4Ng%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/cxo5slf22ekf1/DASH_720.mp4?source=fallback', 'ha... | t3_1mweeod | /r/LocalLLaMA/comments/1mweeod/new_sonic_stealth_model_grok4code45_cursor_makes/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'djRubHNsZjIyZWtmMbBsYRXpfNN00cVdnGBuEYoU8Hrh8a7-LUpnqkB8QZys', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/djRubHNsZjIyZWtmMbBsYRXpfNN00cVdnGBuEYoU8Hrh8a7-LUpnqkB8QZys.png?width=108&crop=smart&format=pjpg&auto=webp&s=dcd2a947cf2a42d908142de1b068c779656f7... | |
Looking for a local chat UI with dynamic image model switching (like online services offer) | 1 | I’ve been blown away by some online chat services that integrate image generation directly into the chat experience. They let you adjust things like checkpoint/model, steps, and seeds during the chat session — either through dropdowns or quick controls in the interface. It makes experimenting super fluid compared to ed... | 2025-08-21T15:23:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mwe1go/looking_for_a_local_chat_ui_with_dynamic_image/ | reallionkiller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwe1go | false | null | t3_1mwe1go | /r/LocalLLaMA/comments/1mwe1go/looking_for_a_local_chat_ui_with_dynamic_image/ | false | false | self | 1 | null |
Is DeepSeek V3.1 better than GPT-5? | 0 | Is DeepSeek V3.1 better than GPT-5? | 2025-08-21T15:22:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mwe00z/is_deepseek_v31_better_than_gpt5/ | balianone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwe00z | false | null | t3_1mwe00z | /r/LocalLLaMA/comments/1mwe00z/is_deepseek_v31_better_than_gpt5/ | false | false | self | 0 | null |
What is the minimum llm useful in coding? | 1 | I tried using gpt-oss-20b gguf Q4, but it consumes all my resources and it's uncomfortable.
RTX 4060, 8 GB VRAM
32 GB RAM
I'm also interested in what minimum llm is starting to be useful in coding, not considering how many resources are available. | 2025-08-21T15:14:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mwdrvr/what_is_the_minimum_llm_useful_in_coding/ | DentistNext6439 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwdrvr | false | null | t3_1mwdrvr | /r/LocalLLaMA/comments/1mwdrvr/what_is_the_minimum_llm_useful_in_coding/ | false | false | self | 1 | null |
Minimal local LLM for profit in coding | 1 | [removed] | 2025-08-21T15:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mwdlpt/minimal_local_llm_for_profit_in_coding/ | Mundane-Buyer-6729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwdlpt | false | null | t3_1mwdlpt | /r/LocalLLaMA/comments/1mwdlpt/minimal_local_llm_for_profit_in_coding/ | false | false | self | 1 | null |
I'm struggling to study (motivation wise) | 0 | So basically, when I have to study or put my head down to learn something, I can't find anything interesting in it and I can't focus.
And I thought about making an app that scans your lessons and, depending on what type of learner you are, creates flashcards or a roadmap or whatever else, but you get it
PS : ... | 2025-08-21T15:03:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mwdgog/im_struggling_to_study_motivation_wise/ | Fit-Writer-1796 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwdgog | false | null | t3_1mwdgog | /r/LocalLLaMA/comments/1mwdgog/im_struggling_to_study_motivation_wise/ | false | false | self | 0 | null |
Command A Reasoning: Enterprise-grade control for AI agents | 106 | [https://cohere.com/blog/command-a-reasoning](https://cohere.com/blog/command-a-reasoning)
HF Link: [https://huggingface.co/CohereLabs/command-a-reasoning-08-2025](https://huggingface.co/CohereLabs/command-a-reasoning-08-2025) | 2025-08-21T15:02:58 | https://www.reddit.com/gallery/1mwdgdw | Dark_Fire_12 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mwdgdw | false | null | t3_1mwdgdw | /r/LocalLLaMA/comments/1mwdgdw/command_a_reasoning_enterprisegrade_control_for/ | false | false | 106 | null | |
Run Gemma3 270M in your browser. 100% privacy. Needs WebGPU (and probably Chrome) | 3 | 2025-08-21T15:02:27 | https://rhulha.github.io/Gemma3-270m-WebGPU/ | paranoidray | rhulha.github.io | 1970-01-01T00:00:00 | 0 | {} | 1mwdfvn | false | null | t3_1mwdfvn | /r/LocalLLaMA/comments/1mwdfvn/run_gemma3_270m_in_your_browser_100_privacy_needs/ | false | false | default | 3 | null | |
Deepseek v3.1 seriously makes GPT-5 look trashy... | 0 | 2025-08-21T15:00:32 | https://www.youtube.com/watch?v=wSs0vesjqXU | Longjumping_Spot5843 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mwddul | false | {'oembed': {'author_name': 'YJxAI', 'author_url': 'https://www.youtube.com/@YJxAI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/wSs0vesjqXU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-i... | t3_1mwddul | /r/LocalLLaMA/comments/1mwddul/deepseek_v31_seriously_makes_gpt5_look_trashy/ | false | false | default | 0 | null | |
Deepseek v3.1 seriously makes GPT-5 look dumb... | 1 | 2025-08-21T14:59:43 | https://www.youtube.com/watch?v=wSs0vesjqXU | Longjumping_Spot5843 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mwdcz4 | false | {'oembed': {'author_name': 'YJxAI', 'author_url': 'https://www.youtube.com/@YJxAI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/wSs0vesjqXU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-i... | t3_1mwdcz4 | /r/LocalLLaMA/comments/1mwdcz4/deepseek_v31_seriously_makes_gpt5_look_dumb/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'IpIQ__EsYAWyBJX_vru3AmivOQImZYqDY59kaR24_M8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IpIQ__EsYAWyBJX_vru3AmivOQImZYqDY59kaR24_M8.jpeg?width=108&crop=smart&auto=webp&s=a77b87982924103b2760f2e7c7c0a40d0f939fb1', 'width': 108}, {'height': 162, 'url': '... | ||
CohereLabs/command-a-reasoning-08-2025 · Hugging Face | 3 | 2025-08-21T14:55:28 | https://huggingface.co/CohereLabs/command-a-reasoning-08-2025 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mwd8rd | false | null | t3_1mwd8rd | /r/LocalLLaMA/comments/1mwd8rd/coherelabscommandareasoning082025_hugging_face/ | false | false | default | 3 | null | |
Open-weight models continue to impress in scientific literature review (SciArena) | 9 | [SciArena](https://sciarena.allen.ai/) is a nice benchmark by the folks at Allen AI, similar to LM Arena and DesignArena but focused on scientific literature review. At launch, DeepSeek R1 was the only open weight model that was competitive with the proprietary ones. Now, we also have gpt-oss-120b (note the cost!) and ... | 2025-08-21T14:54:49 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mwd859 | false | null | t3_1mwd859 | /r/LocalLLaMA/comments/1mwd859/openweight_models_continue_to_impress_in/ | false | false | default | 9 | {'enabled': True, 'images': [{'id': 'w47f5rszxdkf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/w47f5rszxdkf1.png?width=108&crop=smart&auto=webp&s=65f251750cd94e6a5252a3e3f7b7388427657790', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/w47f5rszxdkf1.png?width=216&crop=smart&auto=we... | |
Any Android app that uses NPU to run llms? | 1 | Thx | 2025-08-21T14:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mwd6sm/any_android_app_that_uses_npu_to_run_llms/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwd6sm | false | null | t3_1mwd6sm | /r/LocalLLaMA/comments/1mwd6sm/any_android_app_that_uses_npu_to_run_llms/ | false | false | self | 1 | null |
Generative TTS Kokoro-82M not functional on RX 7800XT | 4 | Recently-ish, firefox [finally added WebGPU](https://mozillagfx.wordpress.com/2025/07/15/shipping-webgpu-on-windows-in-firefox-141/) support officially (better late than never) however I noticed I'm no longer able to utilise certain aspects which worked fine on wasm such as [Kokoro](https://huggingface.co/spaces/webml-... | 2025-08-21T14:50:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mwd49e/generative_tts_kokoro82m_not_functional_on_rx/ | zbovka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwd49e | false | null | t3_1mwd49e | /r/LocalLLaMA/comments/1mwd49e/generative_tts_kokoro82m_not_functional_on_rx/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'hk8RKmS8erGxm7oAyFGqe0RU_qY8JgPTIWiY5hCku6Y', 'resolutions': [{'height': 107, 'url': 'https://external-preview.redd.it/hk8RKmS8erGxm7oAyFGqe0RU_qY8JgPTIWiY5hCku6Y.png?width=108&crop=smart&auto=webp&s=91c281cd9d3e1ef7942d42c82d850dc69e88e05f', 'width': 108}, {'height': 214, 'url': '... |
Document translation with RAG | 3 | Hi everyone,
I’m working on a medical translation project where I use Ollama for translations. (gemma3:27b) I also created a dataset in JSON format, for example:
{
"translations": {
"en": {
"term": "Cytomegalovirus",
"abbr": "CMV"
},
"ru": {
"term": "цит... | 2025-08-21T14:45:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mwcyov/document_translation_with_rag/ | Low_Fix_8323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwcyov | false | null | t3_1mwcyov | /r/LocalLLaMA/comments/1mwcyov/document_translation_with_rag/ | false | false | self | 3 | null |
Love small but mighty team of DeepSeek | 1,046 | They are working so hard they are even inventing new spellings! | 2025-08-21T14:02:32 | dbhalla4 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mwbsww | false | null | t3_1mwbsww | /r/LocalLLaMA/comments/1mwbsww/love_small_but_mighty_team_of_deepseek/ | false | false | default | 1,046 | {'enabled': True, 'images': [{'id': '38d427vmpdkf1', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/38d427vmpdkf1.png?width=108&crop=smart&auto=webp&s=14b70ca89a5bc3484554883ac25799945478ce37', 'width': 108}, {'height': 229, 'url': 'https://preview.redd.it/38d427vmpdkf1.png?width=216&crop=smart&auto=we... | |
Agentic Signal – Visual AI Workflow Builder with Ollama Integration | 5 | Hi everyone! I’ve been working for a few months on a project that integrates tightly with Ollama, and I thought the LocalLLaMA community might find it interesting and useful.
**What it is:**
`Agentic Signal` is a visual workflow automation platform that lets you build AI workflows using a drag-and-drop interface. ... | 2025-08-21T13:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mwbha7/agentic_signal_visual_ai_workflow_builder_with/ | Code-Forge-Temple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwbha7 | false | null | t3_1mwbha7 | /r/LocalLLaMA/comments/1mwbha7/agentic_signal_visual_ai_workflow_builder_with/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'wlksgVpYc7rfW0khooC-cwHaetzdC0ZJqSL8K4RG8z4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/wlksgVpYc7rfW0khooC-cwHaetzdC0ZJqSL8K4RG8z4.png?width=108&crop=smart&auto=webp&s=c9ee23f2ed6d5e01a2c4cc2a5d7de41460e68285', 'width': 108}, {'height': 121, 'url': 'h... |
Local LLMs in 2025: Key Predictions & Analysis | 0 | 🔑 Key Takeaways for 2025:
Hardware Democracy - Consumer GPUs like the RTX 5090 now match enterprise H100 performance at 75% lower cost
Economic Crossover - Clear breakeven at 2 million tokens/day (real case: API costs growing from $15k to $60k monthly)
Regulatory Acceleration - GDPR/HIPAA compliance driving healt... | 2025-08-21T13:38:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mwb77x/local_llms_in_2025_key_predictions_analysis/ | Rich-External6195 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwb77x | false | null | t3_1mwb77x | /r/LocalLLaMA/comments/1mwb77x/local_llms_in_2025_key_predictions_analysis/ | false | false | self | 0 | null |
Where is AMD NPU driver for Linux? | 49 | 2025-08-21T13:38:01 | gnorrisan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mwb6j3 | false | null | t3_1mwb6j3 | /r/LocalLLaMA/comments/1mwb6j3/where_is_amd_npu_driver_for_linux/ | false | false | default | 49 | {'enabled': True, 'images': [{'id': 'l6rwjjqlldkf1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/l6rwjjqlldkf1.png?width=108&crop=smart&auto=webp&s=40d418e4853c90377d9cd7420cac2c38bf02aaa2', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/l6rwjjqlldkf1.png?width=216&crop=smart&auto=web... | ||
Intern-S1-mini 8B multimodal is out! | 72 | Intern-S1-mini is a lightweight multimodal reasoning large language model 🤖.
Base: Built on Qwen3-8B 🧠 + InternViT-0.3B 👁️.
Training: Pretrained on 5 trillion tokens 📚, more than half from scientific domains (chemistry, physics, biology, materials science 🧪).
Strengths: Can handle text, images, and video 💬🖼️�... | 2025-08-21T13:36:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mwb5ix/interns1mini_8b_multimodal_is_out/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwb5ix | false | null | t3_1mwb5ix | /r/LocalLLaMA/comments/1mwb5ix/interns1mini_8b_multimodal_is_out/ | false | false | self | 72 | {'enabled': False, 'images': [{'id': 'GvY399ZUf1W7dywr11QZLSxw8EmMgUMNKkUf0XN2pS0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GvY399ZUf1W7dywr11QZLSxw8EmMgUMNKkUf0XN2pS0.png?width=108&crop=smart&auto=webp&s=5946f3c4940966444f4e3d583148814d3b8e6113', 'width': 108}, {'height': 116, 'url': 'h... |
A Complete Guide to Running LLM Locally on Home Hardware: From Getting Started to Giving Up and Starting Over | 0 | ✅ **Completed 7 comprehensive research tasks:**
* **LLM deployment tools** \- Compared Ollama, LM Studio, GPT4All, and emerging platforms
* **Hardware requirements** \- Detailed budget analysis from $500 to $5000+ setups
* **Model selection strategies** \- Recommendations by VRAM capacity and use case
* **Performance ... | 2025-08-21T13:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mwb4n7/a_complete_guide_to_running_llm_locally_on_home/ | Rich-External6195 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwb4n7 | false | null | t3_1mwb4n7 | /r/LocalLLaMA/comments/1mwb4n7/a_complete_guide_to_running_llm_locally_on_home/ | false | false | self | 0 | null |
🚀 Why I Chose Local LLMs Over OpenAI: Complete 2025 Analysis (August Update) | 1 | After extensive research and hands-on comparison, I found that Local LLMs outperform OpenAI in several critical areas:
**💰 Cost Efficiency**
* Local deployment becomes cost-effective when monthly expenses exceed $200-300
* Escape the "poverty tax" of per-token pricing
* Long-term ROI significantly better for high-vo... | 2025-08-21T13:17:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mwaogw/why_i_chose_local_llms_over_openai_complete_2025/ | Rich-External6195 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwaogw | false | null | t3_1mwaogw | /r/LocalLLaMA/comments/1mwaogw/why_i_chose_local_llms_over_openai_complete_2025/ | false | false | self | 1 | null |
Local coding interface | 7 | I'd like to move away from cursor... what local app are you guys using to work on your codebase with local llama.cpp-> llama-server? | 2025-08-21T13:17:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mwaoa8/local_coding_interface/ | Agreeable-Prompt-666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwaoa8 | false | null | t3_1mwaoa8 | /r/LocalLLaMA/comments/1mwaoa8/local_coding_interface/ | false | false | self | 7 | null |
My browser-based AI Kingdom game is alive with player-written stories, diplomacy, and betrayal. | 3 | Hey Reddit,
A while back, I posted about my passion project, **AI Kingdom**, a strategy game where you rule by giving commands to an AI council. The support and insightful feedback from this community were incredible and have been a huge motivation. Now, the game has matured, and I wanted to share a clearer picture of... | 2025-08-21T13:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mwagmn/my_browserbased_ai_kingdom_game_is_alive_with/ | magix1147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwagmn | false | null | t3_1mwagmn | /r/LocalLLaMA/comments/1mwagmn/my_browserbased_ai_kingdom_game_is_alive_with/ | false | false | self | 3 | null |
Tried using Gemma 2B as offline LLM, quite satisfied with the result. Less than 3 GB of RAM used. | 19 | 2025-08-21T13:07:03 | https://v.redd.it/tvgm14xmfdkf1 | ajarbyurns1 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mwafu4 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/tvgm14xmfdkf1/DASHPlaylist.mpd?a=1758373637%2COWEyZGIxZTk3MDAzOGYxYzM2ZGNmZjllZTVlZTM0YzExOWMxZWU2OTcxMjllN2RjNjBjMjAzMDY5YzY3OTdlNQ%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/tvgm14xmfdkf1/DASH_480.mp4?source=fallback', 'ha... | t3_1mwafu4 | /r/LocalLLaMA/comments/1mwafu4/tried_using_gemma_2b_as_offline_llm_quite/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'bXJqM2c2YzdnZGtmMdJI1T6_Vo-tDukOLax0jCjgjlFSu6B50TUKS310Zpyb', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/bXJqM2c2YzdnZGtmMdJI1T6_Vo-tDukOLax0jCjgjlFSu6B50TUKS310Zpyb.png?width=108&crop=smart&format=pjpg&auto=webp&s=67fcab5f51024bc8758a443da6982a3c2fff2... | ||
Weaponizing image scaling against production AI systems | 17 | 2025-08-21T12:59:38 | https://blog.trailofbits.com/2025/08/21/weaponizing-image-scaling-against-production-ai-systems/ | _QWUKE | blog.trailofbits.com | 1970-01-01T00:00:00 | 0 | {} | 1mwa9aa | false | null | t3_1mwa9aa | /r/LocalLLaMA/comments/1mwa9aa/weaponizing_image_scaling_against_production_ai/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'hJirNYdbhQ_WgMVYs7Y4bwQFjFjmkaHKNpaeYLJaF-Y', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/hJirNYdbhQ_WgMVYs7Y4bwQFjFjmkaHKNpaeYLJaF-Y.png?width=108&crop=smart&auto=webp&s=b085f181a93cbd40b90d97752458f914d27740e0', 'width': 108}, {'height': 104, 'url': 'h... | ||
MCP-Universe: Benchmarking Large Language Models with Real-World Model Context Protocol Servers | 6 | 🚀 Introducing MCP-Universe, a comprehensive benchmark that pushes LLMs and AI agents into realistic, tool-rich environments powered by real-world Model Context Protocol (MCP) servers!
🔌 While MCP has emerged as the "USB-C for AI" standard for connecting LLMs to external tools and data, existing evaluations remain ov... | 2025-08-21T12:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mwa46v/mcpuniverse_benchmarking_large_language_models/ | cylaw01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mwa46v | false | null | t3_1mwa46v | /r/LocalLLaMA/comments/1mwa46v/mcpuniverse_benchmarking_large_language_models/ | false | false | 6 | null | |
[Project Release] Running TinyLlama on Intel NPU with OpenVINO (my first GitHub repo 🎉) | 17 | Hey everyone,
I just finished my very first open-source project and wanted to share it here. I managed to get **TinyLlama 1.1B Chat** running **locally** on my Intel Core Ultra laptop’s **NPU** using **OpenVINO GenAI**.
**What I did:**
* Exported the HuggingFace model with `optimum-cli` → OpenVINO IR format
* Quant... | 2025-08-21T12:36:46 | https://v.redd.it/w0twnkuq9dkf1 | Spiritual-Ad-5916 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw9qgw | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w0twnkuq9dkf1/DASHPlaylist.mpd?a=1758371818%2CYjQzY2IzOGE1Yzg3OTY3YzJjZDgyZmI2MzdmMGM3NGViM2ZlYjBkNzdkYjg0ZDYxZWY1MjlmZTgxZDUzZDVhNA%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/w0twnkuq9dkf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mw9qgw | /r/LocalLLaMA/comments/1mw9qgw/project_release_running_tinyllama_on_intel_npu/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'cm53eHhtdXE5ZGtmMXwz12yAXJpbZNSu1z7afGTTOzAq2FgoU-VUYNvwwCr8', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/cm53eHhtdXE5ZGtmMXwz12yAXJpbZNSu1z7afGTTOzAq2FgoU-VUYNvwwCr8.png?width=108&crop=smart&format=pjpg&auto=webp&s=57fa912e0c5ab1c5abe538e102aab4c555a34... | |
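For anyone wanting to reproduce the flow described in the post, a compressed sketch of the export-and-run steps; the exact optimum-cli flags, the output directory name, and NPU support for this model on a given driver stack are assumptions:

```python
# Export step (shell), shown as a comment because it runs outside Python:
#   optimum-cli export openvino --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
#       --weight-format int4 tinyllama_ov
import openvino_genai as ov_genai

pipe = ov_genai.LLMPipeline("tinyllama_ov", "NPU")  # fall back to "CPU"/"GPU" if needed
print(pipe.generate("What is an NPU, in one sentence?", max_new_tokens=64))
```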
Bedtime Story Generator by Xenova using gemma3 270m and Kokoro! All open source all 100% private needs WebGPU | 8 | 2025-08-21T12:32:57 | https://huggingface.co/spaces/webml-community/bedtime-story-generator | paranoidray | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mw9nbv | false | null | t3_1mw9nbv | /r/LocalLLaMA/comments/1mw9nbv/bedtime_story_generator_by_xenova_using_gemma3/ | false | false | 8 | {'enabled': False, 'images': [{'id': '1-73r9BEyVQixt0uvGDN6Rw9Zdv0ourGod35ZTFV5GY', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/1-73r9BEyVQixt0uvGDN6Rw9Zdv0ourGod35ZTFV5GY.png?width=108&crop=smart&auto=webp&s=b301ea34e29d6f8773a9cb9d006e691a9bdc7850', 'width': 108}, {'height': 115, 'url': 'h... | ||
Anyone else experienced deepseek is not translating phrases properly? | 4 | Is anyone else experiencing translation problems when you give it a prompt to translate English to Bangla?
| 2025-08-21T12:32:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mw9n8c/anyone_else_experienced_deepseek_is_not/ | Sedative_Britto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw9n8c | false | null | t3_1mw9n8c | /r/LocalLLaMA/comments/1mw9n8c/anyone_else_experienced_deepseek_is_not/ | false | false | self | 4 | null |
advice for choosing between M4 or Ryzen AI Max+ 395? | 0 | Hey everyone,
I'm looking to get a new computer specifically for running large AI models locally, hopefully up to gpt-oss 120B parameter models if that's feasible.
I'm struggling to find any comprehensive benchmarks for this kind of workload. I'm not sure what hardware to go for.
Should I be looking at something lik... | 2025-08-21T12:26:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mw9ik8/advice_for_chosing_betwen_m4_or_ryzen_395_ai_max/ | azrodosr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw9ik8 | false | null | t3_1mw9ik8 | /r/LocalLLaMA/comments/1mw9ik8/advice_for_chosing_betwen_m4_or_ryzen_395_ai_max/ | false | false | self | 0 | null |
OmniNeural-4B | 12 | **OmniNeural-4B** — the world’s first **NPU-aware multimodal model**, natively understanding text, images, and audio.
post : [https://x.com/nexa\_ai/status/1958197904210002092](https://x.com/nexa_ai/status/1958197904210002092)
benchmark :
https://preview.redd.it/ompjw1at7dkf1.png?width=3696&format=png&auto=webp&s=82... | 2025-08-21T12:20:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mw9ddg/omnineural4b/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw9ddg | false | null | t3_1mw9ddg | /r/LocalLLaMA/comments/1mw9ddg/omnineural4b/ | false | false | 12 | null | |
the Nexa OmniNeural-4B launch team | 1 | **OmniNeural-4B** — the world’s first **NPU-aware multimodal model**, natively understanding text, images, and audio.
post : [https://x.com/nexa\_ai/status/1958197904210002092](https://x.com/nexa_ai/status/1958197904210002092)
benchmark :
https://preview.redd.it/bwbjytpx6dkf1.png?width=3696&format=png&auto=webp&s=... | 2025-08-21T12:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mw9b7v/the_nexa_omnineural4b_launch_team/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw9b7v | false | null | t3_1mw9b7v | /r/LocalLLaMA/comments/1mw9b7v/the_nexa_omnineural4b_launch_team/ | false | false | 1 | null | |
Right GPU for AI research | 0 | For our research we have an option to get a GPU Server to run local models. We aim to run models like Meta's Maverick or Scout, Qwen3 and similar. We plan some fine tuning operations, but mainly inference including MCP communication with our systems. Currently we can get either one H200 or two RTX PRO 6000 Blackwell. T... | 2025-08-21T12:12:43 | toombayoomba | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw97ac | false | null | t3_1mw97ac | /r/LocalLLaMA/comments/1mw97ac/right_gpu_for_ai_research/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '6uwhijni6dkf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/6uwhijni6dkf1.jpeg?width=108&crop=smart&auto=webp&s=d8df954d4304803a8471274d18465d9dc05376ed', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/6uwhijni6dkf1.jpeg?width=216&crop=smart&auto=... | |
Looking for a better approach for structured data extraction from PDFs | 3 | I’m working on a project where I need to extract specific fields from PDF documents (around 20 pages in length). The extracted data should be in a dictionary-like format: the keys (field names) are fixed, but the values vary — sometimes it’s a single value, sometimes multiple values, and sometimes no value at all.
Our... | 2025-08-21T12:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mw95co/looking_for_a_better_approach_for_structured_data/ | Ahmad401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw95co | false | null | t3_1mw95co | /r/LocalLLaMA/comments/1mw95co/looking_for_a_better_approach_for_structured_data/ | false | false | self | 3 | null |
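For fixed-key extraction like this, one common pattern is to give the model the exact key list, require JSON with null or list values for missing or multi-valued fields, and validate the reply; a generic sketch where the field names, model tag, and local OpenAI-compatible endpoint are placeholders, not the poster's setup:

```python
# Sketch: fixed keys; values may be a string, a list of strings, or null.
# Any OpenAI-compatible local server (llama.cpp, vLLM, Ollama) can sit behind base_url.
import json
from openai import OpenAI

FIELDS = ["contract_date", "parties", "total_amount"]  # placeholder field names

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def extract(page_text: str) -> dict:
    prompt = (
        "Extract these fields from the document text. Reply with JSON only. "
        "Use null when a field is absent and a list when there are several values.\n"
        f"Fields: {FIELDS}\n\nText:\n{page_text}"
    )
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    data = json.loads(resp.choices[0].message.content)
    return {k: data.get(k) for k in FIELDS}  # drop extras, keep misses as None

print(extract("This agreement is made on 3 May 2024 between Acme Corp and Foo Ltd."))
```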
Local model agentic tool recommendations | 2 | I find success with Cursor but annoyed I cant use it fully offline and with a local model. Cline/Roo use up a ton of tokens and respond incredibly slow, even with cloud models.
My goal isn't particularly programming, but to use an MCP server to retrieve, process, send data. As well to have conversation and explain or ... | 2025-08-21T11:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mw8u6e/local_model_agentic_tool_recommendations/ | DeviantlyPronto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw8u6e | false | null | t3_1mw8u6e | /r/LocalLLaMA/comments/1mw8u6e/local_model_agentic_tool_recommendations/ | false | false | self | 2 | null |
the end of closed source | 0 | Who has noticed that closed-source models aren't as popular as they used to be? OpenAI made a mistake with GPT-5, Anthropic is self-destructing, and Google is irrelevant.
\- This is accompanied by the fact that the models are open and the APIs are cheaper.
\- For example, Deepseek v1.3 is now cheaper than Claude 4, w... | 2025-08-21T11:50:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mw8pyc/the_end_of_closed_source/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw8pyc | false | null | t3_1mw8pyc | /r/LocalLLaMA/comments/1mw8pyc/the_end_of_closed_source/ | false | false | self | 0 | null |
For example, Deepseek v1.3 is now cheaper than Claude 4, which has a lower price. | 2 | Who has noticed that closed models are no longer as popular as they used to be? OpenAI was wrong about GPT-5, Anthropic is self-destructing, and Google is irrelevant.
\- This is accompanied by the fact that the models are open and the APIs are cheaper.
\- For example, Deepseek v1.3 is now cheaper than Claude 4, which... | 2025-08-21T11:47:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mw8nom/for_example_deepseek_v13_is_now_cheaper_than/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw8nom | false | null | t3_1mw8nom | /r/LocalLLaMA/comments/1mw8nom/for_example_deepseek_v13_is_now_cheaper_than/ | false | false | 2 | null | |
A new Machine to install LLama | 0 | Hello everybody
I want to buy a new mini PC to be used for local AI LLMs
I am checking BeeLink PCs with a budget of $400 max
What do you recommend? | 2025-08-21T11:34:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mw8egw/a_new_machine_to_install_llama/ | MohammedAttya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw8egw | false | null | t3_1mw8egw | /r/LocalLLaMA/comments/1mw8egw/a_new_machine_to_install_llama/ | false | false | self | 0 | null |
I’m gonna say it: | 130 | 2025-08-21T11:24:07 | JLeonsarmiento | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw86zw | false | null | t3_1mw86zw | /r/LocalLLaMA/comments/1mw86zw/im_gonna_say_it/ | false | false | default | 130 | {'enabled': True, 'images': [{'id': 'gxbn2ofuxckf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/gxbn2ofuxckf1.jpeg?width=108&crop=smart&auto=webp&s=ea9be6bc2b0823e143ade49774475b5395084e4b', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/gxbn2ofuxckf1.jpeg?width=216&crop=smart&auto=w... | ||
Which local model for documentation writing? | 3 | Which model would you guys suggest for going through the code and fixing/writing documentation/comments (Doxygen, markdown)? I don't want it to write code, but go through the code and fix typos in comments, document generic functions, typedefs and such, and make sure it is consistent across the code base. I plan to u... | 2025-08-21T11:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mw84ey/which_local_model_for_documentation_writing/ | Kubas_inko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw84ey | false | null | t3_1mw84ey | /r/LocalLLaMA/comments/1mw84ey/which_local_model_for_documentation_writing/ | false | false | self | 3 | null |
New DeepSeek API pricing: -chat prices increasing, -reasoner prices decreasing | 114 | New API pricing scheme goes into effect on September 5, 2025: https://api-docs.deepseek.com/quick_start/pricing | 2025-08-21T11:09:58 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw7x6f | false | null | t3_1mw7x6f | /r/LocalLLaMA/comments/1mw7x6f/new_deepseek_api_pricing_chat_prices_increasing/ | false | false | default | 114 | {'enabled': True, 'images': [{'id': 'd2xgmwobvckf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/d2xgmwobvckf1.jpeg?width=108&crop=smart&auto=webp&s=94a688cdb5590617945eb5038717bb9cd3a1a1f9', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/d2xgmwobvckf1.jpeg?width=216&crop=smart&auto=w... | |
Constrained Decoding for Diffusion LLMs | 9 | Hey all, I recently developed a constrained decoding technique for Diffusion LLMs. Since these are getting more and more popular, though I might share it here. | 2025-08-21T11:06:32 | https://constrained-diffusion.ai | nielstron | constrained-diffusion.ai | 1970-01-01T00:00:00 | 0 | {} | 1mw7ush | false | null | t3_1mw7ush | /r/LocalLLaMA/comments/1mw7ush/constrained_decoding_for_diffusion_llms/ | false | false | default | 9 | {'enabled': False, 'images': [{'id': 'Qa_5iE_ckoouJFq4tJW7zju2MQH8f_JIysTRmOqsM1A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Qa_5iE_ckoouJFq4tJW7zju2MQH8f_JIysTRmOqsM1A.png?width=108&crop=smart&auto=webp&s=e17cf058a1dc8a391e408d307e6e49f0a85aea1d', 'width': 108}, {'height': 113, 'url': 'h... |
Developing a local coding assistant and providing for it a proprietary library API for code generation | 5 | I’m thinking of building a fully local coding assistant for my M4 Max MacBook Pro with 64 GB RAM that could safely reason over an internal library. The code can’t leave the machine and the code generation must be done locally.
The system should be able to generate code using the API of the internal library and ask nat... | 2025-08-21T10:59:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mw7pug/developing_a_local_coding_assistant_and_providing/ | Few-Pie2809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw7pug | false | null | t3_1mw7pug | /r/LocalLLaMA/comments/1mw7pug/developing_a_local_coding_assistant_and_providing/ | false | false | self | 5 | null |
Just need 20 million more H100s and we will have AGI just trust me | 335 | 2025-08-21T10:42:05 | analgerianabroad | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw7ecz | false | null | t3_1mw7ecz | /r/LocalLLaMA/comments/1mw7ecz/just_need_20_million_more_h100s_and_we_will_have/ | false | false | default | 335 | {'enabled': True, 'images': [{'id': '7hwag17tpckf1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/7hwag17tpckf1.png?width=108&crop=smart&auto=webp&s=650a11f9d803c6557b8a74f195c5102282bdb93c', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/7hwag17tpckf1.png?width=216&crop=smart&auto=webp... | ||
DeepSeek has revealed that the next generation of China-made chips is about to be released | 137 | In an official post on DeepSeek's official WeChat account, DeepSeek further explained that UE8M0 FP8 is designed for the upcoming next-generation domestic chip.
https://preview.redd.it/5j7osgkanckf1.png?width=1205&format=png&auto=webp&s=bad0b7a62ad023889c86de7320fda3c7f4871f03
| 2025-08-21T10:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mw73uz/deepseek_has_revealed_that_the_next_generation_of/ | Dry-Ad8947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw73uz | false | null | t3_1mw73uz | /r/LocalLLaMA/comments/1mw73uz/deepseek_has_revealed_that_the_next_generation_of/ | false | false | 137 | null | |
Introducing Intern-S1-mini, a lightweight version of Intern-S1, which contains an 8B language model and a 0.3B vision encoder. | 39 | 2025-08-21T10:22:21 | https://github.com/InternLM/Intern-S1 | Lynncc6 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mw724q | false | null | t3_1mw724q | /r/LocalLLaMA/comments/1mw724q/introducing_interns1mini_a_lightweight_version_of/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'iJbqIXj4e8d90OQ705I7po4CzD6K5SM0Vr9TFSrm88U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iJbqIXj4e8d90OQ705I7po4CzD6K5SM0Vr9TFSrm88U.png?width=108&crop=smart&auto=webp&s=8e72f9a425e7d082da148061468babcf05437567', 'width': 108}, {'height': 108, 'url': 'h... | ||
Can I run a 4090 along side my ada a5000? | 1 | I have a chance to upgrade my desktop 4090 to a 5090. If I go that route I'd like to add the 4090 to my little AI machine which already has an Ada a5000 GPU. I use the AI machine for Ollama and ComfyUI.
My worry is that I read a post from a while ago that says, due to driver issues, the two GPUs don't work together.
The... | 2025-08-21T10:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mw6rnu/can_i_run_a_4090_along_side_my_ada_a5000/ | LTJC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw6rnu | false | null | t3_1mw6rnu | /r/LocalLLaMA/comments/1mw6rnu/can_i_run_a_4090_along_side_my_ada_a5000/ | false | false | self | 1 | null |
Why does local Upscailing is so bad? | 0 | I tried both the Fooocus upscale and Upscayl upscaling tools, and both lost a lot of detail; the quality of the photo got worse. Am I doing something wrong? Important details just became distorted or completely lost in the image. | 2025-08-21T10:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mw6rmk/why_does_local_upscailing_is_so_bad/ | Brilliant-Piece1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw6rmk | false | null | t3_1mw6rmk | /r/LocalLLaMA/comments/1mw6rmk/why_does_local_upscailing_is_so_bad/ | false | false | self | 0 | null |
Training LLM/VLM from scratch | 3 | Anyone has experience in training small LLM/VLM from scratch? How much VRAM do I need? Thanks. | 2025-08-21T09:55:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mw6lp4/training_llmvlm_from_scratch/ | kitgary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw6lp4 | false | null | t3_1mw6lp4 | /r/LocalLLaMA/comments/1mw6lp4/training_llmvlm_from_scratch/ | false | false | self | 3 | null |
2018 Mac Mini - suggestions for text and web search LLaMa. | 0 | Hello,
I have recently picked up a 2018 Mac mini. It has an i3 3.6 GHz processor with a measly 8 GB of RAM.
Are there any models that would be light enough for general queries that would require web search functionality? The only real output that would be required would be text based, and maybe the odd image but compl... | 2025-08-21T09:46:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mw6ge6/2018_mac_mini_suggestions_for_text_and_web_search/ | ForeignAdagio9169 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw6ge6 | false | null | t3_1mw6ge6 | /r/LocalLLaMA/comments/1mw6ge6/2018_mac_mini_suggestions_for_text_and_web_search/ | false | false | self | 0 | null |
A Marketplace for Ray jobs (training, fine tuning, serving) | 3 | I have been using Ray clusters for a while and have been in the AI infrastructure space for a while now. I see that the folks at Anyscale (Ray's parent company) are offering a hosted paid version of Ray clusters.
I'm considering dedicating resources to offer an open source alternative to their offerings, so developers ca... | 2025-08-21T09:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mw6g7q/a_marketplace_for_ray_jobs_training_fine_tuning/ | Good-Coconut3907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw6g7q | false | null | t3_1mw6g7q | /r/LocalLLaMA/comments/1mw6g7q/a_marketplace_for_ray_jobs_training_fine_tuning/ | false | false | self | 3 | null |
Alibaba DAMO academy's open source lingshu mllm in mobile. | 23 | 2025-08-21T09:20:45 | https://v.redd.it/lz6mg0stbckf1 | Juude89 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw61k1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/lz6mg0stbckf1/DASHPlaylist.mpd?a=1758360059%2CMDE4OTcxOWRjZTA4MGJlZTVmYzY3ZTI5MTVmOWM4NzIxYjA4NWQyNzUxYmY4ZDFiYjMyOTBmZWM2OTlmMzBlYw%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/lz6mg0stbckf1/DASH_720.mp4?source=fallback', 'ha... | t3_1mw61k1 | /r/LocalLLaMA/comments/1mw61k1/alibaba_damo_academys_open_source_lingshu_mllm_in/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'Z3VmNTl6c3RiY2tmMZgTcTNhdU2ZbpWd9zFzEdWGHFuGlIJQwDxKn8SFh4Sa', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Z3VmNTl6c3RiY2tmMZgTcTNhdU2ZbpWd9zFzEdWGHFuGlIJQwDxKn8SFh4Sa.png?width=108&crop=smart&format=pjpg&auto=webp&s=b2ceb974bdae5691f832c763e62e973d67ba... | ||
Having to beg AI to do math for me | 6 | I'm literally about to punch my screen in, gemini refuses to help with anything related to money | 2025-08-21T09:17:56 | BitDaniYT | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw601r | false | null | t3_1mw601r | /r/LocalLLaMA/comments/1mw601r/having_to_beg_ai_to_do_math_for_me/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'g9q1tiruackf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/g9q1tiruackf1.png?width=108&crop=smart&auto=webp&s=4912a587d781a578ffc84a1fd236f3bd6f1e83a0', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/g9q1tiruackf1.png?width=216&crop=smart&auto=we... | |
LiteRP – lightweight open-source frontend for local LLM roleplay | 69 | I’ve been working on a minimal frontend for chatting and roleplay with AI characters, and I’d like to share the first early beta release **LiteRP v0.3**: [https://github.com/Sumrix/LiteRP](https://github.com/Sumrix/LiteRP)
Most roleplay frontends (like SillyTavern) are powerful but heavy and complex to set up. LiteRP ... | 2025-08-21T08:38:38 | sumrix | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw5due | false | null | t3_1mw5due | /r/LocalLLaMA/comments/1mw5due/literp_lightweight_opensource_frontend_for_local/ | false | false | 69 | {'enabled': True, 'images': [{'id': '8O2TQEf1bXqRjIceRTcDxLD29Lzmj8jUrSuAqFDESKw', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/sfwt2l733ckf1.png?width=108&crop=smart&auto=webp&s=6f6eb21c08b9bf790262b7ca524feb9fdb0e810d', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/sfwt2l733ckf1.png... | ||
There is an easy way to dramatically reduce the resources we spend to train LLMs. | 0 | What if I told you that a vast amount of the computation that goes into training every new Large Language Model is completely redundant and could be easily reused? I have been thinking about it for months and finally was able to find the right words this evening. I made this writeup of the ideas I had. I really hope th... | 2025-08-21T08:26:34 | https://medium.com/@AlexeyBorsky/there-is-an-easy-way-to-dramatically-reduce-the-resources-we-spend-to-train-llms-c46b93562319 | Another__one | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1mw5768 | false | null | t3_1mw5768 | /r/LocalLLaMA/comments/1mw5768/there_is_an_easy_way_to_dramatically_reduce_the/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'MrKFoDS5K4-WVS29KykAHp7M1OSvcxQhBT0Qppw4BaE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MrKFoDS5K4-WVS29KykAHp7M1OSvcxQhBT0Qppw4BaE.jpeg?width=108&crop=smart&auto=webp&s=63095ff35498f160d42caf791fea092a14062a38', 'width': 108}, {'height': 120, 'url': '... |
[Discussion] Local LLM labeling with a tiny self-hosted UI — what actually saves time? | 2 | I’m building a **small self-hosted labeler + backend** for text classification datasets (local fine-tunes/eval). Goal: keep accuracy while **cutting human labeling effort**.
**Quick questions for folks doing this locally:**
1. **Stack tips** for speed? (e.g., React/Vue + FastAPI, SQLite/pgvector/FAISS, keyboard-first... | 2025-08-21T08:21:41 | vihanga2001 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw54i8 | false | null | t3_1mw54i8 | /r/LocalLLaMA/comments/1mw54i8/discussion_local_llm_labeling_with_a_tiny/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'kqo0qJjtaDdBKn2w_v2SMMl0xbG4isHCC4hMx4wwrq8', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/d85m2ku61ckf1.png?width=108&crop=smart&auto=webp&s=ff2b18257f63ac6b5853c5c262c524f46fd78fe1', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/d85m2ku61ckf1.pn... | ||
Can we get a 4B-A1B MoE? Or what is the closest to it? | 10 | Thx | 2025-08-21T07:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mw4cp9/can_we_get_a_4ba1b_moe_or_what_is_the_closest_to/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw4cp9 | false | null | t3_1mw4cp9 | /r/LocalLLaMA/comments/1mw4cp9/can_we_get_a_4ba1b_moe_or_what_is_the_closest_to/ | false | false | self | 10 | null |
DeepSeek-V3.1 implements Anthropic API compatibility | 295 | https://api-docs.deepseek.com/guides/anthropic_api | 2025-08-21T06:47:55 | vibedonnie | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw3nat | false | null | t3_1mw3nat | /r/LocalLLaMA/comments/1mw3nat/deepseekv31_implements_anthropic_api_compatibility/ | false | false | 295 | {'enabled': True, 'images': [{'id': 'wrMct5ET5fiyAwN2Ic6pEKBLYJrLCSMG7ZkXoDuPsL4', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0pp8mwjkkbkf1.jpeg?width=108&crop=smart&auto=webp&s=a6fcdb9fd3f1d1f2446bb15bace5a9ef4e8d52c8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0pp8mwjkkbkf1.j... | ||
DeepSeek-V3.1 implements Anthropic API compatibility | 1 | [deleted] | 2025-08-21T06:47:22 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1mw3mz1 | false | null | t3_1mw3mz1 | /r/LocalLLaMA/comments/1mw3mz1/deepseekv31_implements_anthropic_api_compatibility/ | false | false | default | 1 | null | ||
DeepSeek-V3.1 (Thinking and Non Thinking) | 125 | DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode. Compared to the previous version, this upgrade brings improvements in multiple aspects:
* **Hybrid thinking mode**: One model supports both thinking mode and non-thinking mode by changing the chat template.
* **Smarter tool calling... | 2025-08-21T06:43:19 | touhidul002 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mw3kmd | false | null | t3_1mw3kmd | /r/LocalLLaMA/comments/1mw3kmd/deepseekv31_thinking_and_non_thinking/ | false | false | 125 | {'enabled': True, 'images': [{'id': 'PQ22G6uXvlYfaR5gQQU4qW2clZ914VethBKcQLe70UQ', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/131ngchkjbkf1.png?width=108&crop=smart&auto=webp&s=ddd647eb76c2090657a184fc16a6c20954d37800', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/131ngchkjbkf1.png... |