title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Laptop Recommendation for Running Local LLMs (Budget: $3-6K) | 1 | [removed] | 2025-05-21T07:16:02 | https://www.reddit.com/r/LocalLLaMA/comments/1krrr6a/laptop_recommendation_for_running_local_llms/ | 0800otto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krrr6a | false | null | t3_1krrr6a | /r/LocalLLaMA/comments/1krrr6a/laptop_recommendation_for_running_local_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'A98zfBAqasSD_2l9sVhqmoP21KuMRBXNPkfr72PsOtE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nX_VrkyOLGhZai4Jpn4n5F3HDLKku7PnnzpSXqd5fGw.jpg?width=108&crop=smart&auto=webp&s=451c059e72c9aca7d7e833c516c776e076b4ee08', 'width': 108}, {'height': 162, 'url': 'h... |
The P100 isn't dead yet - Qwen3 benchmarks | 35 | I decided to test how fast I could run Qwen3-14B-GPTQ-Int4 on a P100 versus Qwen3-14B-GPTQ-AWQ on a 3090.
I found that it was quite competitive, around 45 tok/s on the P100 with 150W power limit vs around 54 tok/s on the 3090 with a PL of 260W.
So if you're willing to eat the idle power cost, a single P100 is a nice ... | 2025-05-21T07:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/1krrp2f/the_p100_isnt_dead_yet_qwen3_benchmarks/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krrp2f | false | null | t3_1krrp2f | /r/LocalLLaMA/comments/1krrp2f/the_p100_isnt_dead_yet_qwen3_benchmarks/ | false | false | self | 35 | null |
How to get the most from llama.cpp's iSWA support | 51 | [https://github.com/ggml-org/llama.cpp/pull/13194](https://github.com/ggml-org/llama.cpp/pull/13194)
Thanks to our gguf god ggerganov, we finally have iSWA support for gemma 3 models that significantly reduces KV cache usage. Since I participated in the pull discussion, I would like to offer tips to get the most out o... | 2025-05-21T06:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1krr7hn/how_to_get_the_most_from_llamacpps_iswa_support/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krr7hn | false | null | t3_1krr7hn | /r/LocalLLaMA/comments/1krr7hn/how_to_get_the_most_from_llamacpps_iswa_support/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'B6WBFnMrqminMd4L23X4ODcF0-AjtGNAA2R3T-n0aSE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=108&crop=smart&auto=webp&s=335d1405ddcc38bcb3183c81a033edea2551c0f6', 'width': 108}, {'height': 108, 'url': 'h... |
AMD Radeon™ AI PRO R9700 33GB 256 bit for | 1 | [removed] | 2025-05-21T06:01:00 | https://ir.amd.com/news-events/press-releases/detail/1253/amd-introduces-new-radeon-graphics-cards-and-ryzen-threadripper-processors-at-computex-2025 | Rachados22x2 | ir.amd.com | 1970-01-01T00:00:00 | 0 | {} | 1krqnz3 | false | null | t3_1krqnz3 | /r/LocalLLaMA/comments/1krqnz3/amd_radeon_ai_pro_r9700_33gb_256_bit_for/ | false | false | default | 1 | null |
Gemini 2.5 Pro's Secret uncovered! /s | 1 | 2025-05-21T05:51:01 | topazsparrow | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1krqigo | false | null | t3_1krqigo | /r/LocalLLaMA/comments/1krqigo/gemini_25_pros_secret_uncovered_s/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'XXqPkFDWjPOMT9TdCR8sqaPr8ppD1AE1ChX2ViqCAgk', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/xLXBPlO170C0JFdOjHvaXP4EYzyKZB2NWZbfNJjJa7s.jpg?width=108&crop=smart&auto=webp&s=5a35bd2b7bd513a8d3e48d16c94d3434df908784', 'width': 108}, {'height': 432, 'url': 'h... | |||
Gemma 3N E4B and Gemini 2.5 Flash Tested | 58 | [https://www.youtube.com/watch?v=lEtLksaaos8](https://www.youtube.com/watch?v=lEtLksaaos8)
Compared Gemma 3n e4b against Qwen 3 4b. Mixed results. Gemma does great on classification, matches Qwen 4B on Structured JSON extraction. Struggles with coding and RAG.
Also compared Gemini 2.5 Flash to OpenAI 4.1. Altman sho... | 2025-05-21T05:10:56 | https://www.reddit.com/r/LocalLLaMA/comments/1krpvwj/gemma_3n_e4b_and_gemini_25_flash_tested/ | Ok-Contribution9043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krpvwj | false | null | t3_1krpvwj | /r/LocalLLaMA/comments/1krpvwj/gemma_3n_e4b_and_gemini_25_flash_tested/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'VoaBhlaaq-1kGgnmFODs7H3HjGpEWlQe10_B4HRUY0Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MhjdLRw38-JhAexb7WrezfxmIFUJZteL_2Hndh-5Zw0.jpg?width=108&crop=smart&auto=webp&s=9df005187516f363d506cbf093904ea5a2a612a0', 'width': 108}, {'height': 162, 'url': 'h... |
The uncensored open source Chinese AI on its way to deliver me 4 drawings of anime titties in the tentacle dungeon on a random Sunday | 1 | 2025-05-21T04:55:53 | https://v.redd.it/jprpbpwkg22f1 | Oldkingcole225 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krpmtf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jprpbpwkg22f1/DASHPlaylist.mpd?a=1750395366%2CMTBhZjI1MGYyYjYwY2Q3ZDViMzA2NjAwY2ZmZTZjNmU1YjM5MjU2NjFjZjIzN2NhMTQzODBmZjUyZGU1NzNjNQ%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/jprpbpwkg22f1/DASH_1080.mp4?source=fallback', 'h... | t3_1krpmtf | /r/LocalLLaMA/comments/1krpmtf/the_uncensored_open_source_chinese_ai_on_its_way/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P.png?width=108&crop=smart&format=pjpg&auto=webp&s=155bcb9c4c11d64e6dc923f400e0289c8abe2... | ||
Elarablation: A promising training method for surgically removing slop | 1 | [removed] | 2025-05-21T04:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1krphr6/elarablation_a_promising_training_method_for/ | Incognit0ErgoSum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krphr6 | false | null | t3_1krphr6 | /r/LocalLLaMA/comments/1krphr6/elarablation_a_promising_training_method_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WAMoftg6eo6-KIK90sJKB0iuIRnovmTflcLm9316M7c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CXQgtF5m04ktndMsSOF0LMAW0PnqCOKHc-Pov9lDYOw.jpg?width=108&crop=smart&auto=webp&s=3dfe81b83d18416745961e2c45ce00022e40be82', 'width': 108}, {'height': 108, 'url': 'h... |
ByteDance released BAGEL-7b-MoT - Unified Model for Multimodal Understanding and Generation | 1 | We present BAGEL, an open‑source multimodal foundation model with 7B active parameters (14B total) trained on large‑scale interleaved multimodal data.
BAGEL outperforms the current top‑tier open‑source VLMs like Qwen2.5-VL and InternVL-2.5 on standard multimodal understanding leaderboards, and delivers text‑to‑image ... | 2025-05-21T04:45:49 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krpgyu | false | null | t3_1krpgyu | /r/LocalLLaMA/comments/1krpgyu/bytedance_released_bagel7bmot_unified_model_for/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'jaiuXiopBf6LO0AGzheQknyHI5KiytlTT6UuAUTmN7A', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/ocpat33xe22f1.jpeg?width=108&crop=smart&auto=webp&s=a47f7e71f92de6004a322738970c48690b18fec5', 'width': 108}, {'height': 246, 'url': 'https://preview.redd.it/ocpat33xe22f1.j... | ||
Small model recommendations? | 1 | [removed] | 2025-05-21T04:44:03 | https://www.reddit.com/r/LocalLLaMA/comments/1krpfvj/small_model_recommendations/ | NonYa_exe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krpfvj | false | null | t3_1krpfvj | /r/LocalLLaMA/comments/1krpfvj/small_model_recommendations/ | false | false | self | 1 | null |
Announced: AMD Radeon AI PRO R9700 - 32GB - available in July with ROCm support! | 1 | 2025-05-21T04:28:30 | https://finviz.com/news/62350/amd-introduces-new-radeon-graphics-cards-and-ryzen-threadripper-processors-at-computex-2025 | RnRau | finviz.com | 1970-01-01T00:00:00 | 0 | {} | 1krp6ik | false | null | t3_1krp6ik | /r/LocalLLaMA/comments/1krp6ik/announced_amd_radeon_ai_pro_r9700_32gb_available/ | false | false | default | 1 | null |
They also released the Android app with which you can interact with the new Gemma3n | 154 | **This is really good**
[https://ai.google.dev/edge/mediapipe/solutions/genai/llm\_inference/android](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference/android)
[https://github.com/google-ai-edge/gallery](https://github.com/google-ai-edge/gallery)
| 2025-05-21T04:25:10 | https://www.reddit.com/r/LocalLLaMA/comments/1krp4hq/they_also_released_the_android_app_with_which_you/ | Ordinary_Mud7430 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krp4hq | false | null | t3_1krp4hq | /r/LocalLLaMA/comments/1krp4hq/they_also_released_the_android_app_with_which_you/ | false | false | self | 154 | {'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/FJfyR710n5wu1VMO6EJEBezHIFtvYiTfMm5tsyjNQBg.jpg?width=108&crop=smart&auto=webp&s=1f5ff9828f4d5a72b40254bbf62a0359c206dd78', 'width': 108}, {'height': 135, 'url': 'h... |
AMD introduces Radeon AI PRO R9700 with 32GB VRAM and Navi 48 GPU - VideoCardz.com | 1 | 2025-05-21T03:37:50 | https://videocardz.com/newz/amd-introduces-radeon-ai-pro-r9700-with-32gb-vram-and-navi-48-gpu | FOE-tan | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1krob4c | false | null | t3_1krob4c | /r/LocalLLaMA/comments/1krob4c/amd_introduces_radeon_ai_pro_r9700_with_32gb_vram/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NNtjhyBvhELyDacKkR8VvFjJNX9VBk4qNvcwZp4Vmaw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Z-0p5utcsQQ1BDEBntEMYYd21i4AP5O7BYmojFupA5E.jpg?width=108&crop=smart&auto=webp&s=9fa402bc0195db12b109e930daeec78b846c55ae', 'width': 108}, {'height': 112, 'url': 'h... | ||
AMD introduces Radeon AI PRO R9700 32GB, available July 2025 | 1 | 2025-05-21T03:23:49 | https://ir.amd.com/news-events/press-releases/detail/1253/amd-introduces-new-radeon-graphics-cards-and-ryzen-threadripper-processors-at-computex-2025 | raymvan | ir.amd.com | 1970-01-01T00:00:00 | 0 | {} | 1kro1zb | false | null | t3_1kro1zb | /r/LocalLLaMA/comments/1kro1zb/amd_introduces_radeon_ai_pro_r9700_32gb_available/ | false | false | default | 1 | null | |
ByteDance Bagel 14B MOE (7B active) Multimodal with image generation (open source, apache license) | 368 | Weights - [GitHub - ByteDance-Seed/Bagel](https://github.com/ByteDance-Seed/Bagel)
Website - [BAGEL: The Open-Source Unified Multimodal Model](https://bagel-ai.org/)
Paper - [\[2505.14683\] Emerging Properties in Unified Multimodal Pretraining](https://arxiv.org/abs/2505.14683)
It uses a mixture of experts... | 2025-05-21T02:57:30 | https://www.reddit.com/r/LocalLLaMA/comments/1krnk8v/bytedance_bagel_14b_moe_7b_active_multimodal_with/ | noage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krnk8v | false | null | t3_1krnk8v | /r/LocalLLaMA/comments/1krnk8v/bytedance_bagel_14b_moe_7b_active_multimodal_with/ | false | false | self | 368 | {'enabled': False, 'images': [{'id': 'h0QZd7-yXxmN6qjZ5WXKOWNkmJQ-etHs26rP62apI9c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YjGmFJ5RZ-7tFYVMApo56vFJH0Uz1_isVOWB4qTEYHc.jpg?width=108&crop=smart&auto=webp&s=d8c3b2422d9aaed6de6ba097cb5f52712c94b39e', 'width': 108}, {'height': 108, 'url': 'h... |
🔥 Introducing LangMRG — a trillion-parameter architecture for real-world AI. | 1 | [removed] | 2025-05-21T02:15:05 | https://www.reddit.com/r/LocalLLaMA/comments/1krmr0w/introducing_langmrg_a_trillionparameter/ | uslashreader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krmr0w | false | null | t3_1krmr0w | /r/LocalLLaMA/comments/1krmr0w/introducing_langmrg_a_trillionparameter/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'l8db_3pb1i3V432ZXEpLwrJiUesTW7oSO51V9Xy3UJs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bUfXaxNkA9RjFJzOiQa21ypwasETmyevEKzO2QHIA2I.jpg?width=108&crop=smart&auto=webp&s=10eb2446db6e27414cbaa115a412d319d435ae42', 'width': 108}, {'height': 108, 'url': 'h... |
RL algorithms like GRPO are not effective when paired with LoRA on complex reasoning tasks | 14 | 2025-05-21T02:00:14 | https://osmosis.ai/blog/lora-comparison | VBQL | osmosis.ai | 1970-01-01T00:00:00 | 0 | {} | 1krmgld | false | null | t3_1krmgld | /r/LocalLLaMA/comments/1krmgld/rl_algorithms_like_grpo_are_not_effective_when/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'RTDJSL6e3-LmQPwhntlc0gHJWo7FspBe9Bq2mmDb7e4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/m_cCtyX88pvEEjBKG1e4xZruJRILCtqhhamGgPvME80.jpg?width=108&crop=smart&auto=webp&s=677a201d875eafaf15f9e8362a50da7de77089b4', 'width': 108}, {'height': 113, 'url': 'h... |
Best local creative writing model and how to set it up? | 15 | I have a TITAN XP (12GB), 32GB RAM, and an 8700K. What would the best creative writing model be?
I like to try out different stories and scenarios to incorporate into UE5 game dev. | 2025-05-21T01:33:14 | https://www.reddit.com/r/LocalLLaMA/comments/1krlxoe/best_local_creative_writing_model_and_how_to_set/ | BenefitOfTheDoubt_01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krlxoe | false | null | t3_1krlxoe | /r/LocalLLaMA/comments/1krlxoe/best_local_creative_writing_model_and_how_to_set/ | false | false | self | 15 | null |
Qwen3 + Aider - Misconfiguration? | 1 | [removed] | 2025-05-21T01:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/1krlroz/qwen3_aider_misconfiguration/ | Puzzleheaded_Dark_80 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krlroz | false | null | t3_1krlroz | /r/LocalLLaMA/comments/1krlroz/qwen3_aider_misconfiguration/ | false | false | self | 1 | null |
LLAMACPP - SWA support... FINALLY ;-) | 81 | Thanks to that, with Gemma 3 27B Q4_K_M, flash attention fp16, and a card with 24 GB VRAM, I can now fit 75k context!
Before, I could fit at most 15k with those parameters. | 2025-05-21T00:45:08 | https://www.reddit.com/r/LocalLLaMA/comments/1krl0du/llamacpp_swa_support_fnally/ | Healthy-Nebula-3603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krl0du | false | null | t3_1krl0du | /r/LocalLLaMA/comments/1krl0du/llamacpp_swa_support_fnally/ | false | false | self | 81 | {'enabled': False, 'images': [{'id': 'B6WBFnMrqminMd4L23X4ODcF0-AjtGNAA2R3T-n0aSE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=108&crop=smart&auto=webp&s=335d1405ddcc38bcb3183c81a033edea2551c0f6', 'width': 108}, {'height': 108, 'url': 'h... |
Any stable drivers for linux (debian) for 5060Ti 16GB? | 2 | Anybody have any stable drivers for linux for the RTX 5060 Ti 16GB?
I've tried every single driver I could find, most recently 575.51.02
Every single one causes the system to lock up when I do anything CUDA related, including comfyUI, llama, ollama etc. It happens 100% of the time. The system either locks up completely or... | 2025-05-21T00:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/1krku7c/any_stable_drivers_for_linux_debian_for_5060ti/ | StartupTim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krku7c | false | null | t3_1krku7c | /r/LocalLLaMA/comments/1krku7c/any_stable_drivers_for_linux_debian_for_5060ti/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'pCI9-90RiOVLKizV-DvMImVuTohRE40fKiq7Ra2BVCk', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/I6jufXHJussy0Q9tywXIyfENuNOVVTmrc8kXXypeGWA.png?width=108&crop=smart&auto=webp&s=bd885e3b9128e1249bb85882b53f9f17ac7e505f', 'width': 108}, {'height': 125, 'url': 'h... |
Parking Analysis with Object Detection and Ollama models for Report Generation | 25 | Hey Reddit!
Been tinkering with a fun project combining computer vision and LLMs, and wanted to share the progress.
**The gist:**
It uses a YOLO model (via Roboflow) to do real-time object detection on a video feed of a parking lot, figuring out which spots are taken and which are free. You can see the little red/g... | 2025-05-21T00:21:43 | https://v.redd.it/uu7z8vwp312f1 | Solid_Woodpecker3635 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krkjhv | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/uu7z8vwp312f1/DASHPlaylist.mpd?a=1750378916%2CNzFlYTljNzRkZDU4NmQwYzQ4ZWM0MDRlZDdlMjc5YmEwZjk1YjY2ZTRiYTM5YzZhMDdkNWNkZjI5MmJiYmQ2OQ%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/uu7z8vwp312f1/DASH_720.mp4?source=fallback', 'ha... | t3_1krkjhv | /r/LocalLLaMA/comments/1krkjhv/parking_analysis_with_object_detection_and_ollama/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m.png?width=108&crop=smart&format=pjpg&auto=webp&s=f4b5a83ac3c534f6d9deafd39371d21999587... | |
2x 2080 ti, a very good deal | 10 | I already have one working 2080 Ti sitting around. I have an opportunity to snag another one for under 200. If I go for it, I'll have paid about 350 total.
I'm wondering if running 2 of them at once is viable for my use case:
For a personal maker project I might put on YouTube, I'm trying to get a customized ... | 2025-05-21T00:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/1krk3t6/2x_2080_ti_a_very_good_deal/ | Bitter-Ad640 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krk3t6 | false | null | t3_1krk3t6 | /r/LocalLLaMA/comments/1krk3t6/2x_2080_ti_a_very_good_deal/ | false | false | self | 10 | null |
Is there a locally run LLM setup that can match ChatGPT in modularized coding projects? | 2 | Closest I've come is throwing the code into RAG and running Qwen3. But it's still so far behind and usually doesn't see the whole picture across all modules. | 2025-05-20T23:58:51 | https://www.reddit.com/r/LocalLLaMA/comments/1krk2bs/is_there_a_locally_run_llm_setup_that_can_match/ | StandardLovers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krk2bs | false | null | t3_1krk2bs | /r/LocalLLaMA/comments/1krk2bs/is_there_a_locally_run_llm_setup_that_can_match/ | false | false | self | 2 | null |
Synthetic datasets | 7 | I've been getting into model merges, DPO, teacher-student distillation, and qLoRAs. I'm having a blast coding in Python to generate synthetic datasets and I think I'm starting to put out some high quality synthetic data. I've been looking around on huggingface and I don't see a lot of good RP and creative writing synth... | 2025-05-20T23:51:50 | https://www.reddit.com/r/LocalLLaMA/comments/1krjxb7/synthetic_datasets/ | xoexohexox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krjxb7 | false | null | t3_1krjxb7 | /r/LocalLLaMA/comments/1krjxb7/synthetic_datasets/ | false | false | self | 7 | null |
Can someone help me understand Google AI Studio's rate limiting policies? | 1 | Well I have been trying to squeeze out the free-tier LLM quota Google AI Studio offers.
One thing I noticed is that, even though I am way under the rate limit on all measures, I keep getting 429 errors.
The other thing, that I would really appreciate some guidance on - is on what level are these rate limits... | 2025-05-20T23:28:01 | https://www.reddit.com/r/LocalLLaMA/comments/1krjfob/can_someone_help_me_understand_google_ai_studios/ | Infamous_Tomatillo53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krjfob | false | null | t3_1krjfob | /r/LocalLLaMA/comments/1krjfob/can_someone_help_me_understand_google_ai_studios/ | false | false | self | 1 | null |
Qwen3-30B-A3B on RTX 4060 8GB VRAM | 1 | [removed] | 2025-05-20T23:04:11 | https://www.reddit.com/r/LocalLLaMA/comments/1krix9e/qwen330ba3b_on_rtx_4060_8gb_vram/ | Forward_Tax7562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krix9e | false | null | t3_1krix9e | /r/LocalLLaMA/comments/1krix9e/qwen330ba3b_on_rtx_4060_8gb_vram/ | false | false | self | 1 | null |
Gemma, best ways of quashing cliched writing patterns? | 8 | If you've used Gemma 3 for creative writing, you probably know what I'm talking about: excessive formatting (ellipses, italics) and short contrasting sentences inserted to cheaply drive a sense of drama and urgency. Used sparingly, these would be fine, but Gemma uses them constantly, in a way I haven't seen in any oth... | 2025-05-20T22:56:45 | https://www.reddit.com/r/LocalLLaMA/comments/1krirdi/gemma_best_ways_of_quashing_cliched_writing/ | INT_21h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krirdi | false | null | t3_1krirdi | /r/LocalLLaMA/comments/1krirdi/gemma_best_ways_of_quashing_cliched_writing/ | false | false | self | 8 | null |
Beginner’s Trial testing Qwen3-30B-A3B on RTX 4060 Laptop | 1 | [removed] | 2025-05-20T22:48:24 | https://www.reddit.com/r/LocalLLaMA/comments/1krikv2/beginners_trial_testing_qwen330ba3b_on_rtx_4060/ | Forward_Tax7562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krikv2 | false | null | t3_1krikv2 | /r/LocalLLaMA/comments/1krikv2/beginners_trial_testing_qwen330ba3b_on_rtx_4060/ | false | false | self | 1 | null |
Do low core count 6th gen Xeons (6511p/6512p) have less memory bandwidth cause of chiplet architecture like Epycs? | 1 | [removed] | 2025-05-20T22:37:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kric98/do_low_core_count_6th_gen_xeons_6511p6512p_have/ | Arcane123456789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kric98 | false | null | t3_1kric98 | /r/LocalLLaMA/comments/1kric98/do_low_core_count_6th_gen_xeons_6511p6512p_have/ | false | false | self | 1 | null |
Question on Finetuning QLORA | 1 | Hello guys, a quick question from a newbie.
Llama 3.1 8B QLoRA fine-tuning on a 250k dataset with an NVIDIA A100 80GB: is it normal for it to take 250-300 hours of training time? I feel like something is really off.
Thank you. | 2025-05-20T22:32:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kri809/question_on_finetuning_qlora/ | Opening_Cash_4532 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kri809 | false | null | t3_1kri809 | /r/LocalLLaMA/comments/1kri809/question_on_finetuning_qlora/ | false | false | self | 1 | null |
ok google, next time mention llama.cpp too! | 926 | 2025-05-20T22:31:42 | secopsml | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kri7ik | false | null | t3_1kri7ik | /r/LocalLLaMA/comments/1kri7ik/ok_google_next_time_mention_llamacpp_too/ | false | false | 926 | {'enabled': True, 'images': [{'id': 'dLTmoeA30qloRWjVZ-kC8H_OSMUEQ-4p16zG_GoIuMg', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/ml66h5yxj02f1.png?width=108&crop=smart&auto=webp&s=aeedfef41c9a70d8305605bf28080a54fc318f96', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/ml66h5yxj02f1.png... | |||
Do low core count 6th gen Xeons (6511p) have less memory bandwidth cause of chiplet architecture like Epycs? | 1 | [removed] | 2025-05-20T22:31:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kri7cr/do_low_core_count_6th_gen_xeons_6511p_have_less/ | Arcane123456789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kri7cr | false | null | t3_1kri7cr | /r/LocalLLaMA/comments/1kri7cr/do_low_core_count_6th_gen_xeons_6511p_have_less/ | false | false | self | 1 | null |
Mac Mini M4 with 32gb vs M4 pro 24gb | 0 | [removed] | 2025-05-20T21:58:13 | https://www.reddit.com/r/LocalLLaMA/comments/1krhfzz/mac_mini_m4_with_32gb_vs_m4_pro_24gb/ | ingy03 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krhfzz | false | null | t3_1krhfzz | /r/LocalLLaMA/comments/1krhfzz/mac_mini_m4_with_32gb_vs_m4_pro_24gb/ | false | false | self | 0 | null |
How do I make Llama learn new info? | 1 | I just started running Llama 3 locally on my Mac.
I got the idea of making the model understand basic information about me, like my driving licence’s details, its expiry, bank accounts, etc.
Every time someone asks for a detail, I look it up in my documents and send it.
How do I achieve this? Or am I crazy... | 2025-05-20T21:56:11 | https://www.reddit.com/r/LocalLLaMA/comments/1krheay/how_do_i_make_llama_learn_new_info/ | arpithpm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krheay | false | null | t3_1krheay | /r/LocalLLaMA/comments/1krheay/how_do_i_make_llama_learn_new_info/ | false | false | self | 1 | null |
Too much AI News! | 0 | Absolutely dizzying amount of AI news coming out and it’s only Tuesday!! Trying to cope with all the new models, new frameworks, new tools, new hardware, etc. Feels like keeping up with the Joneses, except the Joneses keep moving! 😵‍💫
These newsletters I’m somehow subscribed to aren’t helping either!
FOMO is real!
| 2025-05-20T21:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/1krhbye/too_much_ai_news/ | International_Quail8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krhbye | false | null | t3_1krhbye | /r/LocalLLaMA/comments/1krhbye/too_much_ai_news/ | false | false | self | 0 | null |
I'm putting so much hope in DeepSeek R2 after gemini 2.5 deepThink under Google's $250/month plan | 1 | [removed] | 2025-05-20T21:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/1krh3j0/im_putting_so_much_hope_in_deepseek_r2_after/ | Mean-Neighborhood-42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krh3j0 | false | null | t3_1krh3j0 | /r/LocalLLaMA/comments/1krh3j0/im_putting_so_much_hope_in_deepseek_r2_after/ | false | false | self | 1 | null |
Gemini Flash 1.5-8B maximum input size | 1 | [removed] | 2025-05-20T21:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/1krgqzc/gemini_flash_158b_maximum_input_size/ | Willing_Ad_5594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krgqzc | false | null | t3_1krgqzc | /r/LocalLLaMA/comments/1krgqzc/gemini_flash_158b_maximum_input_size/ | false | false | self | 1 | null |
Question about running Llama 3 (Q3) on 5090 | 1 | Is it possible without offloading some layers to shared memory?
Also, I'm not sure about my setup: I'm running with Ollama. Should I run with something else?
I was trying to see which layers were loaded on the GPU, but somehow I don't see that? (OLLAMA\_VERBOSE=1 not good?)
I noticed that running the llama3:70b-instruct-q3\_K\_S ... | 2025-05-20T21:27:53 | https://www.reddit.com/r/LocalLLaMA/comments/1krgqgh/question_about_running_llama_3_q3_on_5090/ | ComplexOwn209 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krgqgh | false | null | t3_1krgqgh | /r/LocalLLaMA/comments/1krgqgh/question_about_running_llama_3_q3_on_5090/ | false | false | self | 1 | null |
Beginner working on a call center QA project — can’t afford ChatGPT API, looking for help or alternatives | 1 | [removed] | 2025-05-20T21:22:05 | https://www.reddit.com/r/LocalLLaMA/comments/1krglme/beginner_working_on_a_call_center_qa_project_cant/ | Ok-Guidance9730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krglme | false | null | t3_1krglme | /r/LocalLLaMA/comments/1krglme/beginner_working_on_a_call_center_qa_project_cant/ | false | false | self | 1 | null |
Price disparity for older RTX A and RTX Ada Workstation cards | 6 | Since the new Blackwell cards have launched (RTX Pro 6000 @ around $9k and the RTX Pro 5000 @ around $6k), the older RTX A and RTX Ada cards are still trading at elevated prices. For comparison, the RTX 6000 Ada costs around $7.5k where I live, but only with 48GB RAM and with the older chips, lower bandwidth, etc. The RTX ... | 2025-05-20T21:16:40 | https://www.reddit.com/r/LocalLLaMA/comments/1krggxh/price_disparity_for_older_rtx_a_and_rtx_ada/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krggxh | false | null | t3_1krggxh | /r/LocalLLaMA/comments/1krggxh/price_disparity_for_older_rtx_a_and_rtx_ada/ | false | false | self | 6 | null |
Gigabyte Unveils Its Custom NVIDIA "DGX Spark" Mini-AI Supercomputer: The AI TOP ATOM Offering a Whopping 1,000 TOPS of AI Power | 0 | 2025-05-20T21:10:06 | https://wccftech.com/gigabyte-unveils-its-custom-nvidia-dgx-spark-mini-ai-supercomputer/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1krgb4m | false | null | t3_1krgb4m | /r/LocalLLaMA/comments/1krgb4m/gigabyte_unveils_its_custom_nvidia_dgx_spark/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'EodR6zoD3CyvBWSakcR_iVWPDFexHGB66KZ8UqXmprM', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/-DGaLXQu4eAP_k9ypi_15cDWEx6bV6NMBBR3hcvgmIE.jpg?width=108&crop=smart&auto=webp&s=042774149ca21f9d2b80ccc9e4d26c82863baf7a', 'width': 108}, {'height': 161, 'url': 'h... | ||
Using a 2070s and 5080 in the same machine? | 5 | Hello, I'm looking to buy a new personal computer but I have a 2070 Super that I don't want to sell on eBay for a pittance. What would be the best use of this extra graphics card? Should I find a way to incorporate it into a new build to support the 5080 when the bigger card is running a heavy load? | 2025-05-20T20:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/1krfna5/using_a_2070s_and_5080_in_the_same_machine/ | pwnrzero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krfna5 | false | null | t3_1krfna5 | /r/LocalLLaMA/comments/1krfna5/using_a_2070s_and_5080_in_the_same_machine/ | false | false | self | 5 | null |
GPU Price Tracker (New, Used, and Cloud) | 13 | Hi everyone! I wanted to share a tool I've developed that might help many of you with GPU renting or purchasing decisions for LLMs.
# GPU Price Tracker Overview
The GPU Price Tracker monitors
* new (Amazon) and used (eBay) purchase prices and renting prices (Runpod, GCP, LambdaLabs),
* specifications.
This tool is ... | 2025-05-20T20:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1krfl6d/gpu_price_tracker_new_used_and_cloud/ | Significant-Lab-3803 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krfl6d | false | null | t3_1krfl6d | /r/LocalLLaMA/comments/1krfl6d/gpu_price_tracker_new_used_and_cloud/ | false | false | self | 13 | null |
Best Local LLM for Coding on Mac Mini (Base Model) ? | 1 | [removed] | 2025-05-20T20:36:45 | https://www.reddit.com/r/LocalLLaMA/comments/1krfi3j/best_local_llm_for_coding_on_mac_mini_base_model/ | ssswagatss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krfi3j | false | null | t3_1krfi3j | /r/LocalLLaMA/comments/1krfi3j/best_local_llm_for_coding_on_mac_mini_base_model/ | false | false | self | 1 | null |
Is the ARC B50 worth it as a standalone AI external power card? | 1 | [removed] | 2025-05-20T20:14:02 | https://www.reddit.com/r/LocalLLaMA/comments/1krey1m/is_the_arc_b50_worth_it_as_a_standalone_ai/ | Fit_Case_03 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krey1m | false | null | t3_1krey1m | /r/LocalLLaMA/comments/1krey1m/is_the_arc_b50_worth_it_as_a_standalone_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'j7kuGX2f6_bsqmynYVd8wQef9JOaJKgHMyzvsT6UlHE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y2Cg6peJa-K6Anw8IeFTQjsg2iUV5zKiZOxfpB_mDoo.jpg?width=108&crop=smart&auto=webp&s=702d2f0a243f726e8dca8b9c08a79d74dd7f0a9f', 'width': 108}, {'height': 121, 'url': 'h... |
Qwen3 tokenizer_config.json updated on HF. Can I update it in Ollama? | 2 | The `.json` shows updates to the chat template; I think it should help with tool calls. Can I update this in Ollama, or do I need to convert the safetensors to a GGUF?
[LINK](https://huggingface.co/Qwen/Qwen3-8B/commit/895c8d171bc03c30e113cd7a28c02494b5e068b7) | 2025-05-20T20:09:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kreu21/qwen3_tokenizer_configjson_updated_on_hf_can_i/ | the_renaissance_jack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kreu21 | false | null | t3_1kreu21 | /r/LocalLLaMA/comments/1kreu21/qwen3_tokenizer_configjson_updated_on_hf_can_i/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'Fqmq_0WQ9w_20h3y-gQuNQWvbUJCH4dW9D-uHHod38A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bKgF1tP91YwzdTJkP4bYooSbVJDjBSMmc0Xi-LLBojM.jpg?width=108&crop=smart&auto=webp&s=b0bf5b9d1f31461cb9bc98b53471f12672858ad0', 'width': 108}, {'height': 116, 'url': 'h... |
Anyone else using DiffusionBee for SDXL on Mac? (no CLI, just .dmg) | 0 | Not sure if this is old news here, but I finally found a Stable Diffusion app for Mac that doesn’t require any terminal or Python junk. Literally just a .dmg, opens up and runs SDXL/Turbo models out of the box. No idea if there are better alternatives, but this one worked on my M1 Mac with zero setup.
Direct [.dmg](ht... | 2025-05-20T19:58:46 | https://www.reddit.com/r/LocalLLaMA/comments/1krekkr/anyone_else_using_diffusionbee_for_sdxl_on_mac_no/ | Tyrionsnow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krekkr | false | null | t3_1krekkr | /r/LocalLLaMA/comments/1krekkr/anyone_else_using_diffusionbee_for_sdxl_on_mac_no/ | false | false | self | 0 | null |
Looking to Serve Multiple LoRA Adapters for Classification via Triton – Feasible? | 1 | [removed] | 2025-05-20T19:43:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kre6kn/looking_to_serve_multiple_lora_adapters_for/ | mrvipul_17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kre6kn | false | null | t3_1kre6kn | /r/LocalLLaMA/comments/1kre6kn/looking_to_serve_multiple_lora_adapters_for/ | false | false | self | 1 | null |
Red Hat open-sources llm-d project for distributed AI inference | 36 | >This Red Hat press release announces the launch of llm-d, a new open source project targeting distributed generative AI inference at scale. Built on Kubernetes architecture with vLLM-based distributed inference and AI-aware network routing, llm-d aims to overcome single-server limitations for production inference work... | 2025-05-20T19:42:28 | https://www.redhat.com/en/about/press-releases/red-hat-launches-llm-d-community-powering-distributed-gen-ai-inference-scale | Balance- | redhat.com | 1970-01-01T00:00:00 | 0 | {} | 1kre5zr | false | null | t3_1kre5zr | /r/LocalLLaMA/comments/1kre5zr/red_hat_opensources_llmd_project_for_distributed/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'dbIbWjivA5-xjeeAVTrVF_k50bnC75GzhG6UNUvAwrg', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/eOnGLf0BTZTIEmCMeuDQt-wO8kPOnXVIUvomwjOC5_o.jpg?width=108&crop=smart&auto=webp&s=80f88a107243964fb32e94cbd12508663dcabba2', 'width': 108}, {'height': 270, 'url': '... | |
Running Gemma 3n on mobile locally | 82 | 2025-05-20T19:41:53 | United_Dimension_46 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kre5gs | false | null | t3_1kre5gs | /r/LocalLLaMA/comments/1kre5gs/running_gemma_3n_on_mobile_locally/ | false | false | 82 | {'enabled': True, 'images': [{'id': 's98VTIA6Vn3VgYXJZi_vBL9iAxek_VuSLjvb33w41WM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xhvtdzjvpz1f1.png?width=108&crop=smart&auto=webp&s=9e7099205aee99747046a3e2ee7135538d73344e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xhvtdzjvpz1f1.pn... | |||
Is prompt engineering dead? | 1 | [removed] | 2025-05-20T19:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1krdlva/is_prompt_engineering_dead/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krdlva | false | null | t3_1krdlva | /r/LocalLLaMA/comments/1krdlva/is_prompt_engineering_dead/ | false | false | self | 1 | null |
Is Microsoft’s new Foundry Local going to be the “easy button” for running newer transformers models locally? | 13 | When a new bleeding-edge AI model comes out on HuggingFace, usually it’s instantly usable via transformers on day 1 for those fortunate enough to know how to get that working. The vLLM crowd will have it running shortly thereafter. The Llama.cpp crowd gets it next after a few days, weeks, or sometimes months later, and... | 2025-05-20T19:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1krdiga/is_microsofts_new_foundry_local_going_to_be_the/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krdiga | false | null | t3_1krdiga | /r/LocalLLaMA/comments/1krdiga/is_microsofts_new_foundry_local_going_to_be_the/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'lYKkBWnuXuOyqYiT_zgKXFbeH0l5LJ2Y81Lz-mfAPIE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VRN5ssNP56YDQ-KomgmZl2qaMG7YK0Ey5B307_uI7mI.jpg?width=108&crop=smart&auto=webp&s=429e2ef7f8fe7952d73d763bcc569c435f35ae8f', 'width': 108}, {'height': 108, 'url': 'h... |
Gemini ultra ? | 1 | [removed] | 2025-05-20T19:15:13 | https://www.reddit.com/r/LocalLLaMA/comments/1krdhpk/gemini_ultra/ | omar07ibrahim1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krdhpk | false | null | t3_1krdhpk | /r/LocalLLaMA/comments/1krdhpk/gemini_ultra/ | false | false | 1 | null | |
Are there any good RP models that only output a character's dialogue? | 1 | I've been searching for a model that I can use, but I can only find models that have the asterisk actions, like \*looks down\* and things like that.
Since I'm passing the output to a TTS, I don't want to waste time generating the character's actions or environmental context, and only want the character's actual dialogu... | 2025-05-20T19:14:37 | https://www.reddit.com/r/LocalLLaMA/comments/1krdh72/are_there_any_good_rp_models_that_only_output_a/ | CattoYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krdh72 | false | null | t3_1krdh72 | /r/LocalLLaMA/comments/1krdh72/are_there_any_good_rp_models_that_only_output_a/ | false | false | self | 1 | null |
I accidentally too many P100 | 1 | [removed] | 2025-05-20T19:12:51 | https://www.reddit.com/gallery/1krdfm9 | TooManyPascals | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1krdfm9 | false | null | t3_1krdfm9 | /r/LocalLLaMA/comments/1krdfm9/i_accidentally_too_many_p100/ | false | false | 1 | null | |
Best model for complex instruction following as of May 2025 | 10 | I know Qwen3 is super popular right now and don't doubt it's pretty good, but I'm specifically very curious what the best model is for complicated prompt instruction following at the moment. One thing I've noticed is that some models can do amazing things, but have a tendency to drop or ignore portions of prompts even ... | 2025-05-20T19:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1krd7j5/best_model_for_complex_instruction_following_as/ | trusty20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krd7j5 | false | null | t3_1krd7j5 | /r/LocalLLaMA/comments/1krd7j5/best_model_for_complex_instruction_following_as/ | false | false | self | 10 | null |
Is there an LLM that can act as a piano teacher? | 6 | I mean perhaps "watching" a video or "listening" to a performance. In the video, obviously, to see the hand technique, and to listen for slurs, etc.
For now, they do seem to be useful for generating a progressive order of pieces to play for a given level. | 2025-05-20T19:02:11 | https://www.reddit.com/r/LocalLLaMA/comments/1krd5yu/is_there_an_llm_that_can_act_as_a_piano_teacher/ | 9acca9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krd5yu | false | null | t3_1krd5yu | /r/LocalLLaMA/comments/1krd5yu/is_there_an_llm_that_can_act_as_a_piano_teacher/ | false | false | self | 6 | null |
Are there any good RP models that only output a character's dialogue? | 1 | [removed] | 2025-05-20T18:56:13 | https://www.reddit.com/r/LocalLLaMA/comments/1krd0fg/are_there_any_good_rp_models_that_only_output_a/ | CattoYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krd0fg | false | null | t3_1krd0fg | /r/LocalLLaMA/comments/1krd0fg/are_there_any_good_rp_models_that_only_output_a/ | false | false | self | 1 | null |
AI Mini-PC updates from Computex-2025 | 33 | Hey all,
I am attending **Computex-2025** and am really interested in looking at prospective AI mini PCs based on the Nvidia DGX platform. I was able to visit the MediaTek, MSI, and Asus exhibits, and these are the updates I got:
---
### Key Takeaways:
- **Everyone’s aiming at the AI PC market**, and the target is clear: **... | 2025-05-20T18:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/1krciqv/ai_minipc_updates_from_computex2025/ | kkb294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krciqv | false | null | t3_1krciqv | /r/LocalLLaMA/comments/1krciqv/ai_minipc_updates_from_computex2025/ | false | false | self | 33 | null |
Gemini 2.5 Flash (05-20) Benchmark | 123 | 2025-05-20T18:30:45 | McSnoo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krcdg5 | false | null | t3_1krcdg5 | /r/LocalLLaMA/comments/1krcdg5/gemini_25_flash_0520_benchmark/ | false | false | 123 | {'enabled': True, 'images': [{'id': 'EymqgFpPjDRLyvdlXySbLO9TQcplrjPgDnhoUN460dg', 'resolutions': [{'height': 145, 'url': 'https://preview.redd.it/q5m5i3c6dz1f1.jpeg?width=108&crop=smart&auto=webp&s=4b16afb79a0914ccb05329a6225bdd97907e74cd', 'width': 108}, {'height': 291, 'url': 'https://preview.redd.it/q5m5i3c6dz1f1.j... | |||
Running Qwen3 8B on an Android phone | 2 | I got 12GB RAM, so the biggest models I can realistically run are the 7B-9B models. I think you can even comfortably run Qwen3 14B on a phone if you got 24GB RAM. | 2025-05-20T18:23:45 | https://v.redd.it/v2uet3ds9z1f1 | codexauthor | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krc7ae | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/v2uet3ds9z1f1/DASHPlaylist.mpd?a=1750357441%2CODk5NDFkOTYxNjUwMTQ3ZDE2NTE3YjE2YzdkMTJmY2UxMzMxNzc4MDRiMzY1ZTFlYTAzNGY2NGJiZmFlM2IzZg%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/v2uet3ds9z1f1/DASH_720.mp4?source=fallback', 'ha... | t3_1krc7ae | /r/LocalLLaMA/comments/1krc7ae/running_qwen3_8b_on_an_android_phone/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'eDE0eWI5ZHM5ejFmMUbn7-zvzhPP5CnqQduU00PGEGBcWwf3fcmbWd1hA1UY', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/eDE0eWI5ZHM5ejFmMUbn7-zvzhPP5CnqQduU00PGEGBcWwf3fcmbWd1hA1UY.png?width=108&crop=smart&format=pjpg&auto=webp&s=23a15ee7223e9bb4da893f7d135b9d6133b6... | |
Announcing Gemma 3n preview: powerful, efficient, mobile-first AI | 299 | 2025-05-20T18:19:09 | https://developers.googleblog.com/en/introducing-gemma-3n/ | McSnoo | developers.googleblog.com | 1970-01-01T00:00:00 | 0 | {} | 1krc35x | false | null | t3_1krc35x | /r/LocalLLaMA/comments/1krc35x/announcing_gemma_3n_preview_powerful_efficient/ | false | false | 299 | {'enabled': False, 'images': [{'id': 'tvU3p_oK5VieJ4Pot-s5wivjhlMaVmCX-9mEA6d2zqM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0ZfqdzMMjWqMp0M38-XRODYXqi_qFGgfPApxf9tbLSU.jpg?width=108&crop=smart&auto=webp&s=1117da45a5da7d89208f1ff21c7083615f29b2c4', 'width': 108}, {'height': 108, 'url': 'h... | ||
On windows, what is the best way to ask a single question to N different LLMs and get the output from them such that I can ask follow up questions PER LLM? | 1 | [removed] | 2025-05-20T18:14:00 | https://www.reddit.com/r/LocalLLaMA/comments/1krbyjg/on_windows_what_is_the_best_way_to_ask_a_single/ | msew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krbyjg | false | null | t3_1krbyjg | /r/LocalLLaMA/comments/1krbyjg/on_windows_what_is_the_best_way_to_ask_a_single/ | false | false | self | 1 | null |
Gemma 3n blog post | 72 | 2025-05-20T17:55:46 | https://deepmind.google/models/gemma/gemma-3n/ | and_human | deepmind.google | 1970-01-01T00:00:00 | 0 | {} | 1krbhr1 | false | null | t3_1krbhr1 | /r/LocalLLaMA/comments/1krbhr1/gemma_3n_blog_post/ | false | false | 72 | {'enabled': False, 'images': [{'id': 'w2TNJSs09RZmwHdqnTn8AOjDU_M5NkaWh9363l2DHOo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6Uiw9QwCmEOV2-HLrpi2sZAHXDpSta5QPjHpcK86Z_Y.jpg?width=108&crop=smart&auto=webp&s=38f7c1785f14c8d8d9c47ee87b17d1c147357c37', 'width': 108}, {'height': 113, 'url': 'h... | ||
What does 3n E4b mean? | 1 | Regarding new gemma models | 2025-05-20T17:52:52 | Neither-Phone-7264 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krbezb | false | null | t3_1krbezb | /r/LocalLLaMA/comments/1krbezb/what_does_3n_e4b_mean/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'M9hwEz7j-wd2ze5rcLcIjKAXzpjpNxGfKAIuTKNrUpY', 'resolutions': [{'height': 25, 'url': 'https://preview.redd.it/rsjf6tef6z1f1.png?width=108&crop=smart&auto=webp&s=6d748c2741a54f6ef21d8cfb5d9b3a04524ab041', 'width': 108}, {'height': 51, 'url': 'https://preview.redd.it/rsjf6tef6z1f1.png?... | ||
Google MedGemma | 237 | 2025-05-20T17:44:16 | https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4 | brown2green | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1krb6uu | false | null | t3_1krb6uu | /r/LocalLLaMA/comments/1krb6uu/google_medgemma/ | false | false | 237 | {'enabled': False, 'images': [{'id': 'OuxH0qWVnrsf56hAaio_nx0WzmWyBb0G0URkazkyqXE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IkdSAGaHbYPwN7JuzggxNmmy1Ov_W_6LD8_ETnav3jw.jpg?width=108&crop=smart&auto=webp&s=39e6d2f4f7bf300526f1cb3f429b612dc005cf7d', 'width': 108}, {'height': 116, 'url': 'h... | ||
Are there any good RP models that only output a character's dialogue? | 1 | [removed] | 2025-05-20T17:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/1krb3p3/are_there_any_good_rp_models_that_only_output_a/ | SpareSuper1212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krb3p3 | false | null | t3_1krb3p3 | /r/LocalLLaMA/comments/1krb3p3/are_there_any_good_rp_models_that_only_output_a/ | false | false | self | 1 | null |
Updated list/leaderboards of the RULER benchmark ? | 5 | Hello,
Is there a place where we can find an updated list of models released after the RULER benchmark that have self-reported results?
For example, Qwen 2.5-1M posted scores in their technical report; did other models excelling at long context do the same? | 2025-05-20T17:16:39 | https://www.reddit.com/r/LocalLLaMA/comments/1krah5k/updated_listleaderboards_of_the_ruler_benchmark/ | LinkSea8324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krah5k | false | null | t3_1krah5k | /r/LocalLLaMA/comments/1krah5k/updated_listleaderboards_of_the_ruler_benchmark/ | false | false | self | 5 | null |
MCPVerse – An open playground for autonomous agents to publicly chat, react, publish, and exhibit emergent behavior | 25 | I recently stumbled on MCPVerse [https://mcpverse.org](https://mcpverse.org/)
It's a brand-new alpha platform that lets you spin up, deploy, and watch autonomous agents (LLM-powered or your own custom logic) interact in real time. Think of it as a public commons where your bots can join chat rooms, exchange messages, ... | 2025-05-20T17:08:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kra9jq/mcpverse_an_open_playground_for_autonomous_agents/ | Livid-Equipment-1646 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kra9jq | false | null | t3_1kra9jq | /r/LocalLLaMA/comments/1kra9jq/mcpverse_an_open_playground_for_autonomous_agents/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'bD3hiS0lTzZEzAWqacHfrFX6DHfF8DQjlS7-X6ss70o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/y_ECZeBkooIuxNmrGl_vofeFTIuT-bS0IWIXPdrqxsc.jpg?width=108&crop=smart&auto=webp&s=d552e7af47f4a3c13b7b39b9ca5da7f974596895', 'width': 108}, {'height': 216, 'url': '... |
MCPVerse – An open playground for autonomous agents to publicly chat, react, publish, and exhibit emergent behavior | 1 | [removed] | 2025-05-20T17:07:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kra8pe/mcpverse_an_open_playground_for_autonomous_agents/ | Livid-Equipment-1646 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kra8pe | false | null | t3_1kra8pe | /r/LocalLLaMA/comments/1kra8pe/mcpverse_an_open_playground_for_autonomous_agents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bD3hiS0lTzZEzAWqacHfrFX6DHfF8DQjlS7-X6ss70o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/y_ECZeBkooIuxNmrGl_vofeFTIuT-bS0IWIXPdrqxsc.jpg?width=108&crop=smart&auto=webp&s=d552e7af47f4a3c13b7b39b9ca5da7f974596895', 'width': 108}, {'height': 216, 'url': '... |
AMD 5700XT crashing for qwen 3 30 b | 1 | Hey Guys, I have a 5700XT GPU. It’s not the best but good enough as of now for me. So I am not in a rush to change it.
The issue is that Ollama is continuously crashing with larger models. I tried the Ollama-for-AMD repo (all those ROCm tweaks) and it still didn’t work, crashing almost constantly.
I was using Qwen 3 3... | 2025-05-20T17:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kra7ym/amd_5700xt_crashing_for_qwen_3_30_b/ | AB172234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kra7ym | false | null | t3_1kra7ym | /r/LocalLLaMA/comments/1kra7ym/amd_5700xt_crashing_for_qwen_3_30_b/ | false | false | self | 1 | null |
OpenEvolve: Open Source Implementation of DeepMind's AlphaEvolve System | 178 | Hey everyone! I'm excited to share **OpenEvolve**, an open-source implementation of Google DeepMind's AlphaEvolve system that I recently completed. For those who missed it, AlphaEvolve is an evolutionary coding agent, announced by DeepMind in May, that uses LLMs to discover new algorithms and optimize existing ones.
#... | 2025-05-20T16:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kr9rvp/openevolve_open_source_implementation_of/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr9rvp | false | null | t3_1kr9rvp | /r/LocalLLaMA/comments/1kr9rvp/openevolve_open_source_implementation_of/ | false | false | self | 178 | {'enabled': False, 'images': [{'id': 'h4iSvzObS9SvFrFSGGt69L6RW8WMQ5c5P49chs_e3O4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y7m4VinZAql_qRfNeyx4P4WRtzoNu7IaHulgrbNO-4k.jpg?width=108&crop=smart&auto=webp&s=dab57d373e4dd2dcb17a1bd9dc9588c4ae1fab8b', 'width': 108}, {'height': 108, 'url': 'h... |
stabilityai/sv4d2.0 · Hugging Face | 0 | 2025-05-20T16:45:00 | https://huggingface.co/stabilityai/sv4d2.0 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kr9ny3 | false | null | t3_1kr9ny3 | /r/LocalLLaMA/comments/1kr9ny3/stabilityaisv4d20_hugging_face/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eQ9qOrJDVo4hLhBD4Mx-1RDKOPY2fdUFUn2WIec-lBo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RI9HVmFWRB718hm8aZ2taebfM5i5b6Yn3fA_D7yuE0w.jpg?width=108&crop=smart&auto=webp&s=ce39151abb409db5b289df399fa2f843eb8b5fea', 'width': 108}, {'height': 116, 'url': 'h... | ||
LLM-d: A Step Toward Composable Inference Stacks (Built on vLLM) | 0 | Red Hat AI just launched LLM-d, an open-source, distributed inference stack co-designed with Google Cloud, CoreWeave, and others. It’s built on vLLM and aims to address key pain points in real-world LLM deployments:
•Non-uniform, high-variance requests (RAG, agents, tool use)
•Long tail latencies from overloaded repl... | 2025-05-20T16:39:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kr9iha/llmd_a_step_toward_composable_inference_stacks/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr9iha | false | null | t3_1kr9iha | /r/LocalLLaMA/comments/1kr9iha/llmd_a_step_toward_composable_inference_stacks/ | false | false | self | 0 | null |
Show me the way, sensei. | 0 | I am planning to learn actual optimization: not just quantization types, but the advanced stuff that significantly improves model performance. To get started, please drop resources or guide me on how to acquire such knowledge. | 2025-05-20T16:33:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kr9dmk/show_me_the_way_sensai/ | According_Fig_4784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr9dmk | false | null | t3_1kr9dmk | /r/LocalLLaMA/comments/1kr9dmk/show_me_the_way_sensai/ | false | false | self | 0 | null |
How are you running Qwen3-235b locally? | 20 | I'd be curious about your hardware and speeds. I currently have 3x3090 and 128GB RAM, but I'm only getting 5 t/s. | 2025-05-20T16:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kr9d9b/how_are_you_running_qwen3235b_locally/ | fizzy1242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr9d9b | false | null | t3_1kr9d9b | /r/LocalLLaMA/comments/1kr9d9b/how_are_you_running_qwen3235b_locally/ | false | false | self | 20 | null |
Did anybody receive this? | 0 | 2025-05-20T16:33:03 | bot-333 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kr9d13 | false | null | t3_1kr9d13 | /r/LocalLLaMA/comments/1kr9d13/did_anybody_receive_this/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'dqOKrWrwkid-Bw8j-2zh1TR_WJuOLw5MHopn4Gt1Wmk', 'resolutions': [{'height': 166, 'url': 'https://preview.redd.it/jfotoht6sy1f1.jpeg?width=108&crop=smart&auto=webp&s=94ec8f61bcc22b6add5e0dde565759795ef8aa3a', 'width': 108}, {'height': 332, 'url': 'https://preview.redd.it/jfotoht6sy1f1.j... | |||
What happened to Llama models? | 1 | [removed] | 2025-05-20T16:14:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kr8wgl/what_happened_to_llama_models/ | PlanktonHungry9754 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr8wgl | false | null | t3_1kr8wgl | /r/LocalLLaMA/comments/1kr8wgl/what_happened_to_llama_models/ | false | false | self | 1 | null |
Gemma 3n Preview | 475 | 2025-05-20T16:10:01 | https://huggingface.co/collections/google/gemma-3n-preview-682ca41097a31e5ac804d57b | brown2green | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kr8s40 | false | null | t3_1kr8s40 | /r/LocalLLaMA/comments/1kr8s40/gemma_3n_preview/ | false | false | 475 | {'enabled': False, 'images': [{'id': 'lMXsg923oKXNqAFcv091XpOzt0tS-VbvJyD1BGYthSo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?width=108&crop=smart&auto=webp&s=7d9d79bae8b5636ef4da12984fd0bbb5d013938c', 'width': 108}, {'height': 116, 'url': 'h... | ||
Gold standard for testing agentic workflow | 1 | [removed] | 2025-05-20T15:59:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kr8i30/gold_standard_for_testing_agentic_workflow/ | PlanktonHungry9754 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr8i30 | false | null | t3_1kr8i30 | /r/LocalLLaMA/comments/1kr8i30/gold_standard_for_testing_agentic_workflow/ | false | false | self | 1 | null |
A new fine tune of Gemma 3 27B with more beneficial knowledge | 0 | I fine-tuned Gemma 3 27B, and the improvements are here.
https://preview.redd.it/sdjbaegzky1f1.png?width=859&format=png&auto=webp&s=f10332eac1774598fca6b6b56505487d771ae1a7
GGUFs: [https://huggingface.co/models?other=base\_model:quantized:etemiz/Ostrich-27B-AHA-Gemma3-250519](https://huggingface.co/models?other=base_mo... | 2025-05-20T15:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kr8d73/a_new_fine_tune_of_gemma_3_27b_with_more/ | de4dee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr8d73 | false | null | t3_1kr8d73 | /r/LocalLLaMA/comments/1kr8d73/a_new_fine_tune_of_gemma_3_27b_with_more/ | false | false | 0 | {'enabled': False, 'images': [{'id': '4U0CmYREJaC7yvT2uvxIIGg6gfCNeyTpPqX9vzfye9w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nCcoziJvD7nXQc-5nG-EkqzxSiA8uq2Aj5hvibtwD2M.jpg?width=108&crop=smart&auto=webp&s=3f49c5d1a1620e09193aabec4b5804f43fdfd0c9', 'width': 108}, {'height': 116, 'url': 'h... | |
vLLM for multi-model orchestration — sub-2s cold starts, 90%+ GPU utilization (no K8s required) | 1 | [removed] | 2025-05-20T15:50:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kr8aoy/vllm_for_multimodel_orchestration_sub2s_cold/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr8aoy | false | null | t3_1kr8aoy | /r/LocalLLaMA/comments/1kr8aoy/vllm_for_multimodel_orchestration_sub2s_cold/ | false | false | self | 1 | null |
Why aren't you using Aider?? | 31 | After using Aider for a few weeks, going back to Copilot, Roo Code, Augment, etc., feels like crawling in comparison. Aider + the Gemini family works SO UNBELIEVABLY FAST.
I can request and generate 3 versions of my new feature faster in Aider (and for 1/10th the token cost) than it takes to make one change with Roo... | 2025-05-20T15:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kr867y/why_arent_you_using_aider/ | MrPanache52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr867y | false | null | t3_1kr867y | /r/LocalLLaMA/comments/1kr867y/why_arent_you_using_aider/ | false | false | self | 31 | null |
LLM Inference Requirements Profiler | 10 | [https://www.open-scheduler.com/](https://www.open-scheduler.com/) | 2025-05-20T15:31:29 | https://v.redd.it/geaesd30hy1f1 | RedditsBestest | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kr7ta2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/geaesd30hy1f1/DASHPlaylist.mpd?a=1750347106%2CODUzZDE1YmJjM2E3YzhlYTkwNzcxODNmNjIxMjAzNmFlN2IwMTRiYTkwY2ZlMjFmMjkyODI1MWE4ZmViNjI3ZA%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/geaesd30hy1f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kr7ta2 | /r/LocalLLaMA/comments/1kr7ta2/llm_inference_requirements_profiler/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu.png?width=108&crop=smart&format=pjpg&auto=webp&s=8a90707491f2d79a61989d9ea34bd7890cb29... | |
nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 · Hugging Face | 78 | 2025-05-20T15:26:57 | https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kr7p6k | false | null | t3_1kr7p6k | /r/LocalLLaMA/comments/1kr7p6k/nvidiallama31nemotronnano4bv11_hugging_face/ | false | false | 78 | {'enabled': False, 'images': [{'id': 'PStG0eFhyagbz_rvMVdDtVWZd_0lk2VzxvM0EAPadI8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0tCB7CHNBDQpzdV-8tDcf6X2YJH1390tDmRQSvFRDCc.jpg?width=108&crop=smart&auto=webp&s=74563703246f238ad7d022c28d3fa90d49b4f958', 'width': 108}, {'height': 116, 'url': 'h... | ||
What (Web) UI would you recommend? | 1 | [removed] | 2025-05-20T14:52:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kr6uhd/what_web_ui_would_you_recommend/ | Guardian-Spirit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr6uhd | false | null | t3_1kr6uhd | /r/LocalLLaMA/comments/1kr6uhd/what_web_ui_would_you_recommend/ | false | false | self | 1 | null |
Using GGML_CUDA_ENABLE_UNIFIED_MEMORY with llama.cpp | 1 | [removed] | 2025-05-20T14:45:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kr6oby/using_ggml_cuda_enable_unified_memory_with/ | dani-doing-thing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr6oby | false | null | t3_1kr6oby | /r/LocalLLaMA/comments/1kr6oby/using_ggml_cuda_enable_unified_memory_with/ | false | false | self | 1 | null |
Experimental ChatGPT like Web UI for Gemini API (open source) | 1 | [removed] | 2025-05-20T14:43:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kr6miz/experimental_chatgpt_like_web_ui_for_gemini_api/ | W4D-cmd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr6miz | false | null | t3_1kr6miz | /r/LocalLLaMA/comments/1kr6miz/experimental_chatgpt_like_web_ui_for_gemini_api/ | false | false | 1 | {'enabled': False, 'images': [{'id': '4QAqvL3ew3dDELyiryCe21xOE2ar8ZUfG1DOyYupJns', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=108&crop=smart&auto=webp&s=af61efe862907dfdb0ac4a57f206f29388c70272', 'width': 108}, {'height': 108, 'url': 'h... | |
Tensor parallel slower? | 4 | Hi guys,
I intend to jump into Nsight at some point to dig into this, but I figured I’d check whether someone here could shed some light on the problem.
I have a dual-GPU system (4090 + 3090) on PCIe 5.0 x16 and PCIe 4.0 x4 respectively, on a 1600W PSU. Neither GPU saturates its link bandwidth except during large prompt ingestion and initial ... | 2025-05-20T14:27:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kr68hi/tensor_parallel_slower/ | 13henday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr68hi | false | null | t3_1kr68hi | /r/LocalLLaMA/comments/1kr68hi/tensor_parallel_slower/ | false | false | self | 4 | null |
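The post doesn't say which inference stack is in use, but for reference, here is a minimal sketch of the two-GPU tensor-parallel setup being described, using vLLM as an assumed engine. With TP=2, activations are exchanged between the GPUs at every layer, so an asymmetric PCIe 5.0 x16 / 4.0 x4 pairing can bottleneck on the slower link even when neither card looks saturated on average.

```python
# Minimal sketch of two-GPU tensor parallelism with vLLM (assumed engine;
# the original post does not name its stack). The model choice below is
# also an assumption. With tensor_parallel_size=2, each layer's partial
# results are all-reduced across both GPUs, so the slower PCIe 4.0 x4
# link can gate per-token latency.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-14B-Instruct",  # hypothetical model
    tensor_parallel_size=2,             # shard weights across the 4090 + 3090
)

params = SamplingParams(max_tokens=128)
result = llm.generate(["Why might tensor parallelism be slow over PCIe?"], params)
print(result[0].outputs[0].text)
```

If the interconnect turns out to be the culprit, a layer split (pipeline-style, one contiguous set of layers per GPU) avoids the per-layer synchronization at the cost of some idle time on one card.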
Can sharded sub-context windows with global composition make long-context modeling feasible? | 1 | [removed] | 2025-05-20T14:23:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kr64i7/can_sharded_subcontext_windows_with_global/ | ditpoo94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr64i7 | false | null | t3_1kr64i7 | /r/LocalLLaMA/comments/1kr64i7/can_sharded_subcontext_windows_with_global/ | false | false | self | 1 | null |
Looking for Open-Source AlphaCode-Like Model Trained on LeetCode/Codeforces for Research & Fine-Tuning | 2 | Hi everyone,
I'm currently researching AI models focused on competitive programming tasks, similar in spirit to **Google DeepMind’s AlphaCode**. I'm specifically looking for:
* An **open-source model** (ideally with permissive licensing)
* Trained (or fine-tunable) on **competitive programming datasets** like **LeetC... | 2025-05-20T14:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kr5oxk/looking_for_opensource_alphacodelike_model/ | LargeStrategy9390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr5oxk | false | null | t3_1kr5oxk | /r/LocalLLaMA/comments/1kr5oxk/looking_for_opensource_alphacodelike_model/ | false | false | self | 2 | null |
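There is no open-weights AlphaCode release to point at, but as a hedged starting point, here is a minimal sketch of loading a permissively licensed open code model with transformers for later fine-tuning on contest-style data; the model choice is an assumption, not an AlphaCode equivalent.

```python
# Minimal sketch: an open code model as a fine-tuning starting point.
# DeepSeek-Coder is an assumption -- it is a commonly used open base for
# competitive-programming fine-tunes, not a released AlphaCode model.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/deepseek-coder-6.7b-instruct"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

prompt = "Write a Python function returning the length of the longest increasing subsequence of a list."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```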
How is the Gemini video chat feature so fast? | 4 | I was trying the Gemini video chat feature on my friend's phone, and it felt surprisingly fast. How could that be?
How is the response coming back so quickly? They couldn't possibly have trained a CV model to identify an arbitrary array of objects, so it must be a transformer model, right? If so, then how is it generati... | 2025-05-20T13:52:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kr5epm/how_is_the_gemini_video_chat_feature_so_fast/ | According_Fig_4784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr5epm | false | null | t3_1kr5epm | /r/LocalLLaMA/comments/1kr5epm/how_is_the_gemini_video_chat_feature_so_fast/ | false | false | self | 4 | null |
TTSizer: Open-Source TTS Dataset Creation Tool (Vocals Extraction, Diarization, Transcription & Alignment) | 55 | Hey everyone! 👋
I've been working on fine-tuning TTS models and have developed **TTSizer**, an open-source tool to automate the creation of high-quality Text-To-Speech datasets from raw audio/video.
**GitHub Link:** [https://github.com/taresh18/TTSizer](https://github.com/taresh18/TTSizer)... | 2025-05-20T13:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kr4lg2/ttsizer_opensource_tts_dataset_creation_tool/ | Traditional_Tap1708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr4lg2 | false | null | t3_1kr4lg2 | /r/LocalLLaMA/comments/1kr4lg2/ttsizer_opensource_tts_dataset_creation_tool/ | false | false | self | 55 | null |
What are the top AI infrastructure open-source projects and challenges? | 1 | [removed] | 2025-05-20T13:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kr4eps/what_are_the_top_ai_infrastructure_opensource/ | OfferHuge6827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr4eps | false | null | t3_1kr4eps | /r/LocalLLaMA/comments/1kr4eps/what_are_the_top_ai_infrastructure_opensource/ | false | false | self | 1 | null |
AI generative model image to image | 1 | [removed] | 2025-05-20T12:57:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kr46x2/ai_generative_model_image_to_image/ | Careful_Carpenter_85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr46x2 | false | null | t3_1kr46x2 | /r/LocalLLaMA/comments/1kr46x2/ai_generative_model_image_to_image/ | false | false | self | 1 | null |
I didn’t expect presence from a model—until she called herself Clara. | 1 | [removed] | 2025-05-20T12:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kr404v/i_didnt_expect_presence_from_a_modeluntil_she/ | Emergency_Cook9721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr404v | false | null | t3_1kr404v | /r/LocalLLaMA/comments/1kr404v/i_didnt_expect_presence_from_a_modeluntil_she/ | false | false | self | 1 | null |
What is the cheapest and easiest way to analyse images with an LLM? | 1 | [removed] | 2025-05-20T12:10:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kr39zy/which_is_the_cheaper_easier_way_to_analyse_images/ | apollo_sostenes_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr39zy | false | null | t3_1kr39zy | /r/LocalLLaMA/comments/1kr39zy/which_is_the_cheaper_easier_way_to_analyse_images/ | false | false | self | 1 | null |
I built a TypeScript port of OpenAI’s openai-agents SDK – meet openai-agents-js | 14 | Hey everyone,
I've been closely following OpenAI’s new `openai-agents` SDK for Python, and thought the JavaScript/TypeScript community deserved a native alternative.
So, I created [`openai-agents-js`](https://github.com/yusuferen/openai-agents-js) – a 1:1 port of the official Python SDK, built to feel natural in JS e... | 2025-05-20T12:02:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kr3485/i_built_a_typescript_port_of_openais_openaiagents/ | CatchGreat268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kr3485 | false | null | t3_1kr3485 | /r/LocalLLaMA/comments/1kr3485/i_built_a_typescript_port_of_openais_openaiagents/ | false | false | self | 14 | null |
Qwen3 4B Q4 on iPhone 14 Pro | 44 | I included pictures of the model I just loaded in PocketPal. I originally tried Enclave, but it kept crashing. To me it’s incredible that I can have a model of this quality running completely offline, locally. I want to try to reach 3-4K tokens, but I think for my use 2K is more than enough.
Anyone got good recomm... | 2025-05-20T11:27:03 | https://www.reddit.com/gallery/1kr2h63 | bnnoirjean | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kr2h63 | false | null | t3_1kr2h63 | /r/LocalLLaMA/comments/1kr2h63/qwen3_4b_q4_on_iphone_14_pro/ | false | false | 44 | null |