title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best Local LLM on a 16GB MacBook Pro M4 | 0 | Hi! I'm looking to run a local LLM on a MacBook Pro M4 with 16GB of RAM. My intended use cases are creative writing (brainstorming ideas for some stories), some psychological reasoning (to help make the narrative plausible and relatable), and possibly some coding in JavaScript or with Godot for so... | 2025-05-21T21:03:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ks92uu/best_local_llm_on_a_16gb_macbook_pro_m4/ | combo-user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks92uu | false | null | t3_1ks92uu | /r/LocalLLaMA/comments/1ks92uu/best_local_llm_on_a_16gb_macbook_pro_m4/ | false | false | self | 0 | null |
Google's new Text Diffusion model explained, and why it matters for LocalLLaMA | 1 | [removed] | 2025-05-21T20:57:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ks8xnx/googles_new_text_diffusion_model_explained_and/ | amapleson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks8xnx | false | null | t3_1ks8xnx | /r/LocalLLaMA/comments/1ks8xnx/googles_new_text_diffusion_model_explained_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'A8pZ1jc8VsvBE9_QRAwDJNx28Ocv2Dnv_rNfhgi6nEQ', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/UKaL__HlNkC06VtxCnSqS-6ULeMna6lMu5DWwmwHadA.jpg?width=108&crop=smart&auto=webp&s=16c440f9f5ab482df84dba7d4d22ebb17566aacd', 'width': 108}, {'height': 204, 'url': '... | |
Has anybody built a chatbot for tons of pdf‘s with high accuracy yet? | 1 | [removed] | 2025-05-21T20:30:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ks89je/has_anybody_built_a_chatbot_for_tons_of_pdfs_with/ | Melodic_Conflict_831 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks89je | false | null | t3_1ks89je | /r/LocalLLaMA/comments/1ks89je/has_anybody_built_a_chatbot_for_tons_of_pdfs_with/ | false | false | self | 1 | null |
EVO X2 Qwen3 32B Q4 benchmark please | 3 | Anyone with the EVO X2 able to test the performance of Qwen3 32B Q4? Ideally with standard context and with 128K max context size. | 2025-05-21T20:27:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ks87oi/evo_x2_qwen3_32b_q4_benchmark_please/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks87oi | false | null | t3_1ks87oi | /r/LocalLLaMA/comments/1ks87oi/evo_x2_qwen3_32b_q4_benchmark_please/ | false | false | self | 3 | null |
I need help with SLMs | 0 | I tried running many SLMs, including Phi-3 Mini and more. So far I've tried llama.cpp and ONNX Runtime to run them on Android and iOS. I also heard about Google's recent Gemma 3n release.
I've spent a lot of time on this. Please help me move forward, because I haven't gotten any good results in terms of performance.
What my expectation... | 2025-05-21T20:08:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ks7qgk/i_need_help_with_slms/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks7qgk | false | null | t3_1ks7qgk | /r/LocalLLaMA/comments/1ks7qgk/i_need_help_with_slms/ | false | false | self | 0 | null |
I added semantic search to our codebase with RAG. No infra, no vector DB, 50 lines of Python. | 1 | We’ve been exploring ways to make our codebase more searchable for both humans and LLM agents. Standard keyword search doesn’t cut it when trying to answer questions like:
* “Where is ingestion triggered?”
* “How are retries configured in Airflow?”
* “Where do we handle usage tracking?”
We didn’t want to maintain emb... | 2025-05-21T20:06:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ks7oss/i_added_semantic_search_to_our_codebase_with_rag/ | superconductiveKyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks7oss | false | null | t3_1ks7oss | /r/LocalLLaMA/comments/1ks7oss/i_added_semantic_search_to_our_codebase_with_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IzWJYiFwjzG4A1U_z4otwvXPIUCx93ztClVJ5X6VtWA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Zdi-XkgFKYATZe6kH61ARNNlrbi8KYmb5byDmF5A0nk.jpg?width=108&crop=smart&auto=webp&s=aa724089a7f8c7c73b0f8b53f2550d520e6efa7b', 'width': 108}, {'height': 113, 'url': 'h... |
Llama.cpp vs onnx runtime | 4 | What's better in terms of performance for both Android and iOS?
Also, has anyone tried Gemma 3n by Google? Would love to know about it | 2025-05-21T20:06:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ks7obu/llamacpp_vs_onnx_runtime/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks7obu | false | null | t3_1ks7obu | /r/LocalLLaMA/comments/1ks7obu/llamacpp_vs_onnx_runtime/ | false | false | self | 4 | null |
Falcon-H1 by tiiuae. | 3 | Falcon-H1 seems to perform very decently, even better than Gemma 3. The 0.5B and 1.5B are impressive, especially on math questions.
Any thoughts about the bigger models from your experience with it? Thoughts on the architecture? | 2025-05-21T19:54:31 | ilyas555 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ks7dht | false | null | t3_1ks7dht | /r/LocalLLaMA/comments/1ks7dht/falconh1_by_tiiuae/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'BYiVouPSFbmQlsi7nqdglAvU9l-khKSAva4DrPa17wA', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/pqhnjil1x62f1.jpeg?width=108&crop=smart&auto=webp&s=75fc10eff3c379ff1b37935851fb1ff747112c06', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/pqhnjil1x62f1.jp... | ||
What is tps of qwen3 30ba3b on igpu 780m? | 1 | I'm looking to get a home server that can host Qwen3 30B-A3B, and I'm considering a mini PC with a 780M and 64GB DDR5 RAM, or Mac Mini options with at least 32GB RAM.
Does anyone have a 780M who can test the speeds, prompt processing and token generation, using llama.cpp or vLLM (if it even works on an iGPU)? | 2025-05-21T19:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ks6mlc/what_is_tps_of_qwen3_30ba3b_on_igpu_780m/ | Zyguard7777777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks6mlc | false | null | t3_1ks6mlc | /r/LocalLLaMA/comments/1ks6mlc/what_is_tps_of_qwen3_30ba3b_on_igpu_780m/ | false | false | self | 1 | null |
Looking for Advice for a Reliable Linux AI Workstation and Gaming | 1 | [removed] | 2025-05-21T19:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ks68lq/looking_for_advices_for_a_reliable_linux_ai/ | MondoGao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks68lq | false | null | t3_1ks68lq | /r/LocalLLaMA/comments/1ks68lq/looking_for_advices_for_a_reliable_linux_ai/ | false | false | self | 1 | null |
Falcon-H1 performance. | 1 | [removed] | 2025-05-21T19:03:02 | MentionAgitated8682 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ks63tw | false | null | t3_1ks63tw | /r/LocalLLaMA/comments/1ks63tw/falconh1_performance/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'f18o0iKoAwjVEcmdpeJiiLMG36xXuUNy6inV7lIkDD8', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/kkdx3itun62f1.jpeg?width=108&crop=smart&auto=webp&s=7ff2416b05b27a220b530e9c9a4a03ed2966bd50', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/kkdx3itun62f1.jp... | ||
NVLink On 2x 3090 Question | 4 | Hello all. I recently got access to 2x RTX 3090 FEs as well as a 4-slot official NVLink bridge connector. I am planning on using this in Linux for AI research and development. I am wondering if there is any motherboard requirement to be able to use NVLink on Linux? It is hard enough to find a motherboard with the right... | 2025-05-21T19:01:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ks6236/nvlink_on_2x_3090_question/ | Skye7821 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks6236 | false | null | t3_1ks6236 | /r/LocalLLaMA/comments/1ks6236/nvlink_on_2x_3090_question/ | false | false | self | 4 | null |
Best ai for Chinese to English translation? | 1 | [removed] | 2025-05-21T18:54:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ks5wdx/best_ai_for_chinese_to_english_translation/ | Civil_Candidate_824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks5wdx | false | null | t3_1ks5wdx | /r/LocalLLaMA/comments/1ks5wdx/best_ai_for_chinese_to_english_translation/ | false | false | self | 1 | null |
Broke down and bought a Mac Mini - my processes run 5x faster | 87 | I ran my process on my $850 Beelink Ryzen 9 32GB machine - the process calls my 8g LLM 42 times during the run - and it took 4 hours and 18 minutes. The Mac Mini with an M4 Pro chip and 24GB memory took 47 minutes.
It’s a keeper - I’m returning my Beelink. That unified memory in the Mac used hal... | 2025-05-21T18:50:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ks5sh4/broke_down_and_bought_a_mac_mini_my_processes_run/ | ETBiggs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks5sh4 | false | null | t3_1ks5sh4 | /r/LocalLLaMA/comments/1ks5sh4/broke_down_and_bought_a_mac_mini_my_processes_run/ | false | false | self | 87 | null |
Reliable function calling with vLLM | 2 | Hi all,
we're experimenting with function calling using open-source models served through vLLM, and we're struggling to get reliable outputs for most agentic use cases.
So far, we've tried: LLaMA 3.3 70B (both vanilla and fine-tuned by Watt-ai for tool use) and Gemma 3 27B.
For LLaMA, we experimented with both the JS... | 2025-05-21T18:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ks5oxb/reliable_function_calling_with_vllm/ | mjf-89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks5oxb | false | null | t3_1ks5oxb | /r/LocalLLaMA/comments/1ks5oxb/reliable_function_calling_with_vllm/ | false | false | self | 2 | null |
Devstral with vision support (from ngxson) | 23 | [https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF](https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF)
Just sharing in case people did not notice (version with vision "re-added"). Did not test yet but will do that soon. | 2025-05-21T18:45:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ks5nul/devstral_with_vision_support_from_ngxson/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks5nul | false | null | t3_1ks5nul | /r/LocalLLaMA/comments/1ks5nul/devstral_with_vision_support_from_ngxson/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'm0-I3lsSbmMtSf-Bnz_2X_1TVtuh-tbWaKroW5KWCec', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LquwuxAIjWiQrDO0hGeOBS10nRcmudBN1QF7bkdxtzc.jpg?width=108&crop=smart&auto=webp&s=5639c1a88d7a1be5f21fe637272d2f0f9dae89c5', 'width': 108}, {'height': 116, 'url': 'h...
Bosgame M5 AI Mini PC - $1699 | AMD Ryzen AI Max+ 395, 128gb LPDDR5, and 2TB SSD | 10 | https://www.bosgamepc.com/products/bosgame-m5-ai-mini-desktop-ryzen-ai-max-395
| 2025-05-21T18:42:02 | https://www.bosgamepc.com/products/bosgame-m5-ai-mini-desktop-ryzen-ai-max-395 | policyweb | bosgamepc.com | 1970-01-01T00:00:00 | 0 | {} | 1ks5kpe | false | null | t3_1ks5kpe | /r/LocalLLaMA/comments/1ks5kpe/bosgame_m5_ai_mini_pc_1699_amd_ryzen_ai_max_395/ | false | false | default | 10 | null |
Perchance RP/RPG story interface for local model? | 4 | 2025-05-21T18:14:26 | Shockbum | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ks4vql | false | null | t3_1ks4vql | /r/LocalLLaMA/comments/1ks4vql/perchance_rprpg_story_interface_for_local_model/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'j147zoA2bk_BqU9e6UNIbq8t5IY3IFoo9BdOY6kMP_U', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/h8nikytye62f1.jpeg?width=108&crop=smart&auto=webp&s=217c7f86a44106a85974ff043c9804e69f46cd9e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/h8nikytye62f1.j... | |||
Batch AI agents | 1 | [removed] | 2025-05-21T18:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ks4kbr/batch_ai_agents/ | jain-nivedit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks4kbr | false | null | t3_1ks4kbr | /r/LocalLLaMA/comments/1ks4kbr/batch_ai_agents/ | false | false | self | 1 | null |
Platform for batch AI agents | 1 | [removed] | 2025-05-21T17:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ks4g5r/platform_for_batch_ai_agents/ | jain-nivedit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks4g5r | false | null | t3_1ks4g5r | /r/LocalLLaMA/comments/1ks4g5r/platform_for_batch_ai_agents/ | false | false | self | 1 | null |
Introducing Exosphere: Platform for batch AI agents! | 1 | [removed] | 2025-05-21T17:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ks4e29/introducing_exosphere_platform_for_batch_ai_agents/ | jain-nivedit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks4e29 | false | null | t3_1ks4e29 | /r/LocalLLaMA/comments/1ks4e29/introducing_exosphere_platform_for_batch_ai_agents/ | false | false | self | 1 | null |
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware!) - Open Source | 1 | [removed] | 2025-05-21T17:54:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ks4duq/tokilake_selfhost_a_private_ai_cloud_for_your/ | Wise_Day4455 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks4duq | false | null | t3_1ks4duq | /r/LocalLLaMA/comments/1ks4duq/tokilake_selfhost_a_private_ai_cloud_for_your/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SWsR6BF8B-wCJpnAlQOh4Gv6RwJLVcBzaQgItQJahMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=108&crop=smart&auto=webp&s=f5522e7afee672a20964c788b66486037d7d69e9', 'width': 108}, {'height': 108, 'url': 'h... |
Arc pro b60 48gb vram | 14 | https://videocardz.com/newz/maxsun-unveils-arc-pro-b60-dual-turbo-two-battlemage-gpus-48gb-vram-and-400w-power
| 2025-05-21T17:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ks47wv/arc_pro_b60_48gb_vram/ | zathras7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks47wv | false | null | t3_1ks47wv | /r/LocalLLaMA/comments/1ks47wv/arc_pro_b60_48gb_vram/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'GAmv1Z-0vEGZvysNr_W7sYt4MzDAstpGPPZMsg_bJ6Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3yPGHghVMzYHAb5HXyT6mKs83tOiC4QzGnzAcUM9BIg.jpg?width=108&crop=smart&auto=webp&s=45651f84e192227cc5fd2b75e4ab4aa94615da2d', 'width': 108}, {'height': 112, 'url': 'h... |
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware) - Open Source | 1 | [removed] | 2025-05-21T17:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ks425p/tokilake_selfhost_a_private_ai_cloud_for_your/ | Wise_Day4455 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks425p | false | null | t3_1ks425p | /r/LocalLLaMA/comments/1ks425p/tokilake_selfhost_a_private_ai_cloud_for_your/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SWsR6BF8B-wCJpnAlQOh4Gv6RwJLVcBzaQgItQJahMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=108&crop=smart&auto=webp&s=f5522e7afee672a20964c788b66486037d7d69e9', 'width': 108}, {'height': 108, 'url': 'h... |
Starting a Discord for Local LLM Deployment Enthusiasts | 1 | [removed] | 2025-05-21T17:35:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ks3w6b/starting_a_discord_for_local_llm_deployment/ | tagrib | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks3w6b | false | null | t3_1ks3w6b | /r/LocalLLaMA/comments/1ks3w6b/starting_a_discord_for_local_llm_deployment/ | false | false | self | 1 | null |
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware!) - Open Source | 1 | [removed] | 2025-05-21T17:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ks3uic/tokilake_selfhost_a_private_ai_cloud_for_your/ | Wise_Day4455 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks3uic | false | null | t3_1ks3uic | /r/LocalLLaMA/comments/1ks3uic/tokilake_selfhost_a_private_ai_cloud_for_your/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SWsR6BF8B-wCJpnAlQOh4Gv6RwJLVcBzaQgItQJahMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=108&crop=smart&auto=webp&s=f5522e7afee672a20964c788b66486037d7d69e9', 'width': 108}, {'height': 108, 'url': 'h... |
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware!) - Open Source | 1 | [removed] | 2025-05-21T17:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ks3pyz/tokilake_selfhost_a_private_ai_cloud_for_your/ | Wise_Day4455 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks3pyz | false | null | t3_1ks3pyz | /r/LocalLLaMA/comments/1ks3pyz/tokilake_selfhost_a_private_ai_cloud_for_your/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SWsR6BF8B-wCJpnAlQOh4Gv6RwJLVcBzaQgItQJahMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=108&crop=smart&auto=webp&s=f5522e7afee672a20964c788b66486037d7d69e9', 'width': 108}, {'height': 108, 'url': 'h... |
Starting a Discord for Local LLM Deployment Enthusiasts! | 1 | [removed] | 2025-05-21T17:27:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ks3olr/starting_a_discord_for_local_llm_deployment/ | tagrib | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks3olr | false | null | t3_1ks3olr | /r/LocalLLaMA/comments/1ks3olr/starting_a_discord_for_local_llm_deployment/ | false | false | self | 1 | null |
ChatGPT’s Impromptu Web Lookups... Can Open Source Compete? | 0 | I must reluctantly admit... I can’t out-fox ChatGPT: when it spots a blind spot, it just deduces it needs a web lookup and grabs the answer, no extra setup or config required. Its power comes from having vast public data indexed (Google, lol) and the instinct to query it on the fly with... tools (?).
As of today, how ... | 2025-05-21T17:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ks3oi1/chatgpts_impromptu_web_lookups_can_open_source/ | IrisColt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks3oi1 | false | null | t3_1ks3oi1 | /r/LocalLLaMA/comments/1ks3oi1/chatgpts_impromptu_web_lookups_can_open_source/ | false | false | self | 0 | null |
Radeon AI Pro R9700: considering they spent a lot of time comparing it to the 5080, it will likely cost around $1000 USD for 32GB of VRAM | 1 | 2025-05-21T17:26:08 | intc3172 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ks3nui | false | null | t3_1ks3nui | /r/LocalLLaMA/comments/1ks3nui/radeon_ai_pro_r9700_considering_they_took_a_lot/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'uDyEHx3AJFIzvDb01KtyN7Wl7K2B0ZYUlZyCkbYfuUA', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bx2sdzae662f1.png?width=108&crop=smart&auto=webp&s=9f7ceb479a9933ad8f2ef4b8da4a6cc89e27fa0e', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bx2sdzae662f1.png...
Models become so good, companies are selling illusion of a working brain | 0 | The push for agents we observe today is a marketing strategy to sell more usage, create demand for resources and justify investments in infrastructure and supporting software.
We don't have an alternative to our minds; AI systems can't come to conclusions outside of their training datasets. What we have is an illusion... | 2025-05-21T17:25:46 | robertpiosik | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ks3ni2 | false | null | t3_1ks3ni2 | /r/LocalLLaMA/comments/1ks3ni2/models_become_so_good_companies_are_selling/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'KEW7j1dEIE_G49e6l5U-TJTDMQxMImUpX4729PDHtEc', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png?width=108&crop=smart&auto=webp&s=f135bf4945ee739ecd8405a1103d90ac729be8dd', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png...
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware!) - Open Source | 1 | [removed] | 2025-05-21T17:22:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ks3ksa/tokilake_selfhost_a_private_ai_cloud_for_your/ | Square-Air6513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks3ksa | false | null | t3_1ks3ksa | /r/LocalLLaMA/comments/1ks3ksa/tokilake_selfhost_a_private_ai_cloud_for_your/ | false | false | self | 1 | null |
Use iGPU + dGPU + NPU simultaneously? | 1 | [removed] | 2025-05-21T17:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ks39lp/use_igpu_dgpu_npu_simultaneously/ | panther_ra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks39lp | false | null | t3_1ks39lp | /r/LocalLLaMA/comments/1ks39lp/use_igpu_dgpu_npu_simultaneously/ | false | false | self | 1 | null |
Just Discovered the NIMO AI-395 Mini PC – Compact Powerhouse Shipping Soon! | 1 | [removed] | 2025-05-21T17:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ks36kb/just_discovered_the_nimo_ai395_mini_pc_compact/ | bigManProf0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks36kb | false | null | t3_1ks36kb | /r/LocalLLaMA/comments/1ks36kb/just_discovered_the_nimo_ai395_mini_pc_compact/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'o4lqUh5-wp1wM6I4Tx8WUzrhi7Sqcg9u2CoUJvgaFH0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?width=108&crop=smart&auto=webp&s=6074fa4aa0510b873ca44659684d34b1350afdde', 'width': 108}, {'height': 216, 'url': '... |
Pizza and Google I/O - I'm ready! | 0 | 2025-05-21T17:00:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ks30hq/pizza_and_google_io_im_ready/ | kekePower | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks30hq | false | null | t3_1ks30hq | /r/LocalLLaMA/comments/1ks30hq/pizza_and_google_io_im_ready/ | false | false | 0 | null | ||
Just Discovered the NIMO AI-395 Mini PC – Compact Powerhouse Shipping Soon! | 1 | [removed] | 2025-05-21T16:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ks2xlf/just_discovered_the_nimo_ai395_mini_pc_compact/ | bigManProf0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks2xlf | false | null | t3_1ks2xlf | /r/LocalLLaMA/comments/1ks2xlf/just_discovered_the_nimo_ai395_mini_pc_compact/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'o4lqUh5-wp1wM6I4Tx8WUzrhi7Sqcg9u2CoUJvgaFH0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?width=108&crop=smart&auto=webp&s=6074fa4aa0510b873ca44659684d34b1350afdde', 'width': 108}, {'height': 216, 'url': '... |
Startups: Collaborative Coding with Windsurf/Cursor | 1 | How are startups using Windsurf/Cursor, etc. to code new applications as a team? I'm trying to wrap my head around how it works without everyone stepping on each other's toes.
My initial thoughts on starting a project from scratch:
1. **Architecture Setup**: Have one person define global rules, coding styles, and arc... | 2025-05-21T16:53:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ks2u4z/startups_collaborative_coding_with_windsurfcursor/ | CodeBradley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks2u4z | false | null | t3_1ks2u4z | /r/LocalLLaMA/comments/1ks2u4z/startups_collaborative_coding_with_windsurfcursor/ | false | false | self | 1 | null |
Public ranking for open source models? | 8 | Is there a public ranking that I can check for open source models, to compare them and be able to fine-tune? It's weird there's a ranking for everything except for models that we can use for fine-tuning | 2025-05-21T16:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ks2j74/public_ranking_for_open_source_models/ | jinstronda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks2j74 | false | null | t3_1ks2j74 | /r/LocalLLaMA/comments/1ks2j74/public_ranking_for_open_source_models/ | false | false | self | 8 | null |
Should I add 64GB RAM to my current PC? | 0 | I currently have this configuration:
* Graphics Card: MSI GeForce RTX 3060 VENTUS 2X 12G OC
* Power Supply: CORSAIR CX650 ATX 650W
* Motherboard: GIGABYTE B550M DS3H
* Processor (CPU): AMD Ryzen 7 5800X
* RAM: Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4 3600 MHz
* CPU Cooler: Mars Gaming ML-PRO120, Professional Liqu... | 2025-05-21T16:21:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ks2101/should_i_add_64gb_ram_to_my_current_pc/ | DiyGun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks2101 | false | null | t3_1ks2101 | /r/LocalLLaMA/comments/1ks2101/should_i_add_64gb_ram_to_my_current_pc/ | false | false | self | 0 | null |
Trying to set up a locally-run Llama/XTTS workflow. Where can I get sound datasets, like LoRAs or checkpoints? Are NSFW sound effects possible? | 1 | [removed] | 2025-05-21T16:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ks1zia/trying_to_set_up_a_local_run_llamaxtts_workflow/ | highwaytrading | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks1zia | false | null | t3_1ks1zia | /r/LocalLLaMA/comments/1ks1zia/trying_to_set_up_a_local_run_llamaxtts_workflow/ | false | false | nsfw | 1 | null |
Anyone else feel like LLMs aren't actually getting that much better? | 239 | I've been in the game since GPT-3.5 (and even before then, with GitHub Copilot). Over the last 2-3 years I've tried most of the top LLMs: all of the GPT iterations, the Claudes, Mistrals, Llamas, DeepSeeks, Qwens, and now Gemini 2.5 Pro Preview 05-06.
Based on benchmarks and LMSYS Arena, one would expect so... | 2025-05-21T16:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ks1ncf/anyone_else_feel_like_llms_arent_actually_getting/ | Swimming_Beginning24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks1ncf | false | null | t3_1ks1ncf | /r/LocalLLaMA/comments/1ks1ncf/anyone_else_feel_like_llms_arent_actually_getting/ | false | false | self | 239 | null |
Mistral's new Devstral coding model running on a single RTX 4090 with 54k context using Q4KM quantization with vLLM | 218 | Full model announcement post on the Mistral blog [https://mistral.ai/news/devstral](https://mistral.ai/news/devstral) | 2025-05-21T15:50:12 | erdaltoprak | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ks18uf | false | null | t3_1ks18uf | /r/LocalLLaMA/comments/1ks18uf/mistrals_new_devstral_coding_model_running_on_a/ | false | false | 218 | {'enabled': True, 'images': [{'id': 'Yxwnv789FEPpZ62S-qRHY4WD3ig6GY_W_fz-otR2lmU', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png?width=108&crop=smart&auto=webp&s=da019bab6f1b8373b91ff01f2e1d7c40454a1f12', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png... | ||
How much ram do i need with a RTX 5090 | 1 | [removed] | 2025-05-21T15:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ks10hh/how_much_ram_do_i_need_with_a_rtx_5090/ | nanomax55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks10hh | false | null | t3_1ks10hh | /r/LocalLLaMA/comments/1ks10hh/how_much_ram_do_i_need_with_a_rtx_5090/ | false | false | self | 1 | null |
New to the PC world and want to run an LLM locally and need input | 4 | I don't really know where to begin with this. I'm looking for something similar to GPT-4 in performance and thinking, but able to run locally; my specs are below. I have no idea where to start or really what I want, so any help would be appreciated.
* AMD Ryzen 9 7950X
* PNY RTX 4070 Ti SUPER
* ASUS ROG Strix B650E-F Gamin... | 2025-05-21T15:39:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ks0zee/new_to_the_pc_world_and_want_to_run_a_llm_locally/ | ZiritoBlue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks0zee | false | null | t3_1ks0zee | /r/LocalLLaMA/comments/1ks0zee/new_to_the_pc_world_and_want_to_run_a_llm_locally/ | false | false | self | 4 | null |
This AI tool applies to more jobs in a minute than most people do in a week | 1 | 2025-05-21T15:37:06 | https://v.redd.it/mbvknht2n52f1 | Alternative_Rock_836 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ks0x2t | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/mbvknht2n52f1/DASHPlaylist.mpd?a=1750433840%2CZGI1NmE1NTI5ZTAwNGM0YzMwN2Y2YjVhNTFjNWQxZTFkZjEyNjc2MWIxYzg3ODcyMmIxYzk5ZWExMzAyNDA2Zg%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/mbvknht2n52f1/DASH_360.mp4?source=fallback', 'has_... | t3_1ks0x2t | /r/LocalLLaMA/comments/1ks0x2t/this_ai_tool_applies_to_more_jobs_in_a_minute/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bHNkNXJodDJuNTJmMY2OvwhO7zRqwSN2HcLLGYwPRadfKF-XTSbAURTEMoUb', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/bHNkNXJodDJuNTJmMY2OvwhO7zRqwSN2HcLLGYwPRadfKF-XTSbAURTEMoUb.png?width=108&crop=smart&format=pjpg&auto=webp&s=ea5ea01cd62b12705d4599b05154efa081218... | ||
Has anyone used Gemini Live API for real-time interaction? | 1 | [removed] | 2025-05-21T15:35:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ks0vdw/has_anyone_used_gemini_live_api_for_realtime/ | Funny_Working_7490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks0vdw | false | null | t3_1ks0vdw | /r/LocalLLaMA/comments/1ks0vdw/has_anyone_used_gemini_live_api_for_realtime/ | false | false | self | 1 | null |
Building a runtime to enable fast, multi-model serving with vLLM — looking for feedback | 0 | We’ve been working on an AI inference runtime that makes it easier to serve **multiple LLMs using vLLM**, but in a more dynamic, serverless-style setup.
The idea is:
* You bring your **existing vLLM container image**
* We snapshot and restore model state **directly on GPU in seconds**
* No need to spin up new contai... | 2025-05-21T15:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ks0tlw/building_a_runtime_to_enable_fast_multimodel/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks0tlw | false | null | t3_1ks0tlw | /r/LocalLLaMA/comments/1ks0tlw/building_a_runtime_to_enable_fast_multimodel/ | false | false | self | 0 | null |
SWE-rebench update: GPT4.1 mini/nano and Gemini 2.0/2.5 Flash added | 28 | We’ve just added a batch of new models to the [SWE-rebench leaderboard](https://swe-rebench.com/leaderboard):
* GPT-4.1 mini
* GPT-4.1 nano
* Gemini 2.0 Flash
* Gemini 2.5 Flash Preview 05-20
A few quick takeaways:
* gpt-4.1-mini is surprisingly strong; it matches full GPT-4.1 performance on fresh, decontaminated ta... | 2025-05-21T15:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ks0snl/swerebench_update_gpt41_mininano_and_gemini_2025/ | Long-Sleep-13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks0snl | false | null | t3_1ks0snl | /r/LocalLLaMA/comments/1ks0snl/swerebench_update_gpt41_mininano_and_gemini_2025/ | false | false | self | 28 | null |
In the end Satya was right, the scaling laws are that good; Veo 3 is a prime example, and many other models too | 0 | The more it trains on a problem, the better it gets, and Google already has an enormous amount of video data. It was common sense that with that much data, nobody was going to mess with Google's monopoly; I'm pretty sure all the other startups are looking for a solution to this problem.
the there all other model are also solely... | 2025-05-21T15:27:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ks0oea/in_the_end_satya_was_right_scaling_law_is_so_much/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks0oea | false | null | t3_1ks0oea | /r/LocalLLaMA/comments/1ks0oea/in_the_end_satya_was_right_scaling_law_is_so_much/ | false | false | self | 0 | null |
I'd love a qwen3-coder-30B-A3B | 97 | Honestly I'd pay quite a bit to have such a model on my own machine. Inference would be quite fast and coding would be decent. | 2025-05-21T15:19:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ks0h52/id_love_a_qwen3coder30ba3b/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks0h52 | false | null | t3_1ks0h52 | /r/LocalLLaMA/comments/1ks0h52/id_love_a_qwen3coder30ba3b/ | false | false | self | 97 | null |
New to local, half new to AI, but an oldie - help pls | 6 | I've been using DeepSeek R1 (web) to generate code for scripting languages. I don't think it does a good enough job at code generation... I'd like to hear some ideas. I'll mostly be doing JavaScript and .NET (0 knowledge yet... wanna get into it)
I just got a new 9900X3D + 5070 GPU and would like to know if it's better t... | 2025-05-21T15:16:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ks0epf/new_to_local_half_new_to_ai_but_an_oldie_help_pls/ | biatche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ks0epf | false | null | t3_1ks0epf | /r/LocalLLaMA/comments/1ks0epf/new_to_local_half_new_to_ai_but_an_oldie_help_pls/ | false | false | self | 6 | null |
Voice cloning for Kokoro TTS using random walk algorithms | 94 | [https://news.ycombinator.com/item?id=44052295](https://news.ycombinator.com/item?id=44052295)
Hey everybody, I made a library that can somewhat clone voices using Kokoro TTS. I know it is a popular library for adding speech to various LLM applications, so I figured I would share it here. It can take a while and produc... | 2025-05-21T15:12:31 | https://github.com/RobViren/kvoicewalk | rodbiren | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ks0arl | false | null | t3_1ks0arl | /r/LocalLLaMA/comments/1ks0arl/voice_cloning_for_kokoro_tts_using_random_walk/ | false | false | 94 | {'enabled': False, 'images': [{'id': '9j7T1aryT1VtYnao4_flbG1Qyq29ME_hO8aYhG0flnk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aOW6-hgbtxvb8U4tg5vaNfPPQC6NYWWnQfNZ4XDYvYg.jpg?width=108&crop=smart&auto=webp&s=dd9bee6b0cec514fb74dee249bdcf20136db2c3a', 'width': 108}, {'height': 108, 'url': 'h...
Agent Commerce Kit – Protocols for AI Agent Identity and Payments | 2 | 2025-05-21T15:00:06 | https://www.agentcommercekit.com/overview/introduction | catena_labs | agentcommercekit.com | 1970-01-01T00:00:00 | 0 | {} | 1krzzg7 | false | null | t3_1krzzg7 | /r/LocalLLaMA/comments/1krzzg7/agent_commerce_kit_protocols_for_ai_agent/ | false | false | 2 | {'enabled': False, 'images': [{'id': '_hbQnw-UE3NBRg2idKDtqs2IThHc7EPSMdFg9H83f1o', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/OEDlc3lmK9Ir2wXwMtfU-Q1rGIkQxOFuuaTGNvgglsY.jpg?width=108&crop=smart&auto=webp&s=e9bb17d22a1e8418165e6f3d3b9e99e50dd41808', 'width': 108}, {'height': 113, 'url': 'h... | ||
largest context window model for 24GB VRAM? | 2 | Hey guys. Trying to find a model that can analyze large text files (10,000 to 15,000 words at a time) without pagination
What model is best for summarizing medium-large bodies of text? | 2025-05-21T14:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/1krzuzj/largest_context_window_model_for_24gb_vram/ | odaman8213 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krzuzj | false | null | t3_1krzuzj | /r/LocalLLaMA/comments/1krzuzj/largest_context_window_model_for_24gb_vram/ | false | false | self | 2 | null |
medgemma-4b the Pharmacist 🤣 | 301 | Google’s new OS medical model gave in to the dark side far too easily. I had to laugh. I expected it to put up a little more of a fight, but there you go. | 2025-05-21T14:49:13 | AlternativePlum5151 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krzpyp | false | null | t3_1krzpyp | /r/LocalLLaMA/comments/1krzpyp/medgemma4b_the_pharmacist/ | false | false | nsfw | 301 | {'enabled': True, 'images': [{'id': 'DXEd1qFG1EZ9N07_bpcSQ_lnxnoDwIUpoHc59yZWr7Q', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=108&crop=smart&auto=webp&s=f50455370fe07f346afbd340be210479cbbf7d79', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png... | |
AMD ROCm 6.4.1 now supports 9070/XT (Navi4) | 100 | As of this post, AMD hasn't updated their github page or their official ROCm doc page, but here is the official link to their site. Looks like it is a bundled ROCm stack for Ubuntu LTS and RHEL 9.6.
I got my 9070XT at launch at MSRP, so this is good news for me! | 2025-05-21T14:48:49 | https://www.amd.com/en/resources/support-articles/release-notes/RN-AMDGPU-UNIFIED-LINUX-25-10-1-ROCM-6-4-1.html | shifty21 | amd.com | 1970-01-01T00:00:00 | 0 | {} | 1krzpmu | false | null | t3_1krzpmu | /r/LocalLLaMA/comments/1krzpmu/amd_rocm_641_now_supports_9070xt_navi4/ | false | false | 100 | {'enabled': False, 'images': [{'id': 'VY-gY2HYKYcXNyATN6eFOZXOvhwsQtwwRY3tWSlZJMQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=108&crop=smart&auto=webp&s=bf9ed3573a3db5d3e44a72830f8426517a91377c', 'width': 108}, {'height': 113, 'url': 'h... | |
Fastest model & setup for classification? | 1 | [removed] | 2025-05-21T14:20:15 | https://www.reddit.com/r/LocalLLaMA/comments/1krz120/fastest_model_setup_for_classification/ | Regular_Problem9019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krz120 | false | null | t3_1krz120 | /r/LocalLLaMA/comments/1krz120/fastest_model_setup_for_classification/ | false | false | self | 1 | null |
mistralai/Devstral-Small-2505 · Hugging Face | 394 | Devstral is an agentic LLM for software engineering tasks built under a collaboration between Mistral AI and All Hands AI | 2025-05-21T14:17:03 | https://huggingface.co/mistralai/Devstral-Small-2505 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kryybf | false | null | t3_1kryybf | /r/LocalLLaMA/comments/1kryybf/mistralaidevstralsmall2505_hugging_face/ | false | false | 394 | {'enabled': False, 'images': [{'id': 'l58T3fdYm1_vbmuoUrQ3Cn8X1B6VkXutYixnBDCamkM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5v7V2smikryAtPAPRovLgRwqCqqgG7mLENcd1_6EmM4.jpg?width=108&crop=smart&auto=webp&s=e68376b3b17125c9c3906caaa299d50cf4a7a765', 'width': 108}, {'height': 116, 'url': 'h... | |
Meet Mistral Devstral, SOTA open model designed specifically for coding agents | 277 | [https://mistral.ai/news/devstral](https://mistral.ai/news/devstral)
Open Weights : [https://huggingface.co/mistralai/Devstral-Small-2505](https://huggingface.co/mistralai/Devstral-Small-2505) | 2025-05-21T14:15:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kryxdg/meet_mistral_devstral_sota_open_model_designed/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kryxdg | false | null | t3_1kryxdg | /r/LocalLLaMA/comments/1kryxdg/meet_mistral_devstral_sota_open_model_designed/ | false | false | self | 277 | {'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'h... |
What are the best models for non-documental OCR? | 2 | Hello,
I am searching for the best LLMs for OCR. I am *not* scanning documents or similar. The inputs are images of sacks in a warehouse, and text has to be extracted from them. I tried Qwen-VL and it was much worse than traditional OCR like PaddleOCR, which has given the best results (OK-ish at best). However, the pr... | 2025-05-21T14:14:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kryw3a/what_are_the_best_models_for_nondocumental_ocr/ | Ok_Appeal8653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kryw3a | false | null | t3_1kryw3a | /r/LocalLLaMA/comments/1kryw3a/what_are_the_best_models_for_nondocumental_ocr/ | false | false | self | 2 | null |
Solo-Built Distributed AI Voice Assistant | 1 | [removed] | 2025-05-21T13:58:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kryhxj/solobuilt_distributed_ai_voice_assistant/ | No_Story_9941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kryhxj | false | null | t3_1kryhxj | /r/LocalLLaMA/comments/1kryhxj/solobuilt_distributed_ai_voice_assistant/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1GieUu91QDrH2_j5GwLwVzYftRP1WCG3Ezk3UlAoer0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=108&crop=smart&auto=webp&s=29bbc2fb60d7d6c5ad41d9af1623422c4de45880', 'width': 108}, {'height': 113, 'url': 'h... |
Late-Night Study Lifesaver? My Unexpected Win with SolutionInn’s Ask AI | 1 | [removed] | 2025-05-21T13:56:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1krygf5 | false | null | t3_1krygf5 | /r/LocalLLaMA/comments/1krygf5/latenight_study_lifesaver_my_unexpected_win_with/ | false | false | default | 1 | null | ||
Solo-Built Distributed AI Voice Assistant | 1 | [removed] | 2025-05-21T13:50:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kryby9/solobuilt_distributed_ai_voice_assistant/ | No_Story_9941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kryby9 | false | null | t3_1kryby9 | /r/LocalLLaMA/comments/1kryby9/solobuilt_distributed_ai_voice_assistant/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1GieUu91QDrH2_j5GwLwVzYftRP1WCG3Ezk3UlAoer0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=108&crop=smart&auto=webp&s=29bbc2fb60d7d6c5ad41d9af1623422c4de45880', 'width': 108}, {'height': 113, 'url': 'h... |
Dynamically loading experts in MoE models? | 2 | Is this a thing? If not, why not? I mean, MoE models like Qwen3 235B only have 22B active parameters, so if one were able to just use the active parameters, then Qwen would be much easier to run, maybe even runnable on a basic computer with 32GB of RAM | 2025-05-21T13:46:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kry8m8/dynamically_loading_experts_in_moe_models/ | ExtremeAcceptable289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kry8m8 | false | null | t3_1kry8m8 | /r/LocalLLaMA/comments/1kry8m8/dynamically_loading_experts_in_moe_models/ | false | false | self | 2 | null |
What Hardware release are you looking forward to this year? | 2 | I'm curious what folks are planning for this year. I've been looking out for hardware that can handle very, very large models, and getting my homelab ready for an expansion, but I've lost track of what to look for this year for very large self-hosted models.
Curious what the community thinks. | 2025-05-21T13:42:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kry5dk/what_hardware_release_are_you_looking_forward_to/ | SanFranPanManStand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kry5dk | false | null | t3_1kry5dk | /r/LocalLLaMA/comments/1kry5dk/what_hardware_release_are_you_looking_forward_to/ | false | false | self | 2 | null |
New Falcon models using a Mamba hybrid are very competitive, if not ahead, for their sizes. | 53 | AVG SCORES FOR A VARIETY OF BENCHMARKS:
**Falcon-H1 Models:**
1. **Falcon-H1-34B:** 58.92
2. **Falcon-H1-7B:** 54.08
3. **Falcon-H1-3B:** 48.09
4. **Falcon-H1-1.5B-deep:** 47.72
5. **Falcon-H1-1.5B:** 45.47
6. **Falcon-H1-0.5B:** 35.83
**Qwen3 Models:**
1. **Qwen... | 2025-05-21T13:31:15 | https://www.reddit.com/r/LocalLLaMA/comments/1krxwja/new_falcon_models_using_mamba_hybrid_are_very/ | ElectricalAngle1611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krxwja | false | null | t3_1krxwja | /r/LocalLLaMA/comments/1krxwja/new_falcon_models_using_mamba_hybrid_are_very/ | false | false | self | 53 | {'enabled': False, 'images': [{'id': '7pmU_qf5f-85GxHu1Vf_nMXhQKDehQ-ngMHj09B5QjM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9Qap-vjKineTiZeWQ7u0iMdaKdKc-0HPWYNmnEyzFbM.jpg?width=108&crop=smart&auto=webp&s=903373bcb66f69286ec2409aad5dae81b331c6f0', 'width': 108}, {'height': 116, 'url': 'h...
LLM for Linux questions | 2 | I am trying to learn Linux. Can anyone recommend a good LLM that can answer all Linux-related questions? Preferably not a huge one, like under 20B. | 2025-05-21T13:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1krxvce/llm_for_linux_questions/ | Any-Championship-611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krxvce | false | null | t3_1krxvce | /r/LocalLLaMA/comments/1krxvce/llm_for_linux_questions/ | false | false | self | 2 | null |
Location of downloaded LLM on android | 2 | Hello guys, can anyone tell me the exact location of the downloaded GGUF models in apps like ChatterUI? | 2025-05-21T13:28:36 | https://www.reddit.com/r/LocalLLaMA/comments/1krxucq/location_of_downloaded_llm_on_android/ | Egypt_Pharoh1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krxucq | false | null | t3_1krxucq | /r/LocalLLaMA/comments/1krxucq/location_of_downloaded_llm_on_android/ | false | false | self | 2 | null |
Key findings after testing LLMs | 3 | After running my tests, plus a few others, and publishing the results, I got to thinking about how strong Qwen3 really is.
You can read my musings here: [https://blog.kekepower.com/blog/2025/may/21/deepseek\_r1\_and\_v3\_vs\_qwen3\_-\_why\_631-billion\_parameters\_still\_miss\_the\_mark\_on\_instruction\_fidelity.html... | 2025-05-21T13:02:12 | https://www.reddit.com/r/LocalLLaMA/comments/1krx9pa/key_findings_after_testing_llms/ | kekePower | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krx9pa | false | null | t3_1krx9pa | /r/LocalLLaMA/comments/1krx9pa/key_findings_after_testing_llms/ | false | false | self | 3 | null |
Docling completely missing large elements on a page, can Layout model be changed? | 6 | I was playing around with docling today after seeing some hype around these parts, and found that the object detection would often miss large chart/image-type elements placed side by side. In one case, there were 2 large squarish elements side by side, roughly the same size. The left one was detected, the right one was n... | 2025-05-21T12:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1krwvgh/docling_completely_missing_large_elements_on_a/ | joomla00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krwvgh | false | null | t3_1krwvgh | /r/LocalLLaMA/comments/1krwvgh/docling_completely_missing_large_elements_on_a/ | false | false | self | 6 | null |
Ollama + RAG in godot 4 | 0 | I’ve been experimenting with setting up my own local setup with ollama, with some success. I’m using deepseek-coder-v2 with a plugin for interfacing within Godot 4 (a game engine). I set up a RAG because GDScript (the engine's native language) is not up to date with the model's knowledge cutoff. I scraped the documentati... | 2025-05-21T12:42:03 | https://www.reddit.com/r/LocalLLaMA/comments/1krwuqx/ollama_rag_in_godot_4/ | Huge-Masterpiece-824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krwuqx | false | null | t3_1krwuqx | /r/LocalLLaMA/comments/1krwuqx/ollama_rag_in_godot_4/ | false | false | self | 0 | null |
Add voices to Kokoro TTS? | 1 | [removed] | 2025-05-21T12:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/1krwazg/add_voices_to_kokoru_tts/ | No_Cartographer_2380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krwazg | false | null | t3_1krwazg | /r/LocalLLaMA/comments/1krwazg/add_voices_to_kokoru_tts/ | false | false | self | 1 | null |
Add voices to Kokoro TTS | 1 | [removed] | 2025-05-21T12:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/1krw17i/add_voices_to_kokoru_tts/ | No_Cartographer_2380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krw17i | false | null | t3_1krw17i | /r/LocalLLaMA/comments/1krw17i/add_voices_to_kokoru_tts/ | false | false | self | 1 | null |
Add voices to Kokoro? | 1 | [removed] | 2025-05-21T11:53:18 | https://www.reddit.com/r/LocalLLaMA/comments/1krvwhb/add_voices_to_kokoru/ | No_Cartographer_2380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krvwhb | false | null | t3_1krvwhb | /r/LocalLLaMA/comments/1krvwhb/add_voices_to_kokoru/ | false | false | self | 1 | null |
Hello everyone | 1 | [removed] | 2025-05-21T11:11:06 | https://www.reddit.com/r/LocalLLaMA/comments/1krv5oe/hello_everyone/ | No_Cartographer_2380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krv5oe | false | null | t3_1krv5oe | /r/LocalLLaMA/comments/1krv5oe/hello_everyone/ | false | false | self | 1 | null |
Is there a portable .exe GUI I can run GGUFs on? | 1 | One that needs no installation, and where you can just import a GGUF file without internet?
Essentially LM Studio, but portable | 2025-05-21T10:53:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kruv49/is_there_a_portable_exe_gui_i_can_run_ggufs_on/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kruv49 | false | null | t3_1kruv49 | /r/LocalLLaMA/comments/1kruv49/is_there_a_portable_exe_gui_i_can_run_ggufs_on/ | false | false | self | 1 | null |
Gemma 3n seems to not work well with non-English prompts | 36 | 2025-05-21T10:44:22 | Juude89 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krupm7 | false | null | t3_1krupm7 | /r/LocalLLaMA/comments/1krupm7/gemma_3n_seems_not_work_well_for_non_english/ | false | false | 36 | {'enabled': True, 'images': [{'id': '4JzO8RQ9NzmRSnyXZ6u5IvVoDqiycP2kNMGxQK8u14o', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xhxxm6xv642f1.jpeg?width=108&crop=smart&auto=webp&s=0b6f2982c1a773712602acb0412b16d0e17e8fb7', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xhxxm6xv642f1.j...
Recommendations for Self-Hosted, Open-Source Proxy for Dynamic OpenAI API Forwarding? | 1 | [removed] | 2025-05-21T10:33:05 | https://www.reddit.com/r/LocalLLaMA/comments/1krujc9/recommendations_for_selfhosted_opensource_proxy/ | z00log | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krujc9 | false | null | t3_1krujc9 | /r/LocalLLaMA/comments/1krujc9/recommendations_for_selfhosted_opensource_proxy/ | false | false | self | 1 | null |
Seeking Self-Hosted Dynamic Proxy for OpenAI-Compatible APIs | 1 | [removed] | 2025-05-21T10:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kruhit/seeking_selfhosted_dynamic_proxy_for/ | z00log | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kruhit | false | null | t3_1kruhit | /r/LocalLLaMA/comments/1kruhit/seeking_selfhosted_dynamic_proxy_for/ | false | false | self | 1 | null |
Hidden thinking | 42 | I was disappointed to find that Google has now hidden Gemini's thinking. I guess it is understandable, since it stops others from using the data to train and so helps keep their competitive advantage, but I found the thoughts so useful. I'd read the thoughts as they were generated and often would terminate the generation to ref... | 2025-05-21T10:15:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kru9v3/hidden_thinking/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kru9v3 | false | null | t3_1kru9v3 | /r/LocalLLaMA/comments/1kru9v3/hidden_thinking/ | false | false | self | 42 | null |
AMD Unleashes Radeon AI PRO R9700 GPU With 32 GB VRAM, 128 AI Cores & 300W TDP: 2x Faster Than Last-Gen W7800 In DeepSeek R1 | 1 | 2025-05-21T10:13:32 | https://wccftech.com/amd-radeon-ai-pro-r9700-gpu-32-gb-vram-128-ai-cores-300w-2x-faster-deepseek-r1/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1kru8ki | false | null | t3_1kru8ki | /r/LocalLLaMA/comments/1kru8ki/amd_unleashes_radeon_ai_pro_r9700_gpu_with_32_gb/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'q7mve0pWQXapLu3eVRUlgERITl5WzhQmx_iowI8HRh8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/QehnmIpXCG5EMr8E9Ybhl7ucP9xiIhX3kgr39HNF6VE.jpg?width=108&crop=smart&auto=webp&s=d798ba11963dca0389f6729f6d91fc536cdb985a', 'width': 108}, {'height': 121, 'url': 'h... | ||
AMD Unleashes Radeon AI PRO R9700 GPU With 32 GB VRAM, 128 AI Cores & 300W TDP: 2x Faster Than Last-Gen W7800 In DeepSeek R1 | 1 | [removed] | 2025-05-21T10:12:51 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kru86k | false | null | t3_1kru86k | /r/LocalLLaMA/comments/1kru86k/amd_unleashes_radeon_ai_pro_r9700_gpu_with_32_gb/ | false | false | default | 1 | null | ||
Open Question: Why don’t voice agent developers fine-tune models like Kokoro to match local accents or customer voices? | 1 | [removed] | 2025-05-21T09:50:29 | https://www.reddit.com/r/LocalLLaMA/comments/1krtvv0/open_question_why_dont_voice_agent_developers/ | MAtrixompa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krtvv0 | false | null | t3_1krtvv0 | /r/LocalLLaMA/comments/1krtvv0/open_question_why_dont_voice_agent_developers/ | false | false | self | 1 | null |
Falcon-H1 Family of Hybrid-Head Language Models, including 0.5B, 1.5B, 1.5B-Deep, 3B, 7B, and 34B | 223 | 2025-05-21T09:50:09 | https://huggingface.co/collections/tiiuae/falcon-h1-6819f2795bc406da60fab8df | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1krtvpj | false | null | t3_1krtvpj | /r/LocalLLaMA/comments/1krtvpj/falconh1_family_of_hybridhead_language_models/ | false | false | 223 | {'enabled': False, 'images': [{'id': '6yF1l7Wcly08Rg8_MLpSaoIjS1uyuQIPMUXJIhl5gPs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/asQIFBJYgU0s0y-AV0hAHtenKk6qa9ZCLFCb-Jjyvag.jpg?width=108&crop=smart&auto=webp&s=1347f695ea42069fd4950cb6d695199ce666797f', 'width': 108}, {'height': 116, 'url': 'h... | ||
LLaMA always generates to max_new_tokens? | 1 | [removed] | 2025-05-21T09:29:48 | https://www.reddit.com/r/LocalLLaMA/comments/1krtl5s/llama_always_generates_to_max_new_tokens/ | Leviboy950 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krtl5s | false | null | t3_1krtl5s | /r/LocalLLaMA/comments/1krtl5s/llama_always_generates_to_max_new_tokens/ | false | false | self | 1 | null |
Meet Emma-Kyu, a WIP Vtuber | 0 | Meet Emma-Kyu, a virtual AI personality. She likes to eat burgers, drink WD40 and, given the chance, will banter and roast you to oblivion.
Of course, I was inspired by Neuro-sama and early on started with Llama 3.2 3B and a weak laptop that took 20s to generate responses. Today Emma is a fine-tuned beast, 8B parameter ... | 2025-05-21T09:27:03 | https://v.redd.it/rjk7sm6wr32f1 | Experimentators | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krtjri | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rjk7sm6wr32f1/DASHPlaylist.mpd?a=1750411638%2CNzYwMGExOGFlOTMzM2U3MGJjNDliZjU5OWExMTU0YTRiODc3ZGZlYjYzOThmY2M3YmRhNDYyOGJkMmZhOTZjNQ%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/rjk7sm6wr32f1/DASH_720.mp4?source=fallback', 'ha... | t3_1krtjri | /r/LocalLLaMA/comments/1krtjri/meet_emmakyu_a_wip_vtuber/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF.png?width=108&crop=smart&format=pjpg&auto=webp&s=d753b7bf02146af8ccc77582a474f972d830b...
RTX 3090 is really good: Qwen3 30B A3B running at 100+ t/s | 1 | [removed] | 2025-05-21T09:24:44 | Linkpharm2 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1krtiko | false | null | t3_1krtiko | /r/LocalLLaMA/comments/1krtiko/rtx_3090_is_really_good_qwen3_30b_b3a_running_at/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'r5ClBLVR5ZZfLTa7g8WeF1WtMHsAXLhZu0P8btXNhXc', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/k9p7qyyfs32f1.png?width=108&crop=smart&auto=webp&s=4a42ef552d8db7455752f1ec722400e803374b63', 'width': 108}, {'height': 322, 'url': 'https://preview.redd.it/k9p7qyyfs32f1.pn...
Preferred models for Note Summarisation | 2 | I'm, painfully, trying to make a note summarisation prompt flow to help expand my personal knowledge management.
What are people's favourite models for ingesting and structuring **badly** written knowledge?
I'm trying Qwen3 32B IQ4_XS on an RX 7900 XTX with flash attention in LM Studio, but it feels like I n... | 2025-05-21T09:19:56 | https://www.reddit.com/r/LocalLLaMA/comments/1krtg4t/preferred_models_for_note_summarisation/ | ROS_SDN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krtg4t | false | null | t3_1krtg4t | /r/LocalLLaMA/comments/1krtg4t/preferred_models_for_note_summarisation/ | false | false | self | 2 | null |
Sex tips from abliterated models are funny | 0 | If you use Josiefied-Qwen3-8B-abliterated, for example, and ask it how to give the perfect blowjob, you get a lot of advice about clitoral stimulation 😊 | 2025-05-21T09:15:35 | https://www.reddit.com/r/LocalLLaMA/comments/1krtdxr/sex_tips_from_abliterated_models_are_funny/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krtdxr | false | null | t3_1krtdxr | /r/LocalLLaMA/comments/1krtdxr/sex_tips_from_abliterated_models_are_funny/ | false | false | nsfw | 0 | null |
AMD launches the Radeon AI Pro R9700 | 1 | [removed] | 2025-05-21T09:11:19 | https://www.tomshardware.com/pc-components/gpus/amd-launches-radeon-ai-pro-r9700-to-challenge-nvidias-ai-market-dominance | PearSilicon | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1krtbse | false | null | t3_1krtbse | /r/LocalLLaMA/comments/1krtbse/amd_launches_the_radeon_ai_pro_r9700/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dugXIowyXggOj3Jqd_IH8XXRuO2gY2YJRf6cWa3VSZU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=108&crop=smart&auto=webp&s=72b8bf837b0ec198650353a22a593fd161a775d6', 'width': 108}, {'height': 121, 'url': 'h... | |
NVIDIA H200 or the new RTX Pro Blackwell for a RAG chatbot? | 5 | Hey guys, I'd appreciate your help with a dilemma I'm facing. I want to build a server for a RAG-based LLM chatbot for a new website, where users would ask for product recommendations and get answers grounded in my database of laboratory-tested results as the knowledge base.
I plan to build the project locally, and once ... | 2025-05-21T08:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/1krsv2u/nvidia_h200_or_the_new_rtx_pro_blackwell_for_a/ | snaiperist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krsv2u | false | null | t3_1krsv2u | /r/LocalLLaMA/comments/1krsv2u/nvidia_h200_or_the_new_rtx_pro_blackwell_for_a/ | false | false | self | 5 | null |
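Whichever card wins, the retrieval half of a setup like this is hardware-agnostic. A minimal sketch, assuming sentence-transformers and invented product rows (both are illustrative assumptions, not the poster's actual stack):

```python
# Minimal RAG retrieval sketch: embed lab-result rows once, then pull the
# top-k matches into the chat prompt at question time.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

docs = [  # hypothetical knowledge-base rows
    "Product A: passed heavy-metal screening, 0.1 ppm lead.",
    "Product B: failed microbial test, batch 2024-11.",
    "Product C: passed all EU cosmetic safety tests.",
]
doc_emb = model.encode(docs, convert_to_tensor=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_emb, top_k=k)[0]
    return [docs[h["corpus_id"]] for h in hits]

context = "\n".join(retrieve("Which products passed safety testing?"))
# `context` then gets pasted into the prompt of whatever LLM the GPU serves.
```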
Beginner questions about local models | 3 | Hello, I'm a complete beginner on this subject, but I have a few questions about local models. Currently, I'm using OpenAI for light data analysis, which I access via the API. The biggest challenge is cleaning the data of personally identifiable information before I can give it to OpenAI for processing.
* Would a local ... | 2025-05-21T08:34:10 | https://www.reddit.com/r/LocalLLaMA/comments/1krstei/beginner_questions_about_local_models/ | Flaky-Character-9383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krstei | false | null | t3_1krstei | /r/LocalLLaMA/comments/1krstei/beginner_questions_about_local_models/ | false | false | self | 3 | null |
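One common pattern for exactly this pipeline is to let a small local model do the PII scrubbing, so only anonymised text ever reaches OpenAI. A minimal sketch, assuming Ollama's OpenAI-compatible endpoint on its default port and an assumed local model name:

```python
# Local PII redaction before anything is sent to a cloud API.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def redact_pii(text: str) -> str:
    resp = local.chat.completions.create(
        model="llama3.1:8b",  # assumed local model pulled into Ollama
        messages=[
            {"role": "system",
             "content": "Replace every name, address, phone number, email and "
                        "ID in the user's text with [REDACTED]. Return only "
                        "the redacted text."},
            {"role": "user", "content": text},
        ],
        temperature=0.0,  # deterministic output for a mechanical task
    )
    return resp.choices[0].message.content

clean = redact_pii("Invoice for Jane Doe, jane@example.com, +358 40 123 4567")
# `clean` can now go to the OpenAI API for the actual analysis.
```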
New Threadripper has 8 memory channels. Will it be an affordable local LLM option? | 92 | https://www.theregister.com/2025/05/21/amd_threadripper_radeon_workstation/
I'm always on the lookout for cheap local inference, and I noticed the new Threadrippers will move from 4 to 8 memory channels.
8 channels of DDR5 works out to about 409 GB/s (quick arithmetic check below).
That's on par with mid-range GPUs, on a non-server chip. | 2025-05-21T08:14:04 | https://www.reddit.com/r/LocalLLaMA/comments/1krsjpb/new_threadripper_has_8_memory_channels_will_it_be/ | theKingOfIdleness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krsjpb | false | null | t3_1krsjpb | /r/LocalLLaMA/comments/1krsjpb/new_threadripper_has_8_memory_channels_will_it_be/ | false | false | self | 92 | {'enabled': False, 'images': [{'id': '9HEIj9JQYwpxjGLJeymdFTWkHgflcaru4ucrxb8Xgts', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/kHCiWt5gI3zsT_-Y52ZSbkRxM0FdGAmjDY4tXsZ6u4Q.jpg?width=108&crop=smart&auto=webp&s=3db195115d4f9d130d2c4d0a06684e6e92db47f9', 'width': 108}, {'height': 86, 'url': 'ht... |
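For readers checking the math: peak bandwidth is channels × bus width × transfer rate. A quick sketch, assuming DDR5-6400 (the supported memory speed is an assumption; the post doesn't state it, but it is the rate that reproduces the quoted figure):

```python
# Peak memory bandwidth = channels * bytes per transfer * transfers per second.
channels = 8
bytes_per_transfer = 8       # one 64-bit DDR5 DIMM channel
transfers_per_sec = 6.4e9    # DDR5-6400 (assumed)
bw_gbs = channels * bytes_per_transfer * transfers_per_sec / 1e9
print(f"{bw_gbs:.1f} GB/s")  # -> 409.6 GB/s, matching the post
```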
What If an LLM Had Full Access to Your Linux Machine👩💻? I Tried It, and It's Insane🤯! | 0 | [GitHub Repo](https://github.com/ishanExtreme/vox-bot)
I tried giving **GPT-4** **full access** to my *keyboard* and *mouse*, and the result was amazing!!!
I used Microsoft's **OmniParser** to get actionables (buttons/icons) on the screen as bounding boxes, then **GPT-4V** to check whether the given action is completed ... | 2025-05-21T07:59:40 | https://v.redd.it/xry1dcred32f1 | Responsible_Soft_429 | /r/LocalLLaMA/comments/1krscok/what_if_llm_had_full_access_to_your_linux_machine/ | 1970-01-01T00:00:00 | 0 | {} | 1krscok | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xry1dcred32f1/DASHPlaylist.mpd?a=1750535985%2CZmU0MmQxZmRiZDU0ZGU1NDBlMGE1ZDNiYWY4ZjE4ZTljYTE0Y2E2MjIwMjA4ZjU5MmYyMjkyYWY2ODdlODgyZg%3D%3D&v=1&f=sd', 'duration': 139, 'fallback_url': 'https://v.redd.it/xry1dcred32f1/DASH_1080.mp4?source=fallback', '... | t3_1krscok | /r/LocalLLaMA/comments/1krscok/what_if_llm_had_full_access_to_your_linux_machine/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa.png?width=108&crop=smart&format=pjpg&auto=webp&s=51eff4965601ebe9dd74ff14b517974232e33... |
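For anyone curious how such a loop hangs together, here is a conceptual sketch of the perceive → act → verify cycle. `parse_screen` is a hypothetical stand-in for an OmniParser wrapper (not OmniParser's actual API), pyautogui handles the clicking, and the verifier model choice is an assumption:

```python
import base64
import io

import pyautogui
from openai import OpenAI

client = OpenAI()

def parse_screen(img) -> list[dict]:
    """Hypothetical OmniParser wrapper returning [{'label': str, 'box': (x, y, w, h)}]."""
    raise NotImplementedError("plug a real OmniParser call in here")

def screenshot_b64() -> str:
    buf = io.BytesIO()
    pyautogui.screenshot().save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

def step_done(goal: str) -> bool:
    # The vision model looks at the current screen and judges completion.
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed verifier model
        messages=[{"role": "user", "content": [
            {"type": "text", "text": f"Is this step complete: {goal}? Answer yes or no."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{screenshot_b64()}"}},
        ]}],
    )
    return "yes" in resp.choices[0].message.content.lower()

def run_step(goal: str, target_label: str) -> None:
    while not step_done(goal):
        for el in parse_screen(pyautogui.screenshot()):
            if target_label in el["label"]:
                x, y, w, h = el["box"]
                pyautogui.click(x + w // 2, y + h // 2)  # click the box centre
                break
```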
What is the estimated token/sec for Nvidia DGX Spark | 9 | What would be the estimated token/sec for the Nvidia DGX Spark, for popular models such as gemma3 27b and qwen3 30b-a3b? I can get about 25 t/s and 100 t/s respectively on my 3090. They are claiming 1000 TOPS for FP4. What existing GPU would this be comparable to? I want to understand if there is an advantage to buying this thing vs in... | 2025-05-21T07:55:35 | https://www.reddit.com/r/LocalLLaMA/comments/1krsast/what_is_the_estimated_tokensec_for_nvidia_dgx/ | presidentbidden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krsast | false | null | t3_1krsast | /r/LocalLLaMA/comments/1krsast/what_is_the_estimated_tokensec_for_nvidia_dgx/ | false | false | self | 9 | null |
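A common back-of-envelope answer: single-stream decoding is usually memory-bandwidth bound, so tokens/sec is roughly bandwidth divided by the bytes read per token (about the size of the active weights). A sketch, assuming the commonly reported ~273 GB/s for the DGX Spark and ~0.55 bytes/param for Q4-ish quants (both figures are assumptions, not measurements):

```python
def est_tps(bandwidth_gbs: float, active_params_b: float,
            bytes_per_param: float = 0.55) -> float:
    """Upper-bound decode speed: bandwidth / bytes touched per token."""
    return bandwidth_gbs / (active_params_b * bytes_per_param)

print(est_tps(273, 27))  # gemma3 27b, dense: ~18 t/s ceiling
print(est_tps(273, 3))   # qwen3 30b-a3b, ~3B active: ~165 t/s ceiling
```

Real throughput lands below these ceilings, but the ratio to a 3090 (~936 GB/s) is the useful part: roughly a third of the decode speed, with the FP4 TOPS figure mattering mostly for prompt processing.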
Why has nobody mentioned "Gemini Diffusion" here? It's a BIG deal | 823 | Google has the capacity and capability to change the standard for LLMs from autoregressive generation to diffusion generation.
Google showed their language diffusion model (Gemini Diffusion; visit the linked page for more info and benchmarks) yesterday/today (depending on your timezone), and it was extremely fast and (a... | 2025-05-21T07:42:08 | https://deepmind.google/models/gemini-diffusion/ | QuackerEnte | deepmind.google | 1970-01-01T00:00:00 | 0 | {} | 1krs40j | false | null | t3_1krs40j | /r/LocalLLaMA/comments/1krs40j/why_nobody_mentioned_gemini_diffusion_here_its_a/ | false | false | 823 | {'enabled': False, 'images': [{'id': 'GQPeUtrAeWmM_BLZUGcB2r83lDFScVSP2eZwE671aD0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dFWSMq_9jHPdMVGchDlKvt7rzCFhQEFmxZm8XKq654M.jpg?width=108&crop=smart&auto=webp&s=b253601cdd4a4d2ead67bffbc1a831828a43d0b8', 'width': 108}, {'height': 113, 'url': 'h... |
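The speed claim makes sense mechanically: autoregressive decoding needs one forward pass per generated token, while text diffusion refines every position in parallel over a fixed number of steps. A toy illustration of the control flow only, with random choices standing in for the model (this is not Gemini's actual algorithm):

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def autoregressive(n_tokens: int) -> list[str]:
    out = []
    for _ in range(n_tokens):             # one "forward pass" per token
        out.append(random.choice(VOCAB))  # stand-in for sampling from a model
    return out

def diffusion(n_tokens: int, n_steps: int = 4) -> list[str]:
    seq = ["[MASK]"] * n_tokens
    for _ in range(n_steps):              # fixed number of parallel passes
        seq = [random.choice(VOCAB) if t == "[MASK]" and random.random() < 0.5
               else t for t in seq]       # every position may update per pass
    return [t if t != "[MASK]" else random.choice(VOCAB) for t in seq]

print(autoregressive(6))
print(diffusion(6))
```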
Best distro for ollama? | 1 | [removed] | 2025-05-21T07:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1krs0m6/best_distro_for_ollama/ | WouterC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krs0m6 | false | null | t3_1krs0m6 | /r/LocalLLaMA/comments/1krs0m6/best_distro_for_ollama/ | false | false | self | 1 | null |
Are there any recent 14B or smaller MoE models? | 13 | There are quite a few from 2024, but I was wondering if there are any more recent ones. Qwen3 30B A3B exists, but it's a bit large and requires a lot of VRAM. | 2025-05-21T07:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/1krryjx/are_there_any_recent_14b_or_less_moe_models/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krryjx | false | null | t3_1krryjx | /r/LocalLLaMA/comments/1krryjx/are_there_any_recent_14b_or_less_moe_models/ | false | false | self | 13 | null |
AMD launches the Radeon AI Pro R9700 | 1 | [removed] | 2025-05-21T07:19:53 | https://www.tomshardware.com/pc-components/gpus/amd-launches-radeon-ai-pro-r9700-to-challenge-nvidias-ai-market-dominance | PearSilicon | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1krrt3p | false | null | t3_1krrt3p | /r/LocalLLaMA/comments/1krrt3p/amd_launches_the_radeon_ai_pro_r9700/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dugXIowyXggOj3Jqd_IH8XXRuO2gY2YJRf6cWa3VSZU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=108&crop=smart&auto=webp&s=72b8bf837b0ec198650353a22a593fd161a775d6', 'width': 108}, {'height': 121, 'url': 'h... | |
Laptop for RAG local LLM (3-6k budget) | 1 | [removed] | 2025-05-21T07:18:33 | https://www.reddit.com/r/LocalLLaMA/comments/1krrsft/laptop_for_rag_local_llm_36k_budget/ | 0800otto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1krrsft | false | null | t3_1krrsft | /r/LocalLLaMA/comments/1krrsft/laptop_for_rag_local_llm_36k_budget/ | false | false | self | 1 | null |