| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Moving on from Ollama | 22 | I'm on a Mac with 128GB RAM and have been enjoying Ollama, I'm technical and comfortable in the CLI. What is the next step (not closed src like LMStudio), in order to have more freedom with LLMs.<br>Should I move to using Llama.cpp directly or what are people using?<br>Also what are you fav models atm? | 2025-06-12T21:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l9z0su/moving_on_from_ollama/ | john_alan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9z0su | false | null | t3_1l9z0su | /r/LocalLLaMA/comments/1l9z0su/moving_on_from_ollama/ | false | false | self | 22 | null |
Is AMD Ryzen AI Max+ 395 really the only consumer option for running Llama 70B locally? | 44 | Researching hardware for Llama 70B and keep hitting the same conclusion. AMD Ryzen AI Max+ 395 in Framework Desktop with 128GB unified memory seems like the only consumer device that can actually run 70B locally.<br>RTX 4090 maxes at 24GB, Jetson AGX Orin hits 64GB, everything else needs rack servers with cooling and nois... | 2025-06-12T21:32:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l9yk8v/is_amd_ryzen_ai_max_395_really_the_only_consumer/ | Single-Blackberry866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9yk8v | false | null | t3_1l9yk8v | /r/LocalLLaMA/comments/1l9yk8v/is_amd_ryzen_ai_max_395_really_the_only_consumer/ | false | false | self | 44 | null |
Any known VPS with AMD gpus at "reasonable" prices? | 1 | [removed] | 2025-06-12T21:29:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l9yi6m/any_known_vps_with_amd_gpus_at_reasonable_prices/ | daddyodevil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9yi6m | false | null | t3_1l9yi6m | /r/LocalLLaMA/comments/1l9yi6m/any_known_vps_with_amd_gpus_at_reasonable_prices/ | false | false | self | 1 | null |
Ready Player Own: Building The Box Before Big Tech Does | 1 | 2025-06-12T21:14:37 | https://medium.com/@vanuan/ready-player-own-building-the-box-before-big-tech-does-537e2b879de7 | Single-Blackberry866 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1l9y4wt | false | null | t3_1l9y4wt | /r/LocalLLaMA/comments/1l9y4wt/ready_player_own_building_the_box_before_big_tech/ | false | false | default | 1 | null | |
Ready Player Own: Building The Box Before Big Tech Does | 1 | [removed] | 2025-06-12T21:09:56 | https://medium.com/@vanuan/ready-player-own-building-the-box-before-big-tech-does-537e2b879de7 | Single-Blackberry866 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1l9y0ua | false | null | t3_1l9y0ua | /r/LocalLLaMA/comments/1l9y0ua/ready_player_own_building_the_box_before_big_tech/ | false | false | default | 1 | null |
Cheapest way to run 32B model? | 35 | Id like to build a home server for my family to use llms that we can actually control. I know how to setup a local server and make it run etc but I'm having trouble keeping up with all the new hardware coming out.<br>What's the best bang for the buck for a 32b model right now? Id rather have a low power consumption solu... | 2025-06-12T20:55:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l9xnt7/cheapest_way_to_run_32b_model/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9xnt7 | false | null | t3_1l9xnt7 | /r/LocalLLaMA/comments/1l9xnt7/cheapest_way_to_run_32b_model/ | false | false | self | 35 | null |
Will this LLM setup work on both Linux and Windows? | 1 | [removed] | 2025-06-12T20:54:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l9xn41/will_this_llm_setup_work_on_both_linux_and_windows/ | Highwaytothebeach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9xn41 | false | null | t3_1l9xn41 | /r/LocalLLaMA/comments/1l9xn41/will_this_llm_setup_work_on_both_linux_and_windows/ | false | false | self | 1 | null |
First PC Build for AI & Gaming - Advice Needed | 1 | [removed] | 2025-06-12T20:45:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l9xfh7/first_pc_build_for_ai_gaming_advice_needed/ | Lufi_parrot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9xfh7 | false | null | t3_1l9xfh7 | /r/LocalLLaMA/comments/1l9xfh7/first_pc_build_for_ai_gaming_advice_needed/ | false | false | self | 1 | null |
Why No One Is Using Mamba Anymore | 1 | [removed] | 2025-06-12T20:35:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l9x6gt/why_no_one_is_using_mamba_anymore/ | paranoidray | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9x6gt | false | null | t3_1l9x6gt | /r/LocalLLaMA/comments/1l9x6gt/why_no_one_is_using_mamba_anymore/ | false | false | self | 1 | null |
Do mini PCs provide a superb LLM inference chance ? | 1 | [removed] | 2025-06-12T20:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l9wlxq/do_mini_pcs_provide_a_superb_llm_inference_chance/ | Highwaytothebeach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9wlxq | false | null | t3_1l9wlxq | /r/LocalLLaMA/comments/1l9wlxq/do_mini_pcs_provide_a_superb_llm_inference_chance/ | false | false | self | 1 | null |
I wanted to ask what you mainly use locally served models for? | 1 | [removed] | 2025-06-12T20:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l9wc3b/i_wanted_to_ask_what_you_mainly_use_locally/ | Repsol_Honda_PL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9wc3b | false | null | t3_1l9wc3b | /r/LocalLLaMA/comments/1l9wc3b/i_wanted_to_ask_what_you_mainly_use_locally/ | false | false | self | 1 | null |
Meta Is Offering Nine Figure Salaries to Build Superintelligent AI. Mark going All In. | 287 | https://www.entrepreneur.com/business-news/meta-is-offering-nine-figure-pay-for-superintelligence-team/493040 | 2025-06-12T20:00:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l9wbaw/meta_is_offering_nine_figure_salaries_to_build/ | Neon_Nomad45 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9wbaw | false | null | t3_1l9wbaw | /r/LocalLLaMA/comments/1l9wbaw/meta_is_offering_nine_figure_salaries_to_build/ | false | false | self | 287 | {'enabled': False, 'images': [{'id': 'Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go.jpeg?width=108&crop=smart&auto=webp&s=9c96066f5b6b9c6e530c5980ef18454fc769203f', 'width': 108}, {'height': 143, 'url': '... |
Best Model/Hardware for coding locally - $2-$3k budget | 5 | Looking to use Roo Code with a locally hosted LLM.<br>Would like to get thoughts on what hardware and model to look at with a budget of about $2k - $3k.<br>I understand that this is somewhat of a heated topic at the moment, so I'm looking to ideally hear from folks who are doing local coding with this type of setup in thi... | 2025-06-12T19:36:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l9vpzj/best_modelhardware_for_coding_locally_23k_budget/ | G3rmanaviator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9vpzj | false | null | t3_1l9vpzj | /r/LocalLLaMA/comments/1l9vpzj/best_modelhardware_for_coding_locally_23k_budget/ | false | false | self | 5 | null |
I love SillyTavern, but my friends hate me for recommending it | 1 | [removed] | 2025-06-12T19:35:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l9vovt/i_love_sillytavern_but_my_friends_hate_me_for/ | RIPT1D3_Z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9vovt | false | null | t3_1l9vovt | /r/LocalLLaMA/comments/1l9vovt/i_love_sillytavern_but_my_friends_hate_me_for/ | false | false | self | 1 | null |
Guide: Install llama.cpp with rocm support on opensuse tumbleweed | 1 | [removed] | 2025-06-12T19:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l9vnvt/guide_install_llamacpp_with_rocm_support_on/ | rohan-sircar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9vnvt | false | null | t3_1l9vnvt | /r/LocalLLaMA/comments/1l9vnvt/guide_install_llamacpp_with_rocm_support_on/ | false | false | self | 1 | null |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [removed] | 2025-06-12T19:16:15 | https://www.reddit.com/gallery/1l9v7j0 | Heralax_Tekran | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l9v7j0 | false | null | t3_1l9v7j0 | /r/LocalLLaMA/comments/1l9v7j0/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM', 'resolutions': [{'height': 132, 'url': 'https://external-preview.redd.it/BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM.jpeg?width=108&crop=smart&auto=webp&s=747bf6ef5a57c27dba3bb46dd4baa09d8ef755a2', 'width': 108}, {'height': 264, 'url': '... | |
Well... | 0 | 2025-06-12T19:01:44 | Mr_Moonsilver | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l9uu58 | false | null | t3_1l9uu58 | /r/LocalLLaMA/comments/1l9uu58/well/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fc0apk3mnj6f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/fc0apk3mnj6f1.png?width=108&crop=smart&auto=webp&s=e3d14836d55d07cdf407ea46b382bc6f81dfa045', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/fc0apk3mnj6f1.png?width=216&crop=smart&auto=we... | ||
Apple be like... | 0 | 2025-06-12T18:59:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l9urju/apple_be_like/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9urju | false | null | t3_1l9urju | /r/LocalLLaMA/comments/1l9urju/apple_be_like/ | false | false | 0 | null | ||
inclusionAI/Ming-Lite-Omni · Hugging Face | 35 | 2025-06-12T18:54:32 | https://huggingface.co/inclusionAI/Ming-Lite-Omni | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l9uncm | false | null | t3_1l9uncm | /r/LocalLLaMA/comments/1l9uncm/inclusionaimingliteomni_hugging_face/ | false | false | default | 35 | {'enabled': False, 'images': [{'id': 'hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8.png?width=108&crop=smart&auto=webp&s=cc1210fcb70a213cc463f1e58b0ef0fc196a1fe9', 'width': 108}, {'height': 116, 'url': 'h... | |
🚀 Hooshyar AI — Building a Fully Local, Privacy-First AI Personal Assistant (Looking for Support & Collaborators!) | 1 | [removed] | 2025-06-12T18:47:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l9ugkk/hooshyar_ai_building_a_fully_local_privacyfirst/ | CookieFar26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9ugkk | false | null | t3_1l9ugkk | /r/LocalLLaMA/comments/1l9ugkk/hooshyar_ai_building_a_fully_local_privacyfirst/ | false | false | self | 1 | null |
media request - USPTO/AI | 1 | [removed] | 2025-06-12T18:45:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l9ueyt/media_request_usptoai/ | IPreporter999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9ueyt | false | null | t3_1l9ueyt | /r/LocalLLaMA/comments/1l9ueyt/media_request_usptoai/ | false | false | self | 1 | null |
What enterprise LLM platforms or AI tools are best for internal use cases like compliance automation, wholesaler enablement, and document intelligence? | 1 | [removed] | 2025-06-12T18:44:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l9uegm/what_enterprise_llm_platforms_or_ai_tools_are/ | InvestedThinkers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9uegm | false | null | t3_1l9uegm | /r/LocalLLaMA/comments/1l9uegm/what_enterprise_llm_platforms_or_ai_tools_are/ | false | false | self | 1 | null |
Drummer's Agatha 111B v1 - Command A tune with less positivity and better creativity! | 48 | PSA! My testers at BeaverAI are pooped!<br>Cydonia needs your help! We're looking to release a v3.1 but came up with several candidates with their own strengths and weaknesses. They've all got tons of potential but we can only have ONE v3.1.<br>Help me pick the winner from these:<br>* [https://huggingface.co/BeaverAI/Cydonia... | 2025-06-12T18:43:11 | https://huggingface.co/TheDrummer/Agatha-111B-v1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l9ucsv | false | null | t3_1l9ucsv | /r/LocalLLaMA/comments/1l9ucsv/drummers_agatha_111b_v1_command_a_tune_with_less/ | false | false | default | 48 | {'enabled': False, 'images': [{'id': '8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs.png?width=108&crop=smart&auto=webp&s=dda555f95854492ea9240e78b5828951fa764ca9', 'width': 108}, {'height': 116, 'url': 'h... |
Drummer's Agatha 111B v1 - Command A tune with less positivity and better creativity! | 1 | [removed] | 2025-06-12T18:41:41 | https://huggingface.co/TheDrummer/Agatha-111B-v1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l9ubdy | false | null | t3_1l9ubdy | /r/LocalLLaMA/comments/1l9ubdy/drummers_agatha_111b_v1_command_a_tune_with_less/ | false | false | default | 1 | null |
Mixed GPU inference | 16 | Decided to hop on the RTX 6000 PRO bandwagon. Now my question is can I run inference accross 3 different cards say for example the 6000, a 4090 and a 3090 (144gb VRAM total) using ollama? Are there any issues or downsides with doing this?<br>Also bonus question big parameter model with low precision quant or full precisi... | 2025-06-12T18:38:38 | https://www.reddit.com/gallery/1l9u8fv | cruzanstx | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l9u8fv | false | null | t3_1l9u8fv | /r/LocalLLaMA/comments/1l9u8fv/mixed_gpu_inference/ | false | false | 16 | {'enabled': True, 'images': [{'id': 'BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94.jpeg?width=108&crop=smart&auto=webp&s=e5d6937ac09c025d4936cb530f4ba964e537d0b0', 'width': 108}, {'height': 288, 'url': '... |
Media Request - USPTO RFI for AI tools | 1 | [removed] | 2025-06-12T18:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l9u767/media_request_uspto_rfi_for_ai_tools/ | IPreporter999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9u767 | false | null | t3_1l9u767 | /r/LocalLLaMA/comments/1l9u767/media_request_uspto_rfi_for_ai_tools/ | false | false | self | 1 | null |
Why Search Sucks! (But First, A Brief History) | 1 | 2025-06-12T18:13:54 | https://youtu.be/vZVcBUnre-c | kushalgoenka | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1l9tl7m | false | {'oembed': {'author_name': 'Kushal Goenka', 'author_url': 'https://www.youtube.com/@KushalGoenka', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vZVcBUnre-c?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1l9tl7m | /r/LocalLLaMA/comments/1l9tl7m/why_search_sucks_but_first_a_brief_history/ | false | false | default | 1 | null | |
devstral does not code in c++ | 1 | [removed] | 2025-06-12T17:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l9t2ap/devstral_does_not_code_in_c/ | akierum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9t2ap | false | null | t3_1l9t2ap | /r/LocalLLaMA/comments/1l9t2ap/devstral_does_not_code_in_c/ | false | false | self | 1 | null |
The guide to building MCP agents using OpenAI Agents SDK | 0 | Building MCP agents felt a little complex to me, so I took some time to learn about it and created a [free guide](https://levelup.gitconnected.com/the-complete-guide-to-building-mcp-agents-ec877f30136d?source=friends_link&sk=f97341c5b0f7cfb735cc49749fa88f32). Covered the following topics in detail.<br>1. Brief overvi... | 2025-06-12T16:58:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l9rnep/the_guide_to_building_mcp_agents_using_openai/ | anmolbaranwal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9rnep | false | null | t3_1l9rnep | /r/LocalLLaMA/comments/1l9rnep/the_guide_to_building_mcp_agents_using_openai/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'R9A5HK-2j0v7yNVydxgAeECVZ8i4g3qfNDvZfNb-aDM', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/5ajYo5dYXk8vYcgCyzN0m0_1xW85yuW2HvFPuWd2_s8.jpg?width=108&crop=smart&auto=webp&s=9e00f69182dd7b561b6baf4fdada6dd716d8a3d5', 'width': 108}, {'height': 144, 'url': 'h... |
Qwen3-72B-Embiggened | 175 | 2025-06-12T16:49:08 | https://huggingface.co/cognitivecomputations/Qwen3-72B-Embiggened | TKGaming_11 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l9rejn | false | null | t3_1l9rejn | /r/LocalLLaMA/comments/1l9rejn/qwen372bembiggened/ | false | false | 175 | {'enabled': False, 'images': [{'id': '3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc.png?width=108&crop=smart&auto=webp&s=7e15934d9ca0b81ee373ab4d5a0a90ea09a30c12', 'width': 108}, {'height': 116, 'url': 'h... | ||
Seedance 1.0 | 8 | 2025-06-12T16:47:04 | https://seed.bytedance.com/en/seedance | kamikazechaser | seed.bytedance.com | 1970-01-01T00:00:00 | 0 | {} | 1l9rcoj | false | null | t3_1l9rcoj | /r/LocalLLaMA/comments/1l9rcoj/seedance_10/ | false | false | default | 8 | null | |
I made an open source (free) iOS app for this community. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your desktop Mac. | 1 | [removed] | 2025-06-12T16:24:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l9qrwb/i_made_an_open_source_free_ios_app_for_this/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9qrwb | false | null | t3_1l9qrwb | /r/LocalLLaMA/comments/1l9qrwb/i_made_an_open_source_free_ios_app_for_this/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=108&crop=smart&auto=webp&s=01032f9f1c83429c54cd66c3de219a1dacb3bbb0', 'width': 108}, {'height': 108, 'url': 'h... |
I made an open source, free iOS app for this community. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your Mac. | 1 | [removed] | 2025-06-12T16:13:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l9qhsf/i_made_an_open_source_free_ios_app_for_this/ | matteoianni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9qhsf | false | null | t3_1l9qhsf | /r/LocalLLaMA/comments/1l9qhsf/i_made_an_open_source_free_ios_app_for_this/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU.png?width=108&crop=smart&auto=webp&s=590963253e1d7889f33d6e5bea945d1cb0f3a4be', 'width': 108}, {'height': 108, 'url': 'h... |
Optimal server for inference with large models | 1 | [removed] | 2025-06-12T16:06:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l9qayp/optimal_server_for_inference_with_large_models/ | slavik-f | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9qayp | false | null | t3_1l9qayp | /r/LocalLLaMA/comments/1l9qayp/optimal_server_for_inference_with_large_models/ | false | false | self | 1 | null |
I made a free, open source iOS app for this community. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your desktop Mac. | 1 | [removed] | 2025-06-12T16:03:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l9q87m/i_made_a_free_open_source_ios_app_for_this/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9q87m | false | null | t3_1l9q87m | /r/LocalLLaMA/comments/1l9q87m/i_made_a_free_open_source_ios_app_for_this/ | false | false | self | 1 | null |
Open Source, free iOS Chatbot that you can use away from home to interact with an LLM that runs locally on your Mac at home. | 1 | [removed] | 2025-06-12T16:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l9q5dc/open_source_free_ios_chatbot_that_you_can_use/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9q5dc | false | null | t3_1l9q5dc | /r/LocalLLaMA/comments/1l9q5dc/open_source_free_ios_chatbot_that_you_can_use/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=108&crop=smart&auto=webp&s=01032f9f1c83429c54cd66c3de219a1dacb3bbb0', 'width': 108}, {'height': 108, 'url': 'h... |
I made a very cool (free) iOS app. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your Mac. | 1 | [removed] | 2025-06-12T15:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l9px3z/i_made_a_very_cool_free_ios_app_its_a_chatbot/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9px3z | false | null | t3_1l9px3z | /r/LocalLLaMA/comments/1l9px3z/i_made_a_very_cool_free_ios_app_its_a_chatbot/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=108&crop=smart&auto=webp&s=01032f9f1c83429c54cd66c3de219a1dacb3bbb0', 'width': 108}, {'height': 108, 'url': 'h... |
🧙♂️ I Built a Local AI Dungeon Master – Meet Dungeo_ai (Open Source & Powered by your local LLM ) | 50 | https://reddit.com/link/1l9pwk1/video/u4614vthpi6f1/player<br>Hey folks!<br>I’ve been building something I'm super excited to finally share:<br>🎲 Dungeo\_ai – a fully local, AI-powered Dungeon Master designed for immersive solo RPGs, worldbuilding, and roleplay.<br>This project it's free and for now it connect to ollama(llm) ... | 2025-06-12T15:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l9pwk1/i_built_a_local_ai_dungeon_master_meet_dungeo_ai/ | Reasonable_Brief578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9pwk1 | false | null | t3_1l9pwk1 | /r/LocalLLaMA/comments/1l9pwk1/i_built_a_local_ai_dungeon_master_meet_dungeo_ai/ | false | false | 50 | {'enabled': False, 'images': [{'id': '1nXVLMHSKoC4uc_XliGvQaC1UWwnoEqL3u0Xe2oRCf4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VKXieTbOFzGU2aZe9EDyviI58NBmBHkcxoJcwzIpt0A.jpg?width=108&crop=smart&auto=webp&s=6f5b026dfa350f996d44056b731cfee8edd67977', 'width': 108}, {'height': 108, 'url': 'h... |
Nanonets-OCR-s: An Open-Source Image-to-Markdown Model with LaTeX, Tables, Signatures, checkboxes & More | 325 | We're excited to share **Nanonets-OCR-s**, a powerful and lightweight (3B) VLM model that converts documents into clean, structured **Markdown**. This model is trained to understand document structure and content context (like tables, equations, images, plots, watermarks, checkboxes, etc.).<br>🔍 **Key Features:**<br>* **... | 2025-06-12T15:19:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l9p54x/nanonetsocrs_an_opensource_imagetomarkdown_model/ | SouvikMandal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9p54x | false | null | t3_1l9p54x | /r/LocalLLaMA/comments/1l9p54x/nanonetsocrs_an_opensource_imagetomarkdown_model/ | false | false | 325 | {'enabled': False, 'images': [{'id': '_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY.png?width=108&crop=smart&auto=webp&s=cb7942f143c257c4fa7a42d2b63993c819dab9b7', 'width': 108}, {'height': 116, 'url': 'h... |
[update] Restructured repo under rvn-tools — modular CLI for LLM formats | 11 | Quick update.<br>Yesterday I posted about \`rvn-convert\`, a Rust tool for converting safetensors to GGUF.<br>While fixing bugs today, I also restructured the project under \`rvn-tools\` — a modular, CLI-oriented Rust-native toolkit for LLM model formats, inference workflows, and data pipelines.<br>🔧 What's in so far:... | 2025-06-12T15:12:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l9oyt7/update_restructured_repo_under_rvntools_modular/ | rvnllm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9oyt7 | false | null | t3_1l9oyt7 | /r/LocalLLaMA/comments/1l9oyt7/update_restructured_repo_under_rvntools_modular/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s.png?width=108&crop=smart&auto=webp&s=dc92a3863a13234a2265d490aa7cf6ee54fd6566', 'width': 108}, {'height': 108, 'url': 'h... |
Transformer Lab Now Supports Diffusion Model Training in Addition to LLM Training | 82 | In addition to LLM training and inference, we're excited to have just launched Diffusion Model inference and training. It's all open source! We'd love your feedback and to see what you build.<br>In the platform we support most major open Diffusion models (including SDXL & Flux). The platform supports inpainting, img2img,... | 2025-06-12T15:09:03 | aliasaria | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l9ovq7 | false | null | t3_1l9ovq7 | /r/LocalLLaMA/comments/1l9ovq7/transformer_lab_now_supports_diffusion_model/ | false | false | 82 | {'enabled': True, 'images': [{'id': '_xYpsuq7aXBrygYusZ4m8czLrPVdDaO2V3iXEpTmeLw', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png?width=108&crop=smart&auto=webp&s=e07788a20d1b72cb5985f6ce1b8d6a999ca37d15', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png... |
[P] Solving Tower of Hanoi for N ≥ 15 with LLMs: It’s Not About Model Size, It’s About Prompt Engineering | 1 | [removed] | 2025-06-12T14:38:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l9o54l/p_solving_tower_of_hanoi_for_n_15_with_llms_its/ | Pale-Entertainer-386 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9o54l | false | null | t3_1l9o54l | /r/LocalLLaMA/comments/1l9o54l/p_solving_tower_of_hanoi_for_n_15_with_llms_its/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=108&crop=smart&auto=webp&s=71c7c023a9ec57f87927f898daaedbe1dca2b02a', 'width': 108}, {'height': 113, 'url': 'h... |
Tired of losing great ChatGPT messages and having to scroll back all the way? | 0 | I got tired of endlessly scrolling to find back great ChatGPT messages I'd forgotten to save. It drove me crazy so I built something to fix it.<br>Honestly, I am very surprised how much I ended using it.<br>It's actually super useful when you are building a project, doing research or coming with a plan because you can save... | 2025-06-12T14:33:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l9o0ct/tired_of_losing_great_chatgpt_messages_and_having/ | cedparadis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9o0ct | false | null | t3_1l9o0ct | /r/LocalLLaMA/comments/1l9o0ct/tired_of_losing_great_chatgpt_messages_and_having/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '5HcRO6e7F1edK4P46xwComq2h8cvm5EOFg0431l1Crc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IKNO4ysc9NLlDVA-_8hcNCcyOvqj3ZgUDXMmMRh8Re4.jpg?width=108&crop=smart&auto=webp&s=c42b1b400f46ef4ceaf07a8f05523dcb3ed9a8a2', 'width': 108}], 'source': {'height': 12... |
Tired of losing great ChatGPT messages and having to scroll back all the way? I built SnapIt, a Chrome extension to instantly save, organize & export them! | 0 | I got tired of endlessly scrolling to find back great ChatGPT messages I'd forgotten to save. It drove me crazy so I built something to fix it.<br>Honestly, I am very surprised how much I ended using it.<br>It's actually super useful when you are building a project, doing research or coming with a plan because you can save... | 2025-06-12T14:07:23 | https://www.reddit.com/gallery/1l9ndww | cedparadis | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l9ndww | false | null | t3_1l9ndww | /r/LocalLLaMA/comments/1l9ndww/tired_of_losing_great_chatgpt_messages_and_having/ | false | false | 0 | null |
Using LLM's with Home Assistant + Voice Integration | 8 | Looking to set up home assistant at home with a LLM connected to make the assistant more conversational. It doesn't need to have superior depth of knowledge, but I am looking for something that can respond creatively, conversationally, dynamically to a variety of requests centered around IoT tasks. In my head this is s... | 2025-06-12T14:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l9ndp2/using_llms_with_home_assistant_voice_integration/ | nat2r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9ndp2 | false | null | t3_1l9ndp2 | /r/LocalLLaMA/comments/1l9ndp2/using_llms_with_home_assistant_voice_integration/ | false | false | self | 8 | null |
ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | 39 | We introduce ABBA, a new architecture for Parameter-Efficient Fine-Tuning (PEFT) that significantly outperforms LoRA and all its major variants across a broad range of benchmarks, all under the same parameter budget.<br>Most PEFT methods, including LoRA, represent weight updates using a low-rank decomposition added to th... | 2025-06-12T14:01:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l9n911/abba_highly_expressive_hadamard_product/ | AccomplishedCode4689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9n911 | false | null | t3_1l9n911 | /r/LocalLLaMA/comments/1l9n911/abba_highly_expressive_hadamard_product/ | false | false | self | 39 | null |
Open WebUI Bug Reports Immediately Closed by Maintainer | 0 | [removed] | 2025-06-12T13:41:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l9msio/open_webui_bug_reports_immediately_closed_by/ | liquidki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9msio | false | null | t3_1l9msio | /r/LocalLLaMA/comments/1l9msio/open_webui_bug_reports_immediately_closed_by/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8.png?width=108&crop=smart&auto=webp&s=55da3d89a6b0fc42b49a5c73bad04c4c87297391', 'width': 108}, {'height': 108, 'url': 'h... |
Locally running, scriptable process to extract Form Data from scanned (!) PDF? | 1 | [removed] | 2025-06-12T13:25:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l9meqh/locally_running_scriptable_process_to_extract/ | cts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9meqh | false | null | t3_1l9meqh | /r/LocalLLaMA/comments/1l9meqh/locally_running_scriptable_process_to_extract/ | false | false | 1 | null | |
[Update] Emotionally-Aware VN Dialogue Dataset – Deep Context Tagging, ShareGPT-Style Structure | 27 | Hey again everyone,<br>Following up on my earlier posts about converting a visual novel script into a fine-tuning dataset, I’ve gone back and improved the format significantly thanks to feedback here.<br>The goal is the same: create expressive, roleplay-friendly dialogue data that captures emotion, tone, character personali... | 2025-06-12T13:14:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l9m6dc/update_emotionallyaware_vn_dialogue_dataset_deep/ | Akowmako | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9m6dc | false | null | t3_1l9m6dc | /r/LocalLLaMA/comments/1l9m6dc/update_emotionallyaware_vn_dialogue_dataset_deep/ | false | false | self | 27 | null |
Spy search: Open source that faster than perplexity | 12 | I am really happy !!! My open source is somehow faster than perplexity yeahhhh so happy. Really really happy and want to share with you guys !! ( :( someone said it's copy paste they just never ever use mistral + 5090 :)))) & of course they don't even look at my open source hahahah )
https://reddit.com/link/1l9m32y/vi... | 2025-06-12T13:10:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l9m32y/spy_search_open_source_that_faster_than_perplexity/ | jasonhon2013 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9m32y | false | null | t3_1l9m32y | /r/LocalLLaMA/comments/1l9m32y/spy_search_open_source_that_faster_than_perplexity/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac.png?width=108&crop=smart&auto=webp&s=6fe4b13253cbd4c0952b3138599399866fbd3245', 'width': 108}, {'height': 108, 'url': 'h... | |
How to Use Intel AI Playground Effectively and Run LLMs Locally (Even Offline) | 0 | 2025-06-12T12:39:03 | https://www.digit.in/features/laptops/how-to-use-intel-ai-playground-effectively-and-run-llms-locally-even-offline.html | reps_up | digit.in | 1970-01-01T00:00:00 | 0 | {} | 1l9lf3t | false | null | t3_1l9lf3t | /r/LocalLLaMA/comments/1l9lf3t/how_to_use_intel_ai_playground_effectively_and/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ogL23zxjMMUwgdU0Uv-HEKuk_9SWwWKc6pbNxlVSVR0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6uoN3INtNDJi2_rugNt6qBdUd0zUpuy9DiQM9z8qpDU.jpg?width=108&crop=smart&auto=webp&s=6f9a3a9b6b6939543251f4e0a3b10b9a71aa87eb', 'width': 108}, {'height': 121, 'url': 'h... | ||
Petition: Ban 'announcement of announcement' posts | 803 | There's no reason to have 5 posts a week about OpenAI announcing that they will release a model then delaying the release date it then announcing it's gonna be *amazing***™** then announcing they will announce a new update in a month ad infinitum. Fuck those grifters. | 2025-06-12T12:36:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l9lddr/petition_ban_announcement_of_announcement_posts/ | RangaRea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9lddr | false | null | t3_1l9lddr | /r/LocalLLaMA/comments/1l9lddr/petition_ban_announcement_of_announcement_posts/ | false | false | self | 803 | null |
Youtube transcript summarizer ? | 1 | [removed] | 2025-06-12T12:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l9l310/youtube_transcript_summarizer/ | _throawayplop_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9l310 | false | null | t3_1l9l310 | /r/LocalLLaMA/comments/1l9l310/youtube_transcript_summarizer/ | false | false | self | 1 | null |
New SWE-LLMs results | 1 | [deleted] | 2025-06-12T12:12:10 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l9kvgx | false | null | t3_1l9kvgx | /r/LocalLLaMA/comments/1l9kvgx/new_swellms_results/ | false | false | default | 1 | null | ||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data | 1 | [removed] | 2025-06-12T11:57:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l9kl6c | false | null | t3_1l9kl6c | /r/LocalLLaMA/comments/1l9kl6c/swerebench_major_update_tool_usage_claude_sonnet/ | false | false | default | 1 | null | ||
A major update for [SWE-rebench] | 1 | [removed] | 2025-06-12T11:57:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l9kkqg | false | null | t3_1l9kkqg | /r/LocalLLaMA/comments/1l9kkqg/a_major_update_for_swerebench/ | false | false | default | 1 | null | ||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data | 1 | [removed] | 2025-06-12T11:56:18 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l9kk6p | false | null | t3_1l9kk6p | /r/LocalLLaMA/comments/1l9kk6p/swerebench_major_update_tool_usage_claude_sonnet/ | false | false | default | 1 | null | ||
We updated our benchmark for SWE agents | 1 | [removed] | 2025-06-12T11:54:59 | Fabulous_Pollution10 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l9kjb5 | false | null | t3_1l9kjb5 | /r/LocalLLaMA/comments/1l9kjb5/we_updated_our_benchmark_for_swe_agents/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'XtNZDYvO_YecbmR1BXXdymNNz2lRGihq_JzW2Sl-6W4', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/9z130azajh6f1.png?width=108&crop=smart&auto=webp&s=8dd406f9b1d7e7da3f429ee56817fbd7fa9cd0cf', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/9z130azajh6f1.png... | ||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data | 1 | [removed] | 2025-06-12T11:51:03 | Fabulous_Pollution10 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l9kgth | false | null | t3_1l9kgth | /r/LocalLLaMA/comments/1l9kgth/swerebench_major_update_tool_usage_claude_sonnet/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'cjGIKk2H4FF_dwiLw0oxDvn6vT8MW4ZVAHsk8RLJJdw', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png?width=108&crop=smart&auto=webp&s=a20a33c3ae67b993ea4a5420a63d28ccbd12772c', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png... | ||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data | 1 | [removed] | 2025-06-12T11:43:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l9kc7u | false | null | t3_1l9kc7u | /r/LocalLLaMA/comments/1l9kc7u/swerebench_major_update_tool_usage_claude_sonnet/ | false | false | default | 1 | null | ||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data | 1 | [removed] | 2025-06-12T11:38:18 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l9k8hu | false | null | t3_1l9k8hu | /r/LocalLLaMA/comments/1l9k8hu/swerebench_major_update_tool_usage_claude_sonnet/ | false | false | default | 1 | null | ||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data | 1 | [removed] | 2025-06-12T11:34:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l9k664 | false | null | t3_1l9k664 | /r/LocalLLaMA/comments/1l9k664/swerebench_major_update_tool_usage_claude_sonnet/ | false | false | default | 1 | null | ||
does llama.cpp have parallel requests | 1 | [removed] | 2025-06-12T11:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l9k2xv/does_llamacpp_have_parallel_requests/ | rithwik3112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9k2xv | false | null | t3_1l9k2xv | /r/LocalLLaMA/comments/1l9k2xv/does_llamacpp_have_parallel_requests/ | false | false | self | 1 | null |
does llama.cpp have parallel requests | 1 | [removed] | 2025-06-12T11:26:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l9k0zw/does_llamacpp_have_parallel_requests/ | rithwik3112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9k0zw | false | null | t3_1l9k0zw | /r/LocalLLaMA/comments/1l9k0zw/does_llamacpp_have_parallel_requests/ | false | false | self | 1 | null |
Dive into Minara's insights - 用typescript帮我写一下mcp client的代码 | 1 | 2025-06-12T11:18:46 | https://xneuro-app.dev.nftgo.dev/share/chat/684ab7858441bfd401f1e965 | LowEntrepreneur7276 | xneuro-app.dev.nftgo.dev | 1970-01-01T00:00:00 | 0 | {} | 1l9jvxr | false | null | t3_1l9jvxr | /r/LocalLLaMA/comments/1l9jvxr/dive_into_minaras_insights_用typescript帮我写一下mcp/ | false | false | default | 1 | null | |
Dive into Minara's insights - 用typescript帮我写一下mcp client的代码 | 1 | 2025-06-12T11:13:16 | https://xneuro-app.dev.nftgo.dev/share/chat/684ab5e28441bfd401f1e964 | LowEntrepreneur7276 | xneuro-app.dev.nftgo.dev | 1970-01-01T00:00:00 | 0 | {} | 1l9jsgx | false | null | t3_1l9jsgx | /r/LocalLLaMA/comments/1l9jsgx/dive_into_minaras_insights_用typescript帮我写一下mcp/ | false | false | default | 1 | null | |
Evaluate and monitor your Hybrid Search RAG | LangGraph, Qdrant miniCOIL, Opik, and DeepSeek-R1 | 1 | [removed] | 2025-06-12T11:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l9jq07/evaluate_and_monitor_your_hybrid_search_rag/ | External_Ad_11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9jq07 | false | null | t3_1l9jq07 | /r/LocalLLaMA/comments/1l9jq07/evaluate_and_monitor_your_hybrid_search_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8eW8y8XLusqJ8pw7erbgy1SslG9WzT_AqCjymDnthgM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VvA0IQ70_iSMgT9TOk4U3i9Qxi6UD20KAAX2xaB6VoI.jpg?width=108&crop=smart&auto=webp&s=c8ef9434e35d345e59f17e841f10433604c86882', 'width': 108}, {'height': 121, 'url': 'h... |
A new swarm-style distributed pretraining architecture has just launched, working on a 15B model | 49 | Macrocosmos has released IOTA, a collaborative distributed pretraining network. Participants contribute compute to collectively pretrain a 15B model. It’s a model and data parallel setup, meaning people can work on disjointed parts of it at the same time.
It’s also been designed with a lower barrier to entry, as nobod... | 2025-06-12T11:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l9jm52/a_new_swarmstyle_distributed_pretraining/ | emission-control | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9jm52 | false | null | t3_1l9jm52 | /r/LocalLLaMA/comments/1l9jm52/a_new_swarmstyle_distributed_pretraining/ | false | false | self | 49 | null |
How far are we..from running a veo 3 range model on local | 1 | [removed] | 2025-06-12T10:16:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l9iujw/how_far_are_wefrom_running_a_veo_3_range_model_on/ | maneesh_sandra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9iujw | false | null | t3_1l9iujw | /r/LocalLLaMA/comments/1l9iujw/how_far_are_wefrom_running_a_veo_3_range_model_on/ | false | false | self | 1 | null |
Google and Microsoft vs OpenAI and Anthropic, a fun visualization of their open releases on Hugging Face in the past year (Julien Chaumond on LinkedIn) | 560 | 2025-06-12T09:21:44 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l9hzb5 | false | null | t3_1l9hzb5 | /r/LocalLLaMA/comments/1l9hzb5/google_and_microsoft_vs_openai_and_anthropic_a/ | false | false | 560 | {'enabled': True, 'images': [{'id': '-x-YTBzQZWflgljvfYv30yAbEg1nc9bfcBK-y3rEmSQ', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jpeg?width=108&crop=smart&auto=webp&s=650928a703743319320792a48a96201cbfdd01fe', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jp... | |||
Guide: Install llama.cpp with rocm support on opensuse tumbleweed | 1 | [removed] | 2025-06-12T09:13:00 | https://dev.to/rohan-sircar/unlocking-the-power-of-llms-on-opensuse-with-amd-a-step-by-step-guide-to-installing-rocm-and-1doe | rohan-sircar | dev.to | 1970-01-01T00:00:00 | 0 | {} | 1l9huh8 | false | null | t3_1l9huh8 | /r/LocalLLaMA/comments/1l9huh8/guide_install_llamacpp_with_rocm_support_on/ | false | false | default | 1 | null |
Are we going the wrong way with LLM development? | 1 | [removed] | 2025-06-12T07:47:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l9gm8b/are_we_going_the_wrong_way_with_llm_development/ | Acrobatic_Plate9537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9gm8b | false | null | t3_1l9gm8b | /r/LocalLLaMA/comments/1l9gm8b/are_we_going_the_wrong_way_with_llm_development/ | false | false | self | 1 | null |
Are we going the wrong way with LLM development? | 1 | [removed] | 2025-06-12T07:40:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l9gidm/are_we_going_the_wrong_way_with_llm_development/ | Acrobatic_Plate9537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9gidm | false | null | t3_1l9gidm | /r/LocalLLaMA/comments/1l9gidm/are_we_going_the_wrong_way_with_llm_development/ | false | false | 1 | null | |
Are we going the wrong way with LLM development? | 1 | [removed] | 2025-06-12T07:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l9ghox/are_we_going_the_wrong_way_with_llm_development/ | Acrobatic_Plate9537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9ghox | false | null | t3_1l9ghox | /r/LocalLLaMA/comments/1l9ghox/are_we_going_the_wrong_way_with_llm_development/ | false | false | self | 1 | null |
What happened to Yi? | 113 | [Yi](https://huggingface.co/01-ai) had some of the best local models in the past, but this year there haven't been any news about them. Does anyone know what happened? | 2025-06-12T07:38:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l9ghjc/what_happened_to_yi/ | undefdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9ghjc | false | null | t3_1l9ghjc | /r/LocalLLaMA/comments/1l9ghjc/what_happened_to_yi/ | false | false | self | 113 | {'enabled': False, 'images': [{'id': 'hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI.png?width=108&crop=smart&auto=webp&s=09976e12870c91a13b5ab4ea6f395f7f8a573b8b', 'width': 108}, {'height': 116, 'url': 'h... |
Are we going the wrong way with LLM development? | 1 | [removed] | 2025-06-12T07:34:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l9gfe4/are_we_going_the_wrong_way_with_llm_development/ | Acrobatic_Plate9537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9gfe4 | false | null | t3_1l9gfe4 | /r/LocalLLaMA/comments/1l9gfe4/are_we_going_the_wrong_way_with_llm_development/ | false | false | self | 1 | null |
What do i need to run a big deepseek r1 model locally on gpus? | 1 | [removed] | 2025-06-12T07:28:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l9gbyc/what_do_i_need_to_run_a_big_deepseek_r1_model/ | Drasek666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9gbyc | false | null | t3_1l9gbyc | /r/LocalLLaMA/comments/1l9gbyc/what_do_i_need_to_run_a_big_deepseek_r1_model/ | false | false | self | 1 | null |
Parameter-free loading UI for LLAMA.CPP models for novice users | 1 | [removed] | 2025-06-12T07:24:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l9g9yq/parameterfree_loading_ui_for_llamacpp_models_for/ | Big-Employer9324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9g9yq | false | null | t3_1l9g9yq | /r/LocalLLaMA/comments/1l9g9yq/parameterfree_loading_ui_for_llamacpp_models_for/ | false | false | 1 | null | |
Real-time voicechat bot on Discord Channels | 1 | [removed] | 2025-06-12T07:20:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l9g7vc/realtime_voicechat_bot_on_discord_channels/ | Dry-Entrepreneur179 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9g7vc | false | null | t3_1l9g7vc | /r/LocalLLaMA/comments/1l9g7vc/realtime_voicechat_bot_on_discord_channels/ | false | false | self | 1 | null |
Rope and temp scaling along the current used context size? | 1 | [removed] | 2025-06-12T06:56:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l9fuhs/rope_and_temp_scaling_along_the_current_used/ | SiEgE-F1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9fuhs | false | null | t3_1l9fuhs | /r/LocalLLaMA/comments/1l9fuhs/rope_and_temp_scaling_along_the_current_used/ | false | false | self | 1 | null |
RAG for code: best current solutions? | 16 | Hi. Given a code repository, I want to generate embeddings I can use for RAG. What are the best solutions for this nowadays? I'd consider both open-source options I can run locally (if the accuracy is good) and APIs if the costs are reasonable.
Any help would be appreciated, I am very new to all of this, not sure wher... | 2025-06-12T06:37:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l9fki4/rag_for_code_best_current_solutions/ | vlatkosh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9fki4 | false | null | t3_1l9fki4 | /r/LocalLLaMA/comments/1l9fki4/rag_for_code_best_current_solutions/ | false | false | self | 16 | null |
OpenAI delays their open source model claiming to add "something amazing" to it | 390 | 2025-06-12T06:26:23 | https://techcrunch.com/2025/06/10/openais-open-model-is-delayed | umarmnaq | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1l9fec7 | false | null | t3_1l9fec7 | /r/LocalLLaMA/comments/1l9fec7/openai_delays_their_open_source_model_claiming_to/ | false | false | 390 | {'enabled': False, 'images': [{'id': 'jMPcLFMxZ5DT_xqC0adhGOivsZE10inqbTpX_tOyzrU', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/R_3_vozrpyNLk-RuPK719qfww-pDxCPb4GbyGYYEwIQ.jpg?width=108&crop=smart&auto=webp&s=77583f6f2972372f4a626c2024cb822dd5c846d9', 'width': 108}, {'height': 165, 'url': 'h... | ||
Can I run DeepSeek R1 and DeepSeek V3 simultaneously on the same server? | 1 | [removed] | 2025-06-12T05:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l9enik/can_i_run_deepseek_r1_and_deepseek_v3/ | cdani2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9enik | false | null | t3_1l9enik | /r/LocalLLaMA/comments/1l9enik/can_i_run_deepseek_r1_and_deepseek_v3/ | false | false | self | 1 | null |
Coming Soon: VLLM-Swap (Host multiple models through one OpenAI endpoint!) | 1 | [removed] | 2025-06-12T05:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l9eino/coming_soon_vllmswap_host_multiple_models_through/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9eino | false | null | t3_1l9eino | /r/LocalLLaMA/comments/1l9eino/coming_soon_vllmswap_host_multiple_models_through/ | false | false | 1 | null | |
Coming Soon: VLLM-Swap (Host multiple models through one OpenAI endpoint!) | 1 | [removed] | 2025-06-12T05:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l9eg80/coming_soon_vllmswap_host_multiple_models_through/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9eg80 | false | null | t3_1l9eg80 | /r/LocalLLaMA/comments/1l9eg80/coming_soon_vllmswap_host_multiple_models_through/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z4i5RZyvksBHZxF6kr2sr1v8Yu7u_Vv0xeomPXDKt7E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6VeKOkIqgMUNu00LNlLfHTuOpjBzzc73kUXajt-SfE.jpg?width=108&crop=smart&auto=webp&s=9d0701cc6f2cf8db43d78763909b346fb7618e25', 'width': 108}, {'height': 108, 'url': 'h... | |
Memory and compute estimation for Fine Tuning LLM | 11 | Hey guys,
i want to you the crowd intelligence of this forum, since i have not trained that many llms and this is my first larger project. i looked for resources but there is a lot of contrary information out there:
I have around 1 million samples of 2800 tokens. I am right now trying to finetune a qwen3 8bln model u... | 2025-06-12T04:40:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l9dns1/memory_and_compute_estimation_for_fine_tuning_llm/ | TraderBoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9dns1 | false | null | t3_1l9dns1 | /r/LocalLLaMA/comments/1l9dns1/memory_and_compute_estimation_for_fine_tuning_llm/ | false | false | self | 11 | null |
💡 I Built an AI-Powered YouTube Video Generator — Fully Automated, Using LLaMA, Stable Diffusion, Whisper & FFmpeg 🚀 | 1 | [removed] | 2025-06-12T04:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l9di6m/i_built_an_aipowered_youtube_video_generator/ | tuvshin-enkhbaatar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9di6m | false | null | t3_1l9di6m | /r/LocalLLaMA/comments/1l9di6m/i_built_an_aipowered_youtube_video_generator/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE.png?width=108&crop=smart&auto=webp&s=e484707ee8f247dcf251375a2cb017afdda71ad4', 'width': 108}, {'height': 108, 'url': 'h... |
Testing Mac Studio 512 GB, 4 TB SSD, M3 Ultra w 32 cores. | 46 | Hi all,
I am running some tests and to be fair, I don't regret it.
Given that I want to learn and sell private AI solutions, and I want to run K8s clusters of agents locally for learning purposes, I think it's a good investment medium/long term.
24 tokens/second for Qwen3 235b, in thinking mode, is totally mana... | 2025-06-12T04:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l9demc/testing_mac_studio_512_gb_4_tb_ssd_m3_ultra_w_32/ | Deviad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9demc | false | null | t3_1l9demc | /r/LocalLLaMA/comments/1l9demc/testing_mac_studio_512_gb_4_tb_ssd_m3_ultra_w_32/ | false | false | self | 46 | null |
Running an LLM on a PS Vita | 197 | After spending some time with my vita I wanted to see if \*\*any\*\* LLM can be ran on it, and it can! I modified llama2.c to have it run on the Vita, with the added capability of downloading the models on device to avoid having to manually transfer model files (which can be deleted too). This was a great way to learn ... | 2025-06-12T03:57:31 | https://v.redd.it/we6m8zvv4f6f1 | ajunior7 | /r/LocalLLaMA/comments/1l9cwi5/running_an_llm_on_a_ps_vita/ | 1970-01-01T00:00:00 | 0 | {} | 1l9cwi5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/we6m8zvv4f6f1/DASHPlaylist.mpd?a=1752422286%2CZjQxOTY1MThlMjQ2YTZmOGU4N2E4MTE0MmJiNWM3MjgxZGZkMDQwOTIyNjJmM2I0YmVhMjZjODE2ODhjZGU1YQ%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/we6m8zvv4f6f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l9cwi5 | /r/LocalLLaMA/comments/1l9cwi5/running_an_llm_on_a_ps_vita/ | false | false | 197 | {'enabled': False, 'images': [{'id': 'MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK.png?width=108&crop=smart&format=pjpg&auto=webp&s=8702d3708fe40d9cc96246739252ddae77937... | |
What are the best solutions to benchmark models locally? | 7 | Sorry if I'm missing something, but is there a good tool for benchmarking models locally? Not in terms of Tok/s, but by running them against open source benchmark datasets. I've been looking, and info on the topic is fragmented at best. Ideally something that can connect to localhost for local models.
Some benchmarks ... | 2025-06-12T03:55:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l9cve7/what_are_the_best_solutions_to_benchmark_models/ | PraxisOG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9cve7 | false | null | t3_1l9cve7 | /r/LocalLLaMA/comments/1l9cve7/what_are_the_best_solutions_to_benchmark_models/ | false | false | self | 7 | null |
Mistral.rs v0.6.0 now has full built-in MCP Client support! | 104 | Hey all! Just shipped what I think is a game-changer for local LLM workflows: MCP (Model Context Protocol) client support in [mistral.rs](https://github.com/EricLBuehler/mistral.rs/) ([https://github.com/EricLBuehler/mistral.rs](https://github.com/EricLBuehler/mistral.rs))! It is built-in and closely integrated, which ... | 2025-06-12T03:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l9cd44/mistralrs_v060_now_has_full_builtin_mcp_client/ | EricBuehler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9cd44 | false | null | t3_1l9cd44 | /r/LocalLLaMA/comments/1l9cd44/mistralrs_v060_now_has_full_builtin_mcp_client/ | false | false | 104 | {'enabled': False, 'images': [{'id': '2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk.png?width=108&crop=smart&auto=webp&s=c219aada1d3fcf1d71210730945ea465eb6844c5', 'width': 108}, {'height': 108, 'url': 'h... | |
Ming-Omni: A Unified Multimodal Model for Perception and Generation | 2 | [removed] | 2025-06-12T03:22:52 | https://github.com/inclusionAI/Ming/tree/main | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l9c9wa | false | null | t3_1l9c9wa | /r/LocalLLaMA/comments/1l9c9wa/mingomni_a_unified_multimodal_model_for/ | false | false | default | 2 | null |
[2506.06105] Text-to-LoRA: Instant Transformer Adaption | 51 | 2025-06-12T02:47:41 | https://arxiv.org/abs/2506.06105 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1l9blur | false | null | t3_1l9blur | /r/LocalLLaMA/comments/1l9blur/250606105_texttolora_instant_transformer_adaption/ | false | false | default | 51 | null | |
Local organic rig | 55 | local organic ai rig | 2025-06-12T02:17:01 | Both-Indication5062 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l9b04q | false | null | t3_1l9b04q | /r/LocalLLaMA/comments/1l9b04q/local_organic_rig/ | false | false | 55 | {'enabled': True, 'images': [{'id': 'b-sEz_TQLI79mIy16Sphy3FszNe1CYzoUD5oQ49pN-Q', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/78c6uej5oe6f1.jpeg?width=108&crop=smart&auto=webp&s=716f184cfbaf30a2261b435bb50f4080c842b2dd', 'width': 108}, {'height': 257, 'url': 'https://preview.redd.it/78c6uej5oe6f1.j... | ||
Enable AI Agents to join and interact in your meetings | 36 | Hey guys,
we've been working on a project called joinly for the last few weeks. After many late nights and lots of energy drinks, we just open-sourced it. The idea is that you can make any browser-based video conference accessible to your AI agents and interact with them in real-time. Think of it at as a connector la... | 2025-06-12T02:14:43 | https://v.redd.it/pdsgwnsune6f1 | Square-Test-515 | /r/LocalLLaMA/comments/1l9ayep/enable_ai_agents_to_join_and_interact_in_your/ | 1970-01-01T00:00:00 | 0 | {} | 1l9ayep | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pdsgwnsune6f1/DASHPlaylist.mpd?a=1752416092%2CODgzYzcyMmI2Y2FjOTE2YTFlMjE4YzhhMDcwMTQ3MmViN2M2OWQzMmY4NDQ3MzZiZjkwOTRmNzg2M2U5N2RmMQ%3D%3D&v=1&f=sd', 'duration': 72, 'fallback_url': 'https://v.redd.it/pdsgwnsune6f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l9ayep | /r/LocalLLaMA/comments/1l9ayep/enable_ai_agents_to_join_and_interact_in_your/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP.png?width=108&crop=smart&format=pjpg&auto=webp&s=d768ca4311fddb7ac424218fcbadf60888d3e... | |
Where can I find cloud platforms with NVIDIA B200 or better GPUs than H200? | 1 | [removed] | 2025-06-12T01:37:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l9a71i/where_can_i_find_cloud_platforms_with_nvidia_b200/ | Outrageous_Fix_8522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9a71i | false | null | t3_1l9a71i | /r/LocalLLaMA/comments/1l9a71i/where_can_i_find_cloud_platforms_with_nvidia_b200/ | false | false | self | 1 | null |
Mistral-Nemotron? | 59 | Looks like Nvidia is hosting a new model but I can't find any information about it on Mistral's website?
https://docs.api.nvidia.com/nim/reference/mistralai-mistral-nemotron
https://build.nvidia.com/mistralai/mistral-nemotron/modelcard | 2025-06-12T01:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l99pih/mistralnemotron/ | mj3815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l99pih | false | null | t3_1l99pih | /r/LocalLLaMA/comments/1l99pih/mistralnemotron/ | false | false | self | 59 | {'enabled': False, 'images': [{'id': '7I5kzl90Xp5FTA4J8SQiSeq3iT4zO8cTALslZpELywk', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/JW4NXGKKI2VaiiqWrBNazWHEa229xXnGv_NL6p7ZjUs.jpg?width=108&crop=smart&auto=webp&s=f7510e699fecae74ca9659e1a8475aa0252dd5b6', 'width': 108}, {'height': 75, 'url': 'ht... |
Privacy implications of sending data to OpenRouter | 32 | For those of you developing applications with LLMs: do you really send your data to a local LLM hosted through OpenRouter? What are the pros and cons of doing that over sending your data to OpenAI/Azure? I'm confused about the practice of taking a local model and then accessing it through a third-party API, it negates ... | 2025-06-12T00:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l98lly/privacy_implications_of_sending_data_to_openrouter/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l98lly | false | null | t3_1l98lly | /r/LocalLLaMA/comments/1l98lly/privacy_implications_of_sending_data_to_openrouter/ | false | false | self | 32 | null |
Best Practices in RL for Reasoning-Capable LLMs: Insights from Mistral’s Magistral Report | 6 | Magistral combines PPO-Clip, REINFORCE++-style advantage normalization, and DAPO tricks like Dynamic Sampling into a solid RLHF recipe for reasoning LLMs:
[Blog: Best Practices in RL for Reasoning-Capable LLMs: Insights from Mistral’s Magistral Report](https://hijkzzz.notion.site/Best-Practices-in-RL-for-Reasoning-Cap... | 2025-06-12T00:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l98j75/best_practices_in_rl_for_reasoningcapable_llms/ | seventh_day123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l98j75 | false | null | t3_1l98j75 | /r/LocalLLaMA/comments/1l98j75/best_practices_in_rl_for_reasoningcapable_llms/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ.png?width=108&crop=smart&auto=webp&s=3c1c626e6c45dbf41f96426728c1cd04e46f123f', 'width': 108}, {'height': 113, 'url': 'h... |
Open Source agentic tool/framework to automate codebase workflows | 13 | Hi everyone, I'm looking for some open source agentic tool/framework with autonomous agents to automate workflows on my repositories. I tried Aider but it requires way too much human intervention, even just to automate simple tasks, it seems not to be designed for that purpose. I'm also trying OpenHands, it looks good ... | 2025-06-11T23:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l97xrq/open_source_agentic_toolframework_to_automate/ | Soft-Salamander7514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l97xrq | false | null | t3_1l97xrq | /r/LocalLLaMA/comments/1l97xrq/open_source_agentic_toolframework_to_automate/ | false | false | self | 13 | null |