| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Help! Best Way to Replicate Voices In Other Languages With TTS? | 1 | [removed] | 2025-05-29T06:49:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ky43bw/help_best_way_to_replicate_voices_in_other/ | Initial_Designer_802 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky43bw | false | null | t3_1ky43bw | /r/LocalLLaMA/comments/1ky43bw/help_best_way_to_replicate_voices_in_other/ | false | false | self | 1 | null |
automated debugging using Ollama | 9 | Used my down time to build a CLI that auto-fixes errors with local LLMs
The tech stack is pretty simple; it reads terminal errors and provides context-aware fixes using:
* Your local Ollama models (whatever you have downloaded)
* RAG across your entire codebase for context
* Everything stays on your machine
also, j... | 2025-05-29T06:38:08 | https://v.redd.it/8x5wao0l1o3f1 | AntelopeEntire9191 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky3x8f | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8x5wao0l1o3f1/DASHPlaylist.mpd?a=1751092700%2CZjllMWJiNjU4ZmJhYWU1N2Q2ZDgyOTdkNTViNWNkMWQ3MzcwNzY3MzI3ZGM2NDVlZTVjMWU2ZTlhZmUxMjgzMA%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/8x5wao0l1o3f1/DASH_720.mp4?source=fallback', 'ha... | t3_1ky3x8f | /r/LocalLLaMA/comments/1ky3x8f/automated_debugging_using_ollama/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5.png?width=108&crop=smart&format=pjpg&auto=webp&s=54b9cea6d10e92455be26e457191c98c59660... | |
Models for writing uncensored stories | 1 | [removed] | 2025-05-29T06:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ky3pey/models_for_writing_uncensored_stories/ | Efficient_Listen_768 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky3pey | false | null | t3_1ky3pey | /r/LocalLLaMA/comments/1ky3pey/models_for_writing_uncensored_stories/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jT_8KxTqlcmlmkMloNuVHGWQJKS0BXtl7ADDLTME8FE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?width=108&crop=smart&auto=webp&s=928f0a6388b8a3958bfb01eaf7a7396f18b1743c', 'width': 108}], 'source': {'height': 20... |
Models for writing NSFW stories | 1 | [removed] | 2025-05-29T06:21:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ky3o2b/models_for_writing_nsfw_stories/ | Efficient_Listen_768 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky3o2b | false | null | t3_1ky3o2b | /r/LocalLLaMA/comments/1ky3o2b/models_for_writing_nsfw_stories/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'jT_8KxTqlcmlmkMloNuVHGWQJKS0BXtl7ADDLTME8FE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?width=108&crop=smart&auto=webp&s=928f0a6388b8a3958bfb01eaf7a7396f18b1743c', 'width': 108}], 'source': {'height': 20... |
😭 I am already falling in love with the new deepseek-ai/DeepSeek-R1-0528 | 0 | 2025-05-29T05:47:43 | Rare-Programmer-1747 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky35nu | false | null | t3_1ky35nu | /r/LocalLLaMA/comments/1ky35nu/i_am_already_falling_in_love_with_the_new/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'asDuxOZTMI_v6C8Iv3HGeD0BOUnugdXcEyN4YuP_iM0', 'resolutions': [{'height': 165, 'url': 'https://preview.redd.it/bslh7359tn3f1.jpeg?width=108&crop=smart&auto=webp&s=449cce12b1d2bb859358a38815a2643c3898eb0d', 'width': 108}, {'height': 331, 'url': 'https://preview.redd.it/bslh7359tn3f1.j... | |||
Working on New AI nfws model | 1 | [removed] | 2025-05-29T05:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ky30qa/working_on_new_ai_nfws_model/ | Royal_Departure9934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky30qa | false | null | t3_1ky30qa | /r/LocalLLaMA/comments/1ky30qa/working_on_new_ai_nfws_model/ | false | false | nsfw | 1 | null |
I accidentally built a vector database using video compression | 0 | [removed] | 2025-05-29T04:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ky27sv/i_accidentally_built_a_vector_database_using/ | Every_Chicken_1293 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky27sv | false | null | t3_1ky27sv | /r/LocalLLaMA/comments/1ky27sv/i_accidentally_built_a_vector_database_using/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9vpsknPi0TlMlGAXz-tdZM_pbY0EGNCDI1BYYrPFOjA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A75CQf4AwRntlpkckJpD6IS-egftwL-gT-Wrf3ZwT-4.jpg?width=108&crop=smart&auto=webp&s=c6933f50c7244327bcac0f2d820b048f35523ff1', 'width': 108}, {'height': 108, 'url': 'h... |
Deepseek-R1/V3 near (I)Q2/(I)Q3 (230-250GB RAM) vs. Qwen3-235B near Q6/Q8 (same 230-250GB RAM); at what quant / RAM sizes is DS vs Qwen3 is better / worse than the other? | 26 | Deepseek-R1/V3 near (I)Q2/(I)Q3 (230-250GB RAM) vs. Qwen3-235B near Q6/Q8 (same or less 230-250GB RAM requirement); at what quant / RAM sizes is such quantized DS vs Qwen3 is better / worse than the other?
Practical question -- if one has a system or couple RPC systems which provide in the range of 200-230-260 GBy agg... | 2025-05-29T04:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ky1lro/deepseekr1v3_near_iq2iq3_230250gb_ram_vs/ | Calcidiol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky1lro | false | null | t3_1ky1lro | /r/LocalLLaMA/comments/1ky1lro/deepseekr1v3_near_iq2iq3_230250gb_ram_vs/ | false | false | self | 26 | null |
Yess! Open-source strikes back! This is the closest I've seen anything come to competing with @GoogleDeepMind 's Veo 3 native audio and character motion. | 131 | 2025-05-29T04:12:04 | https://v.redd.it/wvb8a5b5cn3f1 | balianone | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky1l2e | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wvb8a5b5cn3f1/DASHPlaylist.mpd?a=1751083937%2CYTM1MDlhMjYxOWI4ZjI0ZWMyNzQwY2FkODY1ZGY5ZWI0MDJkYTI2Y2YyZTRmMTc0MTVmNDJlZmI5NTM3MDZlNQ%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/wvb8a5b5cn3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ky1l2e | /r/LocalLLaMA/comments/1ky1l2e/yess_opensource_strikes_back_this_is_the_closest/ | false | false | 131 | {'enabled': False, 'images': [{'id': 'N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK.png?width=108&crop=smart&format=pjpg&auto=webp&s=1bb1e8732c99d9db043b84d613598736cb0e7... | ||
Open Source Alternative to NotebookLM | 110 | For those of you who aren't familiar with **SurfSense**, it aims to be the open-source alternative to **NotebookLM**, **Perplexity**, or **Glean**.
In short, it's a Highly Customizable AI Research Agent but connected to your personal external sources search engines (Tavily, LinkUp), Slack, Linear, Notion, YouTube, Git... | 2025-05-29T03:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ky14jn/open_source_alternative_to_notebooklm/ | Uiqueblhats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky14jn | false | null | t3_1ky14jn | /r/LocalLLaMA/comments/1ky14jn/open_source_alternative_to_notebooklm/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': 'V_B6aOAfOhvxu5-Ab6EGprURZYRFaJ2SmeG-wLIJosw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f_4nJaS37ny4ybO-I2hGATtJJgMfTYz-uOwFkxnL7hk.jpg?width=108&crop=smart&auto=webp&s=05ca0fc9b51aedac3b63c0d89f28eb5d15f2ae05', 'width': 108}, {'height': 108, 'url': 'h... |
Best open source models to process summaries of research papers + medical documents | 1 | [removed] | 2025-05-29T03:43:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ky1238/best_open_source_models_to_process_summaries_of/ | Defiant_Low5388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky1238 | false | null | t3_1ky1238 | /r/LocalLLaMA/comments/1ky1238/best_open_source_models_to_process_summaries_of/ | false | false | self | 1 | null |
Mundane Robustness Benchmarks | 2 | Does anyone know of any up-to-date LLM benchmarks focused on very mundane reliability? Things like positional extraction, format compliance, and copying/pasting with slight edits? No math required. Basically, I want stupid easy tasks that test basic consistency, attention to detail, and deterministic behavior on text a... | 2025-05-29T03:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ky0zhv/mundane_robustness_benchmarks/ | arnokha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky0zhv | false | null | t3_1ky0zhv | /r/LocalLLaMA/comments/1ky0zhv/mundane_robustness_benchmarks/ | false | false | self | 2 | null |
Deepseek-R1-0528 MLX 4 bit quant up | 26 | [https://huggingface.co/mlx-community/DeepSeek-R1-0528-4bit/tree/main](https://huggingface.co/mlx-community/DeepSeek-R1-0528-4bit/tree/main)
...they're fast. | 2025-05-29T03:25:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ky0qes/deepseekr10528_mlx_4_bit_quant_up/ | nomorebuttsplz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky0qes | false | null | t3_1ky0qes | /r/LocalLLaMA/comments/1ky0qes/deepseekr10528_mlx_4_bit_quant_up/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'WQfI_IxeceqSO0qpt2LH1bjBS5fZy5a7haCE4mjZgbk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZTUuU4N_meao3Jg9pjdZ0RXjFh3zAnJh2fgczzIDBBQ.jpg?width=108&crop=smart&auto=webp&s=64775b106379a241d8dd3dd15b21c739ed81ebc5', 'width': 108}, {'height': 116, 'url': 'h... |
Researchers from the National University of Singapore Introduce ‘Thinkless,’ an Adaptive Framework that Reduces Unnecessary Reasoning by up to 90% Using DeGRPO | 53 | 2025-05-29T03:19:13 | https://github.com/VainF/Thinkless | Sporeboss | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ky0m1h | false | null | t3_1ky0m1h | /r/LocalLLaMA/comments/1ky0m1h/researchers_from_the_national_university_of/ | false | false | 53 | {'enabled': False, 'images': [{'id': 'OXjtsDZOw5E2Stnluv5COiUPr0zaa8qU4UThCJSVzLw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mL1Q9XttYG1PHtNLMEZMLd8H7Fxw3YewxUxLIUBjgOg.jpg?width=108&crop=smart&auto=webp&s=da2f3a5f4e309e6d13b8eb0f12168b3a7417e1d5', 'width': 108}, {'height': 108, 'url': 'h... | ||
Quality GPU cloud providers to serve AI product from? | 4 | I'm getting ready to launch my inferencing-based service and for the life of me I can't find a good GPU compute provider suitable for my needs. What I need is just a couple cards, like two L40S, A6000 or similar 48GB cards, and I need them 24/7 with excellent data security. I've probably looked at 15 providers, they ar... | 2025-05-29T02:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kxzr6l/quality_gpu_cloud_providers_to_serve_ai_product/ | No-Break-7922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxzr6l | false | null | t3_1kxzr6l | /r/LocalLLaMA/comments/1kxzr6l/quality_gpu_cloud_providers_to_serve_ai_product/ | false | false | self | 4 | null |
Is this the future of social media? | 1 | [removed] | 2025-05-29T02:11:34 | Routine-Classic3922 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxzanm | false | null | t3_1kxzanm | /r/LocalLLaMA/comments/1kxzanm/is_this_the_future_of_social_media/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'rHQNtH0Ey1hUZfEwXoYqHw5i3U9XToYYeEtOHUgwLz4', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png?width=108&crop=smart&auto=webp&s=0302d6ade412d50be0030098e1e1880e87e607cd', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png... | ||
Is this the future of social media? | 1 | [removed] | 2025-05-29T02:10:40 | PartyOrganic4937 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxza01 | false | null | t3_1kxza01 | /r/LocalLLaMA/comments/1kxza01/is_this_the_future_of_social_media/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1R2w4sTRkaE0FlQdx68bBML6_O5SWImstLKQAK6x8ew', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png?width=108&crop=smart&auto=webp&s=bce5965511ed60b87b5c2ee17e83f879c2887167', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png... | ||
What's the value of paying $20 a month for OpenAI or Anthropic? | 58 | Hey everyone, I’m new here.
Over the past few weeks, I’ve been experimenting with local LLMs and honestly, I’m impressed by what they can do. Right now, I’m paying $20/month for Raycast AI to access the latest models. But after seeing how well the models run on Open WebUI, I’m starting to wonder if paying $20/month f... | 2025-05-29T02:07:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kxz7yi/whats_the_value_of_paying_20_a_month_for_openai/ | mainaisakyuhoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxz7yi | false | null | t3_1kxz7yi | /r/LocalLLaMA/comments/1kxz7yi/whats_the_value_of_paying_20_a_month_for_openai/ | false | false | self | 58 | null |
Experimenting with an LLM powered game (feedback wanted!) | 1 | [removed] | 2025-05-29T02:06:01 | Routine-Classic3922 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxz6rk | false | null | t3_1kxz6rk | /r/LocalLLaMA/comments/1kxz6rk/experimenting_with_an_llm_powered_game_feedback/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'EpQPY5Wz8gn203s9H1xhglX3CWjbI5ihmf09JTpAXl4', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/guj0egagpm3f1.png?width=108&crop=smart&auto=webp&s=31b401c039b57bb8b9f60af728056934d48ead1e', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/guj0egagpm3f1.png... | ||
Any interesting ideas for old hardware | 1 | I have a few left over gaming pcs from some ancient project. Hardly used but never got around to selling them (I know, what a waste of over 10k). They have been sitting around but want to see if I can use them for AI?
x6 PCs with 1080s - 8GB. 16 GB RAM. x4 Almost same but with 32 GB RAM.
From the top of my head, best... | 2025-05-29T01:54:40 | putoption21 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxyyex | false | null | t3_1kxyyex | /r/LocalLLaMA/comments/1kxyyex/any_interesting_ideas_for_old_hardware/ | false | false | 1 | {'enabled': True, 'images': [{'id': '7RmQZUq8LuSRougVYOz5qiJ7yUortpwAK6ZM5YQ2-Yw', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/mxef13jonm3f1.jpeg?width=108&crop=smart&auto=webp&s=86fd3e02d7b88a55b721183e611f9c48e7dc61ec', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/mxef13jonm3f1.jp... | ||
Looking for some early testers | 1 | [removed] | 2025-05-29T01:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kxyth6/looking_for_some_early_testers/ | CSharpSauce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxyth6 | false | null | t3_1kxyth6 | /r/LocalLLaMA/comments/1kxyth6/looking_for_some_early_testers/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YhEX0T3Yj15nQqB4XAABOWXERHjRi8QNpEAL_G53DoI', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/MXjnWecie4ZNnx8hUcFDPEu1WY7oBGBDqD3z30BjUwM.jpg?width=108&crop=smart&auto=webp&s=341a80ef4308a617ef06a8196cc06b5414e83ad7', 'width': 108}, {'height': 130, 'url': 'h... |
This Eleven labs Competitor sounds better | 58 | [https://github.com/resemble-ai/chatterbox](https://github.com/resemble-ai/chatterbox)
Chatterbox tts | 2025-05-29T01:27:47 | https://v.redd.it/x864437pim3f1 | Beautiful-Essay1945 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxyf0z | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x864437pim3f1/DASHPlaylist.mpd?a=1751074081%2CMWRhMjJkOWNmYjYzMmVhNjZiMmQwMzc5NGZmZGMxMmVmOWEwY2JiYTIzNGMyZmEyODA1ZjkxNzUyZWU4ZDk3MA%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/x864437pim3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kxyf0z | /r/LocalLLaMA/comments/1kxyf0z/this_eleven_labs_competitor_sounds_better/ | false | false | 58 | {'enabled': False, 'images': [{'id': 'cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV.png?width=108&crop=smart&format=pjpg&auto=webp&s=e350cabb194d8932075b0232cf496d3f71d9c... | |
I built an AI assistant to help me learn | 1 | [removed] | 2025-05-29T01:27:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kxyexz/i_built_an_ai_assistant_to_help_me_learn/ | Hirojinho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxyexz | false | null | t3_1kxyexz | /r/LocalLLaMA/comments/1kxyexz/i_built_an_ai_assistant_to_help_me_learn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E47A6SeusMi2E0TGdaF3F8xV3n3fk5JslT9Ws6Njvcs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=108&crop=smart&auto=webp&s=6a7a278e6e1bcbc9de074da335a6ac30371bc147', 'width': 108}, {'height': 108, 'url': 'h... |
Is inference output token/s purely gpu bound? | 2 | I have two computers. They both have LM studio. Both run Qwen 3 32b at q4km with same settings on LM studio. Both have a 3090. Vram is at about 21gb on the 3090s.
Why is it that on computer 1 I get 20t/s output for output while on computer 2 I get 30t/s output for inference?
I provide the same prompt for both models.... | 2025-05-29T01:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kxyce1/is_inference_output_tokens_purely_gpu_bound/ | fgoricha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxyce1 | false | null | t3_1kxyce1 | /r/LocalLLaMA/comments/1kxyce1/is_inference_output_tokens_purely_gpu_bound/ | false | false | self | 2 | null |
Automate Your CSV Analysis with AI Agents – CrewAI + Ollama | 1 | [removed] | 2025-05-29T01:23:37 | https://v.redd.it/m9iiu7z3im3f1 | Solid_Woodpecker3635 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxyc06 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/m9iiu7z3im3f1/DASHPlaylist.mpd?a=1751073829%2CODQ1OGQwOTg4MjhjZDA5ZGVmZWVlMzc4MDdkZmM3MTUwMTJjZThkNmZmMGFjODMxMWIzNGZmYTU1MDgzY2UyMA%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/m9iiu7z3im3f1/DASH_480.mp4?source=fallback', 'ha... | t3_1kxyc06 | /r/LocalLLaMA/comments/1kxyc06/automate_your_csv_analysis_with_ai_agents_crewai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR.png?width=108&crop=smart&format=pjpg&auto=webp&s=7da00eaf2c59eb509c7561125024e2dc79827... | |
Deepseek R1.1 aider polyglot score | 156 | Deepseek R1.1 scored the same as claude-opus-4-nothink 70.7% on aider polyglot.
```
────────────────────────────────── tmp.benchmarks/2025-05-28-18-57-01--deepseek-r1-0528 ──────────────────────────────────
- dirname: 2025-05-28-18-57-01--deepseek-r1-0528
test_cases: 225
model: deepseek/deepseek-reasoner
edit_fo... | 2025-05-29T01:22:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kxybgo/deepseek_r11_aider_polyglot_score/ | Ambitious_Subject108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxybgo | false | null | t3_1kxybgo | /r/LocalLLaMA/comments/1kxybgo/deepseek_r11_aider_polyglot_score/ | false | false | self | 156 | null |
I built an AI Study Assistant for Fellow Learners (and Llama Fans) | 1 | [removed] | 2025-05-29T01:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kxyb0s/i_built_an_ai_study_assistant_for_fellow_learners/ | Hirojinho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxyb0s | false | null | t3_1kxyb0s | /r/LocalLLaMA/comments/1kxyb0s/i_built_an_ai_study_assistant_for_fellow_learners/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E47A6SeusMi2E0TGdaF3F8xV3n3fk5JslT9Ws6Njvcs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=108&crop=smart&auto=webp&s=6a7a278e6e1bcbc9de074da335a6ac30371bc147', 'width': 108}, {'height': 108, 'url': 'h... |
Wrote a tiny shell script to launch Ollama + OpenWebUI + your LocalLLM and auto-open the chat in your browser with one command | 1 | [removed] | 2025-05-29T01:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kxxxe4/wrote_a_tiny_shell_script_to_launch_ollama/ | DilankaMcLovin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxxxe4 | false | null | t3_1kxxxe4 | /r/LocalLLaMA/comments/1kxxxe4/wrote_a_tiny_shell_script_to_launch_ollama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'h... |
DeepSeek R1 05 28 Tested. It finally happened. The ONLY model to score 100% on everything I threw at it. | 872 | Ladies and gentlemen, It finally happened.
I knew this day was coming. I knew that one day, a model would come along that would be able to score a 100% on every single task I throw at it.
[https://www.youtube.com/watch?v=4CXkmFbgV28](https://www.youtube.com/watch?v=4CXkmFbgV28)
Past few weeks have been busy - Open... | 2025-05-29T00:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kxxmdr/deepseek_r1_05_28_tested_it_finally_happened_the/ | Ok-Contribution9043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxxmdr | false | null | t3_1kxxmdr | /r/LocalLLaMA/comments/1kxxmdr/deepseek_r1_05_28_tested_it_finally_happened_the/ | false | false | self | 872 | {'enabled': False, 'images': [{'id': 'p97Iv-Tip6T-vLE95eWHuPYya5bvCcF-ugfW1yWL7Rg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6ymD4O7PeLpJMwr37WuqRVcpGtptivnBJwDsnIn8mYw.jpg?width=108&crop=smart&auto=webp&s=458b1dc90b591ed472186b2e7708defd014ce006', 'width': 108}, {'height': 162, 'url': 'h... |
Reasoning reducing some outcomes. | 1 | I created a prompt with qwen3 32b q4_k_m to help ask act as a ghostwriter.
I intentionally made it hard by having a reference in the text to the "image below" that the model couldn't see, and an "@" mention.
It really just ripped all the nuance, like referencing the image below and the "@" sign to mention someone whe... | 2025-05-29T00:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kxxlsd/reasoning_reducing_some_outcomes/ | ROS_SDN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxxlsd | false | null | t3_1kxxlsd | /r/LocalLLaMA/comments/1kxxlsd/reasoning_reducing_some_outcomes/ | false | false | self | 1 | null |
I asked Mistral AI what its prompt is. | 19 | I had been seeing different users asking different LLMs what their original system prompts were. Some refusing, some had to be tricked, so I tried with Mistral. At first the chat would stop while generating, so I made a new one and quoted part of what it revealed to me originally.
Here is the entire prompt:
```md
##... | 2025-05-29T00:44:36 | https://www.reddit.com/gallery/1kxxj65 | theblackcat99 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kxxj65 | false | null | t3_1kxxj65 | /r/LocalLLaMA/comments/1kxxj65/i_asked_mistral_ai_what_its_prompt_is/ | false | false | 19 | null | |
GPU consideration: AMD Pro W7800 | 7 | I am currently in talks with a distributor to aquire [this lil' box](https://www.aicipc.com/en/productdetail/51394). Since about a year or so, I have been going back and forth in trying to aquire the hardware for my own local AI server - and that as a private customer, no business. Just a dude that wants to put LocalAI... | 2025-05-29T00:39:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kxxfe5/gpu_consideration_amd_pro_w7800/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxxfe5 | false | null | t3_1kxxfe5 | /r/LocalLLaMA/comments/1kxxfe5/gpu_consideration_amd_pro_w7800/ | false | false | self | 7 | null |
Nvidia CEO says that Huawei's chip is comparable to Nvidia's H200. | 257 | On a interview with Bloomberg today, Jensen came out and said that Huawei's offering is as good as the Nvidia H200. Which kind of surprised me. Both that he just came out and said it and that it's so good. Since I thought it was only as good as the H100. But if anyone knows, Jensen would know. | 2025-05-28T23:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kxw6b9/nvidia_ceo_says_that_huaweis_chip_is_comparable/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxw6b9 | false | null | t3_1kxw6b9 | /r/LocalLLaMA/comments/1kxw6b9/nvidia_ceo_says_that_huaweis_chip_is_comparable/ | false | false | self | 257 | {'enabled': False, 'images': [{'id': '07mur0rnZDrzGJENQcx_VBtl7YvbhTtjxXkqqi2v02w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/PblgcjVc2sexDmJ8z49sXvMVb3i5R1HdgB3kL3wGHzk.jpg?width=108&crop=smart&auto=webp&s=10a38fe6caa76ce655ab4ead962c8eef86bec75e', 'width': 108}, {'height': 162, 'url': 'h... |
What software do you use for self hosting? | 3 | Nvidia nim/triton
Ollama
vLLM
HuggingFace TGI
other
--- vote on comments via upvotes ---
I use Ollama right now. I sort of fell into this. So I used Ollama because it was the easiest and seemed most popular and had helm charts. And it supported CPU only. And had open-webui support.
However I see Nvidia nim/trit... | 2025-05-28T23:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kxw62t/what_software_do_you_use_for_self_hosting/ | night0x63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxw62t | false | null | t3_1kxw62t | /r/LocalLLaMA/comments/1kxw62t/what_software_do_you_use_for_self_hosting/ | false | false | self | 3 | null |
Spoiler: If your cloud GPUs share ANY network or storage with others... they aren't really dedicated. You're renting a slice of potential chaos. | 1 | [removed] | 2025-05-28T23:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kxw51w/spoiler_if_your_cloud_gpus_share_any_network_or/ | Equivalent-Lab-1633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxw51w | false | null | t3_1kxw51w | /r/LocalLLaMA/comments/1kxw51w/spoiler_if_your_cloud_gpus_share_any_network_or/ | true | false | spoiler | 1 | null |
Curious what everyone thinks of Meta's long term AI strategy. Do you think Meta will find its market when compared to Gemini/OpenAI? Open source obviously has its benefits but Mistral/Deepseek are worthy competitors. Would love to hear thoughts of where Llama is and potential to overtake? | 10 | I have a strong job opportunity within Llama - im currently happy in my gig but wanted to get your take! | 2025-05-28T23:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kxw4cf/curious_what_everyone_thinks_of_metas_long_term/ | Excellent-Plastic638 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxw4cf | false | null | t3_1kxw4cf | /r/LocalLLaMA/comments/1kxw4cf/curious_what_everyone_thinks_of_metas_long_term/ | false | false | self | 10 | null |
Spoiler🚨: If your cloud GPUs share ANY network or storage with others... they aren't really dedicated. You're renting a slice of potential chaos. | 1 | 2025-05-28T23:37:29 | Equivalent-Lab-1633 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxw3ka | false | null | t3_1kxw3ka | /r/LocalLLaMA/comments/1kxw3ka/spoiler_if_your_cloud_gpus_share_any_network_or/ | true | false | spoiler | 1 | {'enabled': True, 'images': [{'id': 'R_lbY9Wqmf1FS_F7qKsPmSitDjsl1437X_uTEju4V90', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=108&crop=smart&auto=webp&s=75f231bfb93d58bf7e9be6408938a491ee6b826a', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.j... | ||
Ollama: The local AI model tool that doesn’t require a PhD | 1 | [removed] | 2025-05-28T23:30:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kxvxxp/ollama_the_local_ai_model_tool_that_doesnt/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxvxxp | false | null | t3_1kxvxxp | /r/LocalLLaMA/comments/1kxvxxp/ollama_the_local_ai_model_tool_that_doesnt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MRcMXwW6UJRWSE0QEHlcttnaDu4rGdkAwZBqlbx1GFk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RViN4YnWfjgMO7IFrXevBwnBigPdBDdegLse0IsAvo0.jpg?width=108&crop=smart&auto=webp&s=f9d29552e5f2efb1b1a3fc48c2e9051738ee3d02', 'width': 108}, {'height': 216, 'url': '... |
What use case of mobile LLMs? | 0 | Niche now and through several years as mass (97%) of the hardware will be ready for it? | 2025-05-28T23:22:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kxvrgf/what_use_case_of_mobile_llms/ | Perdittor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxvrgf | false | null | t3_1kxvrgf | /r/LocalLLaMA/comments/1kxvrgf/what_use_case_of_mobile_llms/ | false | false | self | 0 | null |
How can I ensure what hardware I need for Model Deployement? | 0 | I develop AI solutions for a company , and I trained Qwen 32B model according to their needs. It works on my local computer ,and we want to run it locally to make it reachable on company's ethernet. The maximum user number will be 10 for this model. How can we ensure what hardware is efficient for this kind of problem?... | 2025-05-28T23:20:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kxvq8v/how_can_i_ensure_what_hardware_i_need_for_model/ | wololo1912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxvq8v | false | null | t3_1kxvq8v | /r/LocalLLaMA/comments/1kxvq8v/how_can_i_ensure_what_hardware_i_need_for_model/ | false | false | self | 0 | null |
New Deepseek R1's long context results | 148 | 2025-05-28T23:01:16 | fictionlive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxvaq2 | false | null | t3_1kxvaq2 | /r/LocalLLaMA/comments/1kxvaq2/new_deepseek_r1s_long_context_results/ | false | false | 148 | {'enabled': True, 'images': [{'id': 'K7yotIVzNm9Zxy-aWJfUf4Yxj54XWZiVl5oVFHD1dRU', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/n3mjiheosl3f1.png?width=108&crop=smart&auto=webp&s=70bbb711ac074f627ebb4f1afeac9e78bee25262', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/n3mjiheosl3f1.pn... | |||
Looking for an uncensored vision model | 2 | For a project I am working on for a make up brand, I am creating a plugin that analyzes facial images and recommends users with a matching make up color. The use case works flawlessly within the ChatGPT app, but via the API, all models I tried refuse to analyze pictures of individuals.
"I'm sorry, but I can't help id... | 2025-05-28T22:58:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kxv87u/looking_for_an_uncensored_vision_model/ | alexandernacho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxv87u | false | null | t3_1kxv87u | /r/LocalLLaMA/comments/1kxv87u/looking_for_an_uncensored_vision_model/ | false | false | self | 2 | null |
Is there a good LLM for therapy? | 1 | [removed] | 2025-05-28T22:31:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kxumqo/is_there_a_good_llm_for_therapy/ | CanTheySeeMe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxumqo | false | null | t3_1kxumqo | /r/LocalLLaMA/comments/1kxumqo/is_there_a_good_llm_for_therapy/ | false | false | self | 1 | null |
ETL for unstructured to structured data and store unstructured data in db/ data warehouse | 1 | [removed] | 2025-05-28T22:26:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kxui9o/etl_for_unstructured_to_structured_data_and_store/ | SpecialAppearance229 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxui9o | false | null | t3_1kxui9o | /r/LocalLLaMA/comments/1kxui9o/etl_for_unstructured_to_structured_data_and_store/ | false | false | self | 1 | null |
Ollama now supports streaming responses with tool calling | 53 | 2025-05-28T22:18:32 | https://ollama.com/blog/streaming-tool | mj3815 | ollama.com | 1970-01-01T00:00:00 | 0 | {} | 1kxubqe | false | null | t3_1kxubqe | /r/LocalLLaMA/comments/1kxubqe/ollama_now_supports_streaming_responses_with_tool/ | false | false | 53 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'h... | ||
Is a VectorDB the best solution for this? | 5 | I'm working on a locally running roleplaying chatbot and want to add external information, for example for the world lore. Perhaps with tools to process the information so that it can be easily written to such a DB. What is the best way to store this information so the LLM can best use it in its context when needed? ... | 2025-05-28T22:16:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kxuac8/is_a_vectordb_the_best_solution_for_this/ | Blizado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxuac8 | false | null | t3_1kxuac8 | /r/LocalLLaMA/comments/1kxuac8/is_a_vectordb_the_best_solution_for_this/ | false | false | self | 5 | null |
Commercial AI roleplay app | 1 | [removed] | 2025-05-28T22:05:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kxu0ir/commercial_ai_roleplay_app/ | Mountain_Shopping100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxu0ir | false | null | t3_1kxu0ir | /r/LocalLLaMA/comments/1kxu0ir/commercial_ai_roleplay_app/ | false | false | self | 1 | null |
Commercial AI roleplay bot | 1 | [removed] | 2025-05-28T22:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kxtz9d/commercial_ai_roleplay_bot/ | Mountain_Shopping100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxtz9d | false | null | t3_1kxtz9d | /r/LocalLLaMA/comments/1kxtz9d/commercial_ai_roleplay_bot/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PezcliVTOJmrw2T-iy6hQL8d2hqy4q6G8U__SS7ZjrY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=108&crop=smart&auto=webp&s=23183dce45b8759af44dc45578bcd60d1883477a', 'width': 108}, {'height': 113, 'url': 'h... |
friendshipended.gif | 0 | 2025-05-28T21:50:38 | https://v.redd.it/t5zxb802gl3f1 | Accomplished_Mode170 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxto2g | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t5zxb802gl3f1/DASHPlaylist.mpd?a=1751061052%2CMGYwMjZhMjQ2YTE0NzQ0NzNkY2ZlNGNhZTZmNTk0NmVmMTdiNDhiZTlhOWQ0ZWUyZWFmOWYxYjE0MDJmMjYzMA%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/t5zxb802gl3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kxto2g | /r/LocalLLaMA/comments/1kxto2g/friendshipendedgif/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj.png?width=108&crop=smart&format=pjpg&auto=webp&s=ed2e8ad7c4f2215c44154df5dabe889ab873d... | ||
Local ETL Pipeline for Invoice Data Extraction (PDF to Structured Format) | 1 | [removed] | 2025-05-28T21:50:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kxtnif/local_etl_pipeline_for_invoice_data_extraction/ | CalmMoment9215 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxtnif | false | null | t3_1kxtnif | /r/LocalLLaMA/comments/1kxtnif/local_etl_pipeline_for_invoice_data_extraction/ | false | false | self | 1 | null |
Local ETL Pipeline for Invoice Data Extraction (PDF to Structured Format) | 1 | [removed] | 2025-05-28T21:48:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kxtls0/local_etl_pipeline_for_invoice_data_extraction/ | CalmMoment9215 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxtls0 | false | null | t3_1kxtls0 | /r/LocalLLaMA/comments/1kxtls0/local_etl_pipeline_for_invoice_data_extraction/ | false | false | self | 1 | null |
Self-hosted GitHub Copilot via Ollama – Dual RTX 4090 vs. Chained M4 Mac Minis | 0 | Hi,
I’m thinking about self-hosting GitHub Copilot using Ollama and I’m weighing two hardware setups:
* **Option A:** Dual NVIDIA RTX 4090
* **Option B:** A cluster of 7–8 Apple M4 Mac Minis linked together
My main goal is to run large open-source models like Qwen 3 and Llama 4 locally with low latency and good thro... | 2025-05-28T21:18:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kxsvas/selfhosted_github_copilot_via_ollama_dual_rtx/ | stockninja666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxsvas | false | null | t3_1kxsvas | /r/LocalLLaMA/comments/1kxsvas/selfhosted_github_copilot_via_ollama_dual_rtx/ | false | false | self | 0 | null |
Setting up an AI to help prepare for a high difficulty oral questions test | 1 | [removed] | 2025-05-28T21:06:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kxskwe/setting_up_an_ai_to_help_prepare_for_a_high/ | FinancialMechanic853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxskwe | false | null | t3_1kxskwe | /r/LocalLLaMA/comments/1kxskwe/setting_up_an_ai_to_help_prepare_for_a_high/ | false | false | self | 1 | null |
Implementing Cost-Effective Voice AI Solutions in Production | 1 | 2025-05-28T21:06:20 | https://comparevoiceai.com/blog/technical-guide-implementing-voice-ai-agent | Excellent-Effect237 | comparevoiceai.com | 1970-01-01T00:00:00 | 0 | {} | 1kxskrj | false | null | t3_1kxskrj | /r/LocalLLaMA/comments/1kxskrj/implementing_costeffective_voice_ai_solutions_in/ | false | false | default | 1 | null | |
LLMProxy (.NET) for seamless routing, failover, and cool features like Mixture of Agents! | 12 | Hey everyone! I recently developed a proxy service for working with LLMs, and I'm excited to share it with you. It's called LLMProxy, and its main goal is to provide a smoother, uninterrupted LLM experience.
Think of it as a smart intermediary between your favorite LLM client (like OpenWebUI, LobeChat, Roo Code, Silly... | 2025-05-28T21:04:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kxsjb7/llmproxy_net_for_seamless_routing_failover_and/ | MetalZealousideal927 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxsjb7 | false | null | t3_1kxsjb7 | /r/LocalLLaMA/comments/1kxsjb7/llmproxy_net_for_seamless_routing_failover_and/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'eWghdjK9fEb_WTr3wKx5QZ9cyM6trK6rNKq6koJnm7U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tBa5Z-pOiHcDFxJqdGkUpHk5ISPew3SdrSCOr4Vznhw.jpg?width=108&crop=smart&auto=webp&s=ce768538e3a26633b680802e3339bef33b002160', 'width': 108}, {'height': 108, 'url': 'h... |
I did a screen-shot | 0 | 2025-05-28T20:58:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kxsdqd/i_did_a_screenshot/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxsdqd | false | null | t3_1kxsdqd | /r/LocalLLaMA/comments/1kxsdqd/i_did_a_screenshot/ | false | false | 0 | null | ||
posting to get klarmas :/ | 1 | [removed] | 2025-05-28T20:53:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kxs8u0/posting_to_get_klarmas/ | Happy_Percentage_384 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxs8u0 | false | null | t3_1kxs8u0 | /r/LocalLLaMA/comments/1kxs8u0/posting_to_get_klarmas/ | false | false | self | 1 | null |
Optimal way to shorten books? | 1 | [removed] | 2025-05-28T20:51:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kxs785/optimal_way_to_shorten_books/ | pantel2212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxs785 | false | null | t3_1kxs785 | /r/LocalLLaMA/comments/1kxs785/optimal_way_to_shorten_books/ | false | false | self | 1 | null |
DeepSeek: R1 0528 is lethal | 577 | I just used DeepSeek: R1 0528 to address several ongoing coding challenges in RooCode.
This model performed exceptionally well, resolving all issues seamlessly. I hit up DeepSeek via OpenRouter, and the results were DAMN impressive. | 2025-05-28T20:48:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kxs47i/deepseek_r1_0528_is_lethal/ | klippers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxs47i | false | null | t3_1kxs47i | /r/LocalLLaMA/comments/1kxs47i/deepseek_r1_0528_is_lethal/ | false | false | self | 577 | null |
Built a Python library for text classification because I got tired of reinventing the wheel | 6 | I kept running into the same problem at work: needing to classify text into custom categories but having to build everything from scratch each time. Sentiment analysis libraries exist, but what if you need to classify customer complaints into "billing", "technical", or "feature request"? Or moderate content into your o... | 2025-05-28T20:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kxs06b/built_a_python_library_for_text_classification/ | Feeling-Remove6386 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxs06b | false | null | t3_1kxs06b | /r/LocalLLaMA/comments/1kxs06b/built_a_python_library_for_text_classification/ | false | false | self | 6 | null |
New Upgraded Deepseek R1 is now almost on par with OpenAI's O3 High model on LiveCodeBench! Huge win for opensource! | 530 | 2025-05-28T20:41:49 | Gloomy-Signature297 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxry4x | false | null | t3_1kxry4x | /r/LocalLLaMA/comments/1kxry4x/new_upgraded_deepseek_r1_is_now_almost_on_par/ | false | false | 530 | {'enabled': True, 'images': [{'id': 'zjkexeavDkIUBvVSxNFnCv_U1xmlc7TaYE5uBayN5Hk', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/51sg1oyu3l3f1.jpeg?width=108&crop=smart&auto=webp&s=67091deb769cd3f21915f5b8a87423c4241a5496', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/51sg1oyu3l3f1.jp... | |||
New Upgraded Deepseek R1 is now almost on par with OpenAI's O3 Mini high model on LiveCodeBench! Huge win for opensource! | 1 | 2025-05-28T20:40:11 | Gloomy-Signature297 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxrwkp | false | null | t3_1kxrwkp | /r/LocalLLaMA/comments/1kxrwkp/new_upgraded_deepseek_r1_is_now_almost_on_par/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'V50ubPJBDzGqMSfU32rAPcYNLRx_ilctRKB_TWeqpMU', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/tk205xgk3l3f1.jpeg?width=108&crop=smart&auto=webp&s=39d3cfed175a6ece4a84994eb1264d2d57dffff8', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/tk205xgk3l3f1.jp... | |||
How do you build and keep controls and guardrails for LLMs / AI agents? What trade-offs do you face? | 1 | [removed] | 2025-05-28T20:36:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kxrt2m/how_do_you_build_and_keep_controls_and_guardrails/ | rafaelsandroni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxrt2m | false | null | t3_1kxrt2m | /r/LocalLLaMA/comments/1kxrt2m/how_do_you_build_and_keep_controls_and_guardrails/ | false | false | self | 1 | null |
Uncensoring LLM | 1 | [removed] | 2025-05-28T20:17:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kxrbml/uncensoring_llm/ | Temporary-Baby9057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxrbml | false | null | t3_1kxrbml | /r/LocalLLaMA/comments/1kxrbml/uncensoring_llm/ | false | false | self | 1 | null |
kluster.ai is now hosting DeepSeek-R1-0528 | 21 | i think they may have been the first, not sure | 2025-05-28T20:01:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kxqxmu/klusterai_is_now_hosting_deepseekr10528/ | swarmster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxqxmu | false | null | t3_1kxqxmu | /r/LocalLLaMA/comments/1kxqxmu/klusterai_is_now_hosting_deepseekr10528/ | false | false | self | 21 | null |
Deepseek R1 671B entirely locally? | 1 | [removed] | 2025-05-28T18:52:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kxp6qk/deepseek_r1_671b_entirely_locally/ | BasicCoconut9187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxp6qk | false | null | t3_1kxp6qk | /r/LocalLLaMA/comments/1kxp6qk/deepseek_r1_671b_entirely_locally/ | false | false | self | 1 | null |
Bored by RLVF? Here comes RLIF | 17 | Reasoning training rests on external rewards, or so I thought. But now we have this remarkable paper showing that the reward is already in the LLM! How can that even be? I always thought there was no way the model could know what it knows and what it does not know.
https://preview.redd.it/h51ydaa9ik3f1.png?width=962&for... | 2025-05-28T18:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kxp4hj/bored_by_rlvf_here_comes_rlif/ | Majestic-Explorer315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxp4hj | false | null | t3_1kxp4hj | /r/LocalLLaMA/comments/1kxp4hj/bored_by_rlvf_here_comes_rlif/ | false | false | 17 | null | |
The new DeepSeek R1 (0528) is out | 1 | [deleted] | 2025-05-28T18:44:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kxoz4s | false | null | t3_1kxoz4s | /r/LocalLLaMA/comments/1kxoz4s/the_new_deepseek_r1_0528_is_out/ | false | false | default | 1 | null | ||
Building a plug-and-play vector store for any data stream (text, audio, video, etc.)—searchable by your LLM via MCP | 11 | Hey all,
I’ve been hacking something together that I am personally missing when working with LLMs. A tool that ingests any data stream (text, audio, video, binaries) and pipes it straight into a vector store, indexed and ready to be retrieved via MCP.
My goal is as follows: In under five minutes, you can go from a me... | 2025-05-28T18:23:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kxog9o/building_a_plugandplay_vector_store_for_any_data/ | Luckl507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxog9o | false | null | t3_1kxog9o | /r/LocalLLaMA/comments/1kxog9o/building_a_plugandplay_vector_store_for_any_data/ | false | false | self | 11 | null |
New Expressive Open source TTS model | 134 | https://github.com/resemble-ai/chatterbox
Exaggeration slider lets you control intensity.
| 2025-05-28T18:21:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kxoehp/new_expressive_open_source_tts_model/ | manmaynakhashi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxoehp | false | null | t3_1kxoehp | /r/LocalLLaMA/comments/1kxoehp/new_expressive_open_source_tts_model/ | false | false | self | 134 | {'enabled': False, 'images': [{'id': 'LO7Q9Gr-40ixeoizFmL_qdV9btCM273X4Xf84slMJnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=108&crop=smart&auto=webp&s=32a59d1b8e381b6519a3935f4b2cb4fad6632e3c', 'width': 108}, {'height': 108, 'url': 'h... |
Chatterbox TTS 0.5B - Claims to beat eleven labs | 394 | https://github.com/resemble-ai/chatterbox | 2025-05-28T18:19:08 | https://v.redd.it/i6nfhj7rck3f1 | Du_Hello | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxoco5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/i6nfhj7rck3f1/DASHPlaylist.mpd?a=1751048362%2CMjZkZmE2MzdkZmNkMTQxODkzYjU0YzljYmY0NDkwZWE1OGEyNDVjYThjZjgwM2YwZjJkZTU2NjMxN2I5N2RiZA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/i6nfhj7rck3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kxoco5 | /r/LocalLLaMA/comments/1kxoco5/chatterbox_tts_05b_claims_to_beat_eleven_labs/ | false | false | 394 | {'enabled': False, 'images': [{'id': 'dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k.png?width=108&crop=smart&format=pjpg&auto=webp&s=e0d1ec7f5e95f222403aab32c8108a19e12ff... | |
Agents x MCP Hackathon by Hugging Face | 1 | [removed] | 2025-05-28T18:09:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kxo3ny/agents_x_mcp_hackathon_by_hugging_face/ | Ill_Contribution6191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxo3ny | false | null | t3_1kxo3ny | /r/LocalLLaMA/comments/1kxo3ny/agents_x_mcp_hackathon_by_hugging_face/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6YDBlrUx_epQMEtVJeWfSWsJJuwv-pZdW5ltNa2XaRk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0EvYKAp7LVdHfTCNfFFmM6tq5Axz5wcQMZyht9HT3wk.jpg?width=108&crop=smart&auto=webp&s=92dd0d01d274294b33bd24f1338107c8e8710c78', 'width': 108}, {'height': 116, 'url': 'h... |
Anyone running into local deployment pain? | 1 | [removed] | 2025-05-28T18:08:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kxo392/anyone_running_into_local_deployment_pain/ | downalongthecr33k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxo392 | false | null | t3_1kxo392 | /r/LocalLLaMA/comments/1kxo392/anyone_running_into_local_deployment_pain/ | false | false | self | 1 | null |
DeepSeek-R1-0528 🔥 | 419 | [https://huggingface.co/deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | 2025-05-28T17:47:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kxnjrj/deepseekr10528/ | Xhehab_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxnjrj | false | null | t3_1kxnjrj | /r/LocalLLaMA/comments/1kxnjrj/deepseekr10528/ | false | false | self | 419 | {'enabled': False, 'images': [{'id': 'vAUxpVLie1Mqj4dWMCPpSgS4JDBz82acZHywzpoHzeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=108&crop=smart&auto=webp&s=9b162e58d60efac60b6dde3b475e84496c0c1868', 'width': 108}, {'height': 116, 'url': 'h... |
Resemble AI has Open-Sourced ChatterBox - A New State-of-the-Art TTS Model! | 2 | [removed] | 2025-05-28T17:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kxngxr/resemble_ai_has_opensourced_chatterbox_a_new/ | Sea_Revolution_5907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxngxr | false | null | t3_1kxngxr | /r/LocalLLaMA/comments/1kxngxr/resemble_ai_has_opensourced_chatterbox_a_new/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'LO7Q9Gr-40ixeoizFmL_qdV9btCM273X4Xf84slMJnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=108&crop=smart&auto=webp&s=32a59d1b8e381b6519a3935f4b2cb4fad6632e3c', 'width': 108}, {'height': 108, 'url': 'h... |
deepseek-ai/DeepSeek-R1-0528 | 819 | [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | 2025-05-28T17:44:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kxnggx/deepseekaideepseekr10528/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxnggx | false | null | t3_1kxnggx | /r/LocalLLaMA/comments/1kxnggx/deepseekaideepseekr10528/ | false | false | self | 819 | {'enabled': False, 'images': [{'id': 'vAUxpVLie1Mqj4dWMCPpSgS4JDBz82acZHywzpoHzeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=108&crop=smart&auto=webp&s=9b162e58d60efac60b6dde3b475e84496c0c1868', 'width': 108}, {'height': 116, 'url': 'h... |
Help me find this meme of a company that wants to implement AI features and become an AI company | 0 | The meme was in 2 "slides": one of an elephant (the company) and a small snake (AI features).
The second slide has the elephant inside the snake 😅.
Just found the perfect prospect to send it to | 2025-05-28T17:22:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kxmwps/help_me_find_this_meme_of_a_company_that_want_to/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxmwps | false | null | t3_1kxmwps | /r/LocalLLaMA/comments/1kxmwps/help_me_find_this_meme_of_a_company_that_want_to/ | false | false | self | 0 | null |
DeepSeek-R1-0528 VS claude-4-sonnet (still a demo) | 286 | The heptagon + 20 balls benchmark can no longer measure their capabilities, so I'm preparing to try something new | 2025-05-28T17:04:48 | https://v.redd.it/4lh915x90k3f1 | Dr_Karminski | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxmgtr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4lh915x90k3f1/DASHPlaylist.mpd?a=1751043903%2CZGZjYWU5NWQwMWNjYTZkNGVkNGFkY2NkY2I5ZjY2ZDk4Y2M2MjQ5MTZjM2UzZjdhZTFjNWRkN2UxODM1NmExNQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/4lh915x90k3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kxmgtr | /r/LocalLLaMA/comments/1kxmgtr/deepseekr10528_vs_claude4sonnet_still_a_demo/ | false | false | 286 | {'enabled': False, 'images': [{'id': 'dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK.png?width=108&crop=smart&format=pjpg&auto=webp&s=a914bac7d659e0a4b9f854e9237c0bfd55802... | |
I'm building a Self-Hosted Alternative to OpenAI Code Interpreter, E2B | 22 | Could not find a simple self-hosted solution so I built one in Rust that lets you securely run untrusted/AI-generated code in micro VMs.
**microsandbox** spins up in milliseconds, runs on your own infra, no Docker needed. And It doubles as an MCP Server so you can connect it directly with your fave MCP-enabled AI agen... | 2025-05-28T16:43:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kxlx46/im_building_a_selfhosted_alternative_to_openai/ | NyproTheGeek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxlx46 | false | null | t3_1kxlx46 | /r/LocalLLaMA/comments/1kxlx46/im_building_a_selfhosted_alternative_to_openai/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'S6I0XRPfDFdRK-ljqVPIUdkndNhwrC1263swjXHpM1M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h6hu8x8MxV_sduP2KEa8MvkkJZkIYz47KKKdjQlzqOg.jpg?width=108&crop=smart&auto=webp&s=657c810aee85367d383169806bde73a5cfb1cdde', 'width': 108}, {'height': 108, 'url': 'h... |
„[nothing]“ | 1 | 2025-05-28T16:41:30 | delobre | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxlvc7 | false | null | t3_1kxlvc7 | /r/LocalLLaMA/comments/1kxlvc7/nothing/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'HuE7wJAbOeV1dg2ASPH6m9ujEL5UVMpYDZbRTvR6Oys', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jpeg?width=108&crop=smart&auto=webp&s=06f20bc85dd33bf233e67949f20aabc7702b39a4', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jp... | |||
Codestral Embed [embedding model specialized for code] | 26 | 2025-05-28T16:40:54 | https://mistral.ai/news/codestral-embed | pahadi_keeda | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1kxlus4 | false | null | t3_1kxlus4 | /r/LocalLLaMA/comments/1kxlus4/codestral_embed_embedding_model_specialized_for/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'h... | ||
Unsloth Devstral Q8_K_XL only 30% the speed of Q8_0? | 6 | 2025-05-28T16:38:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kxlsvk/unsloth_devstral_q8_k_xl_only_30_the_speed_of_q8_0/ | liquidki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxlsvk | false | null | t3_1kxlsvk | /r/LocalLLaMA/comments/1kxlsvk/unsloth_devstral_q8_k_xl_only_30_the_speed_of_q8_0/ | false | false | 6 | null | ||
Am I the only one suffering from Leaks? | 1 | [removed] | 2025-05-28T16:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kxlj8l/am_i_the_only_one_suffering_from_leaks/ | Ok_Solution_7199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxlj8l | false | null | t3_1kxlj8l | /r/LocalLLaMA/comments/1kxlj8l/am_i_the_only_one_suffering_from_leaks/ | false | false | self | 1 | null |
I know it's "LOCAL"-LLaMA but... | 0 | I've been weighing buying vs renting for AI tasks/gens while working, say, \~8 hrs a day. I did use AI to help with the breakdown below (surprise, right?). This wouldn't be such a big thing to me; I would just buy the hardware, but I'm trying to build a place and go off-grid and use as little power as possible. (Even hooking u... | 2025-05-28T15:49:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kxkke4/i_know_its_localllama_but/ | mr_happy_nice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxkke4 | false | null | t3_1kxkke4 | /r/LocalLLaMA/comments/1kxkke4/i_know_its_localllama_but/ | false | false | self | 0 | null |
Running LLMs Locally (using llama.cpp, Ollama, Docker Runner Model, and vLLM) | 1 | [removed] | 2025-05-28T15:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kxkdcv/running_llms_locally_using_llamacpp_ollama_docker/ | Gvara | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxkdcv | false | null | t3_1kxkdcv | /r/LocalLLaMA/comments/1kxkdcv/running_llms_locally_using_llamacpp_ollama_docker/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nf1vfntDmlnVJqxHFe2djx5X6uwztCtSsbje7STTE0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wmm93anfij2PJBeK-OhX7s7HbHrFZZTSJ6_riS1E0f4.jpg?width=108&crop=smart&auto=webp&s=419ca509f80806dd0b1e360d256e1d848ea9a438', 'width': 108}, {'height': 108, 'url': 'h... |
Another reorg for Meta Llama: AGI team created | 39 | Which teams are going to get the most GPUs?
[https://www.axios.com/2025/05/27/meta-ai-restructure-2025-agi-llama](https://www.axios.com/2025/05/27/meta-ai-restructure-2025-agi-llama)
Llama team divided into two teams: an AI products team and an AGI Foundations unit.
The AI products team will be responsible for the ... | 2025-05-28T15:31:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kxk3lk/another_reorg_for_meta_llama_agi_team_created/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxk3lk | false | null | t3_1kxk3lk | /r/LocalLLaMA/comments/1kxk3lk/another_reorg_for_meta_llama_agi_team_created/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'hvqgHBjFtTay3NZyhRkPCn_2z-518HI17PkLVORa898', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/IbHRuWvtpfBCQoFRlxyxFQrSlpBeTjpMfaYH3WusCvs.jpg?width=108&crop=smart&auto=webp&s=d4dc40fd932667fe6cd956ce919f6ae5b010a7ac', 'width': 108}, {'height': 121, 'url': 'h... |
Dual RTX 3090 users (are there many of us?) | 21 |
What is your TDP? (Or optimal clock speeds)
What are your PCIe lane speeds?
Power supply?
Planning to upgrade, or sell before prices drop?
Any other remarks? | 2025-05-28T15:30:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kxk2zf/dual_rtx_3090_users_are_there_many_of_us/ | StandardLovers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxk2zf | false | null | t3_1kxk2zf | /r/LocalLLaMA/comments/1kxk2zf/dual_rtx_3090_users_are_there_many_of_us/ | false | false | self | 21 | null |
Thoughts on which open source is best for what use-cases | 2 | Wondering if there is any work done/being done to 'pick' open source models for behavior based use-cases. For example: Which open source model is good for sentiment analysis, which model is good for emotion analysis, which model is good for innovation (generating newer ideas), which model is good for anomaly detection ... | 2025-05-28T15:10:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kxjl07/thoughts_on_which_open_source_is_best_for_what/ | tazzspice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxjl07 | false | null | t3_1kxjl07 | /r/LocalLLaMA/comments/1kxjl07/thoughts_on_which_open_source_is_best_for_what/ | false | false | self | 2 | null |
QwQ 32B is Amazing (& Sharing my 131k + Imatrix) | 141 | I'm curious what your experience has been with QwQ 32B. I've seen really good takes on QwQ vs Qwen3, but I think they're not comparable. Here are the differences I see, and I'd love feedback.
# When To Use Qwen3
If I had to choose between QwQ 32B versus Qwen3 for daily AI assistant tasks, I'd choose Qwen3. This is becau... | 2025-05-28T15:00:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kxjbb5/qwq_32b_is_amazing_sharing_my_131k_imatrix/ | crossivejoker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxjbb5 | false | null | t3_1kxjbb5 | /r/LocalLLaMA/comments/1kxjbb5/qwq_32b_is_amazing_sharing_my_131k_imatrix/ | false | false | self | 141 | {'enabled': False, 'images': [{'id': '0AGVID46IoyBFfXi_I1ft4PTmcq0SBDpTWIhAApEH0s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ytxc-xelT6_LJP3XZ2AYDHZVaynypLwEGtX8q6e6SD4.jpg?width=108&crop=smart&auto=webp&s=ace50ae5421a0fa349e9031eeebbedf5d9fec0c0', 'width': 108}, {'height': 116, 'url': 'h... |
Is slower inference and non-realtime cheaper? | 3 | is there a service that can take in my requests, and then give me the response after A WHILE, like, days later.
and is significantly cheaper? | 2025-05-28T14:53:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kxj4ne/is_slower_inference_and_nonrealtime_cheaper/ | AryanEmbered | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxj4ne | false | null | t3_1kxj4ne | /r/LocalLLaMA/comments/1kxj4ne/is_slower_inference_and_nonrealtime_cheaper/ | false | false | self | 3 | null |
Llama.cpp won't use GPUs | 0 | So I recently downloaded an Unsloth quant of DeepSeek R1 to test for the hell of it.
I downloaded the CUDA 12.x version of llama.cpp from the releases section of the GitHub repo.
I then launched the model through llama-server.exe, making sure to use the --n-gpu-layers flag (or whatever it's called) and set it to 14 sin... | 2025-05-28T14:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kxifq9/llamacpp_wont_use_gpus/ | DeSibyl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxifq9 | false | null | t3_1kxifq9 | /r/LocalLLaMA/comments/1kxifq9/llamacpp_wont_use_gpus/ | false | false | self | 0 | null |
Llama.cpp: Does it make sense to use a larger --n-predict (-n) than --ctx-size (-c)? | 6 | My setup: A reasoning model eg Qwen3 32B at Q4KXL + 16k context. Those will fit snugly in 24GB VRAM.
Problem: Reasoning models, 1 time out of 3 (in my use cases), will keep on thinking for longer than the 16k window, and maybe indefinitely, and that's why I set the -n option to be slightly less than -c to account for ... | 2025-05-28T14:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kxi7qh/llamacpp_does_it_make_sense_to_use_a_larger/ | ParaboloidalCrest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxi7qh | false | null | t3_1kxi7qh | /r/LocalLLaMA/comments/1kxi7qh/llamacpp_does_it_make_sense_to_use_a_larger/ | false | false | self | 6 | null |
VideoGameBench - full code + paper release | 29 | https://reddit.com/link/1kxhmgo/video/hzjtuzzr1j3f1/player

**VideoGameBench** evaluates VLMs on Game Boy and MS-DOS games given only raw screen input, just like how a human would play. The best model (Gemini) completes just 0.48% of the benchmark. We have a bunch of clips on the website:
[vgbench.com](http://vgbench.com)

[https://arxiv.org/abs/2505.18134](https://arxiv.org/abs/2505.18134)

[https://github.com/alexzhang13/vg-bench](https://github.com/alexzhang13/vg-bench)

Alex and I will stick around to answer questions here. | 2025-05-28T13:51:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kxhmgo/videogamebench_full_code_paper_release/ | ofirpress | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxhmgo | false | null | t3_1kxhmgo | /r/LocalLLaMA/comments/1kxhmgo/videogamebench_full_code_paper_release/ | false | false | self | 29 | null |
VideoGameBench: Can Language Models play Video Games (arXiv release) | 1 | [removed] | 2025-05-28T13:42:38 | https://v.redd.it/16w87gp11j3f1 | ZhalexDev | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxhfb6 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/16w87gp11j3f1/DASHPlaylist.mpd?a=1751031773%2CNzZiNzkzMGMyOGRjZjc1YjdkNjE0ZDQxM2JiYjFmZTI3MmYwZTUxNjBkMDYyMzg3MTliN2M0ODQxYjdiOGFkZg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/16w87gp11j3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kxhfb6 | /r/LocalLLaMA/comments/1kxhfb6/videogamebench_can_language_models_play_video/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=108&crop=smart&format=pjpg&auto=webp&s=039ab140f0b83e5e726a7cd4821fa6a329f10... | |
VideoGameBench: Can Language Models play Video Games? (arXiv) | 1 | [removed] | 2025-05-28T13:40:20 | https://v.redd.it/yuiak0p00j3f1 | ZhalexDev | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxhdec | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yuiak0p00j3f1/DASHPlaylist.mpd?a=1751031635%2CZTg2NmMyOWE0NWU1NWM2NmEyZTA4ZGZlNGZhMmYzZGQ4NTI4MGZkNjM4MWU3ZTIxODlhNmNmMTFkOTEyMTYwYw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/yuiak0p00j3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kxhdec | /r/LocalLLaMA/comments/1kxhdec/videogamebench_can_language_models_play_video/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=108&crop=smart&format=pjpg&auto=webp&s=453fffc8cd31d5aaa228915c43810eece0e1f... | |
LLM on the go hardware question? (noob) | 1 | [removed] | 2025-05-28T13:31:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kxh6et/llm_on_the_go_hardware_question_noob/ | tameka777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxh6et | false | null | t3_1kxh6et | /r/LocalLLaMA/comments/1kxh6et/llm_on_the_go_hardware_question_noob/ | false | false | self | 1 | null |
FlashMoe support in ipex-llm allows you to run DeepSeek V3/R1 671B and Qwen3MoE 235B models with just 1 or 2 Intel Arc GPUs (such as the A770 and B580) | 22 | I just noticed that this team claims it is possible to run DeepSeek V3/R1 671B with two cheap Intel GPUs (and a huge amount of system RAM). I wonder if anybody has actually tried or built such a beast?
[https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/flashmoe_quickstart.md](https://github.com/i...
Is there an open source alternative to Manus? | 60 | I tried Manus and was surprised by how far ahead it is of other agents at browsing the web and using files, the terminal, etc. autonomously.
There is no tool I've tried before that comes close to it.
What's the best open source alternative to Manus that you've tried? | 2025-05-28T13:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kxgzd1/is_there_an_open_source_alternative_to_manus/ | BoJackHorseMan53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxgzd1 | false | null | t3_1kxgzd1 | /r/LocalLLaMA/comments/1kxgzd1/is_there_an_open_source_alternative_to_manus/ | false | false | self | 60 | null |
Model suggestions for string and arithmetic operations. | 0 | I am building a solution that performs string operations, simple math, intelligent conversion of unformatted dates, and datatype checks on values in variables.
What are some models that can be used for the above scenario? | 2025-05-28T13:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kxgz6u/model_suggestions_for_string_and_arithmetic/ | Forward_Friend_2078 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxgz6u | false | null | t3_1kxgz6u | /r/LocalLLaMA/comments/1kxgz6u/model_suggestions_for_string_and_arithmetic/ | false | false | self | 0 | null |
chat-first code editing? | 3 | For software development with LMs we have quite a few IDE-centric solutions like Roo, Cline, <the commercial>, then the hybrid but bloated/heavy UI of OpenHands, and then the hardcore CLI stuff that just "works", which is fairly feasible to start even on the go in [Termux](https://f-droid.org/en/packages/com.termux/).
What ... | 2025-05-28T13:14:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kxgs54/chatfirst_code_editing/ | uhuge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxgs54 | false | null | t3_1kxgs54 | /r/LocalLLaMA/comments/1kxgs54/chatfirst_code_editing/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '7jMZ7XD80oeucmGEaTwktIRZexLtGWvJfKdVD6Wu2SI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CXNpYDWzyOfIgBDx_cT8hOjSBmkBzPV2V8PF_sGNtQk.jpg?width=108&crop=smart&auto=webp&s=feccf0b924bf22ec5c533966c536d95028f97e5c', 'width': 108}], 'source': {'height': 19... |