| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
—Built an app and 1 click installer for local AI— with enterprise security features—-running in a VM here | 1 | 2025-06-07T00:28:52 | https://v.redd.it/57s4eew7ge5f1 | Outrageous_Beat_3630 | /r/LocalLLaMA/comments/1l57x9g/built_an_app_and_1_click_installer_for_local_ai/ | 1970-01-01T00:00:00 | 0 | {} | 1l57x9g | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/57s4eew7ge5f1/DASHPlaylist.mpd?a=1751977737%2CMTY2ZjcwMjViNWQ1ODQ3ZDc4MzIzYjA2NTBjZjg2MzUxZWM5YzU2NTFkYzY4N2U1NmQ5YzRjMWYzNDU1YjUzYg%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/57s4eew7ge5f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l57x9g | /r/LocalLLaMA/comments/1l57x9g/built_an_app_and_1_click_installer_for_local_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr.png?width=108&crop=smart&format=pjpg&auto=webp&s=a49f2a0a5f5d44d429d4432d486ad597d9c3... | ||
I need help with something in llama.cpp | 1 | [removed] | 2025-06-07T00:25:47 | https://www.reddit.com/gallery/1l57v1f | Puzzled-Yoghurt564 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l57v1f | false | null | t3_1l57v1f | /r/LocalLLaMA/comments/1l57v1f/i_need_help_something_llamacpp/ | false | false | 1 | null | |
I built a platform that generates overviews of codebases and creates a map of the codebase dependencies | 18 | 2025-06-07T00:16:33 | https://v.redd.it/dtd99xtbee5f1 | ComfortableArm121 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l57of0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dtd99xtbee5f1/DASHPlaylist.mpd?a=1751847405%2CZDdjMDMwNzA4ZjgyZGU0YWQyOWIyMzRiMGNkNzZlY2U3YWY1M2RlNjZlNDIzMTIzNjc5ZTM2YTdlMjI2MjYyNw%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/dtd99xtbee5f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l57of0 | /r/LocalLLaMA/comments/1l57of0/i_built_a_platform_that_generates_overviews_of/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=108&crop=smart&format=pjpg&auto=webp&s=0458269863ea8abf72822eef3c34b48413ba9... | ||
I built a platform that generates overviews of codebases and creates a map of the codebase dependencies | 1 | [removed] | 2025-06-07T00:14:50 | https://v.redd.it/mwjkmq1wde5f1 | ComfortableArm121 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l57n42 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mwjkmq1wde5f1/DASHPlaylist.mpd?a=1751847303%2CZDllZmYxOTc4YjRjYWI4YTUwMmY5ZTJhNTllYzRhODBiMGY3YTY3OGVjNTgzYzBmOWM3Y2Q3YjEzNDU0MTM4Ng%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/mwjkmq1wde5f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l57n42 | /r/LocalLLaMA/comments/1l57n42/i_built_a_platform_that_generates_overviews_of/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=108&crop=smart&format=pjpg&auto=webp&s=1a95ff36191f31f206c1a34e954e82bc99e05... | |
Built a one click local AI installer and fully functional app named Feni…🔎has enterprise security features baked in. Got it downloaded and installed in a VM! Spent a half a month on the project. What do you think? | 1 | 2025-06-07T00:14:23 | https://v.redd.it/gjqr1d95de5f1 | Outrageous_Beat_3630 | /r/LocalLLaMA/comments/1l57ms1/built_a_one_click_local_ai_installer_and_fully/ | 1970-01-01T00:00:00 | 0 | {} | 1l57ms1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gjqr1d95de5f1/DASHPlaylist.mpd?a=1751976867%2CNzUxMTEwODE2YjAwZDgwZjY1YmU2YjRlMzA5ZWY3MjE0YzUxOGNkZmIwMGVhN2Y0MzlmNWE2ZGVlNzkzNWFkYg%3D%3D&v=1&f=sd', 'duration': 176, 'fallback_url': 'https://v.redd.it/gjqr1d95de5f1/DASH_1080.mp4?source=fallback', '... | t3_1l57ms1 | /r/LocalLLaMA/comments/1l57ms1/built_a_one_click_local_ai_installer_and_fully/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO.png?width=108&crop=smart&format=pjpg&auto=webp&s=46df3ba98235d67a68d6d5e8ffb339bd79eb... | ||
Pocketflow is now a workflow generator called Osly!! All you need to do is describe your idea | 0 | We built a tool that automates repetitive tasks super easily! Pocketflow was cool but you needed to be technical for that. We re-imagined a way for non-technical creators to build workflows without an IDE.
How our tool, Osly works:
1. Describe any task in plain English.
2. Our AI builds, tests, and perfects a robust ... | 2025-06-06T23:56:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l579ap/pocketflow_is_now_a_workflow_generator_called/ | Weak_Birthday2735 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l579ap | false | null | t3_1l579ap | /r/LocalLLaMA/comments/1l579ap/pocketflow_is_now_a_workflow_generator_called/ | false | false | self | 0 | null |
Pocketflow is now a workflow generator called Osly!! All you need to do is describe your idea | 2 | [removed] | 2025-06-06T23:47:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l5735k/pocketflow_is_now_a_workflow_generator_called/ | Weak_Birthday2735 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5735k | false | null | t3_1l5735k | /r/LocalLLaMA/comments/1l5735k/pocketflow_is_now_a_workflow_generator_called/ | false | false | self | 2 | null |
9060xt 16gb vs B580 lmstudio | 1 | [removed] | 2025-06-06T23:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l56b1w/9060xt_16gb_vs_b580_lmstudio/ | Buildthehomelab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l56b1w | false | null | t3_1l56b1w | /r/LocalLLaMA/comments/1l56b1w/9060xt_16gb_vs_b580_lmstudio/ | true | false | spoiler | 1 | null |
Recommended AI model to run on my laptop without overheating? | 1 | [removed] | 2025-06-06T22:41:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l55nxx/recommended_ai_model_to_run_on_my_laptop_without/ | Jealous_Matter_1282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l55nxx | false | null | t3_1l55nxx | /r/LocalLLaMA/comments/1l55nxx/recommended_ai_model_to_run_on_my_laptop_without/ | false | false | self | 1 | null |
Guys real question where llama 4 behemoth and thinking ?? | 238 | 2025-06-06T22:21:01 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l557lg | false | null | t3_1l557lg | /r/LocalLLaMA/comments/1l557lg/guys_real_question_where_llama_4_behemoth_and/ | false | false | 238 | {'enabled': True, 'images': [{'id': 'I2Nh12fwA5O6Csj6ndw-N8Tw7fW5rQnRhZ3bsU04g2k', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png?width=108&crop=smart&auto=webp&s=d6106a435db9f4caf57819ef012afd4b1367adb8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png... | |||
Local Vision LLM finetuning | 1 | [removed] | 2025-06-06T22:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l54w6r/local_vision_llm_finetuning/ | Cool-Instruction-435 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l54w6r | false | null | t3_1l54w6r | /r/LocalLLaMA/comments/1l54w6r/local_vision_llm_finetuning/ | false | false | self | 1 | null |
So cool! Imagine if it was local. Any similar localLLM projects out there? | 0 | https://youtu.be/FpSJX59L7N4?si=SYCl8STqFxZnwg7a | 2025-06-06T21:59:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l54pw7/so_cool_imagine_if_it_was_local_any_similar/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l54pw7 | false | null | t3_1l54pw7 | /r/LocalLLaMA/comments/1l54pw7/so_cool_imagine_if_it_was_local_any_similar/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'navJ1b03qwSRM5044KR_KP9_62j9mUy-O-_xXeB6PLE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/q03BPLLZr7F5_GokBlcPkizRDQ2BMXNYnhJBNj_JYQE.jpg?width=108&crop=smart&auto=webp&s=78432354ebc2207fd86ed0f8bc4cccd96d966390', 'width': 108}, {'height': 162, 'url': 'h... |
so anyway.. i ported Bagel to run with 8GB... not that you should but... | 1 | 2025-06-06T21:37:35 | https://www.reddit.com/r/CrossosAI/comments/1l54321/behold_core_bagel/? | loscrossos | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l547t2 | false | null | t3_1l547t2 | /r/LocalLLaMA/comments/1l547t2/so_anyway_i_ported_bagel_to_run_with_8gb_not_that/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'a-4P51wt8yglyYw9FNkMIrrE_-Z-7_PRPfqbH82yLV0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HeIcia73z3LdeB3kx9Kglr2UFvrrZyKfrF5iRF-YY3o.jpg?width=108&crop=smart&auto=webp&s=98ef100eaab0c4d07019f1e5092ca7ae0d227325', 'width': 108}, {'height': 108, 'url': 'h... | |
Git for Idiots (Broken down to Four Commands) | 22 | Before AI takes over, people will still have to deal with git.
Since I noticed that a lot of my colleagues want to work with AI but have no idea how Git works, I have implemented a basic Git for Idiots which breaks Git down to basic version control and online backup functionality for solo projects with four com... | 2025-06-06T21:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l53ych/git_for_idiots_broken_down_to_four_commands/ | Consistent-Disk-7282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l53ych | false | null | t3_1l53ych | /r/LocalLLaMA/comments/1l53ych/git_for_idiots_broken_down_to_four_commands/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'BdPlmM6UBlIvv_9b8BtloLVbPtkWemBeAm8iOCCLElw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_w4iQi8hmqjoa2i4yir8Njz05rJVqGjaSPtQW3d3ARE.jpg?width=108&crop=smart&auto=webp&s=86428f98ae948af850ee82e65e5ccbd41b779cbe', 'width': 108}, {'height': 108, 'url': 'h...
Same document retrieved multiple times in results – why? | 1 | [removed] | 2025-06-06T20:58:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l53asa/same_document_retrieved_multiple_times_in_results/ | OldBlackberry9158 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l53asa | false | null | t3_1l53asa | /r/LocalLLaMA/comments/1l53asa/same_document_retrieved_multiple_times_in_results/ | false | false | self | 1 | null |
Training Arguments | 1 | [removed] | 2025-06-06T20:50:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l534kh/training_arguments/ | EchoOdd5367 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l534kh | false | null | t3_1l534kh | /r/LocalLLaMA/comments/1l534kh/training_arguments/ | false | false | self | 1 | null |
Terrible Hindi translation, missing texts, paused timeline with Whisper? | 0 | I have been trying very hard for hours.
When I am using Whisper, all models from tiny to large, I am facing this issue.
Also, I set the language to Hindi, and if I don't set anything I get an English translation of it, which is surprisingly good.
While I just want correct Hindi text over it. | 2025-06-06T20:50:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l5345h/terrible_hindi_translation_missing_texts_paused/ | jadhavsaurabh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5345h | false | null | t3_1l5345h | /r/LocalLLaMA/comments/1l5345h/terrible_hindi_translation_missing_texts_paused/ | false | false | self | 0 | null |
Is there appetite for hosting 3b/8b size models at an affordable rate? | 0 | I don't want this to be a promotional post even though it kind of is. We are looking for people who want to host 3b/8b models of the llama, gemma, and mistral model families. We are working towards expanding to qwen and eventually larger model sizes; we are using new hardware that hasn't been really publicized like Gro... | 2025-06-06T20:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l52z9k/is_there_appetite_for_hosting_3b8b_size_models_at/ | No-Fig-8614 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l52z9k | false | null | t3_1l52z9k | /r/LocalLLaMA/comments/1l52z9k/is_there_appetite_for_hosting_3b8b_size_models_at/ | false | false | self | 0 | null |
CrewAI with Ollama and MCP | 0 | Anybody spin this up with Ollama successfully? I tried using the example and spun up an MCP server with tools.
I can see the tools and “use” them, but I cannot for the life of me get the output from it. | 2025-06-06T20:30:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l52nov/crewai_with_ollama_and_mcp/ | SpareIntroduction721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l52nov | false | null | t3_1l52nov | /r/LocalLLaMA/comments/1l52nov/crewai_with_ollama_and_mcp/ | false | false | self | 0 | null |
AI server help, dual K80s, LocalAGI | 0 | Hey everyone,
I’m trying to get LocalAGI set up on my local server to act as a backend replacement for Ollama, mainly because I want search tools, memory, and agent capabilities that Ollama doesn’t currently offer. I’ve been having a tough time getting everything running reliably, and I could use some help or guidance... | 2025-06-06T20:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l52n0b/ai_server_help_duel_k80s_localagi/ | JcorpTech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l52n0b | false | null | t3_1l52n0b | /r/LocalLLaMA/comments/1l52n0b/ai_server_help_duel_k80s_localagi/ | false | false | self | 0 | null |
Help with Proxmox + Debian + Docker w/ Nvidia 5060TI | 2 | Hi! I'm at my wits' end here. I've been trying for the past few days with varying levels of success and failure. I have Proxmox running with a Debian VM running Docker containers. I'm trying to use a 5060 Ti in passthrough mode to the Debian VM.
I have the cpu set to host and passed through the 5060TI using PCI.
I'm su... | 2025-06-06T20:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l5277f/help_with_proxmox_debian_docker_w_nvidia_5060ti/ | EarEquivalent3929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5277f | false | null | t3_1l5277f | /r/LocalLLaMA/comments/1l5277f/help_with_proxmox_debian_docker_w_nvidia_5060ti/ | false | false | self | 2 | null |
3b and 7b Serving with new Hardware | 2 | I don't want this to be a promotional post even though it kind of is. We are looking for people who want to host 3b/8b models of the llama, gemma, and mistral model families. We are working towards expanding to qwen and eventually larger model sizes: [https://www.positron.ai/snap-serve](https://www.positron.ai/snap-serv... | 2025-06-06T19:58:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l51vxy/3b_and_7b_serving_with_new_hardware/ | No-Fig-8614 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l51vxy | false | null | t3_1l51vxy | /r/LocalLLaMA/comments/1l51vxy/3b_and_7b_serving_with_new_hardware/ | false | false | self | 2 | null |
What is the best value card I could buy for decent performance? | 3 | I have a 1080 (ancient) card that I use now with 7b-ish models and I'm thinking of an update mainly to use larger models. My use case is running an embedding model alongside a normal one and I don't mind switching the "normal" models depending on the case (coding vs chatbot). I was looking for a comparator for differe... | 2025-06-06T19:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l51p85/what_is_the_best_value_card_i_could_buy_for/ | equinoxel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l51p85 | false | null | t3_1l51p85 | /r/LocalLLaMA/comments/1l51p85/what_is_the_best_value_card_i_could_buy_for/ | false | false | self | 3 | null |
NER: extract position | 1 | Hi,
I wonder if it is possible to extract the position of a named entity with a local LLM.
For instance, suppose I have a recipe with foods; I want to extract all foods along with the position of each word in the original text.
If I prompt the LLM with something like "extract foods with their positions", it will fail many times.
F... | 2025-06-06T19:45:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l51lhp/ner_extract_position/ | TargetDangerous2216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l51lhp | false | null | t3_1l51lhp | /r/LocalLLaMA/comments/1l51lhp/ner_extract_position/ | false | false | self | 1 | null |
Need selfhosted AI to generate better bash scripts and ansible playbooks | 1 | Hi. I am new to AI Models.
I need a self-hosted AI which I can give access to a directory with my scripts, playbooks, etc., from which it can check the project's code and tell me where I could make it better and more concise, and where it's wrong or the grammar of a comment is bad.
If possible it should be able to help me ... | 2025-06-06T19:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l51c1o/need_selfhosted_ai_to_generate_better_bash/ | human_with_humanity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l51c1o | false | null | t3_1l51c1o | /r/LocalLLaMA/comments/1l51c1o/need_selfhosted_ai_to_generate_better_bash/ | false | false | self | 1 | null |
LegoGPT training params | 1 | [removed] | 2025-06-06T19:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l50z5l/legogpt_training_params/ | EchoOdd5367 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l50z5l | false | null | t3_1l50z5l | /r/LocalLLaMA/comments/1l50z5l/legogpt_training_params/ | false | false | self | 1 | null |
Can you help me find that story writing LLM tool that was introduced by other reddit user in this subreddit? | 1 | [removed] | 2025-06-06T19:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l50snd/can_you_help_me_find_that_story_writing_llm_tool/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l50snd | false | null | t3_1l50snd | /r/LocalLLaMA/comments/1l50snd/can_you_help_me_find_that_story_writing_llm_tool/ | false | false | self | 1 | null |
Help Choosing the Best LLM Inference Stack for Local Deployment (8x RTX 6000 Blackwell) | 1 | [removed] | 2025-06-06T19:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l50kzq/help_choosing_the_best_llm_inference_stack_for/ | Fresh_Month_2594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l50kzq | false | null | t3_1l50kzq | /r/LocalLLaMA/comments/1l50kzq/help_choosing_the_best_llm_inference_stack_for/ | false | false | self | 1 | null |
Is there a local alternative to google code diffusion? | 6 | LLMs write code, and I have some installed locally, and they are working fine
Google has DeepMind Diffusion, and I tested it today, just a few requests to build a few web samples, and that is shit!!!
No LLMs, local or remote, can compete with that shit
The question, is there an open-source alternative of something similar... | 2025-06-06T18:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l504fg/is_there_a_local_alternative_to_google_code/ | Careful-State-854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l504fg | false | null | t3_1l504fg | /r/LocalLLaMA/comments/1l504fg/is_there_a_local_alternative_to_google_code/ | false | false | self | 6 | null |
Best online playground for running inference on Llama models? | 1 | [removed] | 2025-06-06T18:28:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l4zqwi/best_online_playground_for_running_inference_on/ | LastOfStendhal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4zqwi | false | null | t3_1l4zqwi | /r/LocalLLaMA/comments/1l4zqwi/best_online_playground_for_running_inference_on/ | false | false | self | 1 | null |
Seeking similar model with longer context length than Darkest-Muse-v1? | 1 | [removed] | 2025-06-06T18:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l4zje4/seeking_similar_model_with_longer_context_length/ | julimoooli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4zje4 | false | null | t3_1l4zje4 | /r/LocalLLaMA/comments/1l4zje4/seeking_similar_model_with_longer_context_length/ | false | false | self | 1 | null |
Opinion needed | Local/Remote AI chat webapp (incomplete / under development) | 1 | [removed] | 2025-06-06T18:18:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l4zi0p/opinion_needed_localremote_ai_chat_webapp/ | Neural-Systems | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4zi0p | false | null | t3_1l4zi0p | /r/LocalLLaMA/comments/1l4zi0p/opinion_needed_localremote_ai_chat_webapp/ | false | false | self | 1 | null |
Seeking similar model with longer context length than Darkest-Muse-v1? | 1 | [removed] | 2025-06-06T18:17:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l4zgzy/seeking_similar_model_with_longer_context_length/ | julimoooli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4zgzy | false | null | t3_1l4zgzy | /r/LocalLLaMA/comments/1l4zgzy/seeking_similar_model_with_longer_context_length/ | false | false | self | 1 | null |
Offline verbal chat bot with modular tool calling! | 17 | This is an update from my original [post](https://www.reddit.com/r/LocalLLaMA/comments/1l2vrg2/fully_offline_verbal_chat_bot/) where I demoed my fully offline verbal chat bot. I've made a couple updates, and should be releasing it on github soon.
\- Clipboard insertion: allows you to insert your clipboard to the prom... | 2025-06-06T17:44:15 | https://v.redd.it/onqpjk30fc5f1 | NonYa_exe | /r/LocalLLaMA/comments/1l4yncl/offline_verbal_chat_bot_with_modular_tool_calling/ | 1970-01-01T00:00:00 | 0 | {} | 1l4yncl | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/onqpjk30fc5f1/DASHPlaylist.mpd?a=1751953459%2CNWQzMDE3YjQ2YmZlY2I0NjZjNTg4ZmU4ZmJlYzFhZDI3NTllOTNkMzdmM2M5YWNiZjY2MzIwM2JlMmVjNWFjYQ%3D%3D&v=1&f=sd', 'duration': 250, 'fallback_url': 'https://v.redd.it/onqpjk30fc5f1/DASH_1080.mp4?source=fallback', '... | t3_1l4yncl | /r/LocalLLaMA/comments/1l4yncl/offline_verbal_chat_bot_with_modular_tool_calling/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi.png?width=108&crop=smart&format=pjpg&auto=webp&s=f2dd49444defd2d20e432cf90df7df06202cc... | |
Quick Question on Limitations of Mac M1 for LLMS | 1 | [removed] | 2025-06-06T17:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l4xspl/quick_question_on_limitations_of_mac_m1_for_llms/ | chrismryan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4xspl | false | null | t3_1l4xspl | /r/LocalLLaMA/comments/1l4xspl/quick_question_on_limitations_of_mac_m1_for_llms/ | false | false | self | 1 | null |
what's the case against flash attention? | 65 | I accidentally stumbled upon the -fa (flash attention) flag in llama.cpp's llama-server. I cannot speak to the speedup in performance as I haven't properly tested it, but the memory optimization is huge: an 8B F16 GGUF model with 100k context fits comfortably in a 32GB VRAM GPU with some 2-3 GB to spare.
A very brief search revealed ... | 2025-06-06T16:59:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l4xiwg/whats_the_case_against_flash_attention/ | Responsible-Crew1801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4xiwg | false | null | t3_1l4xiwg | /r/LocalLLaMA/comments/1l4xiwg/whats_the_case_against_flash_attention/ | false | false | self | 65 | null |
I forked google’s Fullstack LangGraph Quickstart to work with ollama + searxng | 1 | [removed] | 2025-06-06T16:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l4x2i8/i_forked_googles_fullstack_langgraph_quickstart/ | Filo0104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4x2i8 | false | null | t3_1l4x2i8 | /r/LocalLLaMA/comments/1l4x2i8/i_forked_googles_fullstack_langgraph_quickstart/ | false | false | self | 1 | null |
Hugging Face Just Dropped it's MCP Server | 229 | 2025-06-06T16:12:58 | https://hf.co/mcp | eternviking | hf.co | 1970-01-01T00:00:00 | 0 | {} | 1l4wdwh | false | null | t3_1l4wdwh | /r/LocalLLaMA/comments/1l4wdwh/hugging_face_just_dropped_its_mcp_server/ | false | false | default | 229 | null | |
Better quantization: Yet Another Quantization Algorithm | 141 | We're introducing Yet Another Quantization Algorithm, a new quantization algorithm that better preserves the original model's outputs after quantization. YAQA reduces the KL by >30% over QTIP and achieves an even lower KL than Google's QAT model on Gemma 3.
See the paper [https://arxiv.org/pdf/2505.22988](https://arx... | 2025-06-06T16:12:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l4wd2w/better_quantization_yet_another_quantization/ | tsengalb99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4wd2w | false | null | t3_1l4wd2w | /r/LocalLLaMA/comments/1l4wd2w/better_quantization_yet_another_quantization/ | false | false | self | 141 | null |
ether0 - Mistral 24B with RL on several molecular design tasks in chemistry | 34 | A Reasoning Model for Chemistry
open weights: [https://huggingface.co/futurehouse/ether0](https://huggingface.co/futurehouse/ether0)
ether0 is a 24B language model trained to reason in English and output molecular structures as SMILES. It is derived from fine-tuning and reinforcement learning training from Mistral... | 2025-06-06T15:54:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l4vx7i/ether0_mistral_24b_with_rl_on_several_molecular/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4vx7i | false | null | t3_1l4vx7i | /r/LocalLLaMA/comments/1l4vx7i/ether0_mistral_24b_with_rl_on_several_molecular/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'iCMq5l8PV8l3uvNWWBrpeQgtO0VcTQXa9BqIXRBGPmk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HJg_in9BrTrMWG9hxp35AuIjQiW6FYC9tZD_C0eCYE8.jpg?width=108&crop=smart&auto=webp&s=59569a79b8743eaba966d7f2912b7d37ab60b644', 'width': 108}, {'height': 116, 'url': 'h... |
Is this the largest "No synthetic data" open weight LLM? (142B) | 356 | From the GitHub page of https://huggingface.co/rednote-hilab/dots.llm1.base | 2025-06-06T15:47:49 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4vrj4 | false | null | t3_1l4vrj4 | /r/LocalLLaMA/comments/1l4vrj4/is_this_the_largest_no_synthetic_data_open_weight/ | false | false | default | 356 | {'enabled': True, 'images': [{'id': 'sgokl11mvb5f1', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?width=108&crop=smart&auto=webp&s=70e091dae77e690684915ac00545acf713fb2f16', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?width=216&crop=smart&auto=we... | |
I thought Qwen3 was putting out some questionable content into my code... | 33 | Oh. \*\*SOLVED.\*\* See why, I think, at the end.
Okay, so I was trying \`aider\`. Only tried a bit here and there, but I just switched to using \`Qwen\_Qwen3-14B-Q6\_K\_L.gguf\`. And I see this in my aider output:
\`\`\`text
\## Signoff: insurgent (razzin' frazzin' motherfu... stupid directx...)
\`\`\`
Now, ... | 2025-06-06T15:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l4vdnd/i_thought_qwen3_was_putting_out_some_questionable/ | jaggzh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4vdnd | false | null | t3_1l4vdnd | /r/LocalLLaMA/comments/1l4vdnd/i_thought_qwen3_was_putting_out_some_questionable/ | false | false | 33 | null | |
New model - Qwen3 Embedding + Reranker | 1 | [removed] | 2025-06-06T15:01:32 | https://www.reddit.com/gallery/1l4umgm | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l4umgm | false | null | t3_1l4umgm | /r/LocalLLaMA/comments/1l4umgm/new_model_qwen3_embedding_reranker/ | false | false | 1 | null | |
New model - Qwen3 Embedding + Reranker | 18 | OP: [https://www.reddit.com/r/Qwen\_AI/comments/1l4qvhe/new\_model\_qwen3\_embedding\_reranker/](https://www.reddit.com/r/Qwen_AI/comments/1l4qvhe/new_model_qwen3_embedding_reranker/)
Qwen Team has launched a new set of AI models, **Qwen3 Embedding** and **Qwen3 Reranker** , it is designed for text embedding, search,... | 2025-06-06T14:58:59 | https://www.reddit.com/gallery/1l4ujxg | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l4ujxg | false | null | t3_1l4ujxg | /r/LocalLLaMA/comments/1l4ujxg/new_model_qwen3_embedding_reranker/ | false | false | default | 18 | null |
New model - Qwen3 Embedding + Reranker | 0 | OP: [https://www.reddit.com/r/Qwen\_AI/comments/1l4qvhe/new\_model\_qwen3\_embedding\_reranker/](https://www.reddit.com/r/Qwen_AI/comments/1l4qvhe/new_model_qwen3_embedding_reranker/)
Qwen Team has launched a new set of AI models, **Qwen3 Embedding** and **Qwen3 Reranker** , it is designed for text embedding, search,... | 2025-06-06T14:58:56 | https://www.reddit.com/gallery/1l4ujwg | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l4ujwg | false | null | t3_1l4ujwg | /r/LocalLLaMA/comments/1l4ujwg/new_model_qwen3_embedding_reranker/ | false | false | default | 0 | null |
New model - Qwen3 Embedding + Reranker | 0 | OP: [https://www.reddit.com/r/Qwen\_AI/comments/1l4qvhe/new\_model\_qwen3\_embedding\_reranker/](https://www.reddit.com/r/Qwen_AI/comments/1l4qvhe/new_model_qwen3_embedding_reranker/)
Qwen Team has launched a new set of AI models, **Qwen3 Embedding** and **Qwen3 Reranker** , it is designed for text embedding, search,... | 2025-06-06T14:58:46 | https://www.reddit.com/gallery/1l4ujqq | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l4ujqq | false | null | t3_1l4ujqq | /r/LocalLLaMA/comments/1l4ujqq/new_model_qwen3_embedding_reranker/ | false | false | default | 0 | null |
New model - Qwen3 Embedding + Reranker | 1 | [removed] | 2025-06-06T14:58:41 | https://www.reddit.com/gallery/1l4ujo7 | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l4ujo7 | false | null | t3_1l4ujo7 | /r/LocalLLaMA/comments/1l4ujo7/new_model_qwen3_embedding_reranker/ | false | false | 1 | null | |
Have Large Language Models(LLMs) Finally Mastered Geolocation? | 18 | > An ambiguous city street, a freshly mown field, and a parked armoured vehicle were among the example photos we chose to challenge Large Language Models (LLMs) from OpenAI, Google, Anthropic, Mistral and xAI to geolocate.
> Back in July 2023, Bellingcat analysed the geolocation performance of OpenAI and Google’s mod... | 2025-06-06T14:24:54 | https://www.bellingcat.com/resources/how-tos/2025/06/06/have-llms-finally-mastered-geolocation/ | True-Combination7059 | bellingcat.com | 1970-01-01T00:00:00 | 0 | {} | 1l4tqgt | false | null | t3_1l4tqgt | /r/LocalLLaMA/comments/1l4tqgt/have_large_language_modelsllms_finally_mastered/ | false | false | default | 18 | null |
Deciding on hardware requirements | 1 | [removed] | 2025-06-06T14:20:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l4tmw8/deciding_on_hardware_requirements/ | Beautiful_Wait_8964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4tmw8 | false | null | t3_1l4tmw8 | /r/LocalLLaMA/comments/1l4tmw8/deciding_on_hardware_requirements/ | false | false | self | 1 | null |
Bad for device to let MBP M4 64GB process all night? (e.g. damage?) | 1 | [removed] | 2025-06-06T13:36:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l4sm2r/bad_for_device_to_let_mbp_m4_64gb_process_all/ | Electronic_Voice_306 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4sm2r | false | null | t3_1l4sm2r | /r/LocalLLaMA/comments/1l4sm2r/bad_for_device_to_let_mbp_m4_64gb_process_all/ | false | false | self | 1 | null |
Which LLM is good for NSFW Text to Image Prompts? | 1 | [removed] | 2025-06-06T13:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l4sljd/which_llm_is_good_for_nsfw_text_to_image_prompts/ | Cheap_Musician_5382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4sljd | false | null | t3_1l4sljd | /r/LocalLLaMA/comments/1l4sljd/which_llm_is_good_for_nsfw_text_to_image_prompts/ | false | false | nsfw | 1 | null |
Current best model for technical documentation text generation for RAG / fine tuning? | 5 | I want to create a model which supports us in writing technical documentation. We already have a lot of text from older documentations and want to use this as RAG / fine tuning source. Inference GPU memory size will be at least 80GB.
Which model would you recommend for this task currently? | 2025-06-06T13:01:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l4rtov/current_best_model_for_technical_documentation/ | OkAstronaut4911 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4rtov | false | null | t3_1l4rtov | /r/LocalLLaMA/comments/1l4rtov/current_best_model_for_technical_documentation/ | false | false | self | 5 | null |
Semantic routing and caching doesn't work - task specific LLMs (TLMs) ftw! | 9 | If you are building caching techniques for LLMs or developing a router to handle certain queries by select LLMs/agents - know that semantic caching and routing is a broken approach. Here is why.
* Follow-ups or Elliptical Queries: Same issue as embeddings — "And Boston?" doesn't carry meaning on its own. Clustering wi... | 2025-06-06T12:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l4rnsc/semantic_routing_and_caching_doesnt_work_task/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4rnsc | false | null | t3_1l4rnsc | /r/LocalLLaMA/comments/1l4rnsc/semantic_routing_and_caching_doesnt_work_task/ | false | false | self | 9 | null |
Ailoy: A super-easy python / javasript agent builder | 19 | We’ve released **Ailoy**, a library that makes building agents incredibly easy.
We believe it's the easiest way to embed agents in your code.
available for both Python and JavaScript.
Homepage: [https://brekkylab.github.io/ailoy/](https://brekkylab.github.io/ailoy/)
Github: [https://github.com/brekkylab/ailoy](htt... | 2025-06-06T12:35:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l4rain/ailoy_a_supereasy_python_javasript_agent_builder/ | ArmCompetitive4605 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4rain | false | null | t3_1l4rain | /r/LocalLLaMA/comments/1l4rain/ailoy_a_supereasy_python_javasript_agent_builder/ | false | false | self | 19 | null |
Local AI on different PCs? | 1 | [removed] | 2025-06-06T12:07:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l4qqyb/local_ai_on_different_pcs/ | MoneyMultiplier888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4qqyb | false | null | t3_1l4qqyb | /r/LocalLLaMA/comments/1l4qqyb/local_ai_on_different_pcs/ | false | false | self | 1 | null |
Best model for coding on 8GB VRAM | 1 | [removed] | 2025-06-06T11:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l4qfx0/best_model_for_coding_on_8gb_vram/ | PressLaunchMike | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4qfx0 | false | null | t3_1l4qfx0 | /r/LocalLLaMA/comments/1l4qfx0/best_model_for_coding_on_8gb_vram/ | false | false | self | 1 | null |
Build LLM from Scratch | Mega Playlist of 43 videos | 47 | Just like with machine learning, you will be a serious LLM engineer only if you truly understand how the nuts and bolts of a Large Language Model (LLM) work.
Very few people understand how an LLM exactly works. Even fewer can build an entire LLM from scratch.
Wouldn't it be great for you to build your own LLM from sc... | 2025-06-06T11:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l4qf6k/build_llm_from_scratch_mega_playlist_of_43_videos/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4qf6k | false | null | t3_1l4qf6k | /r/LocalLLaMA/comments/1l4qf6k/build_llm_from_scratch_mega_playlist_of_43_videos/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': '5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=108&crop=smart&auto=webp&s=9b6bc043bdccaad2019c8bbbae3441b99aaf894f', 'width': 108}, {'height': 121, 'url': 'h... |
Mega LLM Resource of 43 lectures | Popular Youtube Playlist | 1 | [removed] | 2025-06-06T11:48:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l4qeib/mega_llm_resource_of_43_lectures_popular_youtube/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4qeib | false | {'oembed': {'description': 'In this playlist, we will learn about the entire process of building a Large Language Model (LLM) from scratch. Nothing will be assumed. Everything will be s...', 'height': 450, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtu... | t3_1l4qeib | /r/LocalLLaMA/comments/1l4qeib/mega_llm_resource_of_43_lectures_popular_youtube/ | false | false | 1 | null | |
I built an app that turns your photos into smart packing lists — all on your iPhone, 100% private, no APIs, no data collection! | 283 | Fullpack uses Apple’s **VisionKit** to identify items directly from your photos and helps you organize them into **packing lists** for any occasion.
Whether you're prepping for a “Workday,” “Beach Holiday,” or “Hiking Weekend,” you can easily create a plan and Fullpack will remind you what to pack before you head out... | 2025-06-06T11:38:47 | w-zhong | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4q7xf | false | null | t3_1l4q7xf | /r/LocalLLaMA/comments/1l4q7xf/i_built_an_app_that_turns_your_photos_into_smart/ | false | false | default | 283 | {'enabled': True, 'images': [{'id': '9b1s8amsla5f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?width=108&crop=smart&auto=webp&s=dd5d1053a10125600d16baa908d60a3850eee9cc', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?width=216&crop=smart&auto=... | |
Cannot even run the smallest model on system RAM? | 0 | I am a bit confused. I am trying to run small LLMs on my Unraid server within the Ollama docker, using just the CPU and 16GB of system RAM.
Got Ollama up and running, but even when pulling the smallest models like Qwen 3 0.6B with Q4\_K\_M quantization, Ollama tells me I need way more RAM than I have left to spare. Wh... | 2025-06-06T11:29:31 | FloJak2004 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4q25p | false | null | t3_1l4q25p | /r/LocalLLaMA/comments/1l4q25p/cannot_even_run_the_smallest_model_on_system_ram/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'jxaainwcka5f1', 'resolutions': [{'height': 8, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?width=108&crop=smart&auto=webp&s=5952beb830596c4475777528bce76a5915d89885', 'width': 108}, {'height': 16, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?width=216&crop=smart&auto=webp&... | |
Today, I've mostly been conversing with Reddit from my mobile... | 1 | 2025-06-06T11:28:30 | https://v.redd.it/t5n4dkobla5f1 | AffectionateHoney992 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4q1jc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t5n4dkobla5f1/DASHPlaylist.mpd?a=1751801326%2CODllOGNlOTJjZGMyZTcwYzY0NWYwYjI3NGNjMWU3ZDgzYmQwZDQxNDg4OTIyMWZlNzk4OTY1OThiNDdjYThiMw%3D%3D&v=1&f=sd', 'duration': 68, 'fallback_url': 'https://v.redd.it/t5n4dkobla5f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l4q1jc | /r/LocalLLaMA/comments/1l4q1jc/today_ive_mostly_been_conversing_with_reddit_from/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6.png?width=108&crop=smart&format=pjpg&auto=webp&s=a8c56cdc5e7f10aae5930cae7315e6dbe725... | ||
new Bielik models have been released | 62 | [https://huggingface.co/speakleash/Bielik-11B-v2.6-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.6-Instruct)
[https://huggingface.co/speakleash/Bielik-11B-v2.6-Instruct-GGUF](https://huggingface.co/speakleash/Bielik-11B-v2.6-Instruct-GGUF)
[https://huggingface.co/speakleash/Bielik-11B-v2.5-Instruct](https... | 2025-06-06T11:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l4pzrm/new_bielik_models_have_been_released/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4pzrm | false | null | t3_1l4pzrm | /r/LocalLLaMA/comments/1l4pzrm/new_bielik_models_have_been_released/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'vs1h9ByfOzYzTv0FTBFd26pK_oG6nMykFWLMM5aAVbs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ViCxjnc8WTqAkDTHwgD5qQCVB1k-fsD-lA7k1mfFgc8.jpg?width=108&crop=smart&auto=webp&s=c35b02de6c4af8d885eef1d87e34a177cf285446', 'width': 108}, {'height': 116, 'url': 'h... |
Real-time conversation with a character on your local machine | 215 | And also the voice split function
Sorry for my English =) | 2025-06-06T11:12:33 | https://v.redd.it/vzlhsb24ia5f1 | ResolveAmbitious9572 | /r/LocalLLaMA/comments/1l4prlo/realtime_conversation_with_a_character_on_your/ | 1970-01-01T00:00:00 | 0 | {} | 1l4prlo | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vzlhsb24ia5f1/DASHPlaylist.mpd?a=1751929963%2CNzkxY2YwNDJjMjJmY2EyNWNjOWVhMWUxYTljOTUyOTM2ZmEyN2I5OTc1OGIyNzhlYTM0ZTQyNDVlZjdjYzIxYg%3D%3D&v=1&f=sd', 'duration': 129, 'fallback_url': 'https://v.redd.it/vzlhsb24ia5f1/DASH_1080.mp4?source=fallback', '... | t3_1l4prlo | /r/LocalLLaMA/comments/1l4prlo/realtime_conversation_with_a_character_on_your/ | false | false | 215 | {'enabled': False, 'images': [{'id': 'bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K.png?width=108&crop=smart&format=pjpg&auto=webp&s=5faea925e036d05e05363c966d94794d21b03... | |
A prototype for personal finance resolution. | 26 | Hi! Kuvera v0.1.0 is now live!
A series of personal finance advisor models that try to resolve the queries by trying to understand the person’s psychological state and relevant context.
These are still prototypes that have much room for improvement.
What’s included in this release:
-
Akhil-Theerthala/Kuvera-8B-v0.1... | 2025-06-06T10:50:11 | https://huggingface.co/Akhil-Theerthala/Kuvera-8B-v0.1.0 | The-Silvervein | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l4pdyc | false | null | t3_1l4pdyc | /r/LocalLLaMA/comments/1l4pdyc/a_prototype_for_personal_finance_resolution/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'VvCnIRYBwofWbrtnA_usVoDlhKNPR-K5DJKzWDpRIEc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eRmOKrP_DKamzTn5vzBRb4P-HTRykpHQCacHiMsZM7c.jpg?width=108&crop=smart&auto=webp&s=2af36e650459905b5a2d525b7820bb8c4443fdd8', 'width': 108}, {'height': 116, 'url': 'h... | |
Struggling to use autocomplete with continue and openwebui | 1 | [removed] | 2025-06-06T10:37:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l4p6xh/struggling_to_use_autocomplete_with_continue_and/ | Reasonable-Archer538 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4p6xh | false | null | t3_1l4p6xh | /r/LocalLLaMA/comments/1l4p6xh/struggling_to_use_autocomplete_with_continue_and/ | false | false | self | 1 | null |
China's Rednote Open-source dots.llm Benchmarks | 101 | https://www.xiaohongshu.com/user/profile/683ffe42000000001d021a4c | 2025-06-06T10:32:40 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4p45i | false | null | t3_1l4p45i | /r/LocalLLaMA/comments/1l4p45i/chinas_rednote_opensource_dotsllm_benchmarks/ | false | false | default | 101 | {'enabled': True, 'images': [{'id': 'cambn0sdba5f1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?width=108&crop=smart&auto=webp&s=28c7cbda4a038c3451ba05bde2a899c95a5af3b6', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?width=216&crop=smart&auto=w... | |
Help me find voice cloning FOSS with UI | 4 | I’m searching for simple-to-set-up software to run voice cloning and generation locally. Plus point would be if it can work with Slovak language. Is there a viable option? | 2025-06-06T10:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l4p2po/help_me_find_voice_cloning_foss_with_ui/ | KekecVN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4p2po | false | null | t3_1l4p2po | /r/LocalLLaMA/comments/1l4p2po/help_me_find_voice_cloning_foss_with_ui/ | false | false | self | 4 | null |
8x RTX 3090 setup with p2p patch | 1 | A month ago I [complained](https://www.reddit.com/r/LocalLLaMA/comments/1kds51e/inference_needs_nontrivial_amount_of_pcie/) that connecting 8 RTX 3090 with PCIe 3.0 x4 links is bad idea. I have upgraded my rig with PCIe 4.0 x8 links (4x theoretical bandwidth improvement) and listed numbers in this reddit [post](https:/... | 2025-06-06T09:45:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l4oeaj/8x_rtx_3090_setup_with_p2p_patch/ | pmur12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4oeaj | false | null | t3_1l4oeaj | /r/LocalLLaMA/comments/1l4oeaj/8x_rtx_3090_setup_with_p2p_patch/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 't3ClEqRkV9jbsN-syDR1DFXv_9CIlY0q9kTBwpkVCcA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JqIHMHzLbMobSzqcGALj958qjd6nBi4YvVXWF7IjXKw.jpg?width=108&crop=smart&auto=webp&s=a5aff8a53c5726cf0d423f4f26f91dcd0497bd03', 'width': 108}, {'height': 108, 'url': 'h... |
It is possble to run non-reasoning deepseek-r1-0528? | 30 | I know, stupid question, but couldn't find an answer to it! | 2025-06-06T08:57:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l4npcl/it_is_possble_to_run_nonreasoning_deepseekr10528/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4npcl | false | null | t3_1l4npcl | /r/LocalLLaMA/comments/1l4npcl/it_is_possble_to_run_nonreasoning_deepseekr10528/ | false | false | self | 30 | null |
Which agent-like terminal do you guys use? Something like Warp but free. | 5 | I want something which can browse around a source code repository and answer questions about it. Warp is pretty good but doesn’t let you use your own llm keys.
Open web-ui’s function calling doesn’t seems to be able to execute more than one functions per turn so it’s not good for planning steps.
| 2025-06-06T08:55:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l4nobz/which_agentlike_terminal_do_you_guys_use/ | grey-seagull | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4nobz | false | null | t3_1l4nobz | /r/LocalLLaMA/comments/1l4nobz/which_agentlike_terminal_do_you_guys_use/ | false | false | self | 5 | null |
MiniCPM4: 7x decoding speed than Qwen3-8B | 156 | MiniCPM 4 is an extremely efficient edge-side large model that has undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
* 🏗️ **Efficient Model Architecture:**
* InfLLM v2 -- Trainable Spar... | 2025-06-06T08:45:36 | Lynncc6 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4njon | false | null | t3_1l4njon | /r/LocalLLaMA/comments/1l4njon/minicpm4_7x_decoding_speed_than_qwen38b/ | false | false | default | 156 | {'enabled': True, 'images': [{'id': 'j4mqq99tr95f1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/j4mqq99tr95f1.png?width=108&crop=smart&auto=webp&s=5eb8fee188d1c0ac243261b540f880b0725c4dc0', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/j4mqq99tr95f1.png?width=216&crop=smart&auto=webp... | |
Tokasaurus: An LLM Inference Engine for High-Throughput Workloads | 28 | 2025-06-06T08:40:10 | https://scalingintelligence.stanford.edu/blogs/tokasaurus/ | AppearanceHeavy6724 | scalingintelligence.stanford.edu | 1970-01-01T00:00:00 | 0 | {} | 1l4ngz5 | false | null | t3_1l4ngz5 | /r/LocalLLaMA/comments/1l4ngz5/tokasaurus_an_llm_inference_engine_for/ | false | false | default | 28 | {'enabled': False, 'images': [{'id': '98Ssw8LPycFLwlm_uHfDB4EVaoCaGEd0Q0M_tFW-Cko', 'resolutions': [{'height': 105, 'url': 'https://external-preview.redd.it/UMHldeViAkaftNoXr0yZV1xJLJ_mUiopvNlMx-OrluA.jpg?width=108&crop=smart&auto=webp&s=16ef86d80eacfcc1615675e738758041e968a400', 'width': 108}, {'height': 211, 'url': '... | |
Is updating prompts frequently even worth it? | 1 | [removed] | 2025-06-06T08:38:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l4ngbb/is_updating_prompts_frequently_even_worth_it/ | Useful_Artichoke_292 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4ngbb | false | null | t3_1l4ngbb | /r/LocalLLaMA/comments/1l4ngbb/is_updating_prompts_frequently_even_worth_it/ | false | false | self | 1 | null |
Locally hosted DeepSeek-R1 server in LM Studio | 1 | [removed] | 2025-06-06T08:28:29 | https://v.redd.it/6sqi1pk3p95f1 | walkerb1972 | /r/LocalLLaMA/comments/1l4nb7d/locally_hosted_deepseekr1_server_in_lm_studio/ | 1970-01-01T00:00:00 | 0 | {} | 1l4nb7d | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6sqi1pk3p95f1/DASHPlaylist.mpd?a=1751920116%2CMjc0ZjBiMjA4MDVlZjFiODU2NDJhYjQwYjNkNDViNGJhMDdmOWFlNDMxNDNhMWMzNzBlMTkxZDlhOGZhZDVhOQ%3D%3D&v=1&f=sd', 'duration': 65, 'fallback_url': 'https://v.redd.it/6sqi1pk3p95f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l4nb7d | /r/LocalLLaMA/comments/1l4nb7d/locally_hosted_deepseekr1_server_in_lm_studio/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok.png?width=108&crop=smart&format=pjpg&auto=webp&s=f3068854d2c65661d36bb8ece4a9783f92fa4... | |
🚀 Chat with Local LLMs via Chrome – New Privacy-First Extension (Ollama Client) | 1 | [removed] | 2025-06-06T08:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l4n9em/chat_with_local_llms_via_chrome_new_privacyfirst/ | Some_Storage_9977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4n9em | false | null | t3_1l4n9em | /r/LocalLLaMA/comments/1l4n9em/chat_with_local_llms_via_chrome_new_privacyfirst/ | false | false | self | 1 | null |
Graphic card 5060 ti vs 5070 for Llama | 1 | [removed] | 2025-06-06T08:10:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l4n1v4/graphic_card_5060_ti_vs_5070_for_llama/ | Ok-Cup-608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4n1v4 | false | null | t3_1l4n1v4 | /r/LocalLLaMA/comments/1l4n1v4/graphic_card_5060_ti_vs_5070_for_llama/ | false | false | self | 1 | null |
Can a model be so radically altered that its origin can no longer be recognized? YES! | 94 | **Phi-lthy4**( [https://huggingface.co/SicariusSicariiStuff/Phi-lthy4](https://huggingface.co/SicariusSicariiStuff/Phi-lthy4) ) has been consistently described as **exceptionally unique** by all who have tested it, **almost devoid of SLOP**, and it is now widely regarded as the **most unique roleplay model available**.... | 2025-06-06T08:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l4mzbr/can_a_model_be_so_radically_altered_that_its/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4mzbr | false | null | t3_1l4mzbr | /r/LocalLLaMA/comments/1l4mzbr/can_a_model_be_so_radically_altered_that_its/ | false | false | self | 94 | {'enabled': False, 'images': [{'id': '6I6zlj5qBDMKY-S0diPhFY3TXNcscxPYnAU5SHXHueE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qifdM-0N9Odnjr9ZvoV983wjPcBE0iH_utXQ86v0StQ.jpg?width=108&crop=smart&auto=webp&s=4015542c2bd16193822270b458eb7ae2a72bf53e', 'width': 108}, {'height': 116, 'url': 'h... |
Help- in need of advice choosing GPU 5060ti vs 5070 or AMD | 1 | [removed] | 2025-06-06T08:04:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l4myno/help_in_need_of_advice_choosing_gpu_5060ti_vs/ | Ok-Cup-608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4myno | false | null | t3_1l4myno | /r/LocalLLaMA/comments/1l4myno/help_in_need_of_advice_choosing_gpu_5060ti_vs/ | false | false | self | 1 | null |
llama3:70b (4-bit Quantized) from Ollama is not paying attention at the initial part of the system prompt. | 1 | [removed] | 2025-06-06T08:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l4mx2b/llama370b_4bit_quantized_from_ollama_is_not/ | Evening-Power-3302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4mx2b | false | null | t3_1l4mx2b | /r/LocalLLaMA/comments/1l4mx2b/llama370b_4bit_quantized_from_ollama_is_not/ | false | false | self | 1 | null |
China's Rednote Open-source dots.llm performance & cost | 140 | https://github.com/rednote-hilab/dots.llm1/blob/main/dots1_tech_report.pdf | 2025-06-06T07:51:36 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4ms71 | false | null | t3_1l4ms71 | /r/LocalLLaMA/comments/1l4ms71/chinas_rednote_opensource_dotsllm_performance_cost/ | false | false | default | 140 | {'enabled': True, 'images': [{'id': '4kbcizani95f1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/4kbcizani95f1.png?width=108&crop=smart&auto=webp&s=acc9f15f0fd89b5fdb8dab151a534593927cd1c5', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/4kbcizani95f1.png?width=216&crop=smart&auto=web...
China's Xiaohongshu(Rednote) released its dots.llm open source AI model | 416 | https://huggingface.co/spaces/rednote-hilab/dots-demo
| 2025-06-06T07:28:45 | https://github.com/rednote-hilab/dots.llm1 | Fun-Doctor6855 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l4mgry | false | null | t3_1l4mgry | /r/LocalLLaMA/comments/1l4mgry/chinas_xiaohongshurednote_released_its_dotsllm/ | false | false | default | 416 | {'enabled': False, 'images': [{'id': 'dzT3Ipp6otwumdGdMqcZYoOXNptMhbuF91P9vr8s_p4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O4a5ycdopqyUZaJQmD9jwW8EpcdIe_Y4TADgsKDlB-k.jpg?width=108&crop=smart&auto=webp&s=11afaf19de477de4f4e17e2663686bb7e0fe691f', 'width': 108}, {'height': 108, 'url': 'h... |
Is there an video or article or book where a lot of real world datasets are used to train industry level LLM with all the code? | 9 | Is there an video or article or book where a lot of real world datasets are used to train industry level LLM with all the code? Everything I can find is toy models trained with toy datasets, that I played with tons of times already. I know GPT3 or Llama papers gives some information about what datasets were used, but I... | 2025-06-06T06:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l4lwtq/is_there_an_video_or_article_or_book_where_a_lot/ | Happysedits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4lwtq | false | null | t3_1l4lwtq | /r/LocalLLaMA/comments/1l4lwtq/is_there_an_video_or_article_or_book_where_a_lot/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'O4NIi1E5_R1byN18JxxjgC67yqog8scgm_H-yjjZSEk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/yJZOrktSf3706K8SsYmAWo-4E6FP9nY4XcHL5TzPeIQ.jpg?width=108&crop=smart&auto=webp&s=98571eaf9500f9d207e1f1733a6f8e28d8a82563', 'width': 108}, {'height': 162, 'url': 'h... |
Should I choose llama-swap over my own solution | 5 | I built something similar to llama-swap a while ago. Config file with server settings for a number of different Models I use. It automatically re-starts llama-server instances when I request another model. It's not a proxy though. My apps still talk to the currently running llama-server instance directly (through a cus... | 2025-06-06T06:06:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l4l938/should_i_choose_llamaswap_over_my_own_solution/ | mnze_brngo_7325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4l938 | false | null | t3_1l4l938 | /r/LocalLLaMA/comments/1l4l938/should_i_choose_llamaswap_over_my_own_solution/ | false | false | self | 5 | null |
Yet Another Quantization Algorithm (YAQA) | 1 | [removed] | 2025-06-06T05:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l4kfd4/yet_another_quantization_algorithm_yaqa/ | tsengalb99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4kfd4 | false | null | t3_1l4kfd4 | /r/LocalLLaMA/comments/1l4kfd4/yet_another_quantization_algorithm_yaqa/ | false | false | self | 1 | null |
Model-Preserving Adaptive Rounding | 1 | [removed] | 2025-06-06T05:12:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l4ke2z/modelpreserving_adaptive_rounding/ | tsengalb99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4ke2z | false | null | t3_1l4ke2z | /r/LocalLLaMA/comments/1l4ke2z/modelpreserving_adaptive_rounding/ | false | false | self | 1 | null |
Do we need a new programming language optimized for AI to write code? | 1 | [removed] | 2025-06-06T04:37:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l4jsnb/do_we_need_a_new_programming_language_optimized/ | ggeezz12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4jsnb | false | null | t3_1l4jsnb | /r/LocalLLaMA/comments/1l4jsnb/do_we_need_a_new_programming_language_optimized/ | false | false | self | 1 | null |
Private LLM For Company | 1 | [removed] | 2025-06-06T04:34:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l4jqtr/private_llm_for_company/ | Acataleptic23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4jqtr | false | null | t3_1l4jqtr | /r/LocalLLaMA/comments/1l4jqtr/private_llm_for_company/ | false | false | self | 1 | null |
Thinking about switching from cloud based AI to sth more local | 1 | [removed] | 2025-06-06T03:57:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l4j2dy | false | null | t3_1l4j2dy | /r/LocalLLaMA/comments/1l4j2dy/thinking_about_switching_from_cloud_based_ai_to/ | false | false | default | 1 | null | ||
anyone encountered this problem where f5 tts gives file with no sound ? | 4 | 2025-06-06T03:53:52 | SnooDrawings7547 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4izz4 | false | null | t3_1l4izz4 | /r/LocalLLaMA/comments/1l4izz4/anyone_encountered_this_problem_where_f5_tts/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': '0jhn08f6c85f1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?width=108&crop=smart&auto=webp&s=27af8367c4f71e952d866534963581a7684833e3', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?width=216&crop=smart&auto=webp... | ||
MiniCPM4: Ultra-Efficient LLMs on End Devices | 65 | Randomly saw this -- no models yet. | 2025-06-06T03:40:56 | https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b | adefa | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l4irk9 | false | null | t3_1l4irk9 | /r/LocalLLaMA/comments/1l4irk9/minicpm4_ultraefficient_llms_on_end_devices/ | false | false | default | 65 | {'enabled': False, 'images': [{'id': 'urN7B2TlaIBWXNq4r0fZnMUhUh2UrMvsfjcXAUJ2oTc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QvrTYBQgDHPkgT3IRbnKHNb-1zHcP8AdJT7CXsvRCqg.jpg?width=108&crop=smart&auto=webp&s=0b6dcb95f5889fe301fc6ee42ce2d2a1ba53781e', 'width': 108}, {'height': 116, 'url': 'h... |
Best general purpose LLM for an 8GB 3060? | 3 | Hey everyone,
I’m running a local LLM setup on a home server with a 3060 (8GB VRAM), using Ollama and OpenWebUI. Just after some advice on what the best general-purpose model would be for this kind of hardware.
Mainly using it for general chat, coding help, and a bit of local data processing. Priorities are good perf... | 2025-06-06T03:14:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l4i9st/best_general_purpose_llm_for_an_8gb_3060/ | DisgustingBlackChimp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4i9st | false | null | t3_1l4i9st | /r/LocalLLaMA/comments/1l4i9st/best_general_purpose_llm_for_an_8gb_3060/ | false | false | self | 3 | null |
How to share an open source project to this sub without getting filtered? | 0 | [removed] | 2025-06-06T03:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l4i5xn/how_to_share_an_open_source_project_to_this_sub/ | VantigeAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4i5xn | false | null | t3_1l4i5xn | /r/LocalLLaMA/comments/1l4i5xn/how_to_share_an_open_source_project_to_this_sub/ | false | false | self | 0 | null |
Llama 3.3:70B on HP Z2 Mini G1a works, but…. | 1 | [removed] | 2025-06-06T02:54:15 | walkerb1972 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4hw13 | false | null | t3_1l4hw13 | /r/LocalLLaMA/comments/1l4hw13/llama_3370b_on_hp_z2_mini_g1a_works_but/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '93yvq9cl185f1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?width=108&crop=smart&auto=webp&s=60b3c86cbde14f1f5332f66616652850c1c1fb01', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?width=216&crop=smart&auto=w... | |
Is ddr5/pcie5 necessary for a rtx pro 6000 workstation? | 0 | For a PC that uses rtx pro 6000 as its gpu, do you think ddr5 ram and pcie 5.0 are necessary to fully utilize the gpu?
What about SSD speed and RAID?
And since pro 6000 doesn’t support nvlink, is it reasonable to have two pro 6000s on the motherboard and let them bridge through pcie?
We know that ddr4 and pcie4 com... | 2025-06-06T02:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l4hccw/is_ddr5pcie5_necessary_for_a_rtx_pro_6000/ | SpecialistPear755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4hccw | false | null | t3_1l4hccw | /r/LocalLLaMA/comments/1l4hccw/is_ddr5pcie5_necessary_for_a_rtx_pro_6000/ | false | false | self | 0 | null |
Smallest llm that can help in text rearrangement | 1 | Ive been using a translation model. Need a smallest llm that can just rearrange the output text acc to language needs | 2025-06-06T02:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l4gzzw/smallest_llm_that_can_help_in_text_rearrangement/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4gzzw | false | null | t3_1l4gzzw | /r/LocalLLaMA/comments/1l4gzzw/smallest_llm_that_can_help_in_text_rearrangement/ | false | false | self | 1 | null |
Turn based two model critique for rounds to refine answer - any examples or FOSS projects? | 1 | I felt like I heard of someone making a pipeline of lets say code prime fib in python as a prompt, it is served by model1, model ones answer then feeds to model2 to critique, This back and forth goes on for int turns to hopefully come back with a better answer than just one model answering.
It's similar to what think... | 2025-06-06T01:31:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l4gb6s/turn_based_two_model_critique_for_rounds_to/ | HilLiedTroopsDied | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4gb6s | false | null | t3_1l4gb6s | /r/LocalLLaMA/comments/1l4gb6s/turn_based_two_model_critique_for_rounds_to/ | false | false | self | 1 | null |
What happened to WizardLM-2 8x22b? | 74 | I was mildly intrigued when I saw /u/SomeOddCodeGuy [mention that](https://old.reddit.com/r/LocalLLaMA/comments/1cvw3s5/my_personal_guide_for_developing_software_with_ai/):
> I prefer local AI models for various reasons, and [the quality of some like WizardLM-2 8x22b are on par with ChatGPT 4](https://prollm.toqan.ai/... | 2025-06-06T00:58:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l4fo3x/what_happened_to_wizardlm2_8x22b/ | RobotRobotWhatDoUSee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4fo3x | false | null | t3_1l4fo3x | /r/LocalLLaMA/comments/1l4fo3x/what_happened_to_wizardlm2_8x22b/ | false | false | self | 74 | null |
OpenThinker3 released | 216 | [https://huggingface.co/open-thoughts/OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B)
[https://huggingface.co/bartowski/open-thoughts\_OpenThinker3-7B-GGUF](https://huggingface.co/bartowski/open-thoughts_OpenThinker3-7B-GGUF) | 2025-06-06T00:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l4f1yp/openthinker3_released/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4f1yp | false | null | t3_1l4f1yp | /r/LocalLLaMA/comments/1l4f1yp/openthinker3_released/ | false | false | self | 216 | {'enabled': False, 'images': [{'id': 'aa7mY1LSqAx_HZNaXVUa4ki0ZQltBVxg310whTh9EG0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=108&crop=smart&auto=webp&s=4ec479e9c6bcd24dad79b5f1a3efc6ba88a44783', 'width': 108}, {'height': 116, 'url': 'h... |
OpenThinker3 7B released | 1 | [https://huggingface.co/open-thoughts/OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B)
[https://huggingface.co/bartowski/open-thoughts\_OpenThinker3-7B-GGUF](https://huggingface.co/bartowski/open-thoughts_OpenThinker3-7B-GGUF) | 2025-06-06T00:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l4f1f6/openthinker3_7b_released/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4f1f6 | false | null | t3_1l4f1f6 | /r/LocalLLaMA/comments/1l4f1f6/openthinker3_7b_released/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aa7mY1LSqAx_HZNaXVUa4ki0ZQltBVxg310whTh9EG0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=108&crop=smart&auto=webp&s=4ec479e9c6bcd24dad79b5f1a3efc6ba88a44783', 'width': 108}, {'height': 116, 'url': 'h... |
Align text with audio | 0 | Hi, I have an audio generated using OpenAi’s TTS API and I have a raw transcript.
Is there a practical way to generate SRT or ASS captions with timestamps without processing the audio file?
I am currently using Whisper library to generate captions, but it takes 16 seconds to process the audio file. | 2025-06-06T00:02:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l4ekah/align_text_with_audio/ | Terrible_Dimension66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4ekah | false | null | t3_1l4ekah | /r/LocalLLaMA/comments/1l4ekah/align_text_with_audio/ | false | false | self | 0 | null |