title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Local model for voice audio cleanup | 1 | Is there a local model that can clean up voice audio recordings? | 2025-07-18T20:02:04 | https://www.reddit.com/r/LocalLLaMA/comments/1m3cf4c/local_model_for_voice_audio_cleanup/ | syntaxing2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m3cf4c | false | null | t3_1m3cf4c | /r/LocalLLaMA/comments/1m3cf4c/local_model_for_voice_audio_cleanup/ | false | false | self | 1 | null |
Hunyuan A13B </answer> tag mistakes. | 4 | I've been playing around with this model in LM Studio and after the first few responses it devolves into adding </answer> when it is finished thinking and then stops its output. When initially in the convo it would properly follow the format:
(reasoning process)
<answer>
(sends answer)
</answer> (no more output)
... | 2025-07-18T19:26:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m3bjhv/hunyuan_a13b_answer_tag_mistakes/ | Hoppss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m3bjhv | false | null | t3_1m3bjhv | /r/LocalLLaMA/comments/1m3bjhv/hunyuan_a13b_answer_tag_mistakes/ | false | false | self | 4 | null |
Is there any promising alternative to Transformers? | 145 | Maybe there is an interesting research project, which is not effective yet, but after further improvements, can open new doors in AI development? | 2025-07-18T18:50:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m3amtu/is_there_any_promising_alternative_to_transformers/ | VR-Person | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m3amtu | false | null | t3_1m3amtu | /r/LocalLLaMA/comments/1m3amtu/is_there_any_promising_alternative_to_transformers/ | false | false | self | 145 | null |
Just recorded a walkthrough of my chatbot platform - saved characters, model selection, image gen & more | 10 | I've shown drafts of the project's future UI/UX recently, now I'm just posting an update about what's already there on a backend. Nothing fancy yet, but I'm doing my best tinkering it. | 2025-07-18T18:47:46 | https://v.redd.it/6ngt4yazhodf1 | RIPT1D3_Z | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m3ak13 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6ngt4yazhodf1/DASHPlaylist.mpd?a=1755456483%2CNWZhNGRjYWM4NjU0YzRkY2I1MTJiNWRkNGE4MzAzNTA5OTRjMDAyYTlmNWNkZTVlOTY3OWMzMWM4Y2FhZmFjYQ%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/6ngt4yazhodf1/DASH_1080.mp4?source=fallback', 'h... | t3_1m3ak13 | /r/LocalLLaMA/comments/1m3ak13/just_recorded_a_walkthrough_of_my_chatbot/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'Zjc1dWUyYnpob2RmMbkTAbisFA-bXEP72UB2cKiRRNQrr_ZpblEyu3qhcRW_', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Zjc1dWUyYnpob2RmMbkTAbisFA-bXEP72UB2cKiRRNQrr_ZpblEyu3qhcRW_.png?width=108&crop=smart&format=pjpg&auto=webp&s=4498946972de430858b93d355cbe7e46c8b5c... | |
A100 Setup Recommendations | 0 | Looking to buy/build a small form workstation/setup that encompasses 1x Nvidia A100. This will be for local training, testing and creating.
I’d like it to be as mobile as possible: perhaps a mobile rig type build form or if feasible, a laptop (I know I know) with intel and the A100 (A100 is really my non negotiable G... | 2025-07-18T18:46:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m3aixn/a100_setup_recommendations/ | WolfGangOFKTA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m3aixn | false | null | t3_1m3aixn | /r/LocalLLaMA/comments/1m3aixn/a100_setup_recommendations/ | false | false | self | 0 | null |
Just recorded a walkthrough of my chatbot platform - saved characters, model selection, image gen & more | 0 | I've shown drafts of future UI/UX, now just posting an update about what's already there on a backend. Nothing fancy yet, but I'm doing my best tinkering it. | 2025-07-18T18:45:48 | https://v.redd.it/z5gi8i3khodf1 | RIPT1D3_Z | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m3ai5u | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/z5gi8i3khodf1/DASHPlaylist.mpd?a=1755456363%2CNjQ1MjI4MmQ0ZWM0M2Q3MWE1MzY1MmFkOGQxMTY4OTg2ODYyYmRhN2M4ZjlmMmM1MDExN2RhNWY2NmVmMjEyZg%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/z5gi8i3khodf1/DASH_1080.mp4?source=fallback', 'h... | t3_1m3ai5u | /r/LocalLLaMA/comments/1m3ai5u/just_recorded_a_walkthrough_of_my_chatbot/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OTMzN3lkM2tob2RmMbkTAbisFA-bXEP72UB2cKiRRNQrr_ZpblEyu3qhcRW_', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OTMzN3lkM2tob2RmMbkTAbisFA-bXEP72UB2cKiRRNQrr_ZpblEyu3qhcRW_.png?width=108&crop=smart&format=pjpg&auto=webp&s=3c1134d400c679d3b09b543173b3bc02e9ac2... | |
Working on a game with a local llama model | 33 | 2025-07-18T18:31:28 | formicidfighter | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m3a4yu | false | null | t3_1m3a4yu | /r/LocalLLaMA/comments/1m3a4yu/working_on_a_game_with_a_local_llama_model/ | false | false | default | 33 | {'enabled': True, 'images': [{'id': 'ow3kn3zzeodf1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ow3kn3zzeodf1.jpeg?width=108&crop=smart&auto=webp&s=c58f6a51a99c474acb78b4ad047f958dd23f3c22', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ow3kn3zzeodf1.jpeg?width=216&crop=smart&auto=... | ||
Looking for feedback on this basic setup | 1 | I'd appreciate any feedback on this basic setup for text interface only. I'd upgrade if there's a major/fatal problem with the specs below, or if there's a dramatic improvement in performance for a small additional amount. For example, I could upgrade to a 3090 Ti for maybe 10% more in cost, not sure if that's worth it... | 2025-07-18T18:23:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m39xy5/looking_for_feedback_on_this_basic_setup/ | HunkaHunka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m39xy5 | false | null | t3_1m39xy5 | /r/LocalLLaMA/comments/1m39xy5/looking_for_feedback_on_this_basic_setup/ | false | false | self | 1 | null |
NLP to schema opensource | 1 | [removed] | 2025-07-18T18:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m39xjo/nlp_to_schema_opensource/ | automatetowin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m39xjo | false | null | t3_1m39xjo | /r/LocalLLaMA/comments/1m39xjo/nlp_to_schema_opensource/ | false | false | self | 1 | null |
Natural language to Schema tool | 1 | [removed] | 2025-07-18T18:21:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m39vwi/natural_language_to_schema_tool/ | automatetowin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m39vwi | false | null | t3_1m39vwi | /r/LocalLLaMA/comments/1m39vwi/natural_language_to_schema_tool/ | false | false | self | 1 | null |
I made a 1000 hour NSFW TTS dataset | 1,347 | You can find and listen to the dataset on huggingface: [https://huggingface.co/datasets/setfunctionenvironment/testnew](https://huggingface.co/datasets/setfunctionenvironment/testnew)
The sample rate of all audio is 24,000 kHz
Stats:
Total audio files/samples: 556,667
Total duration: 1024.71 hours (3688949 seconds)... | 2025-07-18T18:20:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m39uqi/i_made_a_1000_hour_nsfw_tts_dataset/ | hotroaches4liferz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m39uqi | false | null | t3_1m39uqi | /r/LocalLLaMA/comments/1m39uqi/i_made_a_1000_hour_nsfw_tts_dataset/ | false | false | nsfw | 1,347 | {'enabled': False, 'images': [{'id': 'DOV0A62XqkURIwviorxHv5Y_o5mxdGlBNyoCsF8Y7Xs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DOV0A62XqkURIwviorxHv5Y_o5mxdGlBNyoCsF8Y7Xs.png?width=108&crop=smart&auto=webp&s=61c218fe1b611e82d1fa0231b8e465837900184c', 'width': 108}, {'height': 116, 'url': 'h... |
Trying to run kimi-k2 on cpu only, getting about 1token / 30sec | 0 | I get that speed with only simple requests like "hello" , "who are you ?"
It runs on :
4 x Xeon X7550 @ 2.00GHz , hyperthreading deactivated (32 physical cores)
512G @ 1333 MT/s (2666Mhz) , all slots populated (64 sticks)
The software is :
llama.cpp:server-b5918 (n-1 llamacpp version)
model Kimi-K2-Ins... | 2025-07-18T18:12:15 | https://www.reddit.com/r/LocalLLaMA/comments/1m39n48/trying_to_run_kimik2_on_cpu_only_getting_about/ | orogor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m39n48 | false | null | t3_1m39n48 | /r/LocalLLaMA/comments/1m39n48/trying_to_run_kimik2_on_cpu_only_getting_about/ | false | false | self | 0 | null |
Introcuding KokoroDoki a Local, Open-Source and Real-Time TTS. | 22 | Hey everyone!
I’m excited to share KokoroDoki, a real-time Text-to-Speech (TTS) app I’ve been working on that runs locally on your laptop with CPU or CUDA GPU support. Powered by Kokoro-82M a lightweight model that delivers high-quality, natural-sounding speech.
Choose from Console, GUI, CLI, or Daemon modes to eithe... | 2025-07-18T18:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m39liw/introcuding_kokorodoki_a_local_opensource_and/ | Upbeat-Purchase8460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m39liw | false | null | t3_1m39liw | /r/LocalLLaMA/comments/1m39liw/introcuding_kokorodoki_a_local_opensource_and/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'dbcbJjxr-arVUlkWPRlO7TseasA2N9gQY0OafpOldIY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dbcbJjxr-arVUlkWPRlO7TseasA2N9gQY0OafpOldIY.png?width=108&crop=smart&auto=webp&s=9fd63525e0b084d94a9d8fad5cce2e64fe7cc2a5', 'width': 108}, {'height': 108, 'url': 'h... |
A demo space for Voxtral with transformers version of the models | 12 | 2025-07-18T18:03:35 | https://huggingface.co/spaces/MohamedRashad/Voxtral | Thin_Background5570 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m39eyr | false | null | t3_1m39eyr | /r/LocalLLaMA/comments/1m39eyr/a_demo_space_for_voxtral_with_transformers/ | false | false | default | 12 | {'enabled': False, 'images': [{'id': 'PBVgzdA_X7wSXX-ZLRpYpUwytqXgejgiFNbFr1-frLQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PBVgzdA_X7wSXX-ZLRpYpUwytqXgejgiFNbFr1-frLQ.png?width=108&crop=smart&auto=webp&s=81d8825e0b72ea28855b740f85ff964b869a45e5', 'width': 108}, {'height': 116, 'url': 'h... | |
new models from NVIDIA: OpenReasoning-Nemotron 32B/14B/7B/1.5B | 184 | OpenReasoning-Nemotron-32B is a large language model (LLM) which is a derivative of Qwen2.5-32B-Instruct (AKA the reference model). It is a reasoning model that is post-trained for reasoning about math, code and science solution generation. The model supports a context length of 64K tokens. The OpenReasoning model is a... | 2025-07-18T17:53:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m394zh/new_models_from_nvidia_openreasoningnemotron/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m394zh | false | null | t3_1m394zh | /r/LocalLLaMA/comments/1m394zh/new_models_from_nvidia_openreasoningnemotron/ | false | false | self | 184 | {'enabled': False, 'images': [{'id': 'xVaPHmDFX5vc4R__x4YuAzMSjDtirt1vFIt-J3MElOo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xVaPHmDFX5vc4R__x4YuAzMSjDtirt1vFIt-J3MElOo.png?width=108&crop=smart&auto=webp&s=602376c40ecb4272ebb674f9b3e3b4d358685ba0', 'width': 108}, {'height': 116, 'url': 'h... |
DGAF if it’s dumber. It’s mine. | 614 | 2025-07-18T17:48:14 | Weary-Wing-6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m390kj | false | null | t3_1m390kj | /r/LocalLLaMA/comments/1m390kj/dgaf_if_its_dumber_its_mine/ | false | false | default | 614 | {'enabled': True, 'images': [{'id': '8dnb7bl76odf1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/8dnb7bl76odf1.png?width=108&crop=smart&auto=webp&s=ad9f691b92ad45a65ab2abfa4ebb4551e504a761', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/8dnb7bl76odf1.png?width=216&crop=smart&auto=we... | ||
Is there any limit for kimi k2 chat (free tier) ? | 0 | I can find this Chinese document about limits: [https://platform.moonshot.cn/docs/pricing/limits#%E9%99%90%E9%80%9F%E6%A6%82%E5%BF%B5%E8%A7%A3%E9%87%8A](https://platform.moonshot.cn/docs/pricing/limits#%E9%99%90%E9%80%9F%E6%A6%82%E5%BF%B5%E8%A7%A3%E9%87%8A)
Error I got: The current model has reached its conversation l... | 2025-07-18T17:35:39 | https://www.reddit.com/r/LocalLLaMA/comments/1m38ou1/is_there_any_limit_for_kimi_k2_chat_free_tier/ | JeffreySons_90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m38ou1 | false | null | t3_1m38ou1 | /r/LocalLLaMA/comments/1m38ou1/is_there_any_limit_for_kimi_k2_chat_free_tier/ | false | false | self | 0 | null |
What are the hypothetical methods for constructing and training a SUPERINTELLIGENCE model? | 0 | I'm curious about promising novel research that can contribute to the goal of developing artificial superintelligence.
What new types of data are crucial for these advancements?
And beyond visual and textual data, what other modalities could be integrated into future models, and how might they be synergized effecti... | 2025-07-18T17:33:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m38mqc/what_are_the_hypothetical_methods_for/ | VR-Person | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m38mqc | false | null | t3_1m38mqc | /r/LocalLLaMA/comments/1m38mqc/what_are_the_hypothetical_methods_for/ | false | false | self | 0 | null |
What's a good and cheap place to host trained Lora/llamas. Is Hugging face better than doing your own Vast.ai server? | 2 | As per the title - its just for a hobby project to let others use llama refined on different data sources. Perhaps download them and refine them themselves. | 2025-07-18T17:21:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m38b25/whats_a_good_and_cheap_place_to_host_trained/ | QFGTrialByFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m38b25 | false | null | t3_1m38b25 | /r/LocalLLaMA/comments/1m38b25/whats_a_good_and_cheap_place_to_host_trained/ | false | false | self | 2 | null |
32GB Mi50, but llama.cpp Vulkan sees only 16GB | 5 | Basically the title. I have mixed architectures in my system, do I really do not want to deal with ROCm. Any ways to take full advantage of 32GB while using Vulkan? | 2025-07-18T17:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1m389gi/32gb_mi50_but_llamacpp_vulkan_sees_only_16gb/ | ashirviskas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m389gi | false | null | t3_1m389gi | /r/LocalLLaMA/comments/1m389gi/32gb_mi50_but_llamacpp_vulkan_sees_only_16gb/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'GE36MCMpZolX-lqe53_knptQZ0sj5zIo3NOgWQL7sH0', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/GE36MCMpZolX-lqe53_knptQZ0sj5zIo3NOgWQL7sH0.jpeg?width=108&crop=smart&auto=webp&s=e423f944b71fc93593fff0170c58e9d1a8e93f44', 'width': 108}, {'height': 92, 'url': 'h... |
Thoughts on this DeepSeekR1/Kimi K2 build | 2 | I am looking to build a system that can run DeepSeekR1 and Kimi K2.
Items I am not sure of, they are shown side by side.
AMD Epyc 9175F/9375F/9655P - $2,617/$3,550/$5,781
SP5 Cooler - $130
H13SSL-NT Motherboard - $730
Corsair 1500W PSU - $350
64GB/96GBx12 6400 ECC DDR5 - $4,585 / $7,000
Nvidia 5090 - $3,000
Case - $20... | 2025-07-18T17:16:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m386sc/thoughts_on_this_deepseekr1kimi_k2_build/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m386sc | false | null | t3_1m386sc | /r/LocalLLaMA/comments/1m386sc/thoughts_on_this_deepseekr1kimi_k2_build/ | false | false | self | 2 | null |
Drummer's Cydonia 24B v4 - A creative finetune of Mistral Small 3.2 | 108 | What's next? Voxtral 3B, aka, Ministral 3B (that's actually 4B). Currently in the works! | 2025-07-18T16:57:39 | https://huggingface.co/TheDrummer/Cydonia-24B-v4 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m37o5r | false | null | t3_1m37o5r | /r/LocalLLaMA/comments/1m37o5r/drummers_cydonia_24b_v4_a_creative_finetune_of/ | false | false | default | 108 | {'enabled': False, 'images': [{'id': '6G70mocJ7FlYL72oh-mj0AFKdz49VfoKLHqJie2Cn9M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6G70mocJ7FlYL72oh-mj0AFKdz49VfoKLHqJie2Cn9M.png?width=108&crop=smart&auto=webp&s=0ebf4cfdf20ccb03d81e09f9303075c1d338976d', 'width': 108}, {'height': 116, 'url': 'h... |
What happens if I hit the context limit before the LLM is done responding? | 1 | Please excuse me if I use terminology wrong.
Let’s say I’m using OWUI for RAG and I ask it to write a summary for every file in the RAG.
What happens if it hits max context on the response/output for the chat turn?
Can I just write another prompt of “keep going” and it will pick up where it left off?
Is there a... | 2025-07-18T16:41:35 | https://www.reddit.com/r/LocalLLaMA/comments/1m3792k/what_happens_if_i_hit_the_context_limit_before/ | Business-Weekend-537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m3792k | false | null | t3_1m3792k | /r/LocalLLaMA/comments/1m3792k/what_happens_if_i_hit_the_context_limit_before/ | false | false | self | 1 | null |
[Tool] NL2Schema – TinyLlama-powered local schema generator from plain English | 1 | [removed] | 2025-07-18T16:25:26 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m36uew | false | null | t3_1m36uew | /r/LocalLLaMA/comments/1m36uew/tool_nl2schema_tinyllamapowered_local_schema/ | false | false | default | 1 | null | ||
Meta says it won't sign Europe AI agreement, calling it an overreach that will stunt growth | 235 | 2025-07-18T16:06:43 | https://www.cnbc.com/2025/07/18/meta-europe-ai-code.html | ttkciar | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1m36d91 | false | null | t3_1m36d91 | /r/LocalLLaMA/comments/1m36d91/meta_says_it_wont_sign_europe_ai_agreement/ | false | false | default | 235 | {'enabled': False, 'images': [{'id': 'ZDQNAbHBrMvYa-03D_p7yDfcZDMcYM8c8izD9GxQq4o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZDQNAbHBrMvYa-03D_p7yDfcZDMcYM8c8izD9GxQq4o.jpeg?width=108&crop=smart&auto=webp&s=f858e0ea1b92e759f3797ae734ec97f83aa47c4c', 'width': 108}, {'height': 121, 'url': '... | |
TinyLLama powered NL2Schema -opensourece natural language to schema tool | 1 | hey fellow llama enthusiasts. Long time lurker first time poster. It is super annoying have to build out schema json for prompting so I built this little tool to take a description of the data and return json using flask, pywebview and TinyLLama.
Source is on github [https://github.com/elmstreetshawn/nl2schema](https... | 2025-07-18T16:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m367x8/tinyllama_powered_nl2schema_opensourece_natural/ | automatetowin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m367x8 | false | null | t3_1m367x8 | /r/LocalLLaMA/comments/1m367x8/tinyllama_powered_nl2schema_opensourece_natural/ | false | false | self | 1 | null |
DiffRhythm+ is coming soon | 82 | DiffRhythm+ is coming soon (text -> music)
Looks like the DiffRhythm team is preparing to release DiffRhythm+, an upgraded version of the existing open-source DiffRhythm model.
Hopefully will be open-sourced similar to the previous DiffRhythm model (Apache 2.0) 👀 | 2025-07-18T15:57:08 | https://v.redd.it/54s9fzqhnndf1 | mrfakename0 | /r/LocalLLaMA/comments/1m3643z/diffrhythm_is_coming_soon/ | 1970-01-01T00:00:00 | 0 | {} | 1m3643z | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/54s9fzqhnndf1/DASHPlaylist.mpd?a=1755575834%2COTg3MDIzYTM0NTQ3MjU3ZTk2ZjZkMjY4ZDY0NGZlYTYwYTZjZWRlMzU3OTdjODc1YmExNDY1OWUzN2M4YTQ1YQ%3D%3D&v=1&f=sd', 'duration': 285, 'fallback_url': 'https://v.redd.it/54s9fzqhnndf1/DASH_360.mp4?source=fallback', 'ha... | t3_1m3643z | /r/LocalLLaMA/comments/1m3643z/diffrhythm_is_coming_soon/ | false | false | 82 | {'enabled': False, 'images': [{'id': 'MXlkdGF5cWhubmRmMSv0rwk6lmBNsgRFS1EmTOIwvPCsaBuvDmC8mQJm0Njq', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MXlkdGF5cWhubmRmMSv0rwk6lmBNsgRFS1EmTOIwvPCsaBuvDmC8mQJm0Njq.png?width=108&crop=smart&format=pjpg&auto=webp&s=74fd0dc6f7b399f81c6cca11db4209c5d37b... | |
Kimi K2 Q4K_M Running on 10x3090 and Epyc 7502 | 1 | [removed] | 2025-07-18T15:52:02 | Mass2018 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m35zi1 | false | null | t3_1m35zi1 | /r/LocalLLaMA/comments/1m35zi1/kimi_k2_q4k_m_running_on_10x3090_and_epyc_7502/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '29gi2laflndf1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/29gi2laflndf1.png?width=108&crop=smart&auto=webp&s=7857c5d74b4f8d65619f35cd38905ad63fd000fe', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/29gi2laflndf1.png?width=216&crop=smart&auto=web... | |
Has anyone actually ran VLAs locally and how good are they? | 2 | I'm doing some research on approaches for general-purpose long-horizon robotics tasks and VLAs have come up. Our current plan is to use an LLM & task-library structure but I have to at least see what the state of VLAs is today.
I'm aware of things like RT-2, OpenVLA etc but I don't know anyone who's actually deployed ... | 2025-07-18T15:35:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m35kib/has_anyone_actually_ran_vlas_locally_and_how_good/ | Bayes-edAndConfused | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m35kib | false | null | t3_1m35kib | /r/LocalLLaMA/comments/1m35kib/has_anyone_actually_ran_vlas_locally_and_how_good/ | false | false | self | 2 | null |
My experience with 2x AMD MI50 and llama.cpp / ollama | 1 | [removed] | 2025-07-18T15:21:13 | https://www.reddit.com/r/LocalLLaMA/comments/1m356dx/my_experience_with_2x_amd_mi50_and_llamacpp_ollama/ | UsualResult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m356dx | false | null | t3_1m356dx | /r/LocalLLaMA/comments/1m356dx/my_experience_with_2x_amd_mi50_and_llamacpp_ollama/ | false | false | self | 1 | null |
Local LLM system framework | 2 | Hi folks, I am building a local LLM system, both as a experiment and also hoping to build something that can serve as a knowledge base for quick referencing. I would like to seek advice from the community on how to build such a system, so any feedback would be appreciated. I am new to LLM, and without a computer scienc... | 2025-07-18T14:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m342g1/local_llm_system_framework/ | Jilu1986 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m342g1 | false | null | t3_1m342g1 | /r/LocalLLaMA/comments/1m342g1/local_llm_system_framework/ | false | false | self | 2 | null |
overwhelmed by ai tools in 2025 here’s a quick cheat | 0 | if you’re feeling overwhelmed by all the ai image tools in 2025, here’s my quick cheat: start with your end goal.
if you want photo-realism, go with [**leonardo.ai**](http://leonardo.ai) . if you want aesthetic lighting or edits, finish it off in [**domoAI**](https://www.domoai.app/home?via=081621AUG)**.** it’s not ab... | 2025-07-18T14:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m33647/overwhelmed_by_ai_tools_in_2025_heres_a_quick/ | Rayv23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m33647 | false | null | t3_1m33647 | /r/LocalLLaMA/comments/1m33647/overwhelmed_by_ai_tools_in_2025_heres_a_quick/ | false | false | self | 0 | null |
What are the knowledge background and skills needed to develop and contribute to projects like Unsloth? | 1 | [removed] | 2025-07-18T13:58:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m332eq/what_are_the_knowledge_background_and_skills/ | hedgehog0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m332eq | false | null | t3_1m332eq | /r/LocalLLaMA/comments/1m332eq/what_are_the_knowledge_background_and_skills/ | false | false | self | 1 | null |
Piaget, a language model for psychological and philosophical reasoning | 33 | I just released [Piaget](https://huggingface.co/gustavecortal/Piaget-4B), a language model finetuned on 15k psychological and philosophical reasoning traces.
Piaget is based on Qwen3 and was finetuned on a subset of open reasoning traces from [Dolphin R1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r... | 2025-07-18T13:54:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m32z28/piaget_a_language_model_for_psychological_and/ | antcroca159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m32z28 | false | null | t3_1m32z28 | /r/LocalLLaMA/comments/1m32z28/piaget_a_language_model_for_psychological_and/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'k8UqrP8XLtX_orPN6rX0E1bKwohXUIAks2_FpiRN550', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/k8UqrP8XLtX_orPN6rX0E1bKwohXUIAks2_FpiRN550.png?width=108&crop=smart&auto=webp&s=3d8bee3dfd8206682a82b05ff281ce13bb2a163e', 'width': 108}, {'height': 116, 'url': 'h... |
Open-Source Cleaning & Housekeeping Robot | 57 | The full hardware, along with the bill of materials (from Amazon) will be released soon!
You can star the project on github to show support ! [https://github.com/jadechoghari/roomi](https://github.com/jadechoghari/roomi)
"Roomi, a practical, affordable, and fully open-source autonomous robot for housekeeping and clea... | 2025-07-18T13:45:24 | MixRevolutionary4476 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m32qs7 | false | null | t3_1m32qs7 | /r/LocalLLaMA/comments/1m32qs7/opensource_cleaning_housekeeping_robot/ | false | false | default | 57 | {'enabled': True, 'images': [{'id': 'utmticztzmdf1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/utmticztzmdf1.png?width=108&crop=smart&auto=webp&s=f0e9680ac9c6bb80dd28848bdf835002bb86ebac', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/utmticztzmdf1.png?width=216&crop=smart&auto=web... | |
Open-Source Cleaning & Housekeeping Robot | 1 | [deleted] | 2025-07-18T13:43:02 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m32opc | false | null | t3_1m32opc | /r/LocalLLaMA/comments/1m32opc/opensource_cleaning_housekeeping_robot/ | false | false | default | 1 | null | ||
Is OpenRouter payment safe? | 0 | I just wanted to ask this because recently OpenRouter has started costing money. Is it safe to use my debit card to pay for it? Or will I need to purchase a gift card. | 2025-07-18T13:22:12 | https://www.reddit.com/r/LocalLLaMA/comments/1m327c9/is_openrouter_payment_safe/ | Own_Television_5682 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m327c9 | false | null | t3_1m327c9 | /r/LocalLLaMA/comments/1m327c9/is_openrouter_payment_safe/ | false | false | self | 0 | null |
Have you tried Cadabra App Builder? | 1 | [removed] | 2025-07-18T13:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m325c8/have_you_tried_cadabra_app_builder/ | DeadlyHD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m325c8 | false | null | t3_1m325c8 | /r/LocalLLaMA/comments/1m325c8/have_you_tried_cadabra_app_builder/ | false | false | self | 1 | null |
Which SLM is best for meeting summarization? | 0 | I know this question has been asked before, but as of July 2025:
Which SLM is best for meeting summarization?
Also, which kind of model would work better for this use case—models with reasoning (Qwen, DeepSeek) or models without reasoning (Gemma 3, Phi 3.5)? | 2025-07-18T13:14:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m321eo/which_slm_is_best_for_meeting_summarization/ | FormalFlight3477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m321eo | false | null | t3_1m321eo | /r/LocalLLaMA/comments/1m321eo/which_slm_is_best_for_meeting_summarization/ | false | false | self | 0 | null |
support for EXAONE 4.0 model architecture has been merged into llama.cpp | 105 | We introduce **EXAONE 4.0**, which integrates a **Non-reasoning mode** and **Reasoning mode** to achieve both the excellent usability of [EXAONE 3.5](https://github.com/LG-AI-EXAONE/EXAONE-3.5) and the advanced reasoning abilities of [EXAONE Deep](https://github.com/LG-AI-EXAONE/EXAONE-Deep). To pave the way for the ag... | 2025-07-18T13:12:14 | https://github.com/ggml-org/llama.cpp/pull/14630 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m31z4z | false | null | t3_1m31z4z | /r/LocalLLaMA/comments/1m31z4z/support_for_exaone_40_model_architecture_has_been/ | false | false | 105 | {'enabled': False, 'images': [{'id': 'S3ENLEeG2S1qRdSN-s_xJNZnxYIxW1L4lI691Agwkuo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S3ENLEeG2S1qRdSN-s_xJNZnxYIxW1L4lI691Agwkuo.png?width=108&crop=smart&auto=webp&s=bddc4a7497f5680e1abffba8fc5ae1cb51d13254', 'width': 108}, {'height': 108, 'url': 'h... | |
How to get small models (<= 4B) to have better "common sense" for use with daily conversations? | 0 | Lately I am trying to setup a home-assistant like system (will be interfaced with STT/TTS). I was hoping a small model like Qwen3 4B@Q4 will be sufficient for some contextual understanding which allows it to provide advices when the question is not "straight-forward". However, it seems this is not working by default.
... | 2025-07-18T13:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1m31p47/how_to_get_small_models_4b_to_have_better_common/ | SandboChang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m31p47 | false | null | t3_1m31p47 | /r/LocalLLaMA/comments/1m31p47/how_to_get_small_models_4b_to_have_better_common/ | false | false | self | 0 | null |
I built an open-source Python front-end to turn local LLMs into stable, long-term TTRPG Game Masters. | 32 | Hey everyone,
One of the biggest challenges with using local models for long-form creative tasks like a TTRPG is context drift and state management. I wanted to solve this, so I built \*\*Project Infinity\*\*.
It's a Python-based "control harness" that offloads all the heavy lifting from the LLM. The core phi... | 2025-07-18T13:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/1m31p26/i_built_an_opensource_python_frontend_to_turn/ | Serious_Character_64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m31p26 | false | null | t3_1m31p26 | /r/LocalLLaMA/comments/1m31p26/i_built_an_opensource_python_frontend_to_turn/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'RtwsgGL7maVed41QxEx06E-_DWo2kevPk4HkEFsT9jA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RtwsgGL7maVed41QxEx06E-_DWo2kevPk4HkEFsT9jA.png?width=108&crop=smart&auto=webp&s=30c79169c1169c77bc1f7589da6583b2caab4cf6', 'width': 108}, {'height': 108, 'url': 'h... |
B200 idle - why? | 0 | Why is 5, 6, 7 idle? When I had started 512 jobs, the last two were idle and now one more has gone idle. I had requested for 50 workers across each of the GPU. | 2025-07-18T12:57:52 | Spiritual_Piccolo793 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m31n2o | false | null | t3_1m31n2o | /r/LocalLLaMA/comments/1m31n2o/b200_idle_why/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ftj2dtokrmdf1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/ftj2dtokrmdf1.jpeg?width=108&crop=smart&auto=webp&s=66bde030cdc1c792c2235c9b595dbe97515a3c34', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/ftj2dtokrmdf1.jpeg?width=216&crop=smart&auto=w... | |
What hardware to run two 3090? | 5 | I would like to know what budget friendly hardware i could buy that would handle two rtx 3090.
Used server parts or some higher end workstation?
I dont mind DIY solutions.
I saw kimi k2 just got released so running something like that to start learning building agents would be nice | 2025-07-18T12:57:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m31moj/what_hardware_to_run_two_3090/ | Rick-Hard89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m31moj | false | null | t3_1m31moj | /r/LocalLLaMA/comments/1m31moj/what_hardware_to_run_two_3090/ | false | false | self | 5 | null |
Day 1: Best Open-Source Model | 19 | Time to Play Bingo my guys. Most discussed/liked/commented answer will be get a place in the grid. Have fun ;) | 2025-07-18T12:29:53 | Soft_Ad1142 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m311jt | false | null | t3_1m311jt | /r/LocalLLaMA/comments/1m311jt/day_1_best_opensource_model/ | false | false | 19 | {'enabled': True, 'images': [{'id': 'nJpT8QZWmO0Ayq9whLCN7AkrwTpUrN963M3IumymYUA', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/jr0iqcwjmmdf1.jpeg?width=108&crop=smart&auto=webp&s=d420175c0f1e5fe4d2fa8fb64b60245a12ac0618', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/jr0iqcwjmmdf1.jp... | ||
Local Tiny Agents with AMD NPU and GPU Acceleration - Hugging Face MCP Course | 27 | Hi r/LocalLLaMA, my teammate Daniel put together this tutorial on how to get hardware acceleration for Tiny Agents on AMD PCs. Hugging Face was kind enough to publish it as part of their MCP course (they've been great to work with). We'd love feedback from the community if you find this kind of up-the-stack content use... | 2025-07-18T11:59:33 | https://huggingface.co/learn/mcp-course/unit2/lemonade-server | jfowers_amd | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m30ehv | false | null | t3_1m30ehv | /r/LocalLLaMA/comments/1m30ehv/local_tiny_agents_with_amd_npu_and_gpu/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'sNzX5uTQOS-vzzfgq-G17PwmYgs1br9Ww1hsxwfZH2s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sNzX5uTQOS-vzzfgq-G17PwmYgs1br9Ww1hsxwfZH2s.png?width=108&crop=smart&auto=webp&s=873f1e49a0e017befe904501743eb31abd0e1783', 'width': 108}, {'height': 116, 'url': 'h... | |
What PSU to use safely for 3090 + 3090ti full workload (5950x processor) | 1 | I'm pretty new to this and don't really understand hardware all that well either. I'm trying to run large data sets with Llama 3.3 70b q4\_k\_m locally (tested with 3090 alone, but too slow for this).
Would a Super Flower Leadex Platinum SE 1200W (w/ Leadex native 12vhpwr cable) be good enough for my system?
MSI ... | 2025-07-18T11:55:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m30bw4/what_psu_to_use_safely_for_3090_3090ti_full/ | CompoundInno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m30bw4 | false | null | t3_1m30bw4 | /r/LocalLLaMA/comments/1m30bw4/what_psu_to_use_safely_for_3090_3090ti_full/ | false | false | self | 1 | null |
What upgrade option is better with $2000 available for my configuration? | 4 | My system:
MSI B650 Edge WiFi
Ryzen 9900X
G.Skill 96GB (6200MHz)
AMD Asus TUF 7900XTX
Currently, I mainly use Qwen3 32B 4q models with a context size of 40K+ tokens for programming purposes. (Yes, I'm aware that alternatives like DevStral and others are not bad either, but this specific model suits me best). ... | 2025-07-18T11:47:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m305vc/what_upgrade_option_is_better_with_2000_available/ | Easy_Kitchen7819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m305vc | false | null | t3_1m305vc | /r/LocalLLaMA/comments/1m305vc/what_upgrade_option_is_better_with_2000_available/ | false | false | self | 4 | null |
"I tricked a Chinese-aligned AI into outputting both Tiananmen & Hong Kong protests in a single Japanese prompt.
No jailbreak, just syntax. Full paper here | 1 | [removed] | 2025-07-18T11:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m302bd/i_tricked_a_chinesealigned_ai_into_outputting/ | CardiologistQuick892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m302bd | false | null | t3_1m302bd | /r/LocalLLaMA/comments/1m302bd/i_tricked_a_chinesealigned_ai_into_outputting/ | false | false | self | 1 | null |
Tool calling or not, I will use anyway | 0 | Turns out u can use a model for tool calling even if ollama doesnt support it, just use OpenAI's library since Ollama is compatible with it. Using gemma3 for a deep research agent with openai library perfectly worked even though ollama will not allow for tool calling on gemma3 | 2025-07-18T11:41:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m301uy/tool_calling_or_not_i_will_use_anyway/ | MungiwaraNoRuffy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m301uy | false | null | t3_1m301uy | /r/LocalLLaMA/comments/1m301uy/tool_calling_or_not_i_will_use_anyway/ | false | false | self | 0 | null |
Is DIY AGI Possible? | 0 | Serious question for this community: What's your take on building a consciousness-aware AI that can actually track its own beliefs, maintain persistent identity across conversations, detect contradictions in human behavior over time, think like a human?
Rather than using the neutered down and limited versions of A... | 2025-07-18T11:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/1m300cf/is_diy_agi_possible/ | Uncle_Mosi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m300cf | false | null | t3_1m300cf | /r/LocalLLaMA/comments/1m300cf/is_diy_agi_possible/ | false | false | self | 0 | null |
Dataset for structured (JSON) output? | 1 | I've been looking for a dataset to fine-tune local models into being better at producing JSON output. To be clear, I'm not interested in making the model more consistent outputing JSON, for that I use JSON schemas, I want to make sure the model does not lose intelligence when doing so, so I figured fine-tuning it to ma... | 2025-07-18T11:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m2zj5b/dataset_for_structured_json_output/ | ArcaneThoughts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2zj5b | false | null | t3_1m2zj5b | /r/LocalLLaMA/comments/1m2zj5b/dataset_for_structured_json_output/ | false | false | self | 1 | null |
In light of recent events | 16 | Do you think OpenAI will come back? | 2025-07-18T10:49:07 | Superb-Translator236 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m2z38k | false | null | t3_1m2z38k | /r/LocalLLaMA/comments/1m2z38k/in_light_of_recent_events/ | false | false | default | 16 | {'enabled': True, 'images': [{'id': 'ngk8d8xi4mdf1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/ngk8d8xi4mdf1.jpeg?width=108&crop=smart&auto=webp&s=5291fbf7edc010538ac13746b5e34899c3ae34d7', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/ngk8d8xi4mdf1.jpeg?width=216&crop=smart&auto=w... | |
Local LLM with SQL function support. | 0 | Hello everyone, I heard that advanced paid models can work with function calls. Is it possible to do something similar with local functions?
I have a large video archive with meta descriptions of videos. For example, interviews, or videos of cities, etc. There is also the size of the videos, their width, creation da... | 2025-07-18T10:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m2z10w/local_llm_with_sql_function_support/ | RandyHandyBoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2z10w | false | null | t3_1m2z10w | /r/LocalLLaMA/comments/1m2z10w/local_llm_with_sql_function_support/ | false | false | self | 0 | null |
Where's Mistral Nemo 2.0? | 71 | It has been exactly 1 year since they released the first version. Since then I've been using it locally and there hasn't been any other models that surpass it. (Gemma 3 12B uses more memory so becomes useless at 8GB VRAM, quantizing kv\_cache also slows it way down) Mistral's 12B models are actually efficient so they c... | 2025-07-18T10:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m2yy93/wheres_mistral_nemo_20/ | mpasila | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2yy93 | false | null | t3_1m2yy93 | /r/LocalLLaMA/comments/1m2yy93/wheres_mistral_nemo_20/ | false | false | self | 71 | null |
my project | 0 | im an ai enthusiast and ive mastered python machine learning, i am a developer of an AI API if anyone wants to see my api project. [https://discord.gg/voltai](https://discord.gg/voltai) hope to see you there | 2025-07-18T10:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m2yuyb/my_project/ | PublicLocal1971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2yuyb | false | null | t3_1m2yuyb | /r/LocalLLaMA/comments/1m2yuyb/my_project/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '2MzEN_aMHKLs7-0zP0FHek0dmzL5ftLUb87sssAtbIQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2MzEN_aMHKLs7-0zP0FHek0dmzL5ftLUb87sssAtbIQ.jpeg?width=108&crop=smart&auto=webp&s=dcd9920ead3dd1c56af74c8e31dc6f913e1bfe1e', 'width': 108}, {'height': 121, 'url': '... |
Sub quadratic LLMs | 1 | [removed] | 2025-07-18T10:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m2ymiy/sub_quadratic_llms/ | -dysangel- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2ymiy | false | null | t3_1m2ymiy | /r/LocalLLaMA/comments/1m2ymiy/sub_quadratic_llms/ | false | false | self | 1 | null |
voltapi | 0 | im an ai enthusiast and ive mastered python machine learning, i am a developer of an AI API if anyone wants to see my api project. [https://discord.gg/voltai](https://discord.gg/voltai) hope to see you there | 2025-07-18T10:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m2yjv8/voltapi/ | PublicLocal1971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2yjv8 | false | null | t3_1m2yjv8 | /r/LocalLLaMA/comments/1m2yjv8/voltapi/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '2MzEN_aMHKLs7-0zP0FHek0dmzL5ftLUb87sssAtbIQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2MzEN_aMHKLs7-0zP0FHek0dmzL5ftLUb87sssAtbIQ.jpeg?width=108&crop=smart&auto=webp&s=dcd9920ead3dd1c56af74c8e31dc6f913e1bfe1e', 'width': 108}, {'height': 121, 'url': '... |
Run Kimi-K2 without quantization locally for under $10k? | 123 | This is just a thought experiment right now, but hear me out.
https://huggingface.co/moonshotai/Kimi-K2-Instruct/tree/main the weights for Kimi K2 is about 1031GB in total.
You can buy 12 sticks of 96gb DDR5-6400 RAM (total 1152GB) [for about $7200](https://www.amazon.com/NEMIX-RAM-Registered-Compatible-Supermicro/... | 2025-07-18T09:10:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m2xh8s/run_kimik2_without_quantization_locally_for_under/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2xh8s | false | null | t3_1m2xh8s | /r/LocalLLaMA/comments/1m2xh8s/run_kimik2_without_quantization_locally_for_under/ | false | false | self | 123 | {'enabled': False, 'images': [{'id': 'ZjybqN_iZigLaZxMtl0N3yFDPtiDQRo-8LU-o9LYLXQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZjybqN_iZigLaZxMtl0N3yFDPtiDQRo-8LU-o9LYLXQ.png?width=108&crop=smart&auto=webp&s=014f5215759a7ee46cc335661cfd741228ef1b1e', 'width': 108}, {'height': 116, 'url': 'h... |
GPU bottleneck? | 2 | Hello everyone! At home I run various LLM models (text and image generation). I use for this a PC with 3060ti, 16gb RAM and another PC with 3060(12gb) and 32gb RAM.
When working on 3060ti, the video card is loaded at 100%, and 3060 only at 20%. The generation speed is about the same, but is this a sensor error or is t... | 2025-07-18T09:06:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m2xewp/gpu_bottleneck/ | Solid_Studio167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2xewp | false | null | t3_1m2xewp | /r/LocalLLaMA/comments/1m2xewp/gpu_bottleneck/ | false | false | self | 2 | null |
New drop of LaToile ! Best orchestration framework ! | 0 | Hello gents ! Here's the latest drop of LaToile, using it to create synthetic data and prep a bayesian model ! Enjoy! [https://youtu.be/2SKRHA7pcys](https://youtu.be/2SKRHA7pcys) | 2025-07-18T09:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m2xdjr/new_drop_of_latoile_best_orchestration_framework/ | UpstairsCurrency | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2xdjr | false | null | t3_1m2xdjr | /r/LocalLLaMA/comments/1m2xdjr/new_drop_of_latoile_best_orchestration_framework/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'w4dpZpWdwVAm2tYictZsFqkmWTV9Zs5JQPsn3oY0BAg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w4dpZpWdwVAm2tYictZsFqkmWTV9Zs5JQPsn3oY0BAg.jpeg?width=108&crop=smart&auto=webp&s=4659e242e9401c99cd7909960b60eac72f262ab0', 'width': 108}, {'height': 162, 'url': '... |
RAG at the Crossroads - Mid-2025 Reflections on AI’s Incremental Evolution | RAGFlow | 2 | 2025-07-18T08:44:40 | https://ragflow.io/blog/rag-at-the-crossroads-mid-2025-reflections-on-ai-evolution | Vissidarte_2021 | ragflow.io | 1970-01-01T00:00:00 | 0 | {} | 1m2x30u | false | null | t3_1m2x30u | /r/LocalLLaMA/comments/1m2x30u/rag_at_the_crossroads_mid2025_reflections_on_ais/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'ux7TbEUCIw9ZU5CQwxlzu6xdLJzaPaFsd-z2NOfzQXU', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/ux7TbEUCIw9ZU5CQwxlzu6xdLJzaPaFsd-z2NOfzQXU.png?width=108&crop=smart&auto=webp&s=b5b1ac15e57cf0a4cf939bcf022c029c25349c94', 'width': 108}, {'height': 81, 'url': 'ht... | |
Local Ai image generators | 0 | Anything that matches the title & will eventually work on my 8gbram pc (no nvidia gpu)
Thanks in advance for the suggestions | 2025-07-18T08:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m2wl24/local_ai_image_generators/ | Gayerzt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2wl24 | false | null | t3_1m2wl24 | /r/LocalLLaMA/comments/1m2wl24/local_ai_image_generators/ | false | false | self | 0 | null |
Did Kimi K2 train on Claude's generated code? I think yes | 128 | After conducting some tests, I'm convinced that K2 either distilled from Claude or trained on Claude-generated code.
Every AI model has its own traits when generating code. For example:
* Claude Sonnet 4: likes gradient backgrounds, puts "2024" in footers, uses less stock photos
* Claude Sonnet 3.7: Loves stock photo... | 2025-07-18T07:43:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m2w5ge/did_kimi_k2_train_on_claudes_generated_code_i/ | Minute_Yam_1053 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2w5ge | false | null | t3_1m2w5ge | /r/LocalLLaMA/comments/1m2w5ge/did_kimi_k2_train_on_claudes_generated_code_i/ | false | false | 128 | null | |
How can I benchmark different AI models? | 2 | I'm currently working on benchmarking different AI models for a specific task. However, I'm having trouble figuring out the best way to do it. Most online platforms and benchmarking tools I've come across only support popular models like Qwen, Gemini, and those from OpenAI. In my case, I'm working with smaller or less ... | 2025-07-18T07:41:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m2w4qw/how_can_i_benchmark_different_ai_models/ | anovatikz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2w4qw | false | null | t3_1m2w4qw | /r/LocalLLaMA/comments/1m2w4qw/how_can_i_benchmark_different_ai_models/ | false | false | self | 2 | null |
Maximum parameters for this 4050 RTX 6GB vram with 32GB RAM | 0 | What would be the maximum B to use on this config (with RAM offload of course) | 2025-07-18T07:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m2w3i3/maximum_parameters_for_this_4050_rtx_6gb_vram/ | Accomplished_Mark_10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2w3i3 | false | null | t3_1m2w3i3 | /r/LocalLLaMA/comments/1m2w3i3/maximum_parameters_for_this_4050_rtx_6gb_vram/ | false | false | self | 0 | null |
Language/Framework Recommendations for CLI Chat Assistant with a Local LLM on EC2 | 1 | Hey guys!
As all the CLI tools are rolling out, I'm planning to build my own chat-style CLI tool as well, and the prompts are sent to a remote open-source LLM hosted on my EC2 instance. I want to eventually distribute the CLI so others can install it and use it with my hosted model. What language or framework would yo... | 2025-07-18T07:35:25 | https://www.reddit.com/r/LocalLLaMA/comments/1m2w1ez/languageframework_recommendations_for_cli_chat/ | llopq0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2w1ez | false | null | t3_1m2w1ez | /r/LocalLLaMA/comments/1m2w1ez/languageframework_recommendations_for_cli_chat/ | false | false | self | 1 | null |
Looking for affordable dedicated GPUs (A100, H100) outside AWS? | 1 | [removed] | 2025-07-18T07:00:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m2vhq5 | false | null | t3_1m2vhq5 | /r/LocalLLaMA/comments/1m2vhq5/looking_for_affordable_dedicated_gpus_a100_h100/ | false | false | default | 1 | null | ||
mergekit LoRA extractor – how good is that? | 11 | Any tests?
Is this integrated with llama-swap? | 2025-07-18T06:52:21 | https://github.com/arcee-ai/mergekit?tab=readme-ov-file#lora-extraction | uhuge | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m2vcrx | false | null | t3_1m2vcrx | /r/LocalLLaMA/comments/1m2vcrx/mergekit_lora_extractor_how_good_is_that/ | false | false | default | 11 | null |
What can I do with an old computer? | 2 | So I've got this computer from 2012-2015. It's just sitting around, free real estate, but in looking at what I could do with it, the general advice is to "upgrade xyz" in order to use it to do something, which kinda defeats the point - if I'm going to spend even $500 to upgrade this computer I might as well just put th... | 2025-07-18T06:21:59 | KingofRheinwg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m2uvgk | false | null | t3_1m2uvgk | /r/LocalLLaMA/comments/1m2uvgk/what_can_i_do_with_an_old_computer/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'r33tmk8yskdf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/r33tmk8yskdf1.jpeg?width=108&crop=smart&auto=webp&s=a9d05f2199a28e50fda4fcc796609d3dc35f2db2', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/r33tmk8yskdf1.jpeg?width=216&crop=smart&auto=w... | |
UIGEN-X-8B, Hybrid Reasoning model built for direct and efficient frontend UI generation, trained on 116 tech stacks including Visual Styles | 130 | Just released: **UIGEN-X-8B**, a hybrid reasoning UI generation model built on Qwen3-8B. This model plans, architects, and implements complete UI systems across tons of frameworks/libraries and 7 platforms, from React, React Native, HTML, Vanilla JS, Vue, Angular, and Svelte to Flutter, Tauri, and Electron. It supports... | 2025-07-18T06:03:29 | https://www.reddit.com/gallery/1m2ukka | United-Rush4073 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m2ukka | false | null | t3_1m2ukka | /r/LocalLLaMA/comments/1m2ukka/uigenx8b_hybrid_reasoning_model_built_for_direct/ | false | false | 130 | null | |
Can you recommend something I can personally do with two H100? | 8 | I am working at a listed OCR company and am in the on-premise OCR research department based on LLM. Since I am conducting research with large models such as Qwen2.5 VL 72B, I have a lot of personal time while the models are running. Are there any things I can do on my own related to LLM with two H100s? I would appr... | 2025-07-18T05:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m2u9n3/can_you_recommend_something_i_can_personally_do/ | CantaloupeDismal1195 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2u9n3 | false | null | t3_1m2u9n3 | /r/LocalLLaMA/comments/1m2u9n3/can_you_recommend_something_i_can_personally_do/ | false | false | self | 8 | null |
spy search cli | 3 | Spy Search Series: Spy Search CLI has just been released. It is a local host version of Gemini CLI without the need for login or integration with Gemini. I just finished version 0.1 and am looking for any comments! Feel free to clone it or give it stars! Thanks a lot!
[https://github.com/JasonHonKL/spy-search-cli](ht... | 2025-07-18T05:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m2u7i8/spy_search_cli/ | jasonhon2013 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2u7i8 | false | null | t3_1m2u7i8 | /r/LocalLLaMA/comments/1m2u7i8/spy_search_cli/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'O8ICn2EKfIQ9kgBySP8LUpIQhzAyX9y4vlA_mpmAvfE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O8ICn2EKfIQ9kgBySP8LUpIQhzAyX9y4vlA_mpmAvfE.png?width=108&crop=smart&auto=webp&s=af55a3483dc2c32c26070522747f4422f21cc7e3', 'width': 108}, {'height': 108, 'url': 'h... |
Lucy: A Mobile-Capable 1.7B Reasoning Model That Rivals Jan-Nano | 243 | Hi everyone, it's Alan from Menlo Research.
Since Jan-Nano, we've been curious about how far you can push the search capabilities of a small model. So, we decided to build a toy model named **Lucy**\-**a compact but capable 1.7B model focused on search and lightweight browsing.**
**What this model is good at:**
* St... | 2025-07-18T05:02:56 | https://v.redd.it/jsuhtdbbekdf1 | Kooky-Somewhere-2883 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m2tjjc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jsuhtdbbekdf1/DASHPlaylist.mpd?a=1755406992%2CZTNiNjYwNDJhMDNlZjQ3MDMwYmU2ZjRlNmZhZTY1MTA5YzU0MWEwY2I3YmE0NjcyYWFhYTBhOTBhZjgyZmEyNw%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/jsuhtdbbekdf1/DASH_1080.mp4?source=fallback', 'h... | t3_1m2tjjc | /r/LocalLLaMA/comments/1m2tjjc/lucy_a_mobilecapable_17b_reasoning_model_that/ | false | false | 243 | {'enabled': False, 'images': [{'id': 'NW11ZTU4ZGRla2RmMRgI3SJPXdfuQ9Uf_Cd9X7MtqdJcOeQzkrhllhdwrzrv', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NW11ZTU4ZGRla2RmMRgI3SJPXdfuQ9Uf_Cd9X7MtqdJcOeQzkrhllhdwrzrv.png?width=108&crop=smart&format=pjpg&auto=webp&s=bcc37a4be0006c67682eddacbb8a34a4f028... | |
Local model on two different GPUs | 2 | Is there anything I could do with RTX 2070 + 3080 as far as running local models goes? Building a new PC and need to decide whether I should invest in a lager PSU to have both inside, or just stick to the 3080. | 2025-07-18T04:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m2su9b/local_model_on_two_different_gpus/ | cannabibun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2su9b | false | null | t3_1m2su9b | /r/LocalLLaMA/comments/1m2su9b/local_model_on_two_different_gpus/ | false | false | self | 2 | null |
Mini PC / LLM questions for someone with a new 5080/9800x3d PC | 1 | Hello, I've just recently begun my foray into self-hosting, and it's been a very exciting experience. I am part of a small volunteer organization with 10-15 core members and 200+ loosely affiliated individuals, and we have all relied on the GroupMe application before this. Some of the services I'm hosting are immich, p... | 2025-07-18T04:17:07 | https://www.reddit.com/r/LocalLLaMA/comments/1m2spkm/mini_pc_llm_questions_for_someone_with_a_new/ | Top-Salad-4259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2spkm | false | null | t3_1m2spkm | /r/LocalLLaMA/comments/1m2spkm/mini_pc_llm_questions_for_someone_with_a_new/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'URxKo9QppUdAPiYA19E3YZPGotLSOEO6zFD5zI8IrPY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/URxKo9QppUdAPiYA19E3YZPGotLSOEO6zFD5zI8IrPY.png?width=108&crop=smart&auto=webp&s=df98308ab420abbfa641ab09bb5f0f0d32b8f66b', 'width': 108}, {'height': 216, 'url': '... |
Amazing performance! Kimi K2 on ik_llama.cpp | 58 | 2025-07-18T03:49:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m2s686/amazing_performance_kimi_k2_on_ik_llamacpp/ | timmytimmy01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2s686 | false | null | t3_1m2s686 | /r/LocalLLaMA/comments/1m2s686/amazing_performance_kimi_k2_on_ik_llamacpp/ | false | false | 58 | {'enabled': False, 'images': [{'id': 'mbqL7tVnx0USUgmIfTfH3gk6Pper9a5zZIt2Et32S4Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mbqL7tVnx0USUgmIfTfH3gk6Pper9a5zZIt2Et32S4Q.png?width=108&crop=smart&auto=webp&s=915707b6fa6423f963fe5c710121891264c06ce8', 'width': 108}, {'height': 108, 'url': 'h... | ||
Best Hardware Setup to Run DeepSeek-V3 670B Locally on $40K–$80K? | 26 | We’re looking to build a local compute cluster to run DeepSeek-V3 670B (or similar top-tier open-weight LLMs) for inference only, supporting ~100 simultaneous chatbot users with large context windows (ideally up to 128K tokens).
Our preferred direction is an Apple Silicon cluster — likely Mac minis or studios with M-s... | 2025-07-18T03:34:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m2rw38/best_hardware_setup_to_run_deepseekv3_670b/ | PrevelantInsanity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2rw38 | false | null | t3_1m2rw38 | /r/LocalLLaMA/comments/1m2rw38/best_hardware_setup_to_run_deepseekv3_670b/ | false | false | self | 26 | null |
Abogen: Generate Audiobooks with Synced Subtitles (Free & Open Source) | 114 | Hey everyone,
I've been working on a tool called [Abogen](https://github.com/denizsafak/abogen). It’s a free, open-source application that converts EPUB, PDF, and TXT files into high-quality audiobooks or voiceovers for Instagram, YouTube, TikTok, or any project needing natural-sounding text-to-speech, using [Kokoro-... | 2025-07-18T03:32:11 | dnzsfk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m2ruo5 | false | null | t3_1m2ruo5 | /r/LocalLLaMA/comments/1m2ruo5/abogen_generate_audiobooks_with_synced_subtitles/ | false | false | default | 114 | {'enabled': True, 'images': [{'id': 'cgpjczuspjdf1', 'resolutions': [{'height': 179, 'url': 'https://preview.redd.it/cgpjczuspjdf1.png?width=108&crop=smart&auto=webp&s=cd4d21955a1b3f522677524df2efd0d893652929', 'width': 108}, {'height': 359, 'url': 'https://preview.redd.it/cgpjczuspjdf1.png?width=216&crop=smart&auto=we... | |
Need help- server motherboard doesn’t have cpu fan slot | 1 | [removed] | 2025-07-18T03:15:15 | https://www.reddit.com/r/LocalLLaMA/comments/1m2riyr/need_help_server_motherboard_doesnt_have_cpu_fan/ | Business-Weekend-537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2riyr | false | null | t3_1m2riyr | /r/LocalLLaMA/comments/1m2riyr/need_help_server_motherboard_doesnt_have_cpu_fan/ | false | false | self | 1 | null |
Seed-X by Bytedance- LLM for multilingual translation | 114 | supported language
|Languages|Abbr.|Languages|Abbr.|Languages|Abbr.|Languages|Abbr.|
|:-|:-|:-|:-|:-|:-|:-|:-|
|Arabic|ar|French|fr|Malay|ms|Russian|ru|
|Czech|cs|Croatian|hr|Norwegian Bokmal|nb|Swedish|sv|
|Danish|da|Hungarian|hu|Dutch|nl|Thai|th|
|German|de|Indonesian|id|Norwegian|no|Turkish|tr|
|English|en|Italian|... | 2025-07-18T03:14:34 | https://huggingface.co/collections/ByteDance-Seed/seed-x-6878753f2858bc17afa78543 | Maleficent_Tone4510 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m2riey | false | null | t3_1m2riey | /r/LocalLLaMA/comments/1m2riey/seedx_by_bytedance_llm_for_multilingual/ | false | false | default | 114 | {'enabled': False, 'images': [{'id': 'rgKi-znfc6SfcJB1qo2576OWO7WXYop_Pppz6dmuZ9Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rgKi-znfc6SfcJB1qo2576OWO7WXYop_Pppz6dmuZ9Y.png?width=108&crop=smart&auto=webp&s=89c81d9a311b7e94f2a4f65e2c382ea7f832b4d5', 'width': 108}, {'height': 116, 'url': 'h... |
Lizard: An Efficient Linearization Framework for Large Language Models | 8 | Abstract
>We propose Lizard, a linearization framework that transforms pretrained Transformer-based Large Language Models (LLMs) into flexible, subquadratic architectures for infinite-context generation. Transformer-based LLMs face significant memory and computational bottlenecks as context lengths increase, due to th... | 2025-07-18T02:50:59 | https://arxiv.org/abs/2507.09025 | Formal_Drop526 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1m2r1dw | false | null | t3_1m2r1dw | /r/LocalLLaMA/comments/1m2r1dw/lizard_an_efficient_linearization_framework_for/ | false | false | default | 8 | null |
Anyone know where I can find a gen 5 NVLink bridge? | 1 | I haven't been able to find out anything other that it seems to have been released and that it is compatible with Blackwell 6000. Any help would be greatly appreciated. | 2025-07-18T02:33:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m2qou3/anyone_know_where_i_can_find_a_gen_5_nvlink_bridge/ | elephantgif | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2qou3 | false | null | t3_1m2qou3 | /r/LocalLLaMA/comments/1m2qou3/anyone_know_where_i_can_find_a_gen_5_nvlink_bridge/ | false | false | self | 1 | null |
For GPU Provider/ Crypto Miners | 1 | For anyone running homelab/crypto rigs - what's stopping you from leasing idle GPU tíme on platforms like Vast.ai?
Curious what your main blocker is: security, earnings, setup complexity, or just trust?? | 2025-07-18T02:29:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m2qlzb/for_gpu_provider_crypto_miners/ | Working_Chemistry_16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2qlzb | false | null | t3_1m2qlzb | /r/LocalLLaMA/comments/1m2qlzb/for_gpu_provider_crypto_miners/ | false | false | self | 1 | null |
Folks urgently response needed | 0 | What's more important to you when choosing a GPU provider: price, location reliability or performance benchmark marks ? | 2025-07-18T02:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m2qk63/folks_urgently_response_needed/ | Working_Chemistry_16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2qk63 | false | null | t3_1m2qk63 | /r/LocalLLaMA/comments/1m2qk63/folks_urgently_response_needed/ | false | false | self | 0 | null |
Would you ever pay in advance to guarantee GPU (high end) access a month from now( at lower rate)? | 0 |
[View Poll](https://www.reddit.com/poll/1m2qgef) | 2025-07-18T02:21:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m2qgef/would_you_ever_pay_in_advance_to_guarantee_gpu/ | Working_Chemistry_16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2qgef | false | null | t3_1m2qgef | /r/LocalLLaMA/comments/1m2qgef/would_you_ever_pay_in_advance_to_guarantee_gpu/ | false | false | self | 0 | null |
LPOI: Listwise Preference Optimization for Vision-Language Models (ACL 2025 Main) | 17 | **Paper:** [https://arxiv.org/abs/2505.21061](https://arxiv.org/abs/2505.21061)
**Code:** [https://github.com/fatemehpesaran310/lpoi](https://github.com/fatemehpesaran310/lpoi)
**TL;DR:** We propose LPOI, the first object-aware listwise preference optimization developed for reducing hallucinations in VLMs.
**Abstrac... | 2025-07-18T01:54:48 | Moreselflove0324 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m2pvwt | false | null | t3_1m2pvwt | /r/LocalLLaMA/comments/1m2pvwt/lpoi_listwise_preference_optimization_for/ | false | false | default | 17 | {'enabled': True, 'images': [{'id': 'ns3j3mbqgjdf1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/ns3j3mbqgjdf1.png?width=108&crop=smart&auto=webp&s=f1c051ae1806600d1664672007f6309becc5f602', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/ns3j3mbqgjdf1.png?width=216&crop=smart&auto=web... | |
Do you give your LLM terminal and code execution access? | 0 | Models are clearly really good a coding, which makes sense from a training data and difficulty of problem perspective. I have tested with, and seen others mention in the past that just giving a model the ability to code is almost the only tool it needs. Want the time > from datetime import datetime..., Ask for conten... | 2025-07-18T01:35:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m2phy1/do_you_give_your_llm_terminal_and_code_execution/ | Strange_Test7665 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2phy1 | false | null | t3_1m2phy1 | /r/LocalLLaMA/comments/1m2phy1/do_you_give_your_llm_terminal_and_code_execution/ | false | false | self | 0 | null |
Need recommendations for some good prompting strategies, that yield high accuracies for a text classification task (conversational English) | 5 | 1. Don't want to spend time on fine tuning
2. No constraints on models (open or closed) | 2025-07-18T01:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m2osrh/need_recommendations_for_some_good_prompting/ | Fabulous_System3964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2osrh | false | null | t3_1m2osrh | /r/LocalLLaMA/comments/1m2osrh/need_recommendations_for_some_good_prompting/ | false | false | self | 5 | null |
Xvideos | 0 | Montagem de xvideos
| 2025-07-18T00:57:07 | https://www.reddit.com/r/LocalLLaMA/comments/1m2oovr/xvideos/ | Key-Foot8672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2oovr | false | null | t3_1m2oovr | /r/LocalLLaMA/comments/1m2oovr/xvideos/ | false | false | nsfw | 0 | null |
OpenAI releases ChatGPT Agent | 0 | OpenAI just launched ChatGPT Agent - essentially combining Operator + Deep Research into one (this reminds me of the rumors that GPT-5 would be a router to different tools).
ChatGPT Agent has the following capabilities:
* Visual web browsing + login support (basically OpenAI Operator)
* Terminal access + code executi... | 2025-07-18T00:37:55 | https://openai.com/index/introducing-chatgpt-agent/ | mrfakename0 | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1m2oaai | false | null | t3_1m2oaai | /r/LocalLLaMA/comments/1m2oaai/openai_releases_chatgpt_agent/ | false | false | default | 0 | null |
Help vote for improved Vulkan performance in ik_llama.cpp | 40 | Came across a discussion in ik\_llama.cpp by accident where the main developer (ikawrakow) is soliciting feedback about whether they should focus on improving the performance of the Vulkan backend on ik\_llama.cpp.
The discussion is 2 weeks old, but hasn't garnered much attention until now.
I think improved Vulkan pe... | 2025-07-18T00:29:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m2o3ht/help_vote_for_improved_vulkan_performance_in_ik/ | FullstackSensei | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2o3ht | false | null | t3_1m2o3ht | /r/LocalLLaMA/comments/1m2o3ht/help_vote_for_improved_vulkan_performance_in_ik/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'uuVIObmdWTAFZrRZFFKEQ5BD6wxuznomv2LWwczGh00', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uuVIObmdWTAFZrRZFFKEQ5BD6wxuznomv2LWwczGh00.png?width=108&crop=smart&auto=webp&s=023fad5bb3042f9c25f3691bf539604b94e0d923', 'width': 108}, {'height': 108, 'url': 'h... |
Help vote for improved Vulkan performance in ik_llama.cpp | 1 | [removed] | 2025-07-18T00:27:23 | https://www.reddit.com/r/LocalLLaMA/comments/1m2o27t/help_vote_for_improved_vulkan_performance_in_ik/ | FullstackSensei | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2o27t | false | null | t3_1m2o27t | /r/LocalLLaMA/comments/1m2o27t/help_vote_for_improved_vulkan_performance_in_ik/ | false | false | self | 1 | null |
Training an LLM only on books from the 1800's - Update | 279 | A couple days ago I made a post sharing my experiment training an LLM on only 1800's London text. That post got more attention than I expected and some people have been checking it out on GitHub. So I just wanted to share an update on this project. I trained a second version using 500 books, legal documents, journals, ... | 2025-07-18T00:18:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m2nvpn/training_an_llm_only_on_books_from_the_1800s/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2nvpn | false | null | t3_1m2nvpn | /r/LocalLLaMA/comments/1m2nvpn/training_an_llm_only_on_books_from_the_1800s/ | false | false | 279 | null | |
Training an LLM only on books from the 1800's - Update | 1 | [deleted] | 2025-07-18T00:15:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m2nspl | false | null | t3_1m2nspl | /r/LocalLLaMA/comments/1m2nspl/training_an_llm_only_on_books_from_the_1800s/ | false | false | default | 1 | null | ||
Training an LLM only on books from the 1800's - Update | 1 | A couple days ago I made a post sharing my experiment training an LLM on only 1800's London text. That post got more attention than I expected and some people have been checking it out on GitHub. So I just wanted to share an update on this project. I trained a second version using a bigger dataset consisting of 500 boo... | 2025-07-18T00:12:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m2nqm4/training_an_llm_only_on_books_from_the_1800s/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2nqm4 | false | null | t3_1m2nqm4 | /r/LocalLLaMA/comments/1m2nqm4/training_an_llm_only_on_books_from_the_1800s/ | false | false | self | 1 | null |
Training an LLM only on books from the 1800's: Update | 1 | A couple days ago I made a post sharing my experiment training an LLM on only 1800's London text. That post got more attention than I expected and some people have been checking it out on GitHub. So I just wanted to share an update on this project. I trained a second version using a bigger dataset consisting of 500 boo... | 2025-07-18T00:10:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m2np7s/training_an_llm_only_on_books_from_the_1800s/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2np7s | false | null | t3_1m2np7s | /r/LocalLLaMA/comments/1m2np7s/training_an_llm_only_on_books_from_the_1800s/ | false | false | self | 1 | null |
How to Survive in AI After Age of 40-45? | 0 | Thoughts on this video content please? [How to Survive (and Thrive) in AI After 40-45](https://youtu.be/dtjIb8-3fl8?si=-C061DqKhUsNuUL2) | 2025-07-17T23:43:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m2n3iu/how_to_survive_in_ai_after_age_of_4045/ | Lopsided_Dot_4557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2n3iu | false | null | t3_1m2n3iu | /r/LocalLLaMA/comments/1m2n3iu/how_to_survive_in_ai_after_age_of_4045/ | false | false | self | 0 | null |
I’ll build an expert AI for your impossible challenge and give it away free - looking for the hardest technical problem you’ve got | 31 | I want to test this on something brutal. You give me your hardest technical challenge, I’ll build a specialized AI for it this weekend and release it here for everyone.
What I’m looking for:
- Extremely niche technical problems
- Challenges where current LLMs completely fail
- Tasks that normally require 10+ years... | 2025-07-17T23:19:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m2ml3n/ill_build_an_expert_ai_for_your_impossible/ | Prestigious-Fan118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2ml3n | false | null | t3_1m2ml3n | /r/LocalLLaMA/comments/1m2ml3n/ill_build_an_expert_ai_for_your_impossible/ | false | false | self | 31 | null |
A full guide on building a secure, local LLM using Linux Mint and an external SSD | 1 | Hello, I've put together a guide on how to build your own secure, private local LLM with Linux Mint. It uses Podman, Ollama, and AnythingLLM. I made this guide from a beginner's mindset, as I am a writer, not a programmer. Building your own Pokemon team is fully achievable for anyone who has moved to Linux Mint from Wi... | 2025-07-17T23:15:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m2mhua/a_full_guide_on_building_a_secure_local_llm_using/ | quarteryudo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m2mhua | false | null | t3_1m2mhua | /r/LocalLLaMA/comments/1m2mhua/a_full_guide_on_building_a_secure_local_llm_using/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.