title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Qwen3 tiny/unsloth quants with vllm? | 1 | I've gotten UD 2-bit quants to work with llama.cpp. I've merged the split ggufs and tried to load that into vllm (v0.9.1) and it says the qwen3moe architecture isn't supported for gguf. So I guess my real question here is: did anyone repackage unsloth quants in a format that vllm can load? Or is it possible for me to do th... | 2025-06-28T06:56:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lmggiz/qwen3_tinyunsloth_quants_with_vllm/ | MengerianMango | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmggiz | false | null | t3_1lmggiz | /r/LocalLLaMA/comments/1lmggiz/qwen3_tinyunsloth_quants_with_vllm/ | false | false | self | 1 | null |
How do I stop Gemini 2.5 Pro from being overly sycophantic? It has gotten very excessive and feels like it degrades the answers it gives. | 78 | Every single question/follow-up question I ask, it acts as if I am a Nobel Prize winner who cracked fusion energy single-handedly. It's always something like "That's an outstanding and very insightful question." Or "That is the perfect question to ask" or "you are absolutely correct to provide that snippet" etc. It's very ... | 2025-06-28T06:52:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lmgdw1/how_do_i_stop_gemnini_25_pro_from_being_overly/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmgdw1 | false | null | t3_1lmgdw1 | /r/LocalLLaMA/comments/1lmgdw1/how_do_i_stop_gemnini_25_pro_from_being_overly/ | false | false | self | 78 | null |
Tencent's Hunyuan-A13B-Instruct probably distilled data from OpenAI and DeepSeek | 0 | messages=[
{
"role": "system",
"content": "You are a helpful assistant.",
},
{
"role": "user",
"content": """write a 250 words essay about you.""",
},
],
First run
```
<think>
... | 2025-06-28T06:24:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lmfydd/tencents_hunyuana13binstruct_probably_distilled/ | JC1DA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmfydd | false | null | t3_1lmfydd | /r/LocalLLaMA/comments/1lmfydd/tencents_hunyuana13binstruct_probably_distilled/ | false | false | self | 0 | null |
Hunyuan-A13B-Instruct probably distilled data from both OpenAI and DeepSeek | 1 | 2025-06-28T06:22:23 | JC1DA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lmfx7l | false | null | t3_1lmfx7l | /r/LocalLLaMA/comments/1lmfx7l/hunyuana13binstruct_probably_distilled_data_from/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'i8uwua2p2m9f1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/i8uwua2p2m9f1.png?width=108&crop=smart&auto=webp&s=b6b7c1248b6c00f49f83bedef9b801bf5b9386f4', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/i8uwua2p2m9f1.png?width=216&crop=smart&auto=webp... | ||
I tested 10 LLMs locally on my MacBook Air M1 (8GB RAM!) – Here's what actually works | 363 | All feedback is welcome! I am learning how to do better every day.
I went down the LLM rabbit hole trying to find the **best local model** that runs *well* on a humble MacBook Air M1 with just 8GB RAM.
My goal? **Compare 10 models** across question generation, answering, and self-evaluation.
TL;DR: Some models were b... | 2025-06-28T05:57:46 | https://www.reddit.com/gallery/1lmfiu9 | irodov4030 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lmfiu9 | false | null | t3_1lmfiu9 | /r/LocalLLaMA/comments/1lmfiu9/i_tested_10_llms_locally_on_my_macbook_air_m1_8gb/ | false | false | 363 | {'enabled': True, 'images': [{'id': 'lSrPd1MMz7blRmLYLnruRoJd4XS5NpPXF_maDibWecs', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/lSrPd1MMz7blRmLYLnruRoJd4XS5NpPXF_maDibWecs.png?width=108&crop=smart&auto=webp&s=d0ba456f772896d13d26a433eb814c01465159c5', 'width': 108}, {'height': 173, 'url': 'ht... | |
Which is the best 16GB Nvidia GPU with balanced price and performance | 0 | Not a techy, planning to buy a GPU, at least 16GB, can't go above that (budget issue). Mainly looking for image generation capability, with some TTS training and LLM inference in mind. Please help :) Keep Flux Kontext in mind.. :) | 2025-06-28T05:31:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lmf42g/which_is_the_best_16gb_nvidia_gpu_with_balanced/ | Trysem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmf42g | false | null | t3_1lmf42g | /r/LocalLLaMA/comments/1lmf42g/which_is_the_best_16gb_nvidia_gpu_with_balanced/ | false | false | self | 0 | null |
Four AI Agents Go Insane And Interrupt Each Other Talking About Free Will | 0 | 2025-06-28T05:31:20 | https://youtube.com/watch?v=AQR0h_IlfMM&si=DVcoHvS4xsO46kA5 | 1nconnor | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1lmf3pl | false | {'oembed': {'author_name': 'Connor Barbee', 'author_url': 'https://www.youtube.com/@connorbarbee', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/AQR0h_IlfMM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1lmf3pl | /r/LocalLLaMA/comments/1lmf3pl/four_ai_agents_go_insane_and_interrupt_each_other/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'QqH1oWQe-vMbzpidLos8zVRa9NBYarSPptiYpvnObVU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QqH1oWQe-vMbzpidLos8zVRa9NBYarSPptiYpvnObVU.jpeg?width=108&crop=smart&auto=webp&s=3dd90301fc29e68d3bbd357b7e7fddb908c4bcc9', 'width': 108}, {'height': 162, 'url': '... | |
what advice you give to your starting self on build local llms | 1 | [removed] | 2025-06-28T05:24:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lmezr9/what_advice_you_give_to_your_starting_self_on/ | TSK_Foreverlearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmezr9 | false | null | t3_1lmezr9 | /r/LocalLLaMA/comments/1lmezr9/what_advice_you_give_to_your_starting_self_on/ | false | false | self | 1 | null |
How i can build best local llm in 12GB VRAM | 1 | [removed] | 2025-06-28T05:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lmeuzl/how_i_can_build_best_local_llm_in_12gb_vram/ | TSK_Foreverlearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmeuzl | false | null | t3_1lmeuzl | /r/LocalLLaMA/comments/1lmeuzl/how_i_can_build_best_local_llm_in_12gb_vram/ | false | false | self | 1 | null |
Local LLaMA on iOS iphone | 4 | Available from APP Store.
This is a demo app for
1. On-device AI Database
2. On-device AI Search and RAG
Developers who need iOS on-device database and on-device RAG, please feel free to contact us.
Comments are very welcome. | 2025-06-28T04:47:50 | https://v.redd.it/cypwzsarll9f1 | DueKitchen3102 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lmedjx | false | {'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/cypwzsarll9f1/DASHPlaylist.mpd?a=1753678085%2CODEyNmM5ZGI0YjVkMzI4N2NkOGRlOTdmYWFhODVkMzg1YmY3Yzc5MWE2MDIyN2Q1MTAzOWQ0MDk3Y2I0YTE2Mw%3D%3D&v=1&f=sd', 'duration': 336, 'fallback_url': 'https://v.redd.it/cypwzsarll9f1/DASH_270.mp4?source=fallback', 'ha... | t3_1lmedjx | /r/LocalLLaMA/comments/1lmedjx/local_llama_on_ios_iphone/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'ZjZhM2R1YXJsbDlmMTWTJ-8QjNGsFxI6jm9dCV-YjTOIVm9ifP22qR8Khjlg', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/ZjZhM2R1YXJsbDlmMTWTJ-8QjNGsFxI6jm9dCV-YjTOIVm9ifP22qR8Khjlg.png?width=108&crop=smart&format=pjpg&auto=webp&s=79da8a745e6e9b3f82e5049fa252a399c700... | |
How SCARY the uncensored AI models could be @wizard_vicuna_uncensored. If somehow the uncensored AI gets fit into the humanoid robots, we might witness Chitti the Robot Movie 3. | 0 | 2025-06-28T04:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lme5ab/how_scary_the_uncensored_ai_models_could_be/ | The-GenZ-Professor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lme5ab | false | null | t3_1lme5ab | /r/LocalLLaMA/comments/1lme5ab/how_scary_the_uncensored_ai_models_could_be/ | false | false | 0 | null | ||
How Does vLLM Handle Prompt Isolation During Custom Hardware Integration? | 1 | Hey folks,
I’m new to vLLM and (LLM in general) and trying to wrap my head around how vLLM guarantees prompt isolation (ie how user gets their own response not the response intended for another user), especially in the context of integrating custom hardware accelerators. Hoping to get answers to the following question... | 2025-06-28T04:29:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lme24s/how_does_vllm_handle_prompt_isolation_during/ | humblehunter_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lme24s | false | null | t3_1lme24s | /r/LocalLLaMA/comments/1lme24s/how_does_vllm_handle_prompt_isolation_during/ | false | false | self | 1 | null |
Is it me, or do you also feel GPT/LLMs are now bad at teaching? | 0 | Yes, I also have a similar experience: whenever I offer it a PDF for Q&A according to that PDF, it sticks to the instructions for the first few turns, then it starts generating text that sometimes has no link to what's in the book (PDF).
It doesn't generate obvious rubbish that's easy for anybody to identify. But when you read the bo... | 2025-06-28T04:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lmdmvu/is_it_me_or_you_also_feels_gptllms_now_bad_at/ | TryAmbitious1237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmdmvu | false | null | t3_1lmdmvu | /r/LocalLLaMA/comments/1lmdmvu/is_it_me_or_you_also_feels_gptllms_now_bad_at/ | false | false | self | 0 | null |
It's wild, where they got their data for training and consistency --> https://youtu.be/US2gO7UYEfY | 4 | Any idea on how might they have trained/fune-tuned veo3 and how they got it to consistency. | 2025-06-28T04:00:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lmdkbg/its_wild_where_they_got_their_data_for_training/ | kernel348 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmdkbg | false | null | t3_1lmdkbg | /r/LocalLLaMA/comments/1lmdkbg/its_wild_where_they_got_their_data_for_training/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'onuFopx6otQwb9GKgHGwMABc74IGjAo3e9wP0n2zDQw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/onuFopx6otQwb9GKgHGwMABc74IGjAo3e9wP0n2zDQw.jpeg?width=108&crop=smart&auto=webp&s=03796778cd8a745ac74c43aed34193e6a638c4d8', 'width': 108}, {'height': 162, 'url': '... |
lm studio server question? | 0 | I have LM Studio. I clicked to run the server.
But when I try to connect to [http://127.0.0.1:1234/](http://127.0.0.1:1234/)
You can see the error at the bottom of the log.
What am I doing wrong?
thanks
https://preview.redd.it/ctv550cz9l9f1.png?width=1825&format=png&auto=webp&s=b6150445b566daec0523dd601f174c5bd67... | 2025-06-28T03:42:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lmd8ut/lm_studio_server_question/ | jeffsmith202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmd8ut | false | null | t3_1lmd8ut | /r/LocalLLaMA/comments/1lmd8ut/lm_studio_server_question/ | false | false | 0 | null | |
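The error in the log above is typically the result of hitting the server root instead of an API route: LM Studio's local server answers under the OpenAI-compatible `/v1` prefix, not at `http://127.0.0.1:1234/` itself. A minimal sketch of building a request against it; port 1234 is LM Studio's default and the model name is a placeholder:

```python
# Hedged sketch: LM Studio exposes OpenAI-compatible routes under /v1, so
# requesting the bare root URL returns an "unexpected endpoint" style error.
# Port 1234 is the LM Studio default; "local-model" is a placeholder name.

BASE_URL = "http://127.0.0.1:1234/v1"

def chat_request_parts(model: str, prompt: str) -> tuple:
    """Return the endpoint URL and JSON body for a chat-completion call."""
    url = f"{BASE_URL}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

url, body = chat_request_parts("local-model", "Hello!")
# Send it with e.g. requests.post(url, json=body, timeout=60)
```

Hitting `GET /v1/models` first is an easy way to confirm the server is up and to see the exact name of the loaded model.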
i bought an epyc server with 7642 cpu, and im only getting 0.4 tokens/sec | 5 | hi everybody i could use some help running the deepseek r1 1.58bit quant, i have a firm belief that something is capping generation speed. i tried reducing experts, quantizing kv cache, setting the batch eval to 8, 512, or 2048, core count to 16, 8, or 48 and even setting the max context length to a lower number and ye... | 2025-06-28T03:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lmd6ns/i_bought_an_epyc_server_with_7642_cpu_and_im_only/ | pharrowking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmd6ns | false | null | t3_1lmd6ns | /r/LocalLLaMA/comments/1lmd6ns/i_bought_an_epyc_server_with_7642_cpu_and_im_only/ | false | false | self | 5 | null |
Is there an open-source equivalent of Google's Gemini-Diffusion model? | 26 | This thing is insane. Any leads on an open-source equivalent?
Additionally, does anyone have a rough idea of how large the underlying model for Gemini-Diffusion is? | 2025-06-28T02:42:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lmc6dp/is_there_a_open_source_equivalent_of_googles/ | GullibleEngineer4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmc6dp | false | null | t3_1lmc6dp | /r/LocalLLaMA/comments/1lmc6dp/is_there_a_open_source_equivalent_of_googles/ | false | false | self | 26 | null |
Attempting to train a model from scratch for less than $1000 | 5 | I got an AWS Activate promo of $1000. I started crunching numbers and decided to train an LLM.
The concept: a 1.5B model, Llama 3 architecture, with differential attention, GaLore, GQA, MoD, and sink tokens. Trained 100% on public-domain data (the Common Corpus dataset). Doing the math, I'm aiming for 45B tokens, a ... | 2025-06-28T02:23:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lmbtvg/attempting_to_train_a_model_from_scratch_for_less/ | thebadslime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmbtvg | false | null | t3_1lmbtvg | /r/LocalLLaMA/comments/1lmbtvg/attempting_to_train_a_model_from_scratch_for_less/ | false | false | self | 5 | null |
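As a quick sanity check on the plan above, the standard approximations (Chinchilla's ~20 tokens per parameter and the 6·N·D training-FLOPs rule of thumb) can be run through directly; the sustained-throughput figure is an assumption, not a measured number:

```python
n_params = 1.5e9        # the planned 1.5B-parameter model
n_tokens = 45e9         # the planned token budget

chinchilla_tokens = 20 * n_params      # ~30B tokens, so 45B is past "optimal"
train_flops = 6 * n_params * n_tokens  # ~4.05e20 FLOPs (6*N*D rule of thumb)

sustained = 100e12                     # assumed ~100 TFLOP/s sustained per GPU
gpu_hours = train_flops / sustained / 3600
print(f"~{gpu_hours:.0f} GPU-hours at the assumed throughput")
```

At typical on-demand cloud pricing that many GPU-hours makes a $1000 budget tight, which is presumably where spot instances or smaller accelerators come into play.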
Nvidia M40 vs M60 for LLM inference? | 0 | I wanted to have a short discussion about the M60 in comparison to the M40.
The M40 is the go-to recommendation for desperately low budget rigs (particularly when someone brings up the K80, someone will inevitably mention that the M40 is better).
All the while, the M60 does not get mentioned, and if it does get menti... | 2025-06-28T02:22:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lmbt6g/nvidia_m40_vs_m60_for_llm_inference/ | HugoCortell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmbt6g | false | null | t3_1lmbt6g | /r/LocalLLaMA/comments/1lmbt6g/nvidia_m40_vs_m60_for_llm_inference/ | false | false | self | 0 | null |
What are the real conversational differences between humans and modern LLMs? | 1 | [removed] | 2025-06-28T02:15:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lmbon3/what_are_the_real_conversational_differences/ | Rookieeeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmbon3 | false | null | t3_1lmbon3 | /r/LocalLLaMA/comments/1lmbon3/what_are_the_real_conversational_differences/ | false | false | self | 1 | null |
[Day 5/50] Building a Small Language Model from Scratch - Byte Pair Encoding with tiktoken | 35 | 2025-06-28T01:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lmb5s3/day_550_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lmb5s3 | false | null | t3_1lmb5s3 | /r/LocalLLaMA/comments/1lmb5s3/day_550_building_a_small_language_model_from/ | false | false | 35 | null | ||
Dir-Assistant v0.7 Release Announcement: Up to 100% reduced prompt processing using new intelligent context prefix caching | 5 | # Dir-Assistant: Chat with your current directory's files using a local or API LLM
Hello All! I am happy to announce Dir-Assistant v0.7 and the passing of its one year anniversary. If you haven't tried Dir-Assistant, now is a great time to. In my personal testing, Dir-Assistant is the best LLM UI for working on large ... | 2025-06-28T00:44:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lm9xlq/dirassistant_v07_release_announcement_up_to_100/ | 1ncehost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm9xlq | false | null | t3_1lm9xlq | /r/LocalLLaMA/comments/1lm9xlq/dirassistant_v07_release_announcement_up_to_100/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'tadazH7yoMPbc8iW84saWeDmnwB7nWgGfswgcjK1B5k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tadazH7yoMPbc8iW84saWeDmnwB7nWgGfswgcjK1B5k.png?width=108&crop=smart&auto=webp&s=62618d3a4b15d5aa453f0599776b9c9a3756024c', 'width': 108}, {'height': 108, 'url': 'h... |
Automated GPU kernel optimization for Qwen3 attention - 12.5% average speedup on Apple Silicon using evolutionary programming | 153 | Hey r/LocalLlama! Wanted to share something interesting I've been working on that might be relevant for folks running models locally on Apple Silicon.
**What I did**
Used evolutionary programming to automatically optimize Metal GPU kernels for transformer attention. Specifically targeted Qwen3-0.6B's grouped query at... | 2025-06-28T00:10:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lm98z7/automated_gpu_kernel_optimization_for_qwen3/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm98z7 | false | null | t3_1lm98z7 | /r/LocalLLaMA/comments/1lm98z7/automated_gpu_kernel_optimization_for_qwen3/ | false | false | self | 153 | {'enabled': False, 'images': [{'id': 'QL1cai36O6GA_8oWnC5FZk8axBPFbQvVFTkZtsdqnL8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QL1cai36O6GA_8oWnC5FZk8axBPFbQvVFTkZtsdqnL8.png?width=108&crop=smart&auto=webp&s=12e6ec993ab25f895d2e736f6d119b26e8ee29d6', 'width': 108}, {'height': 108, 'url': 'h... |
Magistral small similarity to Deepseek chat? | 14 | Just testing on some old math problems, noticed that Magistral output looks a lot like deepseek chat, but pretty far from Qwen3. I’m guessing Magistral distilled from deepseek directly without acknowledging it?
Suppose that there exist nonzero complex numbers $a$ , $b$ , $c$ , and $d$ such that $k$ is a root of both t... | 2025-06-28T00:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lm93yi/magistral_small_similarity_to_deepseek_chat/ | ImprovementBusy5947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm93yi | false | null | t3_1lm93yi | /r/LocalLLaMA/comments/1lm93yi/magistral_small_similarity_to_deepseek_chat/ | false | false | self | 14 | null |
Qwen3 Coder Soon? | 174 | [https://x.com/huybery/status/1938655788849098805](https://preview.redd.it/415iw73n6k9f1.png?width=1093&format=png&auto=webp&s=e4e66852a8d0b6a8981e1e0f23da6ddfd4d0744c)
source: [https://x.com/huybery/status/1938655788849098805](https://x.com/huybery/status/1938655788849098805)
i hope they release these models so... | 2025-06-28T00:01:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lm92se/qwen3_coder_soon/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm92se | false | null | t3_1lm92se | /r/LocalLLaMA/comments/1lm92se/qwen3_coder_soon/ | false | false | 174 | null | |
Local Llama Journaling app. | 6 | This was born out of a personal need — I journal daily, and I didn’t want to upload my thoughts to some cloud server, but I still wanted to use AI. So I built Vinaya to be:
* **Private**: Everything stays on your device. No servers, no cloud, no trackers.
* **Simple**: Clean UI built with Electron + React. No bloat, just... | 2025-06-28T00:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lm91sr/local_llama_journaling_app/ | Frosty-Cap-4282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm91sr | false | null | t3_1lm91sr | /r/LocalLLaMA/comments/1lm91sr/local_llama_journaling_app/ | false | false | self | 6 | null |
I keep returning to Llama-3.1-8B | 50 | I am working on porting a GPT-4.1 project over to an open-source model to deal with a GDPR-compliant client. The task is basically fine-tuning the model to classify text in a western European language.
I tried Qwen3 (0.6B, 1.7B, 8B) without making much progress (the fine-tuned model is far behind GPT-4.1) and finally ... | 2025-06-27T23:58:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lm9012/i_keep_returning_to_llama318b/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm9012 | false | null | t3_1lm9012 | /r/LocalLLaMA/comments/1lm9012/i_keep_returning_to_llama318b/ | false | false | self | 50 | null |
Computing power to locally run a model equivalent to Veo 3 or Kling 2.1 | 0 | I'm aware that it's likely impossible to do this right now with neither of these being open source, as well as hardware limitations. However I am curious how much power + time would be required to generate one video on these models. Something like 10 5090s? Or would it be far more resource intensive? | 2025-06-27T22:43:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lm7dox/computing_power_to_locally_run_a_model_equivalent/ | Inevitable_Drive4729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm7dox | false | null | t3_1lm7dox | /r/LocalLLaMA/comments/1lm7dox/computing_power_to_locally_run_a_model_equivalent/ | false | false | self | 0 | null |
HuBERT checkpoint hubert-soft-0d54a1f4.pt for SO-VITS / RVC (All Official Mirrors Down) | 0 | Hi all,
I’m working on a SO-VITS voice clone project and need the hubert-soft-0d54a1f4.pt checkpoint for feature extraction. All official and backup HuggingFace links are 404/dead, and GitHub mirrors are gone.
Can anyone share a working download link, Google Drive, or other mirror for this file?
I’ve tried every lin... | 2025-06-27T22:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1lm76yz/hubert_checkpoint_hubertsoft0d54a1f4pt_for_sovits/ | Slow_Ad_7736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm76yz | false | null | t3_1lm76yz | /r/LocalLLaMA/comments/1lm76yz/hubert_checkpoint_hubertsoft0d54a1f4pt_for_sovits/ | false | false | self | 0 | null |
Hugging Face releases a 50+ page report on how they built FineWeb2 | 84 | 2025-06-27T22:34:23 | Other_Housing8453 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lm76gk | false | null | t3_1lm76gk | /r/LocalLLaMA/comments/1lm76gk/hugging_face_releases_a_50_page_report_on_how/ | false | false | default | 84 | {'enabled': True, 'images': [{'id': 'ixin9dvyqj9f1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/ixin9dvyqj9f1.png?width=108&crop=smart&auto=webp&s=003460a3d756a577c332f914861a2805562df372', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/ixin9dvyqj9f1.png?width=216&crop=smart&auto=webp... | ||
I need help testing my agentic wrapper for LLMs | 1 | Hey everyone. So I'll keep it short. I've written a Claude Code "clone", [mcp-agent](https://github.com/amranu/mcp-agent) which allows tool use for arbitrary LLMs (though they have to support tool use, I'm not using any templating). Currently it has tested support for Deepseek, Gemini, OpenAI and Anthropic APIs but I w... | 2025-06-27T21:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lm66fy/i_need_help_testing_my_agentic_wrapper_for_llms/ | amranu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm66fy | false | null | t3_1lm66fy | /r/LocalLLaMA/comments/1lm66fy/i_need_help_testing_my_agentic_wrapper_for_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ky4q-HJ1F3S2UdCuaGkloWjj4Ru8GaNbo0jpnr086rM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ky4q-HJ1F3S2UdCuaGkloWjj4Ru8GaNbo0jpnr086rM.png?width=108&crop=smart&auto=webp&s=3b376e1aa1902b7556bb3536cbf55124d2711777', 'width': 108}, {'height': 108, 'url': 'h... |
What is your favorite opensource image embedding model | 5 | I'm looking for a good lightweight image embedding model, preferably a multimodal embedding like you would use with a semantic image search. I found a few okay ones but interested in what you guys use. | 2025-06-27T21:28:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lm5muh/what_is_your_favorite_opensource_image_embedding/ | best_codes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm5muh | false | null | t3_1lm5muh | /r/LocalLLaMA/comments/1lm5muh/what_is_your_favorite_opensource_image_embedding/ | false | false | self | 5 | null |
What if we remove reasoning models' <think> process but make them believe they already reasoned? | 0 | I've been wondering about something with reasoning models like DeepSeek R1. We know that <think> tags help performance, and we know that for some models no_think prompting gets worse results. But what if there's a third option we haven't tested?
**The experiment:** Use abliteration techniques (like uncensoring method... | 2025-06-27T21:12:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lm5a05/what_if_we_remove_reasoning_models_think_process/ | DistractedSentient | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm5a05 | false | null | t3_1lm5a05 | /r/LocalLLaMA/comments/1lm5a05/what_if_we_remove_reasoning_models_think_process/ | false | false | self | 0 | null |
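One cheap way to probe this idea without any weight surgery is to prefill the assistant turn with an already-closed, empty think block, so the model continues as though reasoning has finished. A hypothetical sketch: the turn markers follow DeepSeek R1's raw prompt format, and whether a given serving stack lets you prefill the assistant turn is an assumption.

```python
# Hypothetical sketch: make the model "believe" it already reasoned by
# prefilling an empty <think></think> block at the start of its turn.
# The <|User|>/<|Assistant|> markers follow DeepSeek R1's raw prompt format;
# support for assistant-turn prefill varies by serving stack (an assumption).

def prefill_empty_reasoning(user_prompt: str) -> str:
    """Build a raw prompt whose assistant turn opens with a closed think block."""
    return (
        f"<|User|>{user_prompt}"
        "<|Assistant|><think>\n\n</think>\n"
    )

prompt = prefill_empty_reasoning("What is the capital of France?")
# Feed `prompt` to a raw /completions endpoint (not /chat/completions) so the
# prefill is not re-templated by the server.
```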
Problems on RVC WebUI creating a new vocal model | 2 | I've been trying all day to train a vocal model for singing. I want to transform one raw vocal into another.
Got all the training vocal data, all raw studio acapellas, in 10-second files: 35 WAV files at 48kHz, detected and processed successfully in steps 2a and 2b.
After lots of bugs using the RVC WebUI, I managed to ge... | 2025-06-27T21:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lm58q1/problems_on_rvc_webui_creating_new_vocal_model/ | pipon2698 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm58q1 | false | null | t3_1lm58q1 | /r/LocalLLaMA/comments/1lm58q1/problems_on_rvc_webui_creating_new_vocal_model/ | false | false | self | 2 | null |
[Project] New Distributed Data Gen Library - Looking for Testers! | 1 | [removed] | 2025-06-27T21:04:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lm52rk/project_new_distributed_data_gen_library_looking/ | codys12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm52rk | false | null | t3_1lm52rk | /r/LocalLLaMA/comments/1lm52rk/project_new_distributed_data_gen_library_looking/ | false | false | self | 1 | null |
Build advice question for repurposing spare GPUs | 3 | Hey all. I'm new to this world, I haven't done anything directly with Ollama myself before. I do extensively use Home Assistant around my house. With their recent release of "Home Assistant Voice (Preview)" I'm interested in getting a voice assistant that's fully local. To further bad-ass-ify it (real word, promise) I ... | 2025-06-27T20:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lm4tno/build_advice_question_for_repurposing_spare_gpus/ | HeroesDieYoung0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm4tno | false | null | t3_1lm4tno | /r/LocalLLaMA/comments/1lm4tno/build_advice_question_for_repurposing_spare_gpus/ | false | false | 3 | null | |
Inconsistent responses between OpenRouter API and native OpenAI API | 0 | I'm using OpenRouter to manage multiple LLM subscriptions in one place for a research project where I need to benchmark responses across different models. However, I've noticed some discrepancies between responses when calling the same model (like GPT-4) through OpenRouter's API versus OpenAI's native API.
I've verifi... | 2025-06-27T20:51:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lm4s6i/inconsistent_responses_between_openrouter_api_and/ | Anada01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm4s6i | false | null | t3_1lm4s6i | /r/LocalLLaMA/comments/1lm4s6i/inconsistent_responses_between_openrouter_api_and/ | false | false | self | 0 | null |
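One way to tighten the comparison above before blaming the router: pin every sampling knob and send identical bodies to both endpoints. A sketch under assumptions (the model IDs are illustrative; `seed` is honored by OpenAI and, as far as this sketch assumes, merely forwarded by OpenRouter to providers that support it):

```python
# Sketch: build identical, deterministic-leaning request bodies for both APIs
# so remaining response differences come from the backend, not the request.
def build_payload(model: str, prompt: str) -> dict:
    return {
        "model": model,                # OpenRouter prefixes the provider name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,              # greedy-leaning decoding
        "top_p": 1,
        "seed": 42,                    # best-effort determinism where supported
        "max_tokens": 256,
    }

openai_payload = build_payload("gpt-4o", "Summarize GDPR in one sentence.")
openrouter_payload = build_payload("openai/gpt-4o", "Summarize GDPR in one sentence.")
# POST the first to api.openai.com/v1/chat/completions and the second to
# openrouter.ai/api/v1/chat/completions, then diff the two responses.
```

Even with pinned parameters, some drift can remain if the route lands on a different provider or model snapshot, so logging the `model` field echoed back in each response is worth doing.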
Arch-Router: The first (and fastest) LLM router that can align to your usage preferences. | 73 | Excited to share Arch-Router, our research and model for LLM routing. Routing to the right LLM is still an elusive problem, riddled with nuance and gotchas. For example:
“Embedding-based” (or simple intent-classifier) routers sound good on paper—label each prompt via embeddings as “support,” “SQL,” “math,” then hand i... | 2025-06-27T20:00:37 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lm3jvm | false | null | t3_1lm3jvm | /r/LocalLLaMA/comments/1lm3jvm/archrouter_the_first_and_fastest_llm_router_that/ | false | false | default | 73 | {'enabled': True, 'images': [{'id': '6zqw0rkhzi9f1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/6zqw0rkhzi9f1.png?width=108&crop=smart&auto=webp&s=b712f804f1db5610e134be3d9e50702d5eb9f53d', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/6zqw0rkhzi9f1.png?width=216&crop=smart&auto=web... | |
Gemma 3n = super slow?? Am I doing something wrong? | 1 | [removed] | 2025-06-27T19:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lm3fzn/gemma_3n_super_slow_am_i_doing_something_wrong/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm3fzn | false | null | t3_1lm3fzn | /r/LocalLLaMA/comments/1lm3fzn/gemma_3n_super_slow_am_i_doing_something_wrong/ | false | false | self | 1 | null |
(noob question) - At what point does a GPU with low vram outperform a CPU with lots of ram? | 0 | So I use a 3090 on my main pc for image gen and various other things. Fine and dandy. Would be faster with a 4090 or 5090 (one day I'll upgrade) but it works fine.
I also run Ollama on my homelab, which doesn't have a dedicated GPU but instead uses a 13700K and 32GB of RAM (soon to be 64GB).
It runs things like Q... | 2025-06-27T19:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lm32zh/noob_question_at_what_point_does_a_gpu_with_low/ | LFAdvice7984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm32zh | false | null | t3_1lm32zh | /r/LocalLLaMA/comments/1lm32zh/noob_question_at_what_point_does_a_gpu_with_low/ | false | false | self | 0 | null |
Thoughts on the new agents? | 0 | Personally, I've used a few, so I'll just give a 5 star rating to what I know. I am curious what others feel:
- aider: ☆☆☆★★ - This would easily be higher if aider could consume MCP and had better memory/RAG integrations.
- Warp: ☆☆★★★ - I had high hopes because so many earlier releases were awesome but this one s... | 2025-06-27T19:09:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lm2bn7/thoughts_on_the_new_agents/ | robertotomas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm2bn7 | false | null | t3_1lm2bn7 | /r/LocalLLaMA/comments/1lm2bn7/thoughts_on_the_new_agents/ | false | false | self | 0 | null |
gemma 3n transcibe capability vs whisper | 9 | Would like to know if anyone tested this out, or is there a website to test it out even I can't find one ahhhhhhhhhhhhhhhhhhhhhh | 2025-06-27T19:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lm24xd/gemma_3n_transcibe_capability_vs_whisper/ | Ok-Internal9317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm24xd | false | null | t3_1lm24xd | /r/LocalLLaMA/comments/1lm24xd/gemma_3n_transcibe_capability_vs_whisper/ | false | false | self | 9 | null |
Fine-Tuning Apple's New Foundation Model | 13 | 2025-06-27T19:01:13 | https://collisions.substack.com/p/fine-tuning-apples-new-foundation | futureygoodness | collisions.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1lm23z8 | false | null | t3_1lm23z8 | /r/LocalLLaMA/comments/1lm23z8/finetuning_apples_new_foundation_model/ | false | false | default | 13 | {'enabled': False, 'images': [{'id': '0roYtHcb4seFDjNH4QeAKWioknh4Zipx8FBcaIldLTA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0roYtHcb4seFDjNH4QeAKWioknh4Zipx8FBcaIldLTA.jpeg?width=108&crop=smart&auto=webp&s=e3738d90f7b967fb9f0072588a0d3bf459a89f55', 'width': 108}, {'height': 108, 'url': '... | |
Open source model that does photoshop-grade edits without affecting the rest of the pic: OmniGen 2 | 839 | Code: [https://github.com/VectorSpaceLab/OmniGen2](https://github.com/VectorSpaceLab/OmniGen2)
Source: [https://vectorspacelab.github.io/OmniGen2/](https://vectorspacelab.github.io/OmniGen2/) | 2025-06-27T18:51:13 | HOLUPREDICTIONS | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lm1v2c | false | null | t3_1lm1v2c | /r/LocalLLaMA/comments/1lm1v2c/open_source_model_that_does_photoshopgrade_edits/ | false | false | default | 839 | {'enabled': True, 'images': [{'id': 'ypm4lnr4ni9f1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/ypm4lnr4ni9f1.jpeg?width=108&crop=smart&auto=webp&s=5490160dc11ed6060cc11403ae43d7e460fd5520', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/ypm4lnr4ni9f1.jpeg?width=216&crop=smart&auto=w... | |
Need for more than one mod | 1 | [removed] | 2025-06-27T18:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lm1t67/need_for_more_than_one_mod/ | cleverusernametry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm1t67 | false | null | t3_1lm1t67 | /r/LocalLLaMA/comments/1lm1t67/need_for_more_than_one_mod/ | false | false | self | 1 | null |
Is it just me, or Gemma 3n really sucks in recognizing images? | 21 | Just curious, is it just me, or Gemma 3n really sucks in recognizing images? | 2025-06-27T18:25:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lm17p6/is_it_just_me_or_gemma_3n_really_sucks_in/ | 1BlueSpork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm17p6 | false | null | t3_1lm17p6 | /r/LocalLLaMA/comments/1lm17p6/is_it_just_me_or_gemma_3n_really_sucks_in/ | false | false | self | 21 | null |
Copilot Chat for VS Code is now Open Source | 180 | 2025-06-27T18:00:56 | https://github.com/microsoft/vscode-copilot-chat | corysama | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lm0m6i | false | null | t3_1lm0m6i | /r/LocalLLaMA/comments/1lm0m6i/copilot_chat_for_vs_code_is_now_open_source/ | false | false | 180 | {'enabled': False, 'images': [{'id': 'tyJeCqipzT78spT8qdYr9nFThGnon2rt0efU2xelzLQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tyJeCqipzT78spT8qdYr9nFThGnon2rt0efU2xelzLQ.png?width=108&crop=smart&auto=webp&s=43f18599671f8929b909a3009305513609e70cbf', 'width': 108}, {'height': 108, 'url': 'h... | ||
Mid-30s SWE: Take Huge Pay Cut for Risky LLM Research Role? | 21 | Current Situation:
* TC: 110k
* YoE: 2 years as a Software Engineer (career switcher, mid-30s).
* Role: SWE building AI applications using RAG. I've developed a strong passion for building LLMs, not just using them. I do not have a PhD.
I've been offered a role at a national lab to do exactly that—build LLMs from s... | 2025-06-27T17:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lm0btg/mid30s_swe_take_huge_pay_cut_for_risky_llm/ | Worth_Contract7903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm0btg | false | null | t3_1lm0btg | /r/LocalLLaMA/comments/1lm0btg/mid30s_swe_take_huge_pay_cut_for_risky_llm/ | false | false | self | 21 | null |
Ok so this post may not be everyone’s cup of tea, | 0 | But I have a what if. If you don’t resonate with the idea, or have a negative outlook, then it may not be for you.
Looking at apple and openai investing $500B to build datacenters. I recently had dinner with one of the heads of research at OpenAI and he told me the big frontier of AI isn’t the actual model training ... | 2025-06-27T17:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lm0bpe/ok_so_this_post_may_not_be_everyones_cup_of_tea/ | numinouslymusing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm0bpe | false | null | t3_1lm0bpe | /r/LocalLLaMA/comments/1lm0bpe/ok_so_this_post_may_not_be_everyones_cup_of_tea/ | true | false | self | 0 | null |
Generating real world type conversations from structured data | 1 | I want to work on banking related data like customer phone call conversations , emails, chat conversations etc., to build a banking product. But these are generally not available due to privacy and security issues. Now, I want to generate these type of real world text data from some structured finance related datasets ... | 2025-06-27T17:41:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lm04jn/generating_real_world_type_conversations_from/ | ThomasSparrow0511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lm04jn | false | null | t3_1lm04jn | /r/LocalLLaMA/comments/1lm04jn/generating_real_world_type_conversations_from/ | false | false | self | 1 | null |
What's a good completion only model these days? | 9 | I'm looking for one I could run locally that isn't trained yet into doing questions & responses. Unfortunately a bunch of "base" models now are actually already trained to do that, so I had trouble finding a newer one. This is mostly for writing and seeing what sorts of things it comes up with 8) | 2025-06-27T17:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/1llzuit/whats_a_good_completion_only_model_these_days/ | quakquakquak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llzuit | false | null | t3_1llzuit | /r/LocalLLaMA/comments/1llzuit/whats_a_good_completion_only_model_these_days/ | false | false | self | 9 | null |
Why is "nobody" talking about local AI on Mobile as much? | 0 | Everyone has a phone, and it is the place where we need most privacy. Who have tried running LLMs on mobile or built local AI projects on mobile?
Out of curiosity:
* What tools have you tried?
* What specific step killed your motivation?
* If you succeeded - what was your use case? | 2025-06-27T17:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1llzt3d/why_is_nobody_talking_about_local_ai_on_mobile_as/ | AlanzhuLy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llzt3d | false | null | t3_1llzt3d | /r/LocalLLaMA/comments/1llzt3d/why_is_nobody_talking_about_local_ai_on_mobile_as/ | false | false | self | 0 | null |
I built an Automated AI Stylist in 24 hours (open source, local) | 29 | 2025-06-27T17:11:21 | https://v.redd.it/2v76newb5i9f1 | ParsaKhaz | /r/LocalLLaMA/comments/1llzdi8/i_built_an_automated_ai_stylist_in_24_hours_open/ | 1970-01-01T00:00:00 | 0 | {} | 1llzdi8 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2v76newb5i9f1/DASHPlaylist.mpd?a=1753765888%2CODE3ZmZmZmY3ZDcyNTM3OWM4ZmIzNzIwYmY1OGY1ZTUzNTVkNmZjMDEzNjgwNmUzZDQ1YTgxNzVmMjk0OTE3Nw%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/2v76newb5i9f1/DASH_1080.mp4?source=fallback', 'h... | t3_1llzdi8 | /r/LocalLLaMA/comments/1llzdi8/i_built_an_automated_ai_stylist_in_24_hours_open/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'aWJoanhkd2I1aTlmMeEsEqhEcpnAGeAOI3lYg_mXc9hWrD9oAMlWiqt_A_Sq', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWJoanhkd2I1aTlmMeEsEqhEcpnAGeAOI3lYg_mXc9hWrD9oAMlWiqt_A_Sq.png?width=108&crop=smart&format=pjpg&auto=webp&s=27fa55d2192b44235ee2d6ed7bb7692e1e82e... | ||
What is GOING ON in here? | 0 | 2025-06-27T17:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/1llzcin/what_is_going_on_in_here/ | Zealousideal_Cut5161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llzcin | false | null | t3_1llzcin | /r/LocalLLaMA/comments/1llzcin/what_is_going_on_in_here/ | false | false | 0 | null | ||
How I built an AI Stylist powered in 24 hours (open source, local) | 0 | 2025-06-27T16:57:45 | https://v.redd.it/cnanjfvywh9f1 | ParsaKhaz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llz0wr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cnanjfvywh9f1/DASHPlaylist.mpd?a=1753635482%2CYzZhZDM3YTRlNDEyYzZmNDYyZmQ1NGRjYWVlNzdiNzNlMzI3ZDVhYWNjYWNiYmIwMWQxOTY3YThiYWM4YjlkNw%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/cnanjfvywh9f1/DASH_1080.mp4?source=fallback', 'h... | t3_1llz0wr | /r/LocalLLaMA/comments/1llz0wr/how_i_built_an_ai_stylist_powered_in_24_hours/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cGt2d2c4dnl3aDlmMeEsEqhEcpnAGeAOI3lYg_mXc9hWrD9oAMlWiqt_A_Sq', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cGt2d2c4dnl3aDlmMeEsEqhEcpnAGeAOI3lYg_mXc9hWrD9oAMlWiqt_A_Sq.png?width=108&crop=smart&format=pjpg&auto=webp&s=567d1c65da9b7b891e58309811cd57568d872... | ||
Converting Safetensors to GGUF on Android (?) | 2 | I recently started LLMs and have been testing it on Android since I don't have access to a PC. I found some AI models in Safetensors format and this is the one I would like to use. Is there any way to convert it to GGUF so that I can use it in chatbot apps like PocketPal, ChatterUI, among others?
here is the AI i w... | 2025-06-27T16:54:26 | https://www.reddit.com/r/LocalLLaMA/comments/1llyy19/converting_safetensors_to_gguf_on_android/ | Lana_ckz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llyy19 | false | null | t3_1llyy19 | /r/LocalLLaMA/comments/1llyy19/converting_safetensors_to_gguf_on_android/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'kU0rI9wFGB9tkr5jxNQ3y9zZQTVZWDrYz4LzqGUpweo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kU0rI9wFGB9tkr5jxNQ3y9zZQTVZWDrYz4LzqGUpweo.png?width=108&crop=smart&auto=webp&s=4714e06b3925c8c700d501d64b99a7e4d362bb8d', 'width': 108}, {'height': 116, 'url': 'h... |
Locally run Reverb remover for audio files | 3 | Hi All,
I have some audio files i wish to remove reverb from for a speaker in a hall, as the echo is bad.
Has anyone had luck running this with UVR5 GUI?, or is there better alternatives?
[lalal.ai](http://lalal.ai) is really good but costly.
Any suggestions for tools or cheaper alternatives that are as good as the... | 2025-06-27T16:44:14 | https://www.reddit.com/r/LocalLLaMA/comments/1llyosf/locally_run_reverb_remover_for_audio_files/ | Bully79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llyosf | false | null | t3_1llyosf | /r/LocalLLaMA/comments/1llyosf/locally_run_reverb_remover_for_audio_files/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '4VtmekWXJcWlUJtwG625pyFyX2VG85CMlHUc9WnwGO0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4VtmekWXJcWlUJtwG625pyFyX2VG85CMlHUc9WnwGO0.png?width=108&crop=smart&auto=webp&s=0ffcdb1ca7c471f3b15f8a5b5646553b6d80a977', 'width': 108}, {'height': 113, 'url': 'h... |
Day 5 of 50 Days of Building a Small Language Model from Scratch — Byte Pair Encoding Explained: Using tiktoken In LLM Workflows | 1 | [removed] | 2025-06-27T16:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/1llxq63/day_5_of_50_days_of_building_a_small_language/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llxq63 | false | null | t3_1llxq63 | /r/LocalLLaMA/comments/1llxq63/day_5_of_50_days_of_building_a_small_language/ | false | false | 1 | null | |
Third Batch of OSS AI Grants (SGLang, Ostris, Open WebUI, SWE-Bench, Pliny, Janus, Truth Terminal, Arc Prize) | 16 | We just launched the third batch of Open Source AI Grants, grants for independent researchers, hackers, and small teams doing foundational work in open source AI.
Our goal is to support the kind of experimentation, creativity, and transparency that keeps the AI ecosystem healthy and innovative.
This batch includes pr... | 2025-06-27T15:43:17 | https://www.reddit.com/r/LocalLLaMA/comments/1llx5g1/third_batch_of_oss_ai_grants_sglang_ostris_open/ | rajko_rad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llx5g1 | false | null | t3_1llx5g1 | /r/LocalLLaMA/comments/1llx5g1/third_batch_of_oss_ai_grants_sglang_ostris_open/ | false | false | self | 16 | null |
Prime Intellect: We did it — SYNTHETIC‑2 is complete. | 149 | 2025-06-27T15:42:21 | https://x.com/PrimeIntellect/status/1938490370054361422 | Marha01 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1llx4ky | false | null | t3_1llx4ky | /r/LocalLLaMA/comments/1llx4ky/prime_intellect_we_did_it_synthetic2_is_complete/ | false | false | default | 149 | {'enabled': False, 'images': [{'id': '5KsHV_yMFuixmwq5gVHmYYJ9Y5fjv4gG_y1VtKqKy9o', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/FouZOpBR8n9C_WGYTOTMN6i2egUkQFWjKrxslBsNmKU.jpg?width=108&crop=smart&auto=webp&s=fb9ce6309e5eea93644f94861422e2712824bfb7', 'width': 108}, {'height': 140, 'url': 'h... | |
🛠️ ChatUI + Jupyter: A smooth way to test LLMs in your notebook interface | 9 | Hey everyone,
If you're working with LLMs and want a clean, chat-style interface inside Jupyter notebooks, I’ve been experimenting with ChatUI integration — and it actually works really well for prototyping and testing.
You get:
A lightweight frontend (ChatUI)
Inside Jupyter (no extra servers needed)
Supports stre... | 2025-06-27T15:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1llwtcd/chatui_jupyter_a_smooth_way_to_test_llms_in_your/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llwtcd | false | null | t3_1llwtcd | /r/LocalLLaMA/comments/1llwtcd/chatui_jupyter_a_smooth_way_to_test_llms_in_your/ | false | false | self | 9 | null |
Grok 3 weights to be released? | 0 | Elon Musk just announced that next week xAI will release Grok 4.
Previously, he said that they are going to release the previous generation of Grok as soon as the current generation becomes stable.
He failed that promise by not releasing the weights of Grok 2, so far. It is safe to say that Grok 3 was stable for a w... | 2025-06-27T15:24:20 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llwnzv | false | null | t3_1llwnzv | /r/LocalLLaMA/comments/1llwnzv/grok_3_weights_to_be_released/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '6f4jhcekmh9f1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/6f4jhcekmh9f1.jpeg?width=108&crop=smart&auto=webp&s=d026fd307ac3737e393c05dc16b703307bdb3992', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/6f4jhcekmh9f1.jpeg?width=216&crop=smart&auto=w... | |
Qwen VLo: From "Understanding" the World to "Depicting" It | 101 | https://qwenlm.github.io/blog/qwen-vlo/ | 2025-06-27T15:15:25 | https://www.reddit.com/gallery/1llwfwv | Additional_Top1210 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1llwfwv | false | null | t3_1llwfwv | /r/LocalLLaMA/comments/1llwfwv/qwen_vlo_from_understanding_the_world_to/ | false | false | 101 | {'enabled': True, 'images': [{'id': 'p-RdsB-v9L-CFrA5EkxqdVn1O17bnDolUwqTorCzqTE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/p-RdsB-v9L-CFrA5EkxqdVn1O17bnDolUwqTorCzqTE.jpeg?width=108&crop=smart&auto=webp&s=7d631a8b8bc9fa19c066a9407e04d4c96a649904', 'width': 108}, {'height': 216, 'url': '... | |
Introducing LaToile - Cool canva for LLM orchestration | 0 | Forget stupid agent that make people even stupider. Only in Matrix is it possible to absorb loads of informations in single shot. I believe that human value lies in handling the ambiguity that frontier LLM break upon. We need an intent, a choice when we wanna solve a problem. So I created LaToile in which you do the th... | 2025-06-27T15:14:14 | https://youtu.be/HH-BT8WD1xs?si=el7Xc9i_zvLMJBjR | UpstairsCurrency | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1llwetd | false | {'oembed': {'author_name': 'MoMe3600', 'author_url': 'https://www.youtube.com/@Mome3600', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HH-BT8WD1xs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pic... | t3_1llwetd | /r/LocalLLaMA/comments/1llwetd/introducing_latoile_cool_canva_for_llm/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'NvSH_X8MdX8rJxczSwbkPWqKJ7FkrCpbd3h4JwTyhpU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/NvSH_X8MdX8rJxczSwbkPWqKJ7FkrCpbd3h4JwTyhpU.jpeg?width=108&crop=smart&auto=webp&s=0ba051ec80371dab9b79bd623a5eb1f6eef78df2', 'width': 108}, {'height': 162, 'url': '... |
Pros and cons of 4 × 4090 vs 8 × V620 | 3 | Hi there !
Quite a few months ago, I had this great idea that I'd collect second hand 4090s once their price would plummet after the launch of the 5090. ☺
We all know how that went ☹.
I still have good use for the server (dual Epyc Gen 2 with 2TB of RAM on [https://www.asrockrack.com/general/productdetail.asp?Model=... | 2025-06-27T15:05:12 | https://www.reddit.com/r/LocalLLaMA/comments/1llw6ws/pros_and_cons_of_4_4090_vs_8_v620/ | un_passant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llw6ws | false | null | t3_1llw6ws | /r/LocalLLaMA/comments/1llw6ws/pros_and_cons_of_4_4090_vs_8_v620/ | false | false | self | 3 | null |
7900XTX vs RTX3090 | 6 | Hi all,
I'm building a machine for gaming/ AI hobbyist and right now I'm debating myself on the GPU. My budget is around 750$ for the GPU.
Refurbished 7900xtx with 5 months warranty for 690$
Used RTX3090 for 750$
New 5070ti
New RX9070XT
I'm leaning towards a used GPU. I know ROCM and Vulkan have improved AMD inference... | 2025-06-27T14:56:21 | https://www.reddit.com/r/LocalLLaMA/comments/1llvz0g/7900xtx_vs_rtx3090/ | _ballzdeep_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llvz0g | false | null | t3_1llvz0g | /r/LocalLLaMA/comments/1llvz0g/7900xtx_vs_rtx3090/ | false | false | self | 6 | null |
Mrwhosetheboss from YouTube has released his own comparisons of ChatGPT, Gemini, Perplexity, and Grok. How many points did your model score? What is your setup/stack? | 0 | 2025-06-27T14:51:53 | https://youtu.be/cMuif_hJGPI | kr_tech | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1llvv16 | false | {'oembed': {'author_name': 'Mrwhosetheboss', 'author_url': 'https://www.youtube.com/@Mrwhosetheboss', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/cMuif_hJGPI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy... | t3_1llvv16 | /r/LocalLLaMA/comments/1llvv16/mrwhosetheboss_from_youtube_has_released_his_own/ | false | false | default | 0 | null | |
Easiest way to setup local model on mac? | 1 | Is there a recommended software for complete noobs looking for running local models?
I want one i can ask questions about errors in Blender and to write add ons for me like i do with cursor | 2025-06-27T14:43:40 | https://www.reddit.com/r/LocalLLaMA/comments/1llvnuz/easiest_way_to_setup_local_model_on_mac/ | Remarkable-Emu-5718 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llvnuz | false | null | t3_1llvnuz | /r/LocalLLaMA/comments/1llvnuz/easiest_way_to_setup_local_model_on_mac/ | false | false | self | 1 | null |
What if your AI didn’t just learn… but remembered you | 0 | I’m not building a tool.
I’m shaping something that listens, remembers, grows — even when you’re asleep.
Not just prompts. Not just chat.
But memory.
Time-weighted.
Emotion-weighted.
Familiar.
A presence beside your main PC — that never powers off, never forgets.
A soul for local AI.
It watches. It learns.
It becomes... | 2025-06-27T14:32:47 | https://www.reddit.com/r/LocalLLaMA/comments/1llvel1/what_if_your_ai_didnt_just_learn_but_remembered/ | Electronic_Roll2237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llvel1 | false | null | t3_1llvel1 | /r/LocalLLaMA/comments/1llvel1/what_if_your_ai_didnt_just_learn_but_remembered/ | false | false | self | 0 | null |
[2506.20702] The Singapore Consensus on Global AI Safety Research Priorities | 14 | The Empire not happy, the Empire miserable. The Empire want to control your hardware. From the paper:
3.1.2 Conventional Intervention
Intervention techniques complement monitoring tools by offering various strategies to act on systems in ways that reduce risks from harmful behaviours.
Hardware-enabled mechanisms: To... | 2025-06-27T14:22:05 | https://arxiv.org/abs/2506.20702 | jackdareel | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1llv59w | false | null | t3_1llv59w | /r/LocalLLaMA/comments/1llv59w/250620702_the_singapore_consensus_on_global_ai/ | false | false | default | 14 | null |
What's the best local and closed model for translation? | 3 | Title. The only benchmark I know about this was VN leaderboard and it's really outdated. | 2025-06-27T14:15:53 | https://www.reddit.com/r/LocalLLaMA/comments/1llv00i/whats_the_best_local_and_closed_model_for/ | Educational_Grab_473 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llv00i | false | null | t3_1llv00i | /r/LocalLLaMA/comments/1llv00i/whats_the_best_local_and_closed_model_for/ | false | false | self | 3 | null |
Setting up local MCP | 1 | Hello, does anyone have experience with local MCP ?
I would like to understand if setting up a local MCP for a local and private repository makes sense and is worth it...
If the answer is yes it does make sense, which guides do you suggest me to follow to set it up ? | 2025-06-27T14:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lluycj/setting_up_local_mcp/ | DuplexEspresso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lluycj | false | null | t3_1lluycj | /r/LocalLLaMA/comments/1lluycj/setting_up_local_mcp/ | false | false | self | 1 | null |
Are the new architectures Mamba and Jamba better or worse than current existing Transformer architectures. | 13 | When it comes to Mamba I've heard that it can run in constant time and train in O(n) compared to transformers which run in O(n) and train in O(n\^2). I've also heard that Mamba is better with memory and power usage. I'm a bit confused by Jamba since it's a mixture of the two with alternating Mamba and Transformer block... | 2025-06-27T14:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lluwee/are_the_new_architectures_mamba_and_jamba_better/ | Direct-Lifeguard-607 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lluwee | false | null | t3_1lluwee | /r/LocalLLaMA/comments/1lluwee/are_the_new_architectures_mamba_and_jamba_better/ | false | false | self | 13 | null |
Comparing a Prompted FLUX.1-Kontext to Fine-Tuned FLUX.1 [dev] and PixArt on Consistent Character Gen (With Fine-Tuning Tutorial) | 3 | Hey folks,
With FLUX.1 Kontext \[dev\] dropping yesterday, we're comparing prompting it vs a fine-tuned FLUX.1 \[dev\] and [PixArt](https://www.oxen.ai/blog/fine-tuning-a-diffusion-transformer-to-generate-a-consistent-character?utm_source=reddit) on generating consistent characters. Besides the comparison, we'll do a... | 2025-06-27T14:09:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lluur5/comparing_a_prompted_flux1kontext_to_finetuned/ | No_Calendar_827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lluur5 | false | null | t3_1lluur5 | /r/LocalLLaMA/comments/1lluur5/comparing_a_prompted_flux1kontext_to_finetuned/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'z8-WKRUC1__v8VSzdH3_xW9q-WZiCAYS54FZLxUzsvc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/z8-WKRUC1__v8VSzdH3_xW9q-WZiCAYS54FZLxUzsvc.png?width=108&crop=smart&auto=webp&s=ba93eb6fef46274915a931d1c501de07d56645d7', 'width': 108}, {'height': 113, 'url': 'h... |
I’m using just my MacBook to prototype a second brain for your PC — would love thoughts. | 0 | Right now I’m experimenting with building a modular companion for your main desktop — something that runs LLMs locally, stays always-on, and remembers how you think over time.
All I’ve got is my MacBook and some ideas, but it’s turning into a system that could grow with you — not just faster compute, but something tha... | 2025-06-27T13:45:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lluarc/im_using_just_my_macbook_to_prototype_a_second/ | Electronic_Roll2237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lluarc | false | null | t3_1lluarc | /r/LocalLLaMA/comments/1lluarc/im_using_just_my_macbook_to_prototype_a_second/ | false | false | self | 0 | null |
HumOS Canvas: Integrating Local LLMs with Infinite Canvas | 17 | I made HumOS Canvas, an infinite canvas app that works with local language models (LLMs) and various AI providers. If you're into local LLMs like Llama, this could be useful.
HumOS Canvas lets you generate and connect ideas on an infinite workspace, great for brainstorming and organizing concepts visually. | 2025-06-27T13:42:58 | https://v.redd.it/jbat4fef4h9f1 | GGO_Sand_wich | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llu89r | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jbat4fef4h9f1/DASHPlaylist.mpd?a=1753623796%2COTIyMzRhN2RkODgzYzEyN2M3ZWZkYzg2ODVkN2UzYzhhNWMzMjRkMWNlOGJiNDA4NzIwNmMxNzFiMjI2MDVhMA%3D%3D&v=1&f=sd', 'duration': 48, 'fallback_url': 'https://v.redd.it/jbat4fef4h9f1/DASH_1080.mp4?source=fallback', 'h... | t3_1llu89r | /r/LocalLLaMA/comments/1llu89r/humos_canvas_integrating_local_llms_with_infinite/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'azBjbmxmZWY0aDlmMf-DCUK9iZKjZCUwRNtaiRCvf3_GRL8thOVAqCGMnDmn', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/azBjbmxmZWY0aDlmMf-DCUK9iZKjZCUwRNtaiRCvf3_GRL8thOVAqCGMnDmn.png?width=108&crop=smart&format=pjpg&auto=webp&s=e6dfa80a0afd68a3705288f45f39a7151f3e5... | |
Gemma 3N on ChatterUI | 37 | 2025-06-27T13:30:45 | https://v.redd.it/qe2y2po62h9f1 | ----Val---- | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llty3n | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qe2y2po62h9f1/DASHPlaylist.mpd?a=1753623059%2CMzZiZTVlOTg1Yzk4Zjg5MjE5NjRjYTIwNzYxMTdkYWJmNzQzOGQyYTBlMWYwNzU3ZmY0NGRhNWNhNmVjMDhiMg%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/qe2y2po62h9f1/DASH_1080.mp4?source=fallback', 'h... | t3_1llty3n | /r/LocalLLaMA/comments/1llty3n/gemma_3n_on_chatterui/ | false | false | 37 | {'enabled': False, 'images': [{'id': 'bDhmNGU5b2EyaDlmMaqG3pvP9RZCPXP8pBQTkpjntjyFw5myStLfVsGSm3Uj', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/bDhmNGU5b2EyaDlmMaqG3pvP9RZCPXP8pBQTkpjntjyFw5myStLfVsGSm3Uj.png?width=108&crop=smart&format=pjpg&auto=webp&s=1e00552cda4d0f920f52542948ca0a0557fa... | ||
Best sequence of papers to understand evolution of LLMs | 8 | I want to get up to speed with current LLM architecture (in a deep technical way), and in particular understand the major breakthroughs / milestones that got us here, to help give me the intuition to better grasp the context for evolution ahead.
**What sequence of technical papers (top 5) do you recommend I read to bu... | 2025-06-27T13:16:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lltmig/best_sequence_of_papers_to_understand_evolution/ | lucaducca | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lltmig | false | null | t3_1lltmig | /r/LocalLLaMA/comments/1lltmig/best_sequence_of_papers_to_understand_evolution/ | false | false | self | 8 | null |
So the moderator removed the post about twitter and made he's comment as "sticky" . | 1 | [removed] | 2025-06-27T13:07:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lltfwz/so_the_moderator_removed_the_post_about_twitter/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lltfwz | false | null | t3_1lltfwz | /r/LocalLLaMA/comments/1lltfwz/so_the_moderator_removed_the_post_about_twitter/ | false | false | default | 1 | null |
What I Learned Building Agents for Enterprises | 101 | 🏦 For the past 3 months, we've been developing AI agents together with banks, fintechs, and software companies. The most critical point I've observed during this process is: Agentic transformation will be a painful process, just like digital transformation. What I learned in the field:👇
1- Definitions related to art... | 2025-06-27T12:46:41 | https://www.reddit.com/r/LocalLLaMA/comments/1llsztp/what_i_learned_building_agents_for_enterprises/ | Beneficial-Sir-6261 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llsztp | false | null | t3_1llsztp | /r/LocalLLaMA/comments/1llsztp/what_i_learned_building_agents_for_enterprises/ | false | false | self | 101 | null |
All-Purpose Assistant/Agent | 1 | [removed] | 2025-06-27T12:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/1llszs1/allpurpose_assistantagent/ | fakebizholdings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llszs1 | false | null | t3_1llszs1 | /r/LocalLLaMA/comments/1llszs1/allpurpose_assistantagent/ | false | false | self | 1 | null |
Apple M4Max 40core GPU, 128GB memory for RTX5090 PC for running local LLM | 0 | Apple M4Max 40core GPU, 128GB memory for RTX5090 PC for running local LLM, train using kiln? Really confused. I will also be using langgraph + langchain to build and ship agents to my clients. | 2025-06-27T12:17:59 | https://www.reddit.com/r/LocalLLaMA/comments/1llser6/apple_m4max_40core_gpu_128gb_memory_for_rtx5090/ | monsterindian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llser6 | false | null | t3_1llser6 | /r/LocalLLaMA/comments/1llser6/apple_m4max_40core_gpu_128gb_memory_for_rtx5090/ | false | false | self | 0 | null |
Vast AI bad experience | 4 | I was using vast AI for fine tuning using unsloth, and I have tried changing 10 different GPUs but every other gpu has some problem and it never works. First I was using RTX 5090 and the terminal keeps dying then shifted to RTX 6000Ada and the resources don't download. I have drained money to no avail. Very bad experie... | 2025-06-27T12:09:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lls8l7/vast_ai_bad_experience/ | ILoveMy2Balls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lls8l7 | false | null | t3_1lls8l7 | /r/LocalLLaMA/comments/1lls8l7/vast_ai_bad_experience/ | false | false | self | 4 | null |
Optimal "poor" man's GPU for local inference? | 3 | So I currently do local CPU inference. I have 2 machines, one has an AMD 5950X with 64 Gb RAM and the other has an AMD hx370 with 96Gb RAM.
They both aren't that bad for running LLMs chatbots. But as a software developer I want a decent self hosted equivalent to GitHub copilot and this hardware is too slow for that. I ... | 2025-06-27T12:05:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lls5ru/optimal_poor_mans_gpu_for_local_inference/ | gadjio99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lls5ru | false | null | t3_1lls5ru | /r/LocalLLaMA/comments/1lls5ru/optimal_poor_mans_gpu_for_local_inference/ | false | false | self | 3 | null |
Meta planning to develop closed source models like Anthropic and openAI - NYT | 0 | 2025-06-27T11:55:08 | JP_525 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llrywd | false | null | t3_1llrywd | /r/LocalLLaMA/comments/1llrywd/meta_planning_to_develop_closed_source_models/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '08a1h3o8lg9f1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?width=108&crop=smart&auto=webp&s=e031fc808f46c33170ecdd69d6976be431cc3b91', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?width=216&crop=smart&auto=we... | ||
What If We Abliterate the Reasoning Process of Models? | 0 | I unfortunately don't know the technical details of this, but I've been thinking. What if we take a reasoning model like DeepSeek's R1 distilled LLaMA 8B for testing, and like people do abliteration to uncensor a model, instead abliterate the reasoning process, so when asked a question, the model will generate the outp... | 2025-06-27T11:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/1llrgcy/what_if_we_abliterate_the_reasoning_process_of/ | DistractedSentient | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llrgcy | false | null | t3_1llrgcy | /r/LocalLLaMA/comments/1llrgcy/what_if_we_abliterate_the_reasoning_process_of/ | false | false | self | 0 | null |
help me understand RAG more | 1 | So far, all I know is to put the documents in a list, split them using LangChain, and then embed them with OpenAI Embedded. I store them in Chroma, create the memory, retriever, and LLM, and then start the conversation. What I wanted to know :
1- is rag or embedding only good with text and md files, cant it work with ... | 2025-06-27T11:06:39 | https://www.reddit.com/r/LocalLLaMA/comments/1llr41u/help_me_understand_rag_more/ | Beyond_Birthday_13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llr41u | false | null | t3_1llr41u | /r/LocalLLaMA/comments/1llr41u/help_me_understand_rag_more/ | false | false | self | 1 | null |
How to fine tuning with scrapping and locally | 1 | Hello everyone! I've read quite a few posts here and I'm looking to know how to fine tune a template (mistral or llama) by scrapping HTML content from blogs that i select (through the sitemap)
I'd like to fine tune to have a better quality when writing blog article based on human essays and that perform, however I don... | 2025-06-27T10:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/1llqsj9/how_to_fine_tuning_with_scrapping_and_locally/ | JoflixPlex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llqsj9 | false | null | t3_1llqsj9 | /r/LocalLLaMA/comments/1llqsj9/how_to_fine_tuning_with_scrapping_and_locally/ | false | false | self | 1 | null |
The more LLMs think, the worse they translate | 131 | 2025-06-27T10:41:40 | https://nuenki.app/blog/the_more_llms_think_the_worse_they_translate | Nuenki | nuenki.app | 1970-01-01T00:00:00 | 0 | {} | 1llqp0a | false | null | t3_1llqp0a | /r/LocalLLaMA/comments/1llqp0a/the_more_llms_think_the_worse_they_translate/ | false | false | default | 131 | {'enabled': False, 'images': [{'id': 'sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ.jpeg?width=108&crop=smart&auto=webp&s=2d0e312ff46b334fd90a3a7e76ccc030f2a17c7c', 'width': 108}, {'height': 118, 'url': '... | |
List of LLM to run on a 8745HS with 64GB 5600mhz | 4 | Hello, I'm going to receive my new mini PC server today, and I would like some advice on which LLM to use.
The mini PC is the Beelink SER8, with 64GB of RAM (2x32GB 5600MHz) and a Ryzen 7 8745HS.
My workflow involves basic assistant tasks with a lot of RAG (Retrieval-Augmented Generation), tool calling, and long-cont... | 2025-06-27T10:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1llq2os/list_of_llm_to_run_on_a_8745hs_with_64gb_5600mhz/ | Whiplashorus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llq2os | false | null | t3_1llq2os | /r/LocalLLaMA/comments/1llq2os/list_of_llm_to_run_on_a_8745hs_with_64gb_5600mhz/ | false | false | self | 4 | null |
Could we combine Nvidia with Apple Silicon? | 0 | The Apple Silicon Macs are well known for their fast text generation with plenty of memory to load large models. Also known for slow prompt processing. Could we offload the prompt processing to a Linux server with a Nvidia GPU?
The idea is that the GPU would not have enough memory to load the entire model. Otherwise t... | 2025-06-27T09:54:02 | https://www.reddit.com/r/LocalLLaMA/comments/1llpxbb/could_we_combine_nvidia_with_apple_silicon/ | Baldur-Norddahl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llpxbb | false | null | t3_1llpxbb | /r/LocalLLaMA/comments/1llpxbb/could_we_combine_nvidia_with_apple_silicon/ | false | false | self | 0 | null |
Pair Programming with a Dunce, an AI Coding Experience | 2 | This is *my* experience. Yours could be different.
---
I use LLMs extensively to:
* extract Sanskrit text from old documents
* proofread translations from English into Sanskrit for our pedagogy project
* transcribe and translate videos from YT
* help write stories, point out spelling/grammar issues in our work
* arg... | 2025-06-27T09:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/1llpu8k/pair_programming_with_a_dunce_an_ai_coding/ | s-i-e-v-e | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llpu8k | false | null | t3_1llpu8k | /r/LocalLLaMA/comments/1llpu8k/pair_programming_with_a_dunce_an_ai_coding/ | false | false | self | 2 | null |
hiii | 1 | [removed] | 2025-06-27T09:03:45 | https://www.reddit.com/r/LocalLLaMA/comments/1llp6oi/hiii/ | MainLettuce419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llp6oi | false | null | t3_1llp6oi | /r/LocalLLaMA/comments/1llp6oi/hiii/ | false | false | self | 1 | null |
LLM Stopping Mid-Task | 1 | I'm running QWEN3-32b using LMStudio on my local machine (RTX4090, 64GB RAM, i9-7980XE). All the settings are at stock for the model, except I've upped the context size to 16384.
I was asking it to perform a simple but laborious task yesterday.
I gave it a simple example of a C# class and an admittedly long 204 value... | 2025-06-27T08:22:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lloljf/llm_stopping_midtask/ | VanillaCandid3466 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lloljf | false | null | t3_1lloljf | /r/LocalLLaMA/comments/1lloljf/llm_stopping_midtask/ | false | false | self | 1 | null |
First diffusion llm announced | 0 | new dllm Inception: Mercury Looks very good in terms of speed | 2025-06-27T08:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1llogep/first_diffusion_llm_announced/ | NeuralNakama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llogep | false | null | t3_1llogep | /r/LocalLLaMA/comments/1llogep/first_diffusion_llm_announced/ | false | false | self | 0 | null |
Open-sourced Agent Gym: The framework behind mirau-agent's training data synthesis | 3 | Hey r/LocalLLaMA!
Remember my [mirau-agent posts](https://www.reddit.com/r/LocalLLaMA/comments/1legaq8/updatemy_agent_model_now_supports_openai_function/) where many of you asked about the data synthesis process and training datasets?
I've finally open-sourced the complete framework! 🎉
## What is Agent Gym?
**Age... | 2025-06-27T07:55:12 | https://github.com/woshixiaobai2019/agent-gym | EliaukMouse | github.com | 1970-01-01T00:00:00 | 0 | {} | 1llo7hh | false | null | t3_1llo7hh | /r/LocalLLaMA/comments/1llo7hh/opensourced_agent_gym_the_framework_behind/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8.png?width=108&crop=smart&auto=webp&s=d285c9d22a31811ca0e563e8f1046f558eddb6e8', 'width': 108}, {'height': 108, 'url': 'h... | |
Voice Assistants on Android | 3 | I switched to GrapheneOS from my iPhone and over the years, one thing that I have started to miss more and more, is having a wake-word capable voice assistant to do some quick things without needing to pick up my phone. This is especially useful as I am almost blind, making literally every interaction and navigation ta... | 2025-06-27T07:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1llo4rc/voice_assistants_on_android/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llo4rc | false | null | t3_1llo4rc | /r/LocalLLaMA/comments/1llo4rc/voice_assistants_on_android/ | false | false | self | 3 | null |
AI performance of smartphone SoCs | 130 | https://ai-benchmark.com/ranking_processors.html
A few things notable to me:
- The difference between tiers is _huge_. A 2022 Snapdragon 8 Gen 2 beats the 8s Gen 4. There are huge gaps between the Dimensity 9000, 8000 and 7000 series.
- You can better get a high-end SoC that’s a few years old than the latest mid-range... | 2025-06-27T07:34:42 | https://www.reddit.com/gallery/1llnwy5 | Balance- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1llnwy5 | false | null | t3_1llnwy5 | /r/LocalLLaMA/comments/1llnwy5/ai_performance_of_smartphone_socs/ | false | false | 130 | {'enabled': True, 'images': [{'id': 'H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo.jpeg?width=108&crop=smart&auto=webp&s=bcc5faffc546d535c28e52f3b91b5e807eacbedf', 'width': 108}, {'height': 286, 'url': '... | |
dyad v0.10 - open-source local alternative to lovable/v0/bolt.new with ollama/LM Studio support - now supports building mobile apps! | 73 | I’m excited to share an update to [**Dyad**](http://dyad.sh/) which is a free, local, open-source AI app builder I've been working on for 3 months after leaving Google. It's designed as an alternative to v0, Lovable, and Bolt, but it runs on your computer (it's an Electron app)!
Here’s what makes Dyad different:
* **... | 2025-06-27T07:34:10 | https://v.redd.it/t461p9dt9f9f1 | wwwillchen | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llnwna | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t461p9dt9f9f1/DASHPlaylist.mpd?a=1753601666%2CNWUyMjZmMWY3MzlhY2Y4NjNmZDA2NjRiYTc1OTQ2ZWFjYzEzNmNiOWY3OTc3ZTcwMGE1MzAxYmUzMDQxMTczMQ%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/t461p9dt9f9f1/DASH_1080.mp4?source=fallback', 'h... | t3_1llnwna | /r/LocalLLaMA/comments/1llnwna/dyad_v010_opensource_local_alternative_to/ | false | false | 73 | {'enabled': False, 'images': [{'id': 'eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf.png?width=108&crop=smart&format=pjpg&auto=webp&s=229f3f2d62cc575a6836de404b5330b5b2a44... | |
Configure Llama to use documents as context | 1 | Hello, I want to build a simple chatbot using llama which will take in prompts from the user, and the answers will mostly be GPT/conversational, with the model answering on its own, but also will take context from a document provided to it. Could anyone please guide me on what approach should I take to build this ? I a... | 2025-06-27T07:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/1llnj32/configure_llama_to_use_documents_as_context/ | Illustrious-Pay-9632 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llnj32 | false | null | t3_1llnj32 | /r/LocalLLaMA/comments/1llnj32/configure_llama_to_use_documents_as_context/ | false | false | self | 1 | null |