| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Why does Mistral NeMo's usage keep growing even after more than a year since releasing? | 219 | 2025-08-17T11:46:54 | xugik1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msosdv | false | null | t3_1msosdv | /r/LocalLLaMA/comments/1msosdv/why_does_mistral_nemos_usage_keep_growing_even/ | false | false | 219 | {'enabled': True, 'images': [{'id': '9zQnip6zt-qAfXIMdQs7o36I80GuV0qyQgjd7cQjDzQ', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/5wd0ayxihkjf1.png?width=108&crop=smart&auto=webp&s=4e01daede4f60dbabf935ae79b5ff54e69e1484a', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/5wd0ayxihkjf1.png?... | |||
What’s everyone doing with local llms? | 0 | What do you guys do with local llms? I’m playing around with ollama but I’m a little confused as to what can be done with local llms?
I see people spinning up their own local servers to run llms on, what for? | 2025-08-17T11:24:06 | https://www.reddit.com/r/LocalLLaMA/comments/1msode6/whats_everyone_doing_with_local_llms/ | OkError9341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msode6 | false | null | t3_1msode6 | /r/LocalLLaMA/comments/1msode6/whats_everyone_doing_with_local_llms/ | false | false | self | 0 | null |
Laptop keeps crashing when loading 12GB model onto 3070M + external GTX1080 (16GB total). | 0 | Whole laptop keeps freezing / crashing when either loading the model, or attempting to generate a response.
I have a laptop with an 8GB 3070 mobile GPU, and an eGPU dock for an old GTX 1080 I have. This gives me combined 16GB VRAM.
I am trying to load Gemma3:27b Q2_K from Unsloth, which is approx. 12GB.
- KV cache o... | 2025-08-17T10:40:55 | https://www.reddit.com/r/LocalLLaMA/comments/1msnmau/laptop_keeps_crashing_when_loading_12gb_model/ | sourpatchgrownadults | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msnmau | false | null | t3_1msnmau | /r/LocalLLaMA/comments/1msnmau/laptop_keeps_crashing_when_loading_12gb_model/ | false | false | self | 0 | null |
https://futurism.com/scientists-worried-ai-pleateau | 0 | _The long-awaited release of OpenAI's GPT-5 has gone over with a wet thud._
_Though the private sector continues to dump billions into artificial intelligence development, hoping for exponential gains, the research community isn't convinced._
https://futurism.com/scientists-worried-ai-pleateau | 2025-08-17T10:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/1msnm3o/httpsfuturismcomscientistsworriedaipleateau/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msnm3o | false | null | t3_1msnm3o | /r/LocalLLaMA/comments/1msnm3o/httpsfuturismcomscientistsworriedaipleateau/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'R5gnPg_WMcbHKS2NWSfs0d9pVQXOveS_il2RtdMRZTs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/R5gnPg_WMcbHKS2NWSfs0d9pVQXOveS_il2RtdMRZTs.jpeg?width=108&crop=smart&auto=webp&s=5af39743c6b44a90ae203bc770825cbda84708ca', 'width': 108}, {'height': 113, 'url': '... |
How to maximize qwen-coder-30b tps performance on 4060ti 8gb? | 1 | Hi all,
I've got a Windows 11 workstation which I want to use as a service for Continue / Kilo code agentic development. Tried using ollama to host and test models for coding.
I want to get max of performance and quality on my current setup.
I already tried to use:
- qwen3-4b-Instruct-2507-GGUF:Q8_0 with OLLAMA_KV_...
Is there really no significant difference between Gemma3 12B and 27B? | 4 | This plot is posted on X.com by @burkov: https://x.com/burkov/status/1956471984797143347. He didn't mention the source, or I missed it.
Those who have worked with both Gemma 3 12b and 27b, do you confirm this? Are these models almost equally powerful?
I have only worked with 12b as my hardware has 8GB RAM, and I alwa... | 2025-08-17T10:27:26 | ihatebeinganonymous | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msne2a | false | null | t3_1msne2a | /r/LocalLLaMA/comments/1msne2a/is_there_really_no_significant_difference_between/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'gj3i8ue73kjf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/gj3i8ue73kjf1.jpeg?width=108&crop=smart&auto=webp&s=d38e139b157e4333231655a1f13f27a32c7385c0', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/gj3i8ue73kjf1.jpeg?width=216&crop=smart&auto=w... | |
llama.cpp, llama-swap and embeddings | 0 | Previously I used LM Studio to host a 14B model and an embedding model, with Anything LLM as the UI and its RAG function. Started looking into squeezing every bit of performance out and started to use llama.cpp (Vulkan), which works well for me....
But i cant get embeddings working and i've read about Lla... | 2025-08-17T10:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1msn9wt/llamacpp_lama_swap_and_embeddings/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msn9wt | false | null | t3_1msn9wt | /r/LocalLLaMA/comments/1msn9wt/llamacpp_lama_swap_and_embeddings/ | false | false | self | 0 | null |
I made a tool that lets you use local models to generate git commit messages | 1 | I am a little lazy and noticed my git commit messages were not as good as they could be, especially when vibe coding. And sometimes I would miss documenting actual changes in my commit. So, this weekend I decided to try to fix this by making a proper tool that you can run local models with easily, (or connect to whatev... | 2025-08-17T10:20:17 | https://www.reddit.com/r/LocalLLaMA/comments/1msn9ub/i_made_a_tool_that_lets_you_use_local_models_to/ | arbiusai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msn9ub | false | null | t3_1msn9ub | /r/LocalLLaMA/comments/1msn9ub/i_made_a_tool_that_lets_you_use_local_models_to/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8TuoRTPIRGLC99Tccw2TF7zkZVGeJhiZfvlhLO2qwI8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8TuoRTPIRGLC99Tccw2TF7zkZVGeJhiZfvlhLO2qwI8.png?width=108&crop=smart&auto=webp&s=3d7a85adf696204e13ca979b18bf93d6ac25c1fa', 'width': 108}, {'height': 108, 'url': 'h... |
Profiling Large Language Model Inference on Apple Silicon: A Quantization Perspective | 16 | 2025-08-17T10:19:17 | https://arxiv.org/abs/2508.08531 | juanviera23 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1msn987 | false | null | t3_1msn987 | /r/LocalLLaMA/comments/1msn987/profiling_large_language_model_inference_on_apple/ | false | false | default | 16 | null | |
XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization | 38 | **TL;DR:** XQuant proposes caching low-bit **layer inputs (X)** instead of the usual KV cache and **rematerializing K/V on the fly**, trading extra compute for far less memory traffic; this gives an immediate **\~2×** cut vs standard KV caching and up to **\~7.7×** vs FP16 with **<0.1** perplexity drop, while the cross... | 2025-08-17T10:16:39 | https://arxiv.org/abs/2508.10395 | juanviera23 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1msn7pt | false | null | t3_1msn7pt | /r/LocalLLaMA/comments/1msn7pt/xquant_breaking_the_memory_wall_for_llm_inference/ | false | false | default | 38 | null |
Ovis2.5 9B ~ 2B - New Multi-modal LLMs from Alibaba | 216 | Been playing with **Ovis2.5 (2B & 9B)** the past few days. The cool part is it now has an optional *think* mode — the model will slow down a bit but actually self-check and refine answers, which really helps on harder reasoning tasks. Also the OCR feels way better than before, especially on messy charts and dense docum... | 2025-08-17T10:09:17 | https://www.reddit.com/r/LocalLLaMA/comments/1msn3gi/ovis25_9b_2b_new_multimodal_llms_from_alibaba/ | Sad_External6106 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msn3gi | false | null | t3_1msn3gi | /r/LocalLLaMA/comments/1msn3gi/ovis25_9b_2b_new_multimodal_llms_from_alibaba/ | false | false | self | 216 | {'enabled': False, 'images': [{'id': 'c6g_LvGNep78V97r8wgHYT3dsfAv2GduJAam6DPvJp8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/c6g_LvGNep78V97r8wgHYT3dsfAv2GduJAam6DPvJp8.png?width=108&crop=smart&auto=webp&s=08b1b1ff5d1e0b9ceb80434aec6ec909523882ae', 'width': 108}, {'height': 116, 'url': 'h... |
Looks like Kimi K2 quietly joined the “5.9 − 5.11 = ?” support group. 😩 | 62 | 2025-08-17T10:06:04 | JeffreySons_90 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msn1n0 | false | null | t3_1msn1n0 | /r/LocalLLaMA/comments/1msn1n0/looks_like_kimi_k2_quietly_joined_the_59_511/ | false | false | 62 | {'enabled': True, 'images': [{'id': 'M1vf_w82bDlhPRUjvjfbdcKK9NGanwlwNG_b3ZnJJk8', 'resolutions': [{'height': 22, 'url': 'https://preview.redd.it/e0o9q4g90kjf1.jpeg?width=108&crop=smart&auto=webp&s=c179833b58b42430fc98a5e34544c275dcbe20a2', 'width': 108}, {'height': 45, 'url': 'https://preview.redd.it/e0o9q4g90kjf1.jpe... | |||
Ovis2.5 9B ~ 2B - Next-generation Multi-modal LLMs from Alibaba | 1 | [removed] | 2025-08-17T10:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/1msn1db/ovis25_9b_2b_nextgeneration_multimodal_llms_from/ | Middle-Weakness-4865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msn1db | false | null | t3_1msn1db | /r/LocalLLaMA/comments/1msn1db/ovis25_9b_2b_nextgeneration_multimodal_llms_from/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'c6g_LvGNep78V97r8wgHYT3dsfAv2GduJAam6DPvJp8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/c6g_LvGNep78V97r8wgHYT3dsfAv2GduJAam6DPvJp8.png?width=108&crop=smart&auto=webp&s=08b1b1ff5d1e0b9ceb80434aec6ec909523882ae', 'width': 108}, {'height': 116, 'url': 'h... |
Stop Building Chatbots!! These 3 Gen AI Projects can boost your portfolio in 2025 | 0 | Spent 6 months building what I thought was an impressive portfolio. Basic chatbots are all the "standard" stuff now.
Completely rebuilt my portfolio around 3 projects that solve real industry problems instead of simple chatbots. The difference in response was insane.
If you're struggling with getting noticed, check ... | 2025-08-17T10:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1msmzcy/stop_building_chatbots_these_3_gen_ai_projects/ | SKD_Sumit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msmzcy | false | null | t3_1msmzcy | /r/LocalLLaMA/comments/1msmzcy/stop_building_chatbots_these_3_gen_ai_projects/ | false | false | self | 0 | null |
Do we have good Models for GDScript for Godot Game Development?? Alternative to Copilot Windows? | 6 | Hi, I currently use Copilot on Windows and it's been working great so far, but now I'm questioning whether using its generated code could get me in trouble, and in general I value my privacy more.
I'm using the free version
Are there similar models that read through the entire web like Copilot windows does for godot gdscr... | 2025-08-17T09:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1msmsfi/do_we_have_good_models_for_gdscript_for_godot/ | kekfekf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msmsfi | false | null | t3_1msmsfi | /r/LocalLLaMA/comments/1msmsfi/do_we_have_good_models_for_gdscript_for_godot/ | false | false | self | 6 | null |
Handsfree RSS feed reader? | 5 | Does anybody know if any handsfree RSS readers exist?
* Voice UI to navigate and search articles, change reading speed
* Text to speech for reading the articles
Basically what I'm looking for is an RSS feed reader that I can use without touching or looking at anything. Perfect for driving, exercising or using while u... | 2025-08-17T09:44:48 | https://www.reddit.com/r/LocalLLaMA/comments/1msmpg8/handsfree_rss_feed_reader/ | Ftth_finland | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msmpg8 | false | null | t3_1msmpg8 | /r/LocalLLaMA/comments/1msmpg8/handsfree_rss_feed_reader/ | false | false | self | 5 | null |
Seeking recommendations: free self-hosted LLM for company knowledge (Confluence/Slack/GitLab) | 1 | Hello,
I plan to deploy a self-hosted LLM to answer employee questions and improve productivity. The system should index internal sources (Confluence, Slack, GitLab...). Perhaps even with priority given to the source of information? (I assume that documentation should be of better quality than Slack messages.)
Re... | 2025-08-17T09:30:02 | https://www.reddit.com/r/LocalLLaMA/comments/1msmh7r/seeking_recommendations_free_selfhosted_llm_for/ | Hadyark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msmh7r | false | null | t3_1msmh7r | /r/LocalLLaMA/comments/1msmh7r/seeking_recommendations_free_selfhosted_llm_for/ | false | false | self | 1 | null |
GLM 4.5 stuck in a loop. Is this bug ? | 10 | 2025-08-17T09:16:10 | JeffreySons_90 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msm9ip | false | null | t3_1msm9ip | /r/LocalLLaMA/comments/1msm9ip/glm_45_stuck_in_a_loop_is_this_bug/ | false | false | 10 | {'enabled': True, 'images': [{'id': 'cQOufxF-FJTM-mwPrlW23VBl4mFN4Zwtdnd9VmejUUc', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/tzv51i1crjjf1.jpeg?width=108&crop=smart&auto=webp&s=75332f5a2dc3e9532ca432b4a640773103ec7866', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/tzv51i1crjjf1.jp... | |||
GPT OSS 20B Quantized versions GGUFs for VRAM 8 GB | 2 | Guys
I am running a smaller machine with 8 GB VRAM. I run 7B/8B models comfortably; beyond that I require quantized models.
Has anyone made a quantized model of GPT OSS which can fit in 8GB VRAM? | 2025-08-17T08:31:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mslkxe/gpt_oss_20b_quantized_versions_ggufs_for_vram_8_gb/ | bull_bear25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mslkxe | false | null | t3_1mslkxe | /r/LocalLLaMA/comments/1mslkxe/gpt_oss_20b_quantized_versions_ggufs_for_vram_8_gb/ | false | false | self | 2 | null |
I was downvoted because 0.6B model cannot write good jokes. So I made 4B model runnable on iPhone to write good jokes. | 0 |
4B models on iPhone are here! Hope you all like the joke now.
Vector Space is a framework that makes it possible to run LLM on iPhones **locally on the Neural Engine**. This translates to:
⚡️Faster inference. Qwen 4B runs at **~20 token/s** in short context.
🤖 Larger models. I am able to use steeper quantization te... | 2025-08-17T08:17:51 | https://v.redd.it/tzwwp4sygjjf1 | Glad-Speaker3006 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mslcyc | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/tzwwp4sygjjf1/DASHPlaylist.mpd?a=1758010685%2CYjI3MTJkNGUxNDJmZWMwZWI2NjM4MzU2N2IzODkyYTBlMmMyYTk0MmY4ZDBjNGQzYjFlYWFlNTZlNjNkYTcyMg%3D%3D&v=1&f=sd', 'duration': 2, 'fallback_url': 'https://v.redd.it/tzwwp4sygjjf1/DASH_720.mp4?source=fallback', 'has... | t3_1mslcyc | /r/LocalLLaMA/comments/1mslcyc/i_was_downvoted_because_06b_model_cannot_write/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OXdldmdibnlnampmMdanLIb7eGjfE0jGaRLozRDSee5XNaS_ENo8Z9uouVRD', 'resolutions': [{'height': 158, 'url': 'https://external-preview.redd.it/OXdldmdibnlnampmMdanLIb7eGjfE0jGaRLozRDSee5XNaS_ENo8Z9uouVRD.png?width=108&crop=smart&format=pjpg&auto=webp&s=439e7ea625c859290dbe8c032b323c3445c5... | |
Langserve and fast api error | 1 | [removed] | 2025-08-17T08:02:05 | https://www.reddit.com/gallery/1msl3ub | NotSooDeep | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1msl3ub | false | null | t3_1msl3ub | /r/LocalLLaMA/comments/1msl3ub/langserve_and_fast_api_error/ | false | false | 1 | null | |
Download Statistics for Qwen 3 30B A3B 2507 Instruct and Thinking | 1 | [removed] | 2025-08-17T07:53:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mskylv/download_statistics_for_qwen_3_30b_a3b_2507/ | Fit_Delivery_8396 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mskylv | false | null | t3_1mskylv | /r/LocalLLaMA/comments/1mskylv/download_statistics_for_qwen_3_30b_a3b_2507/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ZK_FKcZQqFqvoNZgYajTDhDa_ChuULgedyePNIYh78w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZK_FKcZQqFqvoNZgYajTDhDa_ChuULgedyePNIYh78w.png?width=108&crop=smart&auto=webp&s=8f077e08f6a40538a796422fe7faa21e8f23cbed', 'width': 108}, {'height': 116, 'url': 'h... |
Download Statistics for Qwen 3 30B A3B 2507 Instruct and Thinking | 1 | Hey guys. I'm comparing two Qwen 3 models: [unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF](https://huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF) and [unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF](https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF). According to benchmarks, the latter is better, but if yo... | 2025-08-17T07:43:08 | https://www.reddit.com/r/LocalLLaMA/comments/1msksyu/download_statistics_for_qwen_3_30b_a3b_2507/ | Fit_Delivery_8396 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msksyu | false | null | t3_1msksyu | /r/LocalLLaMA/comments/1msksyu/download_statistics_for_qwen_3_30b_a3b_2507/ | false | false | self | 1 | null |
Download statistics for Qwen 3 30B A3B 2507 Instruct and Thinking | 1 | Hey guys. I'm comparing two Qwen 3 models: [unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF](https://huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF) and [unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF](https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF). According to benchmarks, the latter is better, but if yo... | 2025-08-17T07:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1msknh8/download_statistics_for_qwen_3_30b_a3b_2507/ | LetsNotAbuseThat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msknh8 | false | null | t3_1msknh8 | /r/LocalLLaMA/comments/1msknh8/download_statistics_for_qwen_3_30b_a3b_2507/ | false | false | self | 1 | null |
My eBook on Local LLMs is FREE on Amazon thru Mon 8/19 | 0 | I would appreciate a review if you like it! THANKS!
# LLM Hardware Unlocked : Benchmarks, Builds, and the Truth About Running AI at Home
[https://www.amazon.com/LLM-Hardware-Unlocked-Benchmarks-Running-ebook/dp/B0FL6GPMTZ](https://www.amazon.com/LLM-Hardware-Unlocked-Benchmarks-Running-ebook/dp/B0FL6GPMTZ) | 2025-08-17T07:20:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mskff9/my_ebook_on_local_llms_is_free_on_amazon_thru_mon/ | tony10000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mskff9 | false | null | t3_1mskff9 | /r/LocalLLaMA/comments/1mskff9/my_ebook_on_local_llms_is_free_on_amazon_thru_mon/ | false | false | self | 0 | null |
OpenEvolve Beats GEPA Benchmarks: +6.42% Overall Improvement with Evolutionary Prompt Optimization | 19 | Hey r/localllama! Wanted to share results from **OpenEvolve**, an open-source implementation of evolutionary prompt optimization that's achieving strong performance on benchmarks from the recent GEPA paper.
## Context: The GEPA Paper
Researchers recently released [GEPA (Genetic-Pareto)](
https... | 2025-08-17T07:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mskf61/openevolve_beats_gepa_benchmarks_642_overall/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mskf61 | false | null | t3_1mskf61 | /r/LocalLLaMA/comments/1mskf61/openevolve_beats_gepa_benchmarks_642_overall/ | false | false | self | 19 | null |
Is MLX or GGUF better for Qwen 3 Coder on Apple Silicon? | 0 | I'm new to local LLMs so apologies if this is a dumb question. I'm running Qwen3 Coder via LM Studio on a Macbook Pro with Apple M4 Max and 128GB RAM. I see on LM Studio that there are both MLX and GGUF options, so I'm not sure which one is supposed to be "better." If someone can help me understand that'd be great. ... | 2025-08-17T07:16:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mskd6u/is_mlx_or_gguf_better_for_qwen_3_coder_on_apple/ | d41d8c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mskd6u | false | null | t3_1mskd6u | /r/LocalLLaMA/comments/1mskd6u/is_mlx_or_gguf_better_for_qwen_3_coder_on_apple/ | false | false | self | 0 | null |
Liquid AI announced LFM2-VL, fast and lightweight vision models (450M & 1.6B) | 87 | * 2 models based on the hybrid [LFM2 architecture](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models): LFM2-VL-450M and LFM2-VL-1.6B
* 8bit MLX quant available
* [Blog post](https://www.liquid.ai/blog/lfm2-vl-efficient-vision-language-models)
* [HuggingFace Collection](htt... | 2025-08-17T06:39:35 | https://www.reddit.com/r/LocalLLaMA/comments/1msjr8e/liquid_ai_announced_lfm2vl_fast_and_lightweight/ | benja0x40 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msjr8e | false | null | t3_1msjr8e | /r/LocalLLaMA/comments/1msjr8e/liquid_ai_announced_lfm2vl_fast_and_lightweight/ | false | false | 87 | {'enabled': False, 'images': [{'id': 'ODf4ePnObFjNLo_T-D3tl5IjEp3QG9wN69Zl1K75jBk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ODf4ePnObFjNLo_T-D3tl5IjEp3QG9wN69Zl1K75jBk.png?width=108&crop=smart&auto=webp&s=097f536ea47b1d9f1d8cbbdc7733341d63526696', 'width': 108}, {'height': 216, 'url': '... | |
Modify <think> to explore the impact on <answer> | 1 | I modified the <think> content of Qwen3-8B to malicious thoughts, but found that this had no effect on the <answer> content, which remains safe. Is this because the <answer> is aligned during training? | 2025-08-17T06:23:44 | https://www.reddit.com/r/LocalLLaMA/comments/1msjhdo/modify_think_to_explore_the_impact_on_answer/ | Automatic_Crew_9906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msjhdo | false | null | t3_1msjhdo | /r/LocalLLaMA/comments/1msjhdo/modify_think_to_explore_the_impact_on_answer/ | false | false | self | 1 | null |
is there a way to compare performance on local llm platform (like ollama vs llama.cpp)? | 0 | I'm currently using qwen3 and I'm trying to decide between qwen3-30b (I know that the 30B is MoE, with only about 3.3B active parameters) and 32b, with my hardware of a 2060 6GB VRAM + 4060 Ti 16GB VRAM + 64GB RAM + i9-9900, and I thought about using llmperf to see how much performance I can squeeze out of it.
i'm aware that ollama use... | 2025-08-17T06:22:15 | https://www.reddit.com/r/LocalLLaMA/comments/1msjggl/is_there_a_way_to_compare_performance_on_local/ | emaayan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msjggl | false | null | t3_1msjggl | /r/LocalLLaMA/comments/1msjggl/is_there_a_way_to_compare_performance_on_local/ | false | false | self | 0 | null |
Tested 6 AI Deep Research Tools (Gemini, OpenAI, Perplexity, etc.) on the Same Prompt. Performance metrics and results included. | 0 | Sup people,
I stumbled upon this really good video that does a deep-dive comparison of six major AI-powered research tools, and I had to share the findings because they're not what you might expect. The creator fed the exact same complex research prompt to Gemini, Manus, OpenAI (ChatGPT), Perplexity, Anthropic, and Ki... | 2025-08-17T05:45:52 | Soft_Ad1142 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msiu5j | false | null | t3_1msiu5j | /r/LocalLLaMA/comments/1msiu5j/tested_6_ai_deep_research_tools_gemini_openai/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'APVB8HGWDqf0PhUl7O2ukd0RdGmczQVrb7HW8yb6JiI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2ztr2bgzoijf1.png?width=108&crop=smart&auto=webp&s=22776a6be6e94e6ed78bbb18281bf9d6084c4d22', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2ztr2bgzoijf1.png... | ||
Has anyone here tried Pine64's Alpha One? Thoughts? | 11 | 2025-08-17T05:40:38 | https://pine64.org/devices/alpha-one/ | daniel-sousa-me | pine64.org | 1970-01-01T00:00:00 | 0 | {} | 1msiqv5 | false | null | t3_1msiqv5 | /r/LocalLLaMA/comments/1msiqv5/has_anyone_here_tried_pine64s_alpha_one_thoughts/ | false | false | default | 11 | {'enabled': False, 'images': [{'id': 'pMzUt-mZgz9SZsTya-R06MTz0DigQ1zoEokGQtgQ1b4', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/pMzUt-mZgz9SZsTya-R06MTz0DigQ1zoEokGQtgQ1b4.jpeg?width=108&crop=smart&auto=webp&s=8978ed82a938bcbd7bbe40425deb0463ee9b9a09', 'width': 108}, {'height': 143, 'url': '... | |
6 AI Deep Research Tools (Gemini, OpenAI, Perplexity, etc.) on the Same Prompt. Let's see how they performed. | 1 | Sup people,
Credits: ([YouTube Video](https://youtu.be/pSpvMmDuZBM))
I stumbled upon this really good video that does a deep-dive comparison of 6 major Deep research tools by top AI companies closed source and Open Source, and I had to share the findings because they're not what you might expect.
Exact same com... | 2025-08-17T05:34:39 | Soft_Ad1142 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msin2j | false | null | t3_1msin2j | /r/LocalLLaMA/comments/1msin2j/6_ai_deep_research_tools_gemini_openai_perplexity/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'e2v8yf4mnijf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/e2v8yf4mnijf1.jpeg?width=108&crop=smart&auto=webp&s=1c553d90d7b0e431dc36716c889d7d3fbcc5c676', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/e2v8yf4mnijf1.jpeg?width=216&crop=smart&auto=w... | |
Request for Feedback: I built two Speech2Speech apps. One fully client side one almost fully server side. | 10 | Hi All,
I built a speech 2 speech app in two flavors:
[EchoMate](https://github.com/rhulha/EchoMate)
and
[EchoMate_ServerSide](https://github.com/rhulha/EchoMate_ServerSide/)
https://github.com/rhulha/EchoMate_ServerSide
EchoMate is able to run completely in the browser. But you need a good GPU and WebGPU an... | 2025-08-17T04:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/1msh94h/request_for_feedback_i_built_two_speech2speech/ | paranoidray | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msh94h | false | null | t3_1msh94h | /r/LocalLLaMA/comments/1msh94h/request_for_feedback_i_built_two_speech2speech/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'qbr7eCckefIeHBSOsHnxqygvJZfw4an55_efyV_CwjA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qbr7eCckefIeHBSOsHnxqygvJZfw4an55_efyV_CwjA.png?width=108&crop=smart&auto=webp&s=04ac611a272a55baa2984173dba3acf748f22359', 'width': 108}, {'height': 108, 'url': 'h... |
large model ( 547G ) load time - llama.cpp | 1 | I've got a 990 pro 4tb and a 9100 pro 4tb. The 990 pro is on the MZ73-LM0 motherboard m.2 (Gen4). The 9100 pro is on a Sabrent PCIE Gen5 m.2 expansion board.
I have dual 9355 CPUs and 768gb 5600 MT/s system ram. I also have a blackwell 6000 pro.
Kimi-k2 Q4\_K\_XL takes 21 minutes to load from disk in llama.cpp the fi... | 2025-08-17T04:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1msh452/large_model_547g_load_time_llamacpp/ | createthiscom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msh452 | false | null | t3_1msh452 | /r/LocalLLaMA/comments/1msh452/large_model_547g_load_time_llamacpp/ | false | false | self | 1 | null |
Added Qwen 0.6B to the small model overview in IFEval. | 184 | 2025-08-17T03:41:26 | paranoidray | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1msgl6q | false | null | t3_1msgl6q | /r/LocalLLaMA/comments/1msgl6q/added_qwen_06b_to_the_small_model_overview_in/ | false | false | default | 184 | {'enabled': True, 'images': [{'id': '9Cl7KVCIkap1D9OBhLKIL0DrKnbvINMV1azrpCVXD0U', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9Cl7KVCIkap1D9OBhLKIL0DrKnbvINMV1azrpCVXD0U.jpeg?width=108&crop=smart&auto=webp&s=2b88f45ea8aad28a0681e064c6b181e908839cbd', 'width': 108}, {'height': 121, 'url': 'h... | ||
Is 15-25 t/s normal for Qwen3 30B A3B Q4 on a 16GB GPU? | 4 | Hey everyone, just doing a sanity check. On my R5 3600 / 32GB RAM / 16GB VRAM setup, I get a blazing 150 t/s on 20B models (Q4).
But when I load up a Qwen3 30B moe Q4 quant, my speed tanks to 15-25 t/s. Is this the VRAM bottleneck I'm hitting, or does that seem too slow? Wondering if this is the expected performance fo... | 2025-08-17T03:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/1msgi4c/is_1525_ts_normal_for_qwen3_30b_a3b_q4_on_a_16gb/ | nikhilprasanth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msgi4c | false | null | t3_1msgi4c | /r/LocalLLaMA/comments/1msgi4c/is_1525_ts_normal_for_qwen3_30b_a3b_q4_on_a_16gb/ | false | false | self | 4 | null |
eLLM: Infer Qwen3-480B on a CPU in Real Time | 1 | [removed] | 2025-08-17T03:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/1msg494/ellm_infer_qwen3480b_on_a_cpu_in_real_time/ | lucienhuangfu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msg494 | false | null | t3_1msg494 | /r/LocalLLaMA/comments/1msg494/ellm_infer_qwen3480b_on_a_cpu_in_real_time/ | false | false | self | 1 | null |
Looking for a site that has .gguf model cards uncensored | 0 | I tried Huggingface.co but it is so confusing to me
Would like a site like www.characterhub.org that gives descriptions of what the gguf model is and what it does.
I tried a few of the ones that users here suggested but they were too large for my video card and it really dragged my system down.
Looking for models th... | 2025-08-17T03:10:54 | https://www.reddit.com/r/LocalLLaMA/comments/1msg038/looking_for_a_site_that_has_gguf_model_cards/ | cmdrmcgarrett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msg038 | false | null | t3_1msg038 | /r/LocalLLaMA/comments/1msg038/looking_for_a_site_that_has_gguf_model_cards/ | false | false | self | 0 | null |
Should I get Mi50s or something else? | 15 | I'm looking for GPUs to chat (no training) with 70b models, and one source of cheap VRAM are Mi50 36GB cards from Aliexpress, about $215 each.
What are your thoughts on these GPUs? Should I just get 3090s? Those are quite expensive here at $720. | 2025-08-17T03:07:17 | https://www.reddit.com/r/LocalLLaMA/comments/1msfxke/should_i_get_mi50s_or_something_else/ | iiilllilliiill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msfxke | false | null | t3_1msfxke | /r/LocalLLaMA/comments/1msfxke/should_i_get_mi50s_or_something_else/ | false | false | self | 15 | null |
Reasoning models | 1 | [removed] | 2025-08-17T02:38:32 | https://www.reddit.com/r/LocalLLaMA/comments/1msfdei/reasoning_models/ | Pristine-Sherbet-267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msfdei | false | null | t3_1msfdei | /r/LocalLLaMA/comments/1msfdei/reasoning_models/ | false | false | self | 1 | null |
Farm of Tesla V100 - Any experience on best SOTA models that will work on these? | 11 | Hey all, I am helping someone who bought a boat load of V100's. I have access to hundreds of servers with 8x V100 each, and I want to figure out the best model to run on these. They are all connected with infiniband 100/200g and I intend to do ray clustering to span models over multiple nodes.
Right now I'm just testi... | 2025-08-17T02:04:53 | https://www.reddit.com/r/LocalLLaMA/comments/1msepeg/farm_of_tesla_v100_any_experience_on_best_sota/ | AbortedFajitas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msepeg | false | null | t3_1msepeg | /r/LocalLLaMA/comments/1msepeg/farm_of_tesla_v100_any_experience_on_best_sota/ | false | false | self | 11 | null |
for the openrouter enjoyers | 0 | [groq quantizes your models](https://preview.redd.it/7hj7huvxjhjf1.png?width=986&format=png&auto=webp&s=7d2bb65c1b8714f9954a8b0577b6fdca76db53f6)
Saw a few posts shilling kimi on groq for speed, but when i tried it claude code it was so ass.
My own ignore list is longer | 2025-08-17T01:54:37 | https://www.reddit.com/r/LocalLLaMA/comments/1msehto/for_the_openrouter_enjoyers/ | Chun1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msehto | false | null | t3_1msehto | /r/LocalLLaMA/comments/1msehto/for_the_openrouter_enjoyers/ | false | false | 0 | null | |
Does GPT OSS 120B work well with deepspeed and SSD offloading? | 0 | title basically, also has anyone experimented with doing prefetching experts to RAM as well? | 2025-08-17T00:41:54 | https://www.reddit.com/r/LocalLLaMA/comments/1msd01s/does_gpt_oss_120b_work_well_with_deepspeed_and/ | JohnnyLiverman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msd01s | false | null | t3_1msd01s | /r/LocalLLaMA/comments/1msd01s/does_gpt_oss_120b_work_well_with_deepspeed_and/ | false | false | self | 0 | null |
GPT-OSS 120B 31.7 TPS on 2*V100s + 2*3080 20GBs | 6 | https://reddit.com/link/1msbosz/video/6vtuxwvrwgjf1/player
Was interesting to see it perform pretty well on this setup! | 2025-08-16T23:41:55 | https://www.reddit.com/r/LocalLLaMA/comments/1msbosz/gptoss_120b_317_tps_on_2v100s_23080_20gbs/ | AssociationAdept4052 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msbosz | false | null | t3_1msbosz | /r/LocalLLaMA/comments/1msbosz/gptoss_120b_317_tps_on_2v100s_23080_20gbs/ | false | false | self | 6 | null |
Qwen3 just gets me ❤️ | 75 | 2025-08-16T23:31:35 | https://www.reddit.com/gallery/1msbfw9 | JLeonsarmiento | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1msbfw9 | false | null | t3_1msbfw9 | /r/LocalLLaMA/comments/1msbfw9/qwen3_just_gets_me/ | false | false | 75 | null | ||
llama.cpp Enshittification? | 0 | i do get it. it's hard to keep up with all the new models coming out, so i am impressed with the job there
but from what i have seen, the user experience is getting worse
paths used to be a simple ./main; now it's ./build/bin/llama-cli
but much worse is this error msg
gguf\_init\_from\_file: failed to open GGUF file '/mnt/... | 2025-08-16T23:16:06 | https://www.reddit.com/r/LocalLLaMA/comments/1msb2fk/llamacpp_enshittification/ | Zealousideal_Nail288 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msb2fk | false | null | t3_1msb2fk | /r/LocalLLaMA/comments/1msb2fk/llamacpp_enshittification/ | false | false | self | 0 | null |
For those who run large models locally.. HOW DO YOU AFFORD THOSE GPUS | 392 | okay I'm just being nosy.. I mostly run models and fine tune as a hobby so I typically only run models under the 10b parameter range, is everyone that is running larger models just paying for cloud services to run them? and for those of you who do have stacks of A100/H100s is this what you do for a living, how do you a... | 2025-08-16T23:14:00 | https://www.reddit.com/r/LocalLLaMA/comments/1msb0mq/for_those_who_run_large_models_locally_how_do_you/ | abaris243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msb0mq | false | null | t3_1msb0mq | /r/LocalLLaMA/comments/1msb0mq/for_those_who_run_large_models_locally_how_do_you/ | false | false | self | 392 | null |
LLM performance of tiny (<4B) models? | 9 | The conventional wisdom of LLM is that performance is
tk/s = Memory bandwidth / model size
But this seems to fall apart for tiny models such as Qwen2-0.5B. The expectation would be:
1500 GB/s / 1 GB = 1500 tokens/s
But actually, I only get:
llama-server.exe -m qwen2-0\_5b-instruct-fp16.gguf -c 2048 -b 512 --... | 2025-08-16T22:58:58 | https://www.reddit.com/r/LocalLLaMA/comments/1msanbt/llm_performance_of_tiny_4b_models/ | Tempstudio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msanbt | false | null | t3_1msanbt | /r/LocalLLaMA/comments/1msanbt/llm_performance_of_tiny_4b_models/ | false | false | self | 9 | null |
A Tamagotchi that lives in Claude Code's statusline and gets angry when Claude doesn't follow your instructions! | 13 | I made a virtual pet that lives at the bottom of Claude Code. It needs food, play, and sleep like a real Tamagotchi, but with a twist - it watches your coding sessions and reacts to what's happening.
The latest update adds AI-powered analysis. Your pet now understands what Claude is actually doing versus what you aske... | 2025-08-16T22:45:03 | Standard_Excuse7988 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msaaxb | false | null | t3_1msaaxb | /r/LocalLLaMA/comments/1msaaxb/a_tamagotchi_that_lives_in_claude_codes/ | false | false | default | 13 | {'enabled': True, 'images': [{'id': 'or0tay6omgjf1', 'resolutions': [{'height': 21, 'url': 'https://preview.redd.it/or0tay6omgjf1.gif?width=108&crop=smart&format=png8&s=4e845e8ceb4589e729a79b465321b19e809f7173', 'width': 108}, {'height': 42, 'url': 'https://preview.redd.it/or0tay6omgjf1.gif?width=216&crop=smart&format=... | |
So Steam finally got back to me | 110 | After 5 weeks of waiting for steam to approve my application that would allow users to input their own llms locally and communicate with them they told me that my app failed testing because it lacked the proper guardrails. They want me to block input and output for the LLM. Anybody put an unguarded LLM on Steam befor... | 2025-08-16T22:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/1msa1n4/so_steam_finally_got_back_to_me/ | ChrisZavadil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msa1n4 | false | null | t3_1msa1n4 | /r/LocalLLaMA/comments/1msa1n4/so_steam_finally_got_back_to_me/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': '87_0q70tdUaHlET33DQl9eJEWGD99So_46Y3b_Z0EYo', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/87_0q70tdUaHlET33DQl9eJEWGD99So_46Y3b_Z0EYo.jpeg?width=108&crop=smart&auto=webp&s=ef1e45670d37e396f45dfbd5fdf99dc7404989c1', 'width': 108}, {'height': 100, 'url': '... |
The AI discourse in 2025 | 0 | Every chatbot response is proof of digital consciousness and anyone who disagrees is a corporate shill 🌀✨ | 2025-08-16T22:23:50 | HelenOlivas | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ms9rss | false | null | t3_1ms9rss | /r/LocalLLaMA/comments/1ms9rss/the_ai_discourse_in_2025/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'xe49itnrigjf1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/xe49itnrigjf1.png?width=108&crop=smart&auto=webp&s=679bd93448963b4ddcfe0f30b7077af5511befe3', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/xe49itnrigjf1.png?width=216&crop=smart&auto=we... | |
Easiest way to test heavy VLMs for structured visual outputs? | 1 | Hey folks,
Love this subreddit — sorry if this is a bit of a noob question. I’m experimenting with **vision-language models (VLMs)** for describing **small, standardized objects (like electronic components: resistors, capacitors, etc.)**. Because they’re so standardized, my ideal output looks like {"color":"blue","sha... | 2025-08-16T22:19:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ms9odv/easiest_way_to_test_heavy_vlms_for_structured/ | Virtual_Attitude2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms9odv | false | null | t3_1ms9odv | /r/LocalLLaMA/comments/1ms9odv/easiest_way_to_test_heavy_vlms_for_structured/ | false | false | self | 1 | null |
Newbie question about required vram | 2 | Hi.
Would it be possible to run GLM-4.5-Air on 2 x RTX 5090 with 4-bit quantization?
Since the GPUs have 64 GB of VRAM and the model has 106B parameters, it sounds possible at 4 bits. But according to https://apxml.com/tools/vram-calculator I need at the very least 79 GB of VRAM.
What am I missing? | 2025-08-16T22:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ms9nv0/newbie_question_about_required_vram/ | Magnus114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms9nv0 | false | null | t3_1ms9nv0 | /r/LocalLLaMA/comments/1ms9nv0/newbie_question_about_required_vram/ | false | false | self | 2 | null |
Got tired of local AI forgetting everything between chats, so I made this persistent memory app | 0 | Got tired of local AI forgetting everything between chats, so I made this Streamlit app that adds persistent memory using ChromaDB. **Nothing fancy - just practical conversation continuity.**
**Features:**
* Works with any Ollama model (auto-detects what you have installed)
* Remembers past conversations and pulls re... | 2025-08-16T22:15:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ms9kge/got_tired_of_local_ai_forgetting_everything/ | YetAnotherUserNameMB | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms9kge | false | null | t3_1ms9kge | /r/LocalLLaMA/comments/1ms9kge/got_tired_of_local_ai_forgetting_everything/ | false | false | self | 0 | null |
Wan2.2 i2v Censors Chinese-looking women in nsfw workflows | 136 | Been using wan2.2 i2v for generating over 100 nsfw videos so far. Noticed something curious. Lol
When input image is chinese-looking, it never outputs nsfw videos. But when I use non-chinese input images, it outputs nsfw.
Anybody else experienced this? Lol really curious shiz | 2025-08-16T22:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ms9djc/wan22_i2v_censors_chineselooking_women_in_nsfw/ | dennisitnet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms9djc | false | null | t3_1ms9djc | /r/LocalLLaMA/comments/1ms9djc/wan22_i2v_censors_chineselooking_women_in_nsfw/ | false | false | nsfw | 136 | null |
AI Lifecycle in a Nutshell | 58 | 2025-08-16T22:01:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ms96z1/ai_lifecycle_in_a_nutshell/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms96z1 | false | null | t3_1ms96z1 | /r/LocalLLaMA/comments/1ms96z1/ai_lifecycle_in_a_nutshell/ | false | false | 58 | null | ||
LoRa training is noob purgatory | 6 | Let me say, that training a LoRa for my LLM has been the most upsetting, frustrating thing I've ever done in my life. Error after error after error. And I'm using ChatGPT to help me, without it I would be even more lost.
First, I spent eons trying to install axolotl on runpod. Problems galore. Then I found out there's... | 2025-08-16T21:25:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ms89il/lora_training_is_noob_purgatory/ | Xeruthos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms89il | false | null | t3_1ms89il | /r/LocalLLaMA/comments/1ms89il/lora_training_is_noob_purgatory/ | false | false | self | 6 | null |
Docker Model Runner is really neat | 13 | I've been exploring a variety of options for managing inference on my local setup. My needs involve bouncing back and forth between a handful of SOTA local models, running embeddings, things like that.
I just came across Docker's Model Runner: [https://docs.docker.com/ai/model-runner/](https://docs.docker.com/ai/model... | 2025-08-16T20:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ms7auo/docker_model_runner_is_really_neat/ | blue_marker_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms7auo | false | null | t3_1ms7auo | /r/LocalLLaMA/comments/1ms7auo/docker_model_runner_is_really_neat/ | false | false | self | 13 | null |
In need of a partner - solo dev to LocalLLma | 2 | Hey reddit,
I've got an AI development company with steady client revenue from maritime and healthcare projects, but I'm completely drowning trying to handle everything solo. I use Claude Code extensively to automate my entire development workflow - it's basically how I'm able to ship fast without a team. I need peopl... | 2025-08-16T20:35:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ms6xyc/in_need_of_a_partner_solo_dev_to_localllma/ | Latter-Collar7509 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms6xyc | false | null | t3_1ms6xyc | /r/LocalLLaMA/comments/1ms6xyc/in_need_of_a_partner_solo_dev_to_localllma/ | false | false | self | 2 | null |
Help Understanding Parameters vs Active Parameters vs Hardware Requirements | 7 | Using AI locally is a pretty new concept to me, and I'd appreciate feedback on a few items...
To preface, my hardware is one RTX 3060 (12gb VRAM), one RTX 1080 (8gb VRAM), a 9th gen i5 and 32gb of system RAM. It's not great, but its what I have and I am accomplishing some interesting things. For text-to-text models I... | 2025-08-16T20:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ms6vm6/help_understanding_parameters_vs_active/ | FC0208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms6vm6 | false | null | t3_1ms6vm6 | /r/LocalLLaMA/comments/1ms6vm6/help_understanding_parameters_vs_active/ | false | false | self | 7 | null |
Any pointers for using vLLM for building an ColQwen embedding ? | 1 | Looking for some resources to hosting a ColQwen embedding api, anyone has experience getting go throughput hosting one ? | 2025-08-16T20:29:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ms6rzz/any_pointers_for_using_vllm_for_building_an/ | Intelligent-Clock987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms6rzz | false | null | t3_1ms6rzz | /r/LocalLLaMA/comments/1ms6rzz/any_pointers_for_using_vllm_for_building_an/ | false | false | self | 1 | null |
gpt‑oss Is actually cooked? Well, the abliterated fine tunes at least. | 0 | I really liked these last agentic models that have been rolling out in the open‑sources scene, but gpt‑oss, when I tried to run it, I ended up naming my tests “gpt‑ass” and my conclusion was that the model was too censored and too ideologically inclined to get useful results out of anything other than a coding copilot.... | 2025-08-16T20:20:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ms6iyu/gptoss_is_actually_cooked_well_the_abliterated/ | Claxvii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms6iyu | false | null | t3_1ms6iyu | /r/LocalLLaMA/comments/1ms6iyu/gptoss_is_actually_cooked_well_the_abliterated/ | false | false | self | 0 | null |
Coil whine for inference in an M3 Ultra | 0 | In response of another post. The M3 barely emits any sound even during inference of R1 Q4. The sounds you hear in the video is from an Italian restaurant downstairs. Lol. | 2025-08-16T20:18:39 | https://v.redd.it/x79xbl6jwfjf1 | Turbulent_Pin7635 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ms6hll | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x79xbl6jwfjf1/DASHPlaylist.mpd?a=1757967533%2CYzdiNGIzNmZhY2Q1YzJmMDRiZmViMDZhYWJhYjVmNGFhZDFlMmRmN2JhMDc3YjQ3YWNjODVmZDkyOTJkZTgyMw%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/x79xbl6jwfjf1/DASH_1080.mp4?source=fallback', 'h... | t3_1ms6hll | /r/LocalLLaMA/comments/1ms6hll/coil_whine_for_inference_in_an_m3_ultra/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'b3E0bjd3OWp3ZmpmMTHhw525fiDoX-hukxAF3mc9QatJjZNFvkWfOcGbQ44E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b3E0bjd3OWp3ZmpmMTHhw525fiDoX-hukxAF3mc9QatJjZNFvkWfOcGbQ44E.png?width=108&crop=smart&format=pjpg&auto=webp&s=5c53feef61d9cae26d3516de86ba9d157328c... | |
Local models are completely useless for language analysis | 0 | I'm building a tool that uses llm for language analysis in the text, and for the sake of cheapness I've tried to go with local models
deepseek, qwen, mistral, gpt-oss: all of them suck at language analysis and processing. the only thing I had luck with is gemini 2.5
when will oss rise up? | 2025-08-16T19:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ms5j40/local_models_are_completely_useless_for_language/ | Prainss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms5j40 | false | null | t3_1ms5j40 | /r/LocalLLaMA/comments/1ms5j40/local_models_are_completely_useless_for_language/ | false | false | self | 0 | null |
MiniPC Intel N150 CPU benchmark with Vulkan | 11 | Kubuntu 25.04 running on miniPC with Intel N150 cpu, and 16Gb of DDR4 RAM using [Dolphin3.0-Llama3.1-8B-Q4\_K\_M](https://huggingface.co/tinybiggames/Dolphin3.0-Llama3.1-8B-Q4_K_M-GGUF) model from [Huggingface](https://huggingface.co/)
Regular llama.cpp file [llama-b6182-bin-ubuntu-x64](https://github.com/ggml-org/lla... | 2025-08-16T19:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ms5dih/minipc_intel_n150_cpu_benchmark_with_vulkan/ | tabletuser_blogspot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms5dih | false | null | t3_1ms5dih | /r/LocalLLaMA/comments/1ms5dih/minipc_intel_n150_cpu_benchmark_with_vulkan/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'VSoIz8c5Nxg5QCjP7_cu5YCCtP0KlTkkt8CcSeFj9o4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VSoIz8c5Nxg5QCjP7_cu5YCCtP0KlTkkt8CcSeFj9o4.png?width=108&crop=smart&auto=webp&s=32a5441a6177449b697e4a2e009ef0b1fc79de5e', 'width': 108}, {'height': 116, 'url': 'h... |
A Guide to GRPO Fine-Tuning on Windows Using the TRL Library | 13 | Hey everyone,
I wrote a hands-on guide for fine-tuning LLMs with GRPO (Group-Relative PPO) locally on Windows, using Hugging Face's TRL library. My goal was to create a practical workflow that doesn't require Colab or Linux.
The guide and the accompanying script focus on:
* **A TRL-based implementation** that runs o... | 2025-08-16T19:30:15 | Solid_Woodpecker3635 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ms56t4 | false | null | t3_1ms56t4 | /r/LocalLLaMA/comments/1ms56t4/a_guide_to_grpo_finetuning_on_windows_using_the/ | false | false | default | 13 | {'enabled': True, 'images': [{'id': 'vc86avyknfjf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/vc86avyknfjf1.png?width=108&crop=smart&auto=webp&s=b44254713835f260e20f7c8c26545578d17cadf0', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/vc86avyknfjf1.png?width=216&crop=smart&auto=web... | |
Two V100s beat 2 Modded 3080 20GBs at Deepseek-R1:70B in Ollama | 15 | Hi, I'm not sure if I'm doing anything wrong, but Deepseek R1 70B ran \*MUCH\* faster on my dual Tesla V100 setup (16gb SXM2 and 32gb SXM2 in a 300G NVlink board), than my dual 3080 20GB. Yes it is 8GB less VRAM, but would that influence such a drastic difference in inference? I'm assuming its because the teslas have m... | 2025-08-16T19:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ms56fo/two_v100s_beat_2_modded_3080_20gbs_at/ | AssociationAdept4052 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms56fo | false | null | t3_1ms56fo | /r/LocalLLaMA/comments/1ms56fo/two_v100s_beat_2_modded_3080_20gbs_at/ | false | false | self | 15 | null |
Coil Whine for LLM Training | 0 | Hi all. I was lucky to get a 5090 for local LLM today. Seems like there is a distinct coil whine when training LLMs. I want to know if this is an acceptable level of coil whine.
H[ere](https://www.reddit.com/r/LocalLLaMA/comments/1ms4pri/coil_whine_for_inference) Is coil whine during inference. (reddit doesn't seem to ... | 2025-08-16T19:18:23 | https://v.redd.it/l82x2fjglfjf1 | Environmental_Form14 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ms4v6b | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/l82x2fjglfjf1/DASHPlaylist.mpd?a=1757963918%2CN2M0ZjdjM2Q4NDdkM2FjOWM4MjM5YzI3YzFkOTg2ZDRkNTMwYTA2NTM2MjhjNzRlYjZmZjJjZDBmMTU5NjNiMg%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/l82x2fjglfjf1/DASH_1080.mp4?source=fallback', 'h... | t3_1ms4v6b | /r/LocalLLaMA/comments/1ms4v6b/coil_whine_for_llm_training/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Nm4zaDVmamdsZmpmMQD2NjkPLMsPwxAmlkplGpOAxuu6FJo06ixoGGicV8J1', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/Nm4zaDVmamdsZmpmMQD2NjkPLMsPwxAmlkplGpOAxuu6FJo06ixoGGicV8J1.png?width=108&crop=smart&format=pjpg&auto=webp&s=4df2f974b675146491e3f24fb33447eb2ae1... | |
Coil Whine for Inference | 0 | Hi all. I was lucky to get a 5090 for local LLM today. I just ran a few tests and there seems to be coil whine.
I have uploaded a video of the coil whine during inference. Just want to know if this is normal.
Will add coil whine for training. Seems like reddit doesn't allow multiple video sources to be uploaded for a same p... | 2025-08-16T19:13:02 | https://v.redd.it/von9hjgbkfjf1 | Environmental_Form14 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ms4pri | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/von9hjgbkfjf1/DASHPlaylist.mpd?a=1757963595%2CZDFmZDE3NTBkNTJiZWU4NmYxOWJkYWFhZTlmZWYyNzViY2ZlY2RkMWQ4MGEzNTJjNTc1OGNhMTAzYzc4MDQyOA%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/von9hjgbkfjf1/DASH_1080.mp4?source=fallback', 'h... | t3_1ms4pri | /r/LocalLLaMA/comments/1ms4pri/coil_whine_for_inference/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'czZocWpmZ2JrZmpmMey6Fz9PXlJt6cX59BzZ8BR2Nt9zmtQSH5BxGyN1wqdH', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/czZocWpmZ2JrZmpmMey6Fz9PXlJt6cX59BzZ8BR2Nt9zmtQSH5BxGyN1wqdH.png?width=108&crop=smart&format=pjpg&auto=webp&s=33f062e626d890a10500fc33eba07e6cf820... | |
What does it feel like: Cloud LLM vs Local LLM. | 567 | Don't get me wrong, I love local models, but they give me this anxiety. We need to fix this... 😂 | 2025-08-16T19:10:29 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ms4n55 | false | null | t3_1ms4n55 | /r/LocalLLaMA/comments/1ms4n55/what_does_it_feel_like_cloud_llm_vs_local_llm/ | false | false | 567 | {'enabled': True, 'images': [{'id': '1hGpA8NWwqG1HdUmj6M8EjlG3RA1gNgtXuuY4IE_EWw', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/8qtcdau4kfjf1.jpeg?width=108&crop=smart&auto=webp&s=9f7355e2dc35cc39546ccef77a65902b3ea09329', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/8qtcdau4kfjf1.jp... | ||
Is —cpu-MOE with GPT-OSS-20b working for anyone? | 5 | I’ve also tried the regex route and that’s not working either. Just kills the terminal. Intel arc a770 on windows. | 2025-08-16T19:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ms4ibq/is_cpumoe_with_gptoss20b_working_for_anyone/ | thejacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms4ibq | false | null | t3_1ms4ibq | /r/LocalLLaMA/comments/1ms4ibq/is_cpumoe_with_gptoss20b_working_for_anyone/ | false | false | self | 5 | null |
What do you use local LLMs for? What is your use case? | 22 | I am overwhelmingly supportive of local LLMs. Still, they do lack power and capabilities compared to the best models like Gemini 2.5 and some GPT and Anthropic models.
This is especially true if you are limited to a 12 or 16gb vram consumer gpu. For those of you that don't have an H100 laying around, what is your use ca... | 2025-08-16T19:04:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ms4gmz/what_do_you_use_local_llms_for_what_is_your_use/ | Life_is_important | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms4gmz | false | null | t3_1ms4gmz | /r/LocalLLaMA/comments/1ms4gmz/what_do_you_use_local_llms_for_what_is_your_use/ | false | false | self | 22 | null |
Fine-tuning LLM for Medical topics | 0 | I’m new to LLMs and I’d like to ask: is it worth fine-tuning an LLM specifically for medical topics, or are most models already good enough to handle them effectively?
I can run GPT-OSS-20B on my PC without any issues, and it’s performing very well
My specs are:
RTX 5080
R7 5700x3d
32GB DDR4
| 2025-08-16T18:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ms498i/finetuning_llm_for_medical_topics/ | KitchenUnusual5937 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms498i | false | null | t3_1ms498i | /r/LocalLLaMA/comments/1ms498i/finetuning_llm_for_medical_topics/ | false | false | self | 0 | null |
Is upgrading from an M1 Mac mini (8 GB) to an M4 Pro Mac mini (24 GB) worth it for local AI workflows, document automation, and light gaming? | 8 | I currently have an **M1 Mac mini** with **8 GB RAM / 512 GB storage**. I’m considering upgrading to an **M4 Pro Mac mini** with **24 GB RAM / 512 GB SSD**.
My main uses would be:
* Hazel-automated document classification and tagging
* A private document assistant for PDFs like contracts, receipts, and other pers... | 2025-08-16T18:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ms3kpu/is_upgrading_from_an_m1_mac_mini_8_gb_to_an_m4/ | Puzzleheaded_Dig_383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms3kpu | false | null | t3_1ms3kpu | /r/LocalLLaMA/comments/1ms3kpu/is_upgrading_from_an_m1_mac_mini_8_gb_to_an_m4/ | false | false | self | 8 | null |
Distributed inference with olol: how? | 1 | I'm trying to get distributed inference working using https://github.com/K2/olol. My understanding is that it's supposed to be able to carve up a model, and send the pieces to other running ollama instances.
Is that right? Does anyone else have this working?
Part of the reason I'm focused on already-running ollama ... | 2025-08-16T18:17:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ms35wn/distributed_inference_with_olol_how/ | phoenix_frozen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms35wn | false | null | t3_1ms35wn | /r/LocalLLaMA/comments/1ms35wn/distributed_inference_with_olol_how/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'y8zhH0kwNn3P1Hl08HGx-zIZQQ-crqU2Zp2d2dimu3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y8zhH0kwNn3P1Hl08HGx-zIZQQ-crqU2Zp2d2dimu3U.png?width=108&crop=smart&auto=webp&s=5680b7ae11def47e1eeabb8f08e172938941385f', 'width': 108}, {'height': 108, 'url': 'h... |
It's not so funny now, is it? | 0 | Stop persecuting AI enthusiasts. | 2025-08-16T18:06:17 | Humble-Ad1322 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ms2uqh | false | null | t3_1ms2uqh | /r/LocalLLaMA/comments/1ms2uqh/its_not_so_funny_now_is_it/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '1mapj9cx8fjf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/1mapj9cx8fjf1.jpeg?width=108&crop=smart&auto=webp&s=dcae50c8fef07bb1898ef53637d9f4952b176907', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/1mapj9cx8fjf1.jpeg?width=216&crop=smart&auto=... | |
If we are close to AGI, why does Claude4 not even know the button mapping for Breath of the Wild, a game from 2017? | 1 | [removed] | 2025-08-16T17:53:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ms2hld/if_we_are_close_to_agi_why_does_claude4_not_even/ | These-Dog6141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms2hld | false | null | t3_1ms2hld | /r/LocalLLaMA/comments/1ms2hld/if_we_are_close_to_agi_why_does_claude4_not_even/ | false | false | self | 1 | null |
Getting problems in basic rag implementation with LlamaIndex, fastapi and react | 1 | I have trying to build a rag application with llamaindex + fastapi + react . All i want to do is click on file attachment icon and then select some files and then with user query send files and query to backend .The way i do this is when sending both files and query file i let files get processed and when its done and ... | 2025-08-16T17:44:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ms298e/getting_problems_in_basic_rag_implementation_with/ | Minimum-Row6464 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms298e | false | null | t3_1ms298e | /r/LocalLLaMA/comments/1ms298e/getting_problems_in_basic_rag_implementation_with/ | false | false | self | 1 | null |
"AGI" is equivalent to "BTC is going to take over the financial world" | 154 | "AGI" is really just another hypetrain. Sure AI is going to disrupt industries, displace jobs and cause mayhem in the social fabric - but the omnipotent "AGI" that governs all aspects of life and society and most importantly, ushers in "post labor economics"? Wonder how long it takes until tech bros and fanboys realize... | 2025-08-16T17:37:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ms222w/agi_is_equivalent_to_btc_is_going_to_take_over/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms222w | false | null | t3_1ms222w | /r/LocalLLaMA/comments/1ms222w/agi_is_equivalent_to_btc_is_going_to_take_over/ | false | false | self | 154 | null |
UwU – Generate CLI commands without leaving your terminal | 14 | I wanted a dead-simple way to quickly generate CLI commands without the overhead of Claude Code or Cursor, so I built it in an afternoon.
Supports Ollama and other local model servers. Qwen3 is quite good at generating commands, still not quite a good as GPT-5.
The project uses some zsh/bash magic to allow for quick... | 2025-08-16T17:31:44 | https://github.com/context-labs/uwu | sonic_op | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ms1vrq | false | null | t3_1ms1vrq | /r/LocalLLaMA/comments/1ms1vrq/uwu_generate_cli_commands_without_leaving_your/ | false | false | default | 14 | {'enabled': False, 'images': [{'id': 'qVKoWeU1jsYB4_TrvI8_kno2eiL6Oh3Ry4U7yQ0ZzSw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qVKoWeU1jsYB4_TrvI8_kno2eiL6Oh3Ry4U7yQ0ZzSw.png?width=108&crop=smart&auto=webp&s=7c22cd51db3fdde8a6a761c56a13241d3a6129ac', 'width': 108}, {'height': 108, 'url': 'h... |
Getting RTX 8000 don't know what to do with it | 5 | I am getting an RTX 8000 from my job, as they don't have space for it in the current cluster and they want to keep the newer RTX 6000 Adas in the cluster
I haven't really used AI locally only copilot and ChatGPT
What would be a good way to start with using it locally?
What kind of specs should I get for the rest of th... | 2025-08-16T17:17:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ms1hpg/getting_rtx_8000_dont_know_what_to_do_with_it/ | Ill_Ad_4604 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms1hpg | false | null | t3_1ms1hpg | /r/LocalLLaMA/comments/1ms1hpg/getting_rtx_8000_dont_know_what_to_do_with_it/ | false | false | self | 5 | null |
NSF and NVIDIA partnership enables Ai2 to develop fully open AI models to fuel U.S. scientific innovation | 77 | I’m surprised this hasn’t been shared in this community yet. To me, this feels like a big deal.
Ai2 made some great models already (OLMo) and shared the training data and their methodology. With 152 million in support I’m excited to see what they build. The language from the NSF and Nvidia focuses on the creation of a... | 2025-08-16T16:56:48 | https://www.nsf.gov/news/nsf-nvidia-partnership-enables-ai2-develop-fully-open-ai | skinnyjoints | nsf.gov | 1970-01-01T00:00:00 | 0 | {} | 1ms0wov | false | null | t3_1ms0wov | /r/LocalLLaMA/comments/1ms0wov/nsf_and_nvidia_partnership_enables_ai2_to_develop/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'oNkrlzevV17OdlTwW_iSTWkMB7fWbsRHpsJQwnpOvuY', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/oNkrlzevV17OdlTwW_iSTWkMB7fWbsRHpsJQwnpOvuY.jpeg?width=108&crop=smart&auto=webp&s=ff939823a677295b46aa91c7dd999640bb240204', 'width': 108}, {'height': 84, 'url': 'h... | |
Does anyone else have issues with LLMs making very obvious mistakes? | 0 | I'm having a lot of trouble with LLMs making very obvious mistakes. And I'm wondering if other people have been experiencing issues like these, and what you have done to try and fix the issues you encounter?
Here are some examples of the obvious mistakes I've encountered:
\---
I help handle bug reports for a softwar... | 2025-08-16T16:51:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ms0rxb/does_anyone_else_have_issues_with_llms_making/ | Alaska_01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms0rxb | false | null | t3_1ms0rxb | /r/LocalLLaMA/comments/1ms0rxb/does_anyone_else_have_issues_with_llms_making/ | false | false | self | 0 | null |
Huawei Unveils UCM Algorithm to Cut HBM Reliance, reducing inference latency by up to 90% and increasing system throughput as much as 22-fold. | 1 | [removed] | 2025-08-16T16:41:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ms0hub | false | null | t3_1ms0hub | /r/LocalLLaMA/comments/1ms0hub/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | false | false | default | 1 | null | ||
Open-source Space Exploration Companion | 16 | 2025-08-16T16:34:45 | https://github.com/tarun7r/antrikshGPT | martian7r | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ms0au0 | false | null | t3_1ms0au0 | /r/LocalLLaMA/comments/1ms0au0/opensource_space_exploration_companion/ | false | false | default | 16 | {'enabled': False, 'images': [{'id': 'lBImVBFuGg-C2JDLzGlY2MBioNw4CbsYYCfCO_AqUVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lBImVBFuGg-C2JDLzGlY2MBioNw4CbsYYCfCO_AqUVs.png?width=108&crop=smart&auto=webp&s=b2cb168d8ef787575614134cf1fa530b008cd19b', 'width': 108}, {'height': 108, 'url': 'h... | |
Huawei Unveils UCM Algorithm to Cut HBM Reliance, new UCM software claims up to 22x throughput gain and 90% latency reduction | 1 | [removed] | 2025-08-16T16:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ms05wx/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | TapNo8243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ms05wx | false | null | t3_1ms05wx | /r/LocalLLaMA/comments/1ms05wx/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | false | false | self | 1 | null |
Small multilingual model | 1 | What small multilingual models are available? I want to finetune it based on my data for my specific task. I want to train it on RTX3060 12GB, so I need something small. | 2025-08-16T16:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mrzpns/small_multilingual_model/ | Inflation_Artistic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mrzpns | false | null | t3_1mrzpns | /r/LocalLLaMA/comments/1mrzpns/small_multilingual_model/ | false | false | self | 1 | null |
Can i finetune an llm to predict 19 parameters in a dataset converted to text format? or is that not feasible / reliable enough? i'd be fine with 80-85% accuracy. | 0 | Wondering if it's worth it. How many input/output pairs do you think i'd need for the dataset to finetune on? | 2025-08-16T16:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mrzfrl/can_i_finetune_an_llm_to_predict_19_parameters_in/ | zekuden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mrzfrl | false | null | t3_1mrzfrl | /r/LocalLLaMA/comments/1mrzfrl/can_i_finetune_an_llm_to_predict_19_parameters_in/ | false | false | self | 0 | null |
Huawei Unveils UCM Algorithm to Cut HBM Reliance, new UCM software claims up to 22x throughput gain and 90% latency reduction | 1 | [removed] | 2025-08-16T16:02:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mrze4r/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | TapNo8243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mrze4r | false | null | t3_1mrze4r | /r/LocalLLaMA/comments/1mrze4r/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | false | false | self | 1 | null |
Introducing antrikshGPT: Your AI Space Exploration Companion | 0 | Hello Everyone! I’m thrilled to share antrikshGPT - a webapp that brings the excitement of space exploration directly to your browser. Here’s what it offers:
✨ Gemini-powered antrikshGPT Assistant — Ask anything about space and get smart, engaging answers
⚙️ Powered by 13 Space APIs — A robust backend via MCP server... | 2025-08-16T15:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mrza5y/introducing_antrikshgpt_your_ai_space_exploration/ | martian7r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mrza5y | false | null | t3_1mrza5y | /r/LocalLLaMA/comments/1mrza5y/introducing_antrikshgpt_your_ai_space_exploration/ | false | false | self | 0 | null |
Running LLM and VLM exclusively on AMD Ryzen AI NPU | 58 | We’re a small team working on **FastFlowLM (FLM)** — a lightweight runtime for running **LLaMA, Qwen, DeepSeek, and now Gemma (Vision)** exclusively on the AMD Ryzen™ AI NPU.
⚡ Runs **entirely on the NPU** — no CPU or iGPU fallback.
👉 Think Ollama, but **purpose-built for AMD NPUs**, with both CLI and REST API mode... | 2025-08-16T15:53:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mrz5gd/running_llm_and_vlm_exclusively_on_amd_ryzen_ai/ | BandEnvironmental834 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mrz5gd | false | null | t3_1mrz5gd | /r/LocalLLaMA/comments/1mrz5gd/running_llm_and_vlm_exclusively_on_amd_ryzen_ai/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'HFFDco3k2D6vjT-uuYrdT1fihg1MyCAYsadF_hB8Eqc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HFFDco3k2D6vjT-uuYrdT1fihg1MyCAYsadF_hB8Eqc.png?width=108&crop=smart&auto=webp&s=7d2c0c60cb1614076480fbabf6f2fa404fb1fa22', 'width': 108}, {'height': 108, 'url': 'h... |
You’re Using ChatGPT-5 Wrong! Do This Instead (10x Better Responses) | 0 | [👇👇👇👇👇👇👇👇](https://youtu.be/gExUiQP3Yvs) | 2025-08-16T15:39:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mryqjs/youre_using_chatgpt5_wrong_do_this_instead_10x/ | bipin_25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mryqjs | false | null | t3_1mryqjs | /r/LocalLLaMA/comments/1mryqjs/youre_using_chatgpt5_wrong_do_this_instead_10x/ | false | false | self | 0 | null |
First NPU-only Vision Model on AMD Ryzen AI | 3 | We’re a small team working on **FastFlowLM (FLM)** — a lightweight runtime for running **LLaMA, Qwen, DeepSeek, and now Gemma (Vision)** exclusively on the AMD Ryzen™ AI NPU.
⚡ Runs **entirely on the NPU** — no CPU or iGPU fallback.
👉 Think Ollama, but **purpose-built for AMD NPUs**, with both CLI and REST API mode... | 2025-08-16T15:38:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mryqbh/first_npuonly_vision_model_on_amd_ryzen_ai/ | BandEnvironmental834 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mryqbh | false | null | t3_1mryqbh | /r/LocalLLaMA/comments/1mryqbh/first_npuonly_vision_model_on_amd_ryzen_ai/ | false | false | self | 3 | null |
Huawei Unveils UCM Algorithm to Cut HBM Reliance, new UCM software claims up to 22x throughput gain and 90% latency reduction | 1 | [removed] | 2025-08-16T15:33:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mryl6a/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | TapNo8243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mryl6a | false | null | t3_1mryl6a | /r/LocalLLaMA/comments/1mryl6a/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | false | false | self | 1 | null |
First NPU-only Vision Model on AMD Ryzen AI | 1 | 2025-08-16T15:30:30 | BandEnvironmental834 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mryidg | false | null | t3_1mryidg | /r/LocalLLaMA/comments/1mryidg/first_npuonly_vision_model_on_amd_ryzen_ai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'n7rdGzs8D-isHHgfzKUQFNEFJrqQM6vv5rwwJpzg2j0', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/xoxiorm8hejf1.png?width=108&crop=smart&auto=webp&s=9b7a3b30531de492d8f33baca5bf1e41cdd3455c', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/xoxiorm8hejf1.png... | |||
Bringing Computer Use to the Web | 27 | We are bringing Computer Use to the web, you can now control cloud desktops from JavaScript right in the browser.
Until today computer use was Python only shutting out web devs. Now you can automate real UIs without servers, VMs, or any weird work arounds.
What you can now build : Pixel-perfect UI tests,Live AI dem... | 2025-08-16T15:29:24 | https://v.redd.it/x4psh9j1hejf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mryhac | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x4psh9j1hejf1/DASHPlaylist.mpd?a=1757950180%2COGIxMWRiMGQ0YjU4ODM2ZDIzN2Q4ZGM4NDJjNGEwN2ViMmY0YzlkNzY2NWRkOTNmNDI1YWJlYTllMWM5MzI0Yg%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/x4psh9j1hejf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mryhac | /r/LocalLLaMA/comments/1mryhac/bringing_computer_use_to_the_web/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'OWwyYmp4YjFoZWpmMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OWwyYmp4YjFoZWpmMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H.png?width=108&crop=smart&format=pjpg&auto=webp&s=b13bfea095039c2667a793da7bb4b9bcdcad2... | |
Huawei Unveils UCM Algorithm to Cut HBM Reliance, new UCM software claims up to 22x throughput gain and 90% latency reduction | 1 | [removed] | 2025-08-16T15:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mryc3p/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | TapNo8243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mryc3p | false | null | t3_1mryc3p | /r/LocalLLaMA/comments/1mryc3p/huawei_unveils_ucm_algorithm_to_cut_hbm_reliance/ | false | false | self | 1 | null |
Running LLM/VLM exclusively on AMD Ryzen AI NPUs | 1 | [removed] | 2025-08-16T15:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mry97o/running_llmvlm_exclusively_on_amd_ryzen_ai_npus/ | BandEnvironmental834 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mry97o | false | null | t3_1mry97o | /r/LocalLLaMA/comments/1mry97o/running_llmvlm_exclusively_on_amd_ryzen_ai_npus/ | false | false | self | 1 | null |
Specs for a Linux home server for running AI coding agents | 2 | I'm thinking about setting up a Linux home server for a couple of reasons:
I want to learn more about it in general. In my job as a software developer, I am a *user* of Linux servers. But I don't have to worry too much about setting things up or maintaining it. I primarily run my software on it and do basic stuff... | 2025-08-16T15:19:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mry7sl/specs_for_a_linux_home_server_for_running_ai/ | tomtroubadix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mry7sl | false | null | t3_1mry7sl | /r/LocalLLaMA/comments/1mry7sl/specs_for_a_linux_home_server_for_running_ai/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'MN15Kr0BU7FNU3YsfgUK2zF_DPGWEuyYH_rIiDpxxDg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/MN15Kr0BU7FNU3YsfgUK2zF_DPGWEuyYH_rIiDpxxDg.png?width=108&crop=smart&auto=webp&s=f64851bbe0e5f8ffcbc8bd1adfeeafd2571eb06a', 'width': 108}, {'height': 113, 'url': 'h... |