| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AMD AI Max+ 395 128GB with cline | 4 | I'm asking for suggestions on running an LLM for Cline agent coding, since there's not much info online and GPT and Claude don't seem like reliable options to ask. I've looked at almost everything I can find and still can't reach a definite answer.
I'm now in one of the framework desktop late batches and I wanna try out local... | 2025-08-21T06:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mw3k8c/amd_ai_max_395_128gb_with_cline/ | Assassinyin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw3k8c | false | null | t3_1mw3k8c | /r/LocalLLaMA/comments/1mw3k8c/amd_ai_max_395_128gb_with_cline/ | false | false | self | 4 | null |
Deepseek V3.1 is not so bad after all.. | 177 | It seems like it was just built for a different purpose: speed and agency. It's pretty good at what it's meant for | 2025-08-21T06:40:50 | https://www.reddit.com/gallery/1mw3j7l | Trevor050 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mw3j7l | false | null | t3_1mw3j7l | /r/LocalLLaMA/comments/1mw3j7l/deepseek_v31_is_not_so_bad_after_all/ | false | false | 177 | null | |
Please help me with the selection of hardware for PC upgrades | 0 | I used to know computer hardware fairly well, but I haven't followed the market in quite a while. Now I need to upgrade some parts (CPU, motherboard, RAM). I have almost no idea which CPU and motherboard to get. For RAM I'll take two 16GB DDR5 sticks. If anyone kn... | 2025-08-21T06:40:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mw3j1n/please_help_me_with_the_selection_of_hardware_for/ | Simple-Load5461 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw3j1n | false | null | t3_1mw3j1n | /r/LocalLLaMA/comments/1mw3j1n/please_help_me_with_the_selection_of_hardware_for/ | false | false | self | 0 | null |
Ollama prompt_eval_count < num_ctx | 1 | [removed] | 2025-08-21T06:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mw3dkm/ollama_prompt_eval_count_num_ctx/ | NihilisticAssHat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw3dkm | false | null | t3_1mw3dkm | /r/LocalLLaMA/comments/1mw3dkm/ollama_prompt_eval_count_num_ctx/ | false | false | self | 1 | null |
deepseek-ai/DeepSeek-V3.1 · Hugging Face | 1 | 2025-08-21T06:31:06 | https://huggingface.co/deepseek-ai/DeepSeek-V3.1 | Lower-Jello-6906 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mw3diy | false | null | t3_1mw3diy | /r/LocalLLaMA/comments/1mw3diy/deepseekaideepseekv31_hugging_face/ | false | false | default | 1 | null | |
deepseek-ai/DeepSeek-V3.1 · Hugging Face | 542 | 2025-08-21T06:28:56 | https://huggingface.co/deepseek-ai/DeepSeek-V3.1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mw3c7s | false | null | t3_1mw3c7s | /r/LocalLLaMA/comments/1mw3c7s/deepseekaideepseekv31_hugging_face/ | false | false | default | 542 | {'enabled': False, 'images': [{'id': 'RJXEgvNDm4zhSkGlks1Mt4ppnLOAENNDWYNaVwpLE9k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RJXEgvNDm4zhSkGlks1Mt4ppnLOAENNDWYNaVwpLE9k.png?width=108&crop=smart&auto=webp&s=662ec345b68fbe0b4fd1513b21f7ebad86e5c637', 'width': 108}, {'height': 116, 'url': 'h... | |
Single finetune vs multiple LoRA | 6 | hello,
I'm trying to finetune gemma 270M on a medical dataset; and I was wondering if it would have been better to make multiple LoRA (example: field related) and reroute the query to the more specific one or if a single large finetune would have been better
Does anyone have any experience? | 2025-08-21T06:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mw3ayl/single_finetune_vs_multiple_lora/ | Ereptile-Disruption | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw3ayl | false | null | t3_1mw3ayl | /r/LocalLLaMA/comments/1mw3ayl/single_finetune_vs_multiple_lora/ | false | false | self | 6 | null |
What would be a helpful dataset? | 1 | Hey guys, looking to put together a high quality dataset for fine tuning. Any thoughts or opinions on what niches would be helpful for the community? | 2025-08-21T06:14:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mw33k8/what_would_be_a_helpful_dataset/ | No-Yak4416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw33k8 | false | null | t3_1mw33k8 | /r/LocalLLaMA/comments/1mw33k8/what_would_be_a_helpful_dataset/ | false | false | self | 1 | null |
monkeSearch's first prototype is now public, And it works! Offline natural language query for local files using a VERY small LLM (Qwen3-0.6b) and it works amazingly right away. With temporal awareness. | 46 | Hi guys, this is a follow up post of my old post, which was about building a local natural language file search engine using qwen0.6b and LangExtract, and today I am very excited to release a very bare bones and working prototype for this!
[https://github.com/monkesearch/monkeSearch](https://github.com/monkesearch/mo... | 2025-08-21T06:03:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mw2xci/monkesearchs_first_prototype_is_now_public_and_it/ | fuckAIbruhIhateCorps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw2xci | false | null | t3_1mw2xci | /r/LocalLLaMA/comments/1mw2xci/monkesearchs_first_prototype_is_now_public_and_it/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'G_tSwiUHfROZ63ysP4H02_I4hCf3xbHi0E8MY4v5IeM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G_tSwiUHfROZ63ysP4H02_I4hCf3xbHi0E8MY4v5IeM.png?width=108&crop=smart&auto=webp&s=c81efe15ec35fb8685daf67dd698166bc839e066', 'width': 108}, {'height': 108, 'url': 'h... |
Frontier AI labs’ publicized 100k-H100 training runs under-deliver because software and systems don’t scale efficiently, wasting massive GPU fleets | 379 | 2025-08-21T05:44:41 | https://www.reddit.com/gallery/1mw2lme | vladlearns | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mw2lme | false | null | t3_1mw2lme | /r/LocalLLaMA/comments/1mw2lme/frontier_ai_labs_publicized_100kh100_training/ | false | false | 379 | null | ||
Help me! To optimize chunked grammar/spell-check processing with models running on LLaMA.cpp. | 0 | Hi everyone,
I’m working on a custom RAG-like system in **Node.js** where users can choose options like **grammar correction** or **spell checking**. To avoid hitting the model’s **token limit**, I split large documents into smaller chunks and process them batch by batch.
Here’s a simplified version of my `main` func... | 2025-08-21T05:31:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mw2da8/help_me_to_optimize_chunked_grammarspellcheck/ | Technical-Chapter388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw2da8 | false | null | t3_1mw2da8 | /r/LocalLLaMA/comments/1mw2da8/help_me_to_optimize_chunked_grammarspellcheck/ | false | false | self | 0 | null |
US demand for 48GB 4090? | 29 | I'm able to make 48GB 4090's and offer 90 day warranties and videos of the process and testing. (I'm a gpu repair tech of 3 years)
But with 5090 oversupply and RTX A6000s being available, I was wondering if there's demand for them in the US at $2,900 each, or $900 as an upgrade service | 2025-08-21T05:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mw220v/us_demand_for_48gb_4090/ | CertainlyBright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw220v | false | null | t3_1mw220v | /r/LocalLLaMA/comments/1mw220v/us_demand_for_48gb_4090/ | false | false | self | 29 | null |
Local Open Source Alternative to NotebookLM | 12 | For those of you who aren't familiar with SurfSense, it aims to be the **open-source alternative to NotebookLM, Perplexity, or Glean.**
In short, it's a **Highly Customizable AI Research Agent** that connects to your personal external sources and Search Engines (Tavily, LinkUp), Slack, Linear, Jira, ClickUp, Confluenc... | 2025-08-21T05:09:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mw1z0m/local_open_source_alternative_to_notebooklm/ | Uiqueblhats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw1z0m | false | null | t3_1mw1z0m | /r/LocalLLaMA/comments/1mw1z0m/local_open_source_alternative_to_notebooklm/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'CHVoALsrjQ-lrdYJQ_AnYWKcbrjw8TU5N46MFJ1biSY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CHVoALsrjQ-lrdYJQ_AnYWKcbrjw8TU5N46MFJ1biSY.png?width=108&crop=smart&auto=webp&s=aa62e877a0a410001ecc4b5b84d87af217003efd', 'width': 108}, {'height': 108, 'url': 'h... |
What are the best local LLMs that can be run on mobile devices, and what are they each good at? | 1 | Essentially the title. What are the best small LLMs that can be run on a mobile device, and what is each particular one good at? | 2025-08-21T04:27:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mw17du/what_are_the_best_local_llms_that_can_be_run_on/ | BaCaDaEa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw17du | false | null | t3_1mw17du | /r/LocalLLaMA/comments/1mw17du/what_are_the_best_local_llms_that_can_be_run_on/ | false | false | self | 1 | null |
Finally Kimi-VL-A3B-Thinking-2506-GGUF is available | 191 | Original model: [https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506)
Support added in this PR: [https://github.com/ggml-org/llama.cpp/pull/15458](https://github.com/ggml-org/llama.cpp/pull/15458) | 2025-08-21T04:06:14 | https://huggingface.co/ggml-org/Kimi-VL-A3B-Thinking-2506-GGUF | kironlau | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mw0tc4 | false | null | t3_1mw0tc4 | /r/LocalLLaMA/comments/1mw0tc4/finally_kimivla3bthinking2506gguf_is_available/ | false | false | default | 191 | {'enabled': False, 'images': [{'id': 'm2lF_KqN7wgwWcFm1a3lN4x_joA1P4xA8L66Y-aVPXM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m2lF_KqN7wgwWcFm1a3lN4x_joA1P4xA8L66Y-aVPXM.png?width=108&crop=smart&auto=webp&s=4278367c5ac547b8a3a287ac4b31c20046972d04', 'width': 108}, {'height': 116, 'url': 'h...
Which weights under 50GB have the best *depth of knowledge*? | 29 | Is there a benchmark for this that doesn't mix knowledge with reasoning? Just sheer encyclopedia knowledge. | 2025-08-21T04:03:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mw0rm9/which_weights_under_50gb_have_the_best_depth_of/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mw0rm9 | false | null | t3_1mw0rm9 | /r/LocalLLaMA/comments/1mw0rm9/which_weights_under_50gb_have_the_best_depth_of/ | false | false | self | 29 | null |
how i make anime style key visuals using mage.space and domoai | 0 | ever since i got into ai art, i’ve been chasing that “anime opening scene” feeling like the kind of visual where the sky is glowing, the hair is moving like it’s caught in the wind, and the whole frame looks like it belongs in a trailer. it’s not easy to nail that mood with a single tool, but i found that using [mage.s... | 2025-08-21T03:12:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mvzr8b/how_i_make_anime_style_key_visuals_using/ | Neat_Chapter_9055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvzr8b | false | null | t3_1mvzr8b | /r/LocalLLaMA/comments/1mvzr8b/how_i_make_anime_style_key_visuals_using/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'VtqF2iJsFObTWP-n4tlBpWPettxeFNba6NgOoR7nio8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VtqF2iJsFObTWP-n4tlBpWPettxeFNba6NgOoR7nio8.jpeg?width=108&crop=smart&auto=webp&s=11b4b25e827b645c3706e95ec17568d097c6a590', 'width': 108}, {'height': 121, 'url': '... |
Best datasets for NSFW fine tuning? | 13 | I'm keen to have a go at some fine-tuning, but I'm struggling to track down any decent datasets. There was one shared on here a few years back, but it looks like it's been taken down now — such a shame! | 2025-08-21T03:04:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mvzl9u/best_datasets_for_nsfw_fine_tuning/ | Disastrous_Key_1178 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvzl9u | false | null | t3_1mvzl9u | /r/LocalLLaMA/comments/1mvzl9u/best_datasets_for_nsfw_fine_tuning/ | false | false | nsfw | 13 | null |
Questions about this laptop | 0 | Guys, I had a question... Could the GPT-OSS-120B model run effectively on a laptop with the following specs?
Intel® Core™ Ultra 9 275HX, 24 cores
Windows 11 Home
NVIDIA® GeForce RTX™ 5060 8GB VRAM
64GB DDR5
Thanks in advance. | 2025-08-21T02:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mvzdtd/questions_about_this_laptop/ | Ordinary_Mud7430 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvzdtd | false | null | t3_1mvzdtd | /r/LocalLLaMA/comments/1mvzdtd/questions_about_this_laptop/ | false | false | self | 0 | null |
Um wtf .. ?? wdym 82.3 GB downloaded and 9.93 GB uploaded .. why is doc.anthropic.com uploading and downloading massive amounts of data? | 0 | Just caught anthropic doing this on safari, does anyone have any idea on how to trace what this data is?? | 2025-08-21T02:44:24 | https://www.reddit.com/gallery/1mvz6lo | beppled | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mvz6lo | false | null | t3_1mvz6lo | /r/LocalLLaMA/comments/1mvz6lo/um_wtf_wdym_823_gb_downloaded_and_993_gb_uploaded/ | false | false | 0 | null | |
Maxsun Dual Intel Arc Pro B60 available at $2,999 | 41 | I emailed Maxsun about availability of their dual B60 cards, and got a response:
*Hi,*
*let me introduce Mr. Jason Green, who is our US distributor for B60, he is gonna help you with the purchase, thanks.*
*Regards,*
\---
*Hi,*
*I'm Jason from Hydratech Builds, the US distributor for MAXSUN.*
*To help you with ... | 2025-08-21T02:18:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mvynft/maxsun_dual_intel_arc_pro_b60_available_at_2999/ | ConcaveTriangle5761 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvynft | false | null | t3_1mvynft | /r/LocalLLaMA/comments/1mvynft/maxsun_dual_intel_arc_pro_b60_available_at_2999/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'npgJ8C7RmjLw0DMk-0cZIp2apc1HPY4qjVybqaSijRM', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/npgJ8C7RmjLw0DMk-0cZIp2apc1HPY4qjVybqaSijRM.png?width=108&crop=smart&auto=webp&s=75168256e3a062bfb8c6b2e0fcaf1863be060c0d', 'width': 108}, {'height': 172, 'url': 'h... |
How to optimize for RTX 5090? I’m having some trouble | 1 | What CUDA and PyTorch versions are you guys using?
It seems only CUDA 12.8 and PyTorch 2.7.0 work for me. I am using vLLM to serve Qwen 32B AWQ and am only getting 60 tokens/second. I've made optimizations, but I'm unable to use the new vLLM engines or FlashInfer. It seems FlashInfer is not supported for sm_120/RTX ... | 2025-08-21T02:10:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mvyh0f/how_to_optimize_for_rtx_5090_im_having_some/ | NeedFuckYouMoney | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvyh0f | false | null | t3_1mvyh0f | /r/LocalLLaMA/comments/1mvyh0f/how_to_optimize_for_rtx_5090_im_having_some/ | false | false | self | 1 | null |
Vibe datasetting- Creating syn data with a relational model | 0 | TL;DR: I’m testing Dataset Director, a tiny tool that uses a relational foundation model as a planner to predict which data you’ll need next, then has an LLM generate only those specific samples. Free, capped at 100 rows/dataset.
Why: Random synthetic data ≠ helpful. We want on-spec, just-in-time samples that fix the ... | 2025-08-21T02:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mvyeu9/vibe_datasetting_creating_syn_data_with_a/ | OkOwl6744 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvyeu9 | false | null | t3_1mvyeu9 | /r/LocalLLaMA/comments/1mvyeu9/vibe_datasetting_creating_syn_data_with_a/ | false | false | self | 0 | null |
Question regarding imatrix quants | 6 | So I was skimming through the [dataset](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) that bartowski uses for their imatrix quants. While it is certainly diverse, it's not completely comprehensive for all the subjects or tasks that someone might use the models for (nor can we realistically exp... | 2025-08-21T02:05:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mvyczp/question_regarding_imatrix_quants/ | Confident-Willow5457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvyczp | false | null | t3_1mvyczp | /r/LocalLLaMA/comments/1mvyczp/question_regarding_imatrix_quants/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen2.5 0.5B vs Qwen3 0.6B answering the same question. Definitely a big improvement. | 124 | 2025-08-21T01:57:14 | https://www.reddit.com/gallery/1mvy6ai | airbus_a360_when | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mvy6ai | false | null | t3_1mvy6ai | /r/LocalLLaMA/comments/1mvy6ai/qwen25_05b_vs_qwen3_06b_answering_the_same/ | false | false | 124 | null | ||
What frustrates you about tools like bolt.new or usejolt.ai? | 0 | Hey everyone,
I’ve been exploring code-gen/AI dev tools like bolt.new and usejolt.ai, and while they’re super impressive, I’ve noticed a few pain points that make me wonder if others feel the same.
👉 My question:
• If you’ve tried these tools, what do you find most frustrating?
• Do you use them regularly, or do y... | 2025-08-21T01:09:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mvx5ok/what_frustrates_you_about_tools_like_boltnew_or/ | Prestigious_Skin6507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvx5ok | false | null | t3_1mvx5ok | /r/LocalLLaMA/comments/1mvx5ok/what_frustrates_you_about_tools_like_boltnew_or/ | false | false | self | 0 | null |
Built my first iOS app with LLM help — WaitMateNYC | 6 | I’ve been experimenting with using LLMs as coding partners and ended up shipping my first real app: WaitMateNYC. It shows real-time wait times at popular NYC restaurants and flags whether a spot is walk-in only or on Resy.
Most of the coding was done in SwiftUI, and I leaned on LLMs for:
• Full-file replacements when... | 2025-08-21T00:37:16 | Powerful_Fudge_5999 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mvwg6i | false | null | t3_1mvwg6i | /r/LocalLLaMA/comments/1mvwg6i/built_my_first_ios_app_with_llm_help_waitmatenyc/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'yps6tozfq9kf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/yps6tozfq9kf1.jpeg?width=108&crop=smart&auto=webp&s=ad7874ac23089ad424f4632c08e6b51e39a14cde', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/yps6tozfq9kf1.jpeg?width=216&crop=smart&auto=... | |
Android Client for Remote LLM | 5 | Hi all. I apologize for how non-technical I am about to sound, but it has been a long day and I am fried. I am looking for a reliable Android app that can act as a client for my GPT4All setup (training a model on my e-reader highlights). It needs:
1. Custom Base URL: The ability to set a custom host/URL to point it to... | 2025-08-21T00:29:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mvw9rb/android_client_for_remote_llm/ | _s3raphic_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvw9rb | false | null | t3_1mvw9rb | /r/LocalLLaMA/comments/1mvw9rb/android_client_for_remote_llm/ | false | false | self | 5 | null |
NVIDIA Achieves 35% Performance Boost for OpenAI’s GPT-OSS-120B Model | 210 | 2025-08-21T00:20:54 | https://www.reddit.com/gallery/1mvw3hz | vibedonnie | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mvw3hz | false | null | t3_1mvw3hz | /r/LocalLLaMA/comments/1mvw3hz/nvidia_achieves_35_performance_boost_for_openais/ | false | false | 210 | null | ||
If this is really true... | 1 | [removed] | 2025-08-20T23:44:36 | SinBecosTan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mvva81 | false | null | t3_1mvva81 | /r/LocalLLaMA/comments/1mvva81/if_this_is_really_true/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'd1l2uu2bf9kf1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/d1l2uu2bf9kf1.png?width=108&crop=smart&auto=webp&s=d9e403c792876b0a3b09670b0759a1d959168fe6', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/d1l2uu2bf9kf1.png?width=216&crop=smart&auto=webp... | |
7 character AI podcast - Dead Internet | 0 | 2025-08-20T23:40:48 | https://www.youtube.com/live/1c6LEsGFCWg?si=4VZqUF6IUWSprLlq | Mercyfulking | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mvv70y | false | null | t3_1mvv70y | /r/LocalLLaMA/comments/1mvv70y/7_character_ai_podcast_dead_internet/ | false | false | default | 0 | null | |
Useful Recipes IK-Llama | 15 | Wanted invite everyone interested to share recipes and tokens/sec results that have worked for you in ik-llama.
Below are mine so far — mostly for GLM models on multi-GPU + CPU setups. If you spot any optimizations, I’d love to hear them. I’m running ubergarm quants. I’m new to this so if it looks off feel free to let... | 2025-08-20T23:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mvv6bv/useful_recipes_ikllama/ | Infamous_Jaguar_2151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvv6bv | false | null | t3_1mvv6bv | /r/LocalLLaMA/comments/1mvv6bv/useful_recipes_ikllama/ | false | false | self | 15 | null |
Offline AI models for background noise removal and voice isolation | 17 | Izotope 11 doesn't give results comparable to Adobe Podcast, but AP can only process max 4h/recording and it's online only.
Is there any offline AI model I can use which outputs similar quality as AP? I have RTX4090 so GPU is not an issue. | 2025-08-20T23:32:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mvuzzl/offline_ai_models_for_background_noise_removal/ | healthiswealth0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvuzzl | false | null | t3_1mvuzzl | /r/LocalLLaMA/comments/1mvuzzl/offline_ai_models_for_background_noise_removal/ | false | false | self | 17 | null |
Custom LLM System 2.4B (No Fine-Tuning): How do your local LLMs perform? | 5 | Most of us are probably using local inference apps like Ollama or vLLM, right?
https://preview.redd.it/zx6cg3od99kf1.png?width=1270&format=png&auto=webp&s=8fdd39b1a9c1cda3b986ca60d802517b05ea4521
What kind of real-world performance are you all seeing? Or is anyone else loading their model with custom modules like I a... | 2025-08-20T23:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mvua1j/custom_llm_system_24b_no_finetuning_how_do_your/ | Patience2277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvua1j | false | null | t3_1mvua1j | /r/LocalLLaMA/comments/1mvua1j/custom_llm_system_24b_no_finetuning_how_do_your/ | false | false | 5 | null | |
Folsom-0811-1 — New Model Spotted in LM Arena | 6 | Mystery model **Folsom-0811-1** just showed up in my LM Arena 1v1. I can find absolutely **zero** mention of it online. I searched HF, Reddit, leaderboards, research papers, and even gave the task to Claude Research to no avail.
Its style strikes me as something in-between Mistral Medium and Phi 4 — concise, non-sycop... | 2025-08-20T22:45:01 | https://www.reddit.com/gallery/1mvtvab | Zestyclose839 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mvtvab | false | null | t3_1mvtvab | /r/LocalLLaMA/comments/1mvtvab/folsom08111_new_model_spotted_in_lm_arena/ | false | false | 6 | null | |
2x RTX 5060ti 16GB - inference benchmarks in Ollama | 29 | Despite the recommendations of most Redditors, I chose not to fish a used 3090 out of a dumpster for $1,000. Instead, I bought two brand-new NVIDIA RTX 5060 Ti 16GB cards for a total of $800.
I am pretty happy with the inference results in Ollama!
Setup:
* Quantization: Q4\_K\_M (all models)
* Prompt: "Write a 500-w... | 2025-08-20T22:41:21 | https://www.reddit.com/gallery/1mvts3i | avedave | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mvts3i | false | null | t3_1mvts3i | /r/LocalLLaMA/comments/1mvts3i/2x_rtx_5060ti_16gb_inference_benchmarks_in_ollama/ | false | false | 29 | null | |
What would be the next step up from using Gitlab Duo from my job? | 0 | I'm a bit new to this so bear with me.
I recently started getting into coding assistants and [Wrote my own CodeCompanion.nvim Plugin for Gitlab Duo](https://github.com/Kraust/codecompanion-gitlab.nvim). While doing that I learned a lot about how the current AI Coding assistant landscape worked and want to see what the... | 2025-08-20T22:36:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mvto90/what_would_be_the_next_step_up_from_using_gitlab/ | 79215185-1feb-44c6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvto90 | false | null | t3_1mvto90 | /r/LocalLLaMA/comments/1mvto90/what_would_be_the_next_step_up_from_using_gitlab/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'EqFoyw0VGHkzPzshvJn-Vwe7OngqH1aC7-0ZRVgyc8o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EqFoyw0VGHkzPzshvJn-Vwe7OngqH1aC7-0ZRVgyc8o.png?width=108&crop=smart&auto=webp&s=b6a5d3de64df8cb9b9f99b2c101a5a6ed819779a', 'width': 108}, {'height': 108, 'url': 'h... |
Lightweight browser tool to run local models (Gemma, Llama, Zephyr, Phi, Qwen, Mistral) with private document Q&A - no installation required | 3 | For anyone getting started with local setups and prioritizing privacy, [https://lite.askcyph.ai](https://lite.askcyph.ai) offers a lightweight, browser-based way to work with local models such as Gemma, Llama, Zephyr, Phi, and Mistral, plus simple document Q&A, all running client-side. | 2025-08-20T22:28:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mvtgkj/lightweight_browser_tool_to_run_local_models/ | gpt872323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvtgkj | false | null | t3_1mvtgkj | /r/LocalLLaMA/comments/1mvtgkj/lightweight_browser_tool_to_run_local_models/ | false | false | self | 3 | null |
FREE Stealth model in Cline: Sonic (rumoured Grok4 Code) | 0 | If you didn't hear, Cline announced a FREE Coding Model released in Stealth called Sonic.
[https://cline.bot/blog/new-stealth-model-in-cline-sonic](https://cline.bot/blog/new-stealth-model-in-cline-sonic)
It has 256k context window. Initial tests show very fast generation speeds and good instruction following fo... | 2025-08-20T22:14:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mvt4hp/free_stealth_model_in_cline_sonic_rumoured_grok4/ | NoobMLDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvt4hp | false | null | t3_1mvt4hp | /r/LocalLLaMA/comments/1mvt4hp/free_stealth_model_in_cline_sonic_rumoured_grok4/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'O1c5HlOg5Dvtxph21qot747_pCjbiGJa5Xmra4_oq9M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/O1c5HlOg5Dvtxph21qot747_pCjbiGJa5Xmra4_oq9M.png?width=108&crop=smart&auto=webp&s=998690a862105d6e7e306d657e38477257d1cb99', 'width': 108}, {'height': 121, 'url': 'h... |
Where are the thoughtful people? | 0 | Sometimes I come across posts that don’t make much sense. And in the comments, there are often even more unhelpful replies. I think when we just ignore them, we miss the point—ignoring them only makes things worse. At the very least, we should downvote and provide, or upvote, a logical response. Personally, I always do... | 2025-08-20T21:59:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mvsqcn/where_are_the_thoughtful_people/ | narca_hakan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvsqcn | false | null | t3_1mvsqcn | /r/LocalLLaMA/comments/1mvsqcn/where_are_the_thoughtful_people/ | false | false | self | 0 | null |
ARM64 laptop runs local models incredibly slowly | 1 | I have a Lenovo Thinkpad T14s with 32Gb RAM and a Snapdragon X Elite processor (12 cores, 3.4GHz) and when I try to run models locally, they are incredibly slow. I tried both LM Studio and Ollama via the terminal. Most models don't run at all simply because they need about a minute to generate just one word of the resp... | 2025-08-20T21:55:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mvsn8l/arm64_laptop_runs_local_models_incredibly_slowly/ | urmel42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvsn8l | false | null | t3_1mvsn8l | /r/LocalLLaMA/comments/1mvsn8l/arm64_laptop_runs_local_models_incredibly_slowly/ | false | false | self | 1 | null |
I'm going to build an open source LLM/ML app interface, what features would you like to see? | 0 | I really want to build a new app (webapp) for interacting with LLMs or ML stuff in general (TTS, Automatic Speech Recognition, etc) and here are the main features that I'm thinking of including:
- fully offline and private (no external calls are going to be made without you being aware of those);
- dead simple deployme... | 2025-08-20T21:13:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mvrjw5/im_going_to_build_an_open_source_llmml_app/ | xLionel775 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvrjw5 | false | null | t3_1mvrjw5 | /r/LocalLLaMA/comments/1mvrjw5/im_going_to_build_an_open_source_llmml_app/ | false | false | self | 0 | null |
DeepSeek-V3.1 benchmarked | 0 | Modestly better than V3, not as good as Qwen3 Coder. | 2025-08-20T21:00:53 | https://brokk.ai/power-ranking?version=openround-2025-08-20&models=ds-v3.1%2Cflash-2.5%2Cgp2.5-default%2Cgpt-oss-120b%2Cgpt-oss-20b%2Cgpt5%2Cgpt5-nano%2Copus4.1%2Cr1%2Csonnet4%2Cv3 | mr_riptano | brokk.ai | 1970-01-01T00:00:00 | 0 | {} | 1mvr7gs | false | null | t3_1mvr7gs | /r/LocalLLaMA/comments/1mvr7gs/deepseekv31_benchmarked/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY.png?width=108&crop=smart&auto=webp&s=8ddd0009ff8428f66396a97f4c1d1c8de9c7be98', 'width': 108}, {'height': 125, 'url': 'h... |
Is OpenAI's 120B "open source" model intentionally sabotaged? | 0 | Something's fishy with OpenAI's gpt-oss-120b compression rates compared to other MoE models:
**OpenAI gpt-oss-120b:**
* 16-bit: 65.4GB
* 2-bit: 62.6GB
* **Only 4.3% compression** [https://huggingface.co/openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b)
**Other MoE models:**
* Meta Maverick: 801GB → ... | 2025-08-20T20:59:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mvr5sk/is_openais_120b_open_source_model_intentionally/ | Desperate-Cry592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvr5sk | false | null | t3_1mvr5sk | /r/LocalLLaMA/comments/1mvr5sk/is_openais_120b_open_source_model_intentionally/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=108&crop=smart&auto=webp&s=292c3d3a2dfa2ce762d4e0ad0113f21057208fb5', 'width': 108}, {'height': 116, 'url': 'h... |
Local models are gradually gaining ground | 0 | The artificial intelligence landscape is changing… and it's changing dramatically. What was once the exclusive domain of closed models—guarded like treasures in proprietary data centers—is now trembling in the face of the unstoppable rise of open source models. The arrival of GPT-OSS, Qwen3, and other successors is not... | 2025-08-20T20:48:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mvqw4l/local_models_are_gradually_gaining_ground/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvqw4l | false | null | t3_1mvqw4l | /r/LocalLLaMA/comments/1mvqw4l/local_models_are_gradually_gaining_ground/ | false | false | 0 | null | |
Is there any benefit to running this AMD + Nvidia setup compared to Nvidia only? | 4 | I've got an RTX 3080 10GB and was thinking about using it to run some local models; I know 10GB VRAM is on the lower end. I've also got an old RX 390 8GB lying around and was wondering if I would see any benefits if I were to run both GPUs side by side (10+8GB VRAM)? | 2025-08-20T20:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mvqd59/ia_there_any_benefits_of_running_this_amd_nvidia/ | PolarNightProphecies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvqd59 | false | null | t3_1mvqd59 | /r/LocalLLaMA/comments/1mvqd59/ia_there_any_benefits_of_running_this_amd_nvidia/ | false | false | self | 4 | null |
Best AI for writing a resume? | 0 | Which AI is the most powerful and has the best writing skills? I use LM Studio, so a good local model would work, or a free one online. Which ones do you recommend? I've read that the free online chatbots are more powerful than the ones you run on your machine. | 2025-08-20T20:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mvq2bi/best_ai_for_writing_a_resume/ | PhotographerUSA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvq2bi | false | null | t3_1mvq2bi | /r/LocalLLaMA/comments/1mvq2bi/best_ai_for_writing_a_resume/ | false | false | self | 0 | null |
Anyone tried running llama.cpp with Vulkan on Android? | 8 | I'm trying to run llama.cpp on Pixel phones and I wonder if anyone has had success before? There is an issue on Qualcomm GPUs with Vulkan, but has anyone tried with Mali? | 2025-08-20T19:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mvpdl4/anyone_tried_running_llama_cpp_with_vulkan_on/ | Icy_Advance_2514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvpdl4 | false | null | t3_1mvpdl4 | /r/LocalLLaMA/comments/1mvpdl4/anyone_tried_running_llama_cpp_with_vulkan_on/ | false | false | self | 8 | null |
Gemini Nano size | 3 | What's the size (parameters) of Gemini Nano on Chrome? I haven't found documentation on this topic.
The weights.bin file (TFLite) is about 4GB in size, so it is a small model (2B?).
(it's surely a local model!) | 2025-08-20T19:52:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mvpcxe/gemini_nano_size/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvpcxe | false | null | t3_1mvpcxe | /r/LocalLLaMA/comments/1mvpcxe/gemini_nano_size/ | false | false | self | 3 | null |
Anyone use Continue on VS Code? Huge performance hit in app vs using it on Ollama | 2 | I noticed that whenever I run any LLM locally through Continue I take a huge RAM hit. When I use the Ollama desktop app it's a lot smoother. Wondering if anyone else has the same experience | 2025-08-20T19:44:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mvp4mr/anyone_use_continue_on_vscode_huge_performance/ | A4_Ts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvp4mr | false | null | t3_1mvp4mr | /r/LocalLLaMA/comments/1mvp4mr/anyone_use_continue_on_vscode_huge_performance/ | false | false | self | 2 | null |
Weird Behavior in openwebui/ollama with Qwen3 | 2 | The first response starts off fine, but then it seems to start thinking out loud to itself in the response and things break down from there. Any new chats after that result in very weird/fragmented response that seems like leftover from the initial result. I've never seen anything like this before, but I also haven't r... | 2025-08-20T19:41:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mvp1mq/weird_behavior_in_openwebuiollama_with_qwen3/ | weirdtracks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvp1mq | false | null | t3_1mvp1mq | /r/LocalLLaMA/comments/1mvp1mq/weird_behavior_in_openwebuiollama_with_qwen3/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'dYpqMY7yKWFQLdELFd5E3L0FTZjLSwGGwotJLBAF8Ms', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/gbh2V9lSyZk8rgXyGXrssLoEve5Ymys9FJbIy2A8Pds.jpg?width=108&crop=smart&auto=webp&s=33931afaa48fb3652a181a6ea80eb4a14b542d8a', 'width': 108}, {'height': 130, 'url': 'h... |
Cursor will increase in price, the good thing is that we have local models | 52 | Cursor will increase in price. Right now you have elastic pricing, but after September 15 you will be charged more.
blog : [https://cursor.com/blog/aug-2025-pricing](https://cursor.com/blog/aug-2025-pricing)
price : [https://docs.cursor.com/en/account/pricing#auto](https://docs.cursor.com/en/account/pricing#au... | 2025-08-20T19:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mvp0kn/cursor_will_increase_in_price_the_good_thing_is/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvp0kn | false | null | t3_1mvp0kn | /r/LocalLLaMA/comments/1mvp0kn/cursor_will_increase_in_price_the_good_thing_is/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'wrLxmwkxE0sboHYe9DL7M2A9cBYiHIRiSvubxC7TZHk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wrLxmwkxE0sboHYe9DL7M2A9cBYiHIRiSvubxC7TZHk.png?width=108&crop=smart&auto=webp&s=f5612dc9ce7f87a094b32ce01b177e69919bd75e', 'width': 108}, {'height': 113, 'url': 'h... | |
Running Qwen3-Coder-30B-A3 Q4_LM in Cursor with Agent Mode unlocked | 81 | I’ve been testing ways to make Cursor usable without relying only on their default “auto” model (which honestly feels pretty bad). While experimenting, I noticed something interesting:
If you run a model locally and just register it under the name `gpt-4o`, Cursor unlocks **Agent Mode** (function calling, todo list, e... | 2025-08-20T19:24:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mvol0o/running_qwen3coder30ba3_q4_lm_in_cursor_with/ | ConfidentDinner6648 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvol0o | false | null | t3_1mvol0o | /r/LocalLLaMA/comments/1mvol0o/running_qwen3coder30ba3_q4_lm_in_cursor_with/ | false | false | self | 81 | null |
Best practices for data-hoarding models? | 3 | Still new to the local AI scene, so I'm trying Ollama, llama.cpp, LM Studio, etc., for LLMs, as well as ai-dock and ComfyUI for generative. As far as I can tell, there is no universal or even popular way to set up a collection of model files on a file server, and then tell the various hosting environments to go look the... | 2025-08-20T19:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mvof4z/best_practices_for_datahoarding_models/ | Health-Nut7477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvof4z | false | null | t3_1mvof4z | /r/LocalLLaMA/comments/1mvof4z/best_practices_for_datahoarding_models/ | false | false | self | 3 | null |
My open-source project on building production-level AI agents just hit 10K stars on GitHub | 0 | My Agents-Towards-Production GitHub repository just crossed 10,000 stars in only two months! Here's what's inside:
* 33 detailed tutorials on building the components needed for production-level agents
* Tutorials organized by category
* Clear, high-quality explanations with diagrams and step-by-step code implem... | 2025-08-20T19:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mvo9pq/my_opensource_project_on_building_productionlevel/ | Nir777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvo9pq | false | null | t3_1mvo9pq | /r/LocalLLaMA/comments/1mvo9pq/my_opensource_project_on_building_productionlevel/ | false | false | self | 0 | null |
Using large-scale search to discover fast GPU kernels | 58 | I'm building a GPU compiler for automatically generating fast GPU kernels for AI models. It uses search-based compilation to achieve high performance. [https://github.com/luminal-ai/luminal](https://github.com/luminal-ai/luminal)
It takes high level model code, like you'd have in PyTorch, and generate very fast GPU co... | 2025-08-20T19:12:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mvo9ko/using_largescale_search_to_discover_fast_gpu/ | jafioti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvo9ko | false | null | t3_1mvo9ko | /r/LocalLLaMA/comments/1mvo9ko/using_largescale_search_to_discover_fast_gpu/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y', 'resolutions': [{'height': 25, 'url': 'https://external-preview.redd.it/b-ktLeXWioxuV4hQoMUJJnc-Er8yy-L0PCsHQ7N2a0Y.jpeg?width=108&crop=smart&auto=webp&s=8ad7833046af45f536efa3a28e3e721ad1f4a4d7', 'width': 108}, {'height': 50, 'url': 'h... |
Stop using mobile phone at Nights!! ⚔️🛡️ | 0 | Have you ever struggled to reduce your mobile phone screen time, severely affecting your sleep cycles and sleeping patterns? 😫😴
Fret not! Introducing 🛡️Night Knight ⚔️ !!!
Your digital wellbeing assistant 🤖⚡ who persuades you to stop using your mobile phone when you exceed your daily limits! And it is w... | 2025-08-20T19:00:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mvnxe3/stop_using_mobile_phone_at_nights/ | rozeappletree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvnxe3 | false | null | t3_1mvnxe3 | /r/LocalLLaMA/comments/1mvnxe3/stop_using_mobile_phone_at_nights/ | false | false | self | 0 | null |
Is the generative AI bubble about to burst? | 0 | 2025-08-20T18:59:39 | https://www.infoworld.com/article/4041556/is-the-generative-ai-bubble-about-to-burst.html | juanviera23 | infoworld.com | 1970-01-01T00:00:00 | 0 | {} | 1mvnwdy | false | null | t3_1mvnwdy | /r/LocalLLaMA/comments/1mvnwdy/is_the_generative_ai_bubble_about_to_burst/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '45gG7-XasAYEX0b9JsIhAUuYh4VijFVER5lX3-UpSqc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/45gG7-XasAYEX0b9JsIhAUuYh4VijFVER5lX3-UpSqc.jpeg?width=108&crop=smart&auto=webp&s=fc72ce423e372a4649a519e061d66f69e61e12e2', 'width': 108}, {'height': 121, 'url': '... | |
My LLM trained from scratch on only 1800s London texts brings up a real protest from 1834 | 1,119 | Hi, I’ve posted on here a couple times sharing my project. I'm training LLM’s from scratch on 1800’s London texts (no fine tune/modern data). I built a dataset using 7,000 texts published between 1800 to 1875 in the city of London, and also trained a custom tokenizer on the dataset itself to get rid of modern vocab.
... | 2025-08-20T18:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mvnmjo/my_llm_trained_from_scratch_on_only_1800s_london/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvnmjo | false | null | t3_1mvnmjo | /r/LocalLLaMA/comments/1mvnmjo/my_llm_trained_from_scratch_on_only_1800s_london/ | false | false | 1,119 | {'enabled': False, 'images': [{'id': 'bruJaed8mpWclO3rYYnLL_4tpIRSDSNQT1lxjc08864', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bruJaed8mpWclO3rYYnLL_4tpIRSDSNQT1lxjc08864.png?width=108&crop=smart&auto=webp&s=5735794ce675d6053813f9bbc5b37d3530b805ab', 'width': 108}, {'height': 108, 'url': 'h... | |
Qwen3 is the worst model I've used | 0 | Just wanted to rant a bit because 2.5 Max was beautiful. Unsure how they dropped the ball so significantly with 3. I ask it to do simple tasks like "write x in your format" and it just hallucinates random stuff. Talking about the 235B.
anyone else? | 2025-08-20T18:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mvnf3y/qwen3_is_the_worst_model_ive_used/ | MigorRortis96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvnf3y | false | null | t3_1mvnf3y | /r/LocalLLaMA/comments/1mvnf3y/qwen3_is_the_worst_model_ive_used/ | false | false | self | 0 | null |
New Trainable Sparsity Method I've been working on! | 44 | Introducing CWIC a trainable sparsity paradigm that beats SOTA methods, enabling 80% sparsity and 4x+ speedups on CPU.
Something I've been working on with friends at [crystalai.org](http://crystalai.org) !
It works on models as small as 1b, outperforming TEAL R-sparse and friends.
We are releasing code at [https://... | 2025-08-20T18:41:56 | nano-tech-warrior | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mvnetu | false | null | t3_1mvnetu | /r/LocalLLaMA/comments/1mvnetu/new_trainable_sparsity_method_ive_been_working_on/ | false | false | 44 | {'enabled': True, 'images': [{'id': 'nxpwEO7LduIkpSxDMKHWB9AXo18KjjAQeLhdZ-a6fCk', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/mpxhgfb1y7kf1.png?width=108&crop=smart&auto=webp&s=9a4856c9b12288f7c5a2623fb259c8e8b1a6376c', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/mpxhgfb1y7kf1.png... | ||
Google Pixel 10: Real time phone call translation with your own voice + RAG locally on device? | 0 | I just watched the Google Pixel 10 presentation here:
[https://www.youtube.com/watch?v=Wp1ynkJw1U4](https://www.youtube.com/watch?v=Wp1ynkJw1U4)
and it sounds to me like Google is using a **local LLM** that searches all data like text messages, calendar entries etc. to automatically provide responses while writing te... | 2025-08-20T18:39:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mvncu0/google_pixel_10_real_time_phone_call_translation/ | chikengunya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvncu0 | false | null | t3_1mvncu0 | /r/LocalLLaMA/comments/1mvncu0/google_pixel_10_real_time_phone_call_translation/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'qoZAy8XsjQsMTai9RK_auP_g6zFsfX-63S8v-u5i9QA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qoZAy8XsjQsMTai9RK_auP_g6zFsfX-63S8v-u5i9QA.jpeg?width=108&crop=smart&auto=webp&s=42111d6eebb6eff3719de0480dd19407d686b5d5', 'width': 108}, {'height': 162, 'url': '... |
Guys it's official, the nano banana model on lm arena is Google's | 138 | 2025-08-20T18:32:13 | https://x.com/OfficialLoganK/status/1957908528925909391 | Severe-Awareness829 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mvn50l | false | null | t3_1mvn50l | /r/LocalLLaMA/comments/1mvn50l/guys_its_official_the_nano_banana_model_on_lm/ | false | false | default | 138 | null | |
What happens in GRPO if all rewards within a group are equal? | 5 | Trying out training an LLM using GRPO through HuggingFace's TRL and this question occurred to me.
Since GRPO can't really calculate the most advantageous completion since all of them are equal, what does it do? Does it just assume a random one as the best completion? Does it outright discard that group without learning... | 2025-08-20T18:10:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mvmitv/what_happens_in_grpo_if_all_rewards_within_a/ | lkr2711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvmitv | false | null | t3_1mvmitv | /r/LocalLLaMA/comments/1mvmitv/what_happens_in_grpo_if_all_rewards_within_a/ | false | false | self | 5 | null |
Is my diet the problem? | 0 | The question I kept asking myself because I was trying to improve my diet…
So my question for you guys is: would you think that an AI-powered app including a chatbot that gives you recipes as a function of your time, your budget, your allergies, your goal and other things would be useful for anyone and a... | 2025-08-20T18:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mvmfh9/is_my_diet_the_problem/ | Fit-Writer-1796 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvmfh9 | false | null | t3_1mvmfh9 | /r/LocalLLaMA/comments/1mvmfh9/is_my_diet_the_problem/ | false | false | self | 0 | null |
Help with IK-Llama | 0 | Would anyone be so kind as to let me know what’s off with my Ik-llama settings? I’m running dual 4090s with a 9255 epyc and 768gb ram. I’m getting quite low tokens per sec (3ish) compared to llama.cpp (13):
# in host$ (Pop!_OS main terminal) — set a generic model path (no username or real folders)
export MODEL_PATH=... | 2025-08-20T17:44:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mvlskw/help_with_ikllama/ | Infamous_Jaguar_2151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvlskw | false | null | t3_1mvlskw | /r/LocalLLaMA/comments/1mvlskw/help_with_ikllama/ | false | false | self | 0 | null |
LittleBit: Ultra Low-Bit Quantization via Latent Factorization | 1 | [removed] | 2025-08-20T17:36:56 | https://www.arxiv.org/abs/2506.13771 | juanviera23 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1mvlkvh | false | null | t3_1mvlkvh | /r/LocalLLaMA/comments/1mvlkvh/littlebit_ultra_lowbit_quantization_via_latent/ | false | false | default | 1 | null |
My open source AI activity tracker project | 12 | Hey everyone, I wanted to share my latest project. **Bilge** is a wise activity tracker that runs completely on your machine. Instead of sending your data to a cloud server, it uses a local LLM to understand your digital habits and gently nudge you to take breaks.
It's a great example of what's possible with local AI,... | 2025-08-20T17:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mvl1dg/my_open_source_ai_activity_tracker_project/ | adnan-kaya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvl1dg | false | null | t3_1mvl1dg | /r/LocalLLaMA/comments/1mvl1dg/my_open_source_ai_activity_tracker_project/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '2mffNmrAmB7EqKd5lxhXiAEjL20PeM7s0PRuGqYTGzw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2mffNmrAmB7EqKd5lxhXiAEjL20PeM7s0PRuGqYTGzw.png?width=108&crop=smart&auto=webp&s=d1c08e0ea98a4961f495aa79c11f5515c73d1f95', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen-Image-Edit #6 overall on LMArena, best open model image editor | 139 | Surprised they didn't vote this one higher, I felt like the edits I saw Qwen make online were pretty good | 2025-08-20T17:17:24 | vibedonnie | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mvl0zk | false | null | t3_1mvl0zk | /r/LocalLLaMA/comments/1mvl0zk/qwenimageedit_6_overall_on_lmarena_best_open/ | false | false | default | 139 | {'enabled': True, 'images': [{'id': '90yj5wnyj7kf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/90yj5wnyj7kf1.jpeg?width=108&crop=smart&auto=webp&s=e4134d90ab414675b5cc318e3a82573825a4b9ac', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/90yj5wnyj7kf1.jpeg?width=216&crop=smart&auto=... | |
Finetuning GPT-OSS 20b -- Does it need an actual 'Thinking' response? | 0 | Just seeing if anyone has ideas on if I can just set every "thinking" parameter to "Null" or if I really need to give it a reason why it should choose that answer. Thanks in advance. | 2025-08-20T17:01:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mvklai/finetuning_gtposs_20b_does_it_need_an_actual/ | searstream | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvklai | false | null | t3_1mvklai | /r/LocalLLaMA/comments/1mvklai/finetuning_gtposs_20b_does_it_need_an_actual/ | false | false | self | 0 | null |
Flowchart and image analysis model. | 2 | Hi everyone, I am looking for an open source model which can describe the flows, process in a image. I have tried LLAVA but was not satisfied with the results. Any suggestion for models comparable to the chatgpt or gemini for image description? | 2025-08-20T17:01:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mvkl1j/flowchart_and_image_analysis_model/ | Legen-Wait_4_it-dary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvkl1j | false | null | t3_1mvkl1j | /r/LocalLLaMA/comments/1mvkl1j/flowchart_and_image_analysis_model/ | false | false | self | 2 | null |
It's impossible to detect the footnote callback, right? | 8 | Even with such a good image, I haven't been able to get it to pick up using either Tesseract or OlmOCR.
Then they do include the footnote, but not where it came from.
Any ideas?
I've already tried Nanonets-OCR-s, which is actually great too. But it doesn't detect the callback.
...my book has a lot of footnotes... | 2025-08-20T16:49:25 | 9acca9 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mvk8v1 | false | null | t3_1mvk8v1 | /r/LocalLLaMA/comments/1mvk8v1/its_impossible_to_detect_the_footnote_callback/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'yrddrc7se7kf1', 'resolutions': [{'height': 159, 'url': 'https://preview.redd.it/yrddrc7se7kf1.png?width=108&crop=smart&auto=webp&s=38974d611f2541fcecbea0db1f23635e3edcc72c', 'width': 108}, {'height': 318, 'url': 'https://preview.redd.it/yrddrc7se7kf1.png?width=216&crop=smart&auto=we... | |
Help with PC build | 3 | Hi, I'm building a new PC primarily for gaming but I plan to run some local ML models. I already bought the GPU, which is a 5070 Ti; now I need to choose CPU and RAM. I thought about going with a 9700X (does this matter at all?) and 64GB of RAM since I read that models can be partially loaded into RAM even if they don't fit into GPU... | 2025-08-20T16:43:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mvk35u/help_with_pc_build/ | exzzy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvk35u | false | null | t3_1mvk35u | /r/LocalLLaMA/comments/1mvk35u/help_with_pc_build/ | false | false | self | 3 | null |
I made an Android app for voice-to-voice and text-to-voice cloning and you can try it! (Android) | 1 | [removed] | 2025-08-20T16:33:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mvjt2b/i_made_an_android_app_for_voicetovoice_and/ | StrainActive2642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvjt2b | false | null | t3_1mvjt2b | /r/LocalLLaMA/comments/1mvjt2b/i_made_an_android_app_for_voicetovoice_and/ | false | false | self | 1 | null |
guide : running gpt-oss with llama.cpp -ggerganov | 25 | 2025-08-20T16:24:30 | https://github.com/ggml-org/llama.cpp/discussions/15396 | onwardforward | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mvjjxe | false | null | t3_1mvjjxe | /r/LocalLLaMA/comments/1mvjjxe/guide_running_gptoss_with_llamacpp_ggerganov/ | false | false | default | 25 | {'enabled': False, 'images': [{'id': '0MtZInNIGRZV6H6dIhZ8EGkGtej94yvJcqiqENXoH1U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0MtZInNIGRZV6H6dIhZ8EGkGtej94yvJcqiqENXoH1U.png?width=108&crop=smart&auto=webp&s=398b878c8874a3cea53d9af3aa343a3a49818594', 'width': 108}, {'height': 108, 'url': 'h... | |
Seed-OSS-36B-Instruct | 275 | [https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct)
Introduction:
Seed-OSS is a series of open-source large language models developed by ByteDance's Seed Team, designed for powerful long-context, reasoning, agent and general capabilities, and vers... | 2025-08-20T16:23:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mvjj8q/seedoss36binstruct/ | NeterOster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvjj8q | false | null | t3_1mvjj8q | /r/LocalLLaMA/comments/1mvjj8q/seedoss36binstruct/ | false | false | self | 275 | {'enabled': False, 'images': [{'id': '2h_CX4OErqePeWwhtP3G1-P7Ko736GavLjUkx5LIVTc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2h_CX4OErqePeWwhtP3G1-P7Ko736GavLjUkx5LIVTc.png?width=108&crop=smart&auto=webp&s=f4ecb557c6e35bae94f3de782b54807db9d486d4', 'width': 108}, {'height': 116, 'url': 'h... |
Bought my new strong computer | 0 | Hello everyone,
I just bought a new computer and I need you guys to help me.
I just bought a MacBook Pro with the M4 Max chip - 128GB of RAM - 2TB SSD - 16-core CPU - 40-core GPU
I would like to use it for vibe coding (building web apps with mobile application) - which LLM should I use for simple vibe coding?
Thanks! | 2025-08-20T16:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mvjeoz/bought_my_new_strong_computer/ | Ok-Respond2582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvjeoz | false | null | t3_1mvjeoz | /r/LocalLLaMA/comments/1mvjeoz/bought_my_new_strong_computer/ | false | false | self | 0 | null |
Which model is less demanding on resources, gpt-oss-20b or qwen3-30b-a3b? | 2 | I'm a newbie and I don't really understand how the number of active/inactive parameters affects performance. | 2025-08-20T16:18:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mvjdxh/which_model_is_less_demanding_on_resources/ | DentistNext6439 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvjdxh | false | null | t3_1mvjdxh | /r/LocalLLaMA/comments/1mvjdxh/which_model_is_less_demanding_on_resources/ | false | false | self | 2 | null |
which model is less demanding on resources, gpt-oss-20b or qwen3-30b-a3b. | 1 | [removed] | 2025-08-20T16:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mvjc88/which_model_is_less_demanding_on_resources/ | Mundane-Buyer-6729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvjc88 | false | null | t3_1mvjc88 | /r/LocalLLaMA/comments/1mvjc88/which_model_is_less_demanding_on_resources/ | false | false | self | 1 | null |
Cluster of two AMD Strix Halo machines (HP Z2 Mini G1a) | 20 | I'd really like to get something decent running locally, like one of the Deepseek models. I figure this will need 600 GBs of VRAM to run comfortably with one of the Unsloth models. Buying this amount of VRAM via Nvidia GPUs isn't workable for me, but the AMD Strix Halo 395+ machines should make this possible, eventuall... | 2025-08-20T16:00:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mviuzq/cluster_of_two_amd_strix_halo_machines_hp_z2_mini/ | aquarat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mviuzq | false | null | t3_1mviuzq | /r/LocalLLaMA/comments/1mviuzq/cluster_of_two_amd_strix_halo_machines_hp_z2_mini/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'MaUSRoNLAJ1LyKjA9wGag9Te4pvp93JxB__k_YVVUBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MaUSRoNLAJ1LyKjA9wGag9Te4pvp93JxB__k_YVVUBI.png?width=108&crop=smart&auto=webp&s=4fa2f9936acdedca07805c3da0d25758803e02ac', 'width': 108}, {'height': 108, 'url': 'h... | |
Confused by local TTS compared to ElevenLabs | 1 | Hello everyone,
I am looking for a local TTS model that I can incorporate into my audiobook generation pipeline.
I’ve been looking at the options, but so far I don’t know what to choose. The examples online are very mixed.
What I’m trying to avoid as much as possible are robotic or unnatural voices. I also want some... | 2025-08-20T15:16:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mvhmqp/confused_by_local_tts_compared_to_elevenlabs/ | Full_Honeydew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvhmqp | false | null | t3_1mvhmqp | /r/LocalLLaMA/comments/1mvhmqp/confused_by_local_tts_compared_to_elevenlabs/ | false | false | self | 1 | null |
DiffMem: Using Git as a Differential Memory Backend for AI Agents - Open-Source PoC | 72 | We've been experimenting with memory systems for AI agents, and I wanted to share a prototype I've built: DiffMem. It's a lightweight, Git-based memory backend that stores "current state" knowledge in Markdown files while using Git's commit history for tracking evolution. The goal is efficient, scalable memory for long... | 2025-08-20T14:48:46 | https://github.com/Growth-Kinetics/DiffMem | alexmrv | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mvgw9k | false | null | t3_1mvgw9k | /r/LocalLLaMA/comments/1mvgw9k/diffmem_using_git_as_a_differential_memory/ | false | false | default | 72 | {'enabled': False, 'images': [{'id': 'FW1JlH9sSss0Pq8rzoWxjFpJJZK922NxO1y6uOe6VUI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FW1JlH9sSss0Pq8rzoWxjFpJJZK922NxO1y6uOe6VUI.png?width=108&crop=smart&auto=webp&s=eb59ce6302000fa084817b04e9e636f83e10c6b5', 'width': 108}, {'height': 108, 'url': 'h... |
Home Assistant recommendations or benchmarks? | 4 | I'm a home automation junkie, love playing around with Home Assistant. LLMs have been great to help me with writing yaml automations, scripts, etc but all the models I've tried (including cloud ones) have needed a lot of handholding.
Does anybody have a good go-to for Home Assistant? A finetune or even a useful benchm... | 2025-08-20T14:46:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mvgu23/home_assistant_recommendations_or_benchmarks/ | LightBrightLeftRight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvgu23 | false | null | t3_1mvgu23 | /r/LocalLLaMA/comments/1mvgu23/home_assistant_recommendations_or_benchmarks/ | false | false | self | 4 | null |
Doing continued pre-training with Unsloth? | 69 | I want to experiment with continued pre-training to teach a model domain specific facts (law) in a non-english language, but the barrier to entry seems a bit daunting. My dataset is in the range of \~2B tokens.
Unsloth has a [guide](https://docs.unsloth.ai/basics/continued-pretraining), and also posted [here](https://... | 2025-08-20T14:32:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mvgg6u/doing_continued_pretraining_with_unsloth/ | Thisisdog92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvgg6u | false | null | t3_1mvgg6u | /r/LocalLLaMA/comments/1mvgg6u/doing_continued_pretraining_with_unsloth/ | false | false | self | 69 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '... |
Docker now supports AI models, anyone using it? | 0 | Docker now has Model Runner and can run many AI models locally; just curious if people are using it | 2025-08-20T14:30:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mvgez4/docker_now_support_ai_models_anyone_using_it/ | Working-Magician-823 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvgez4 | false | null | t3_1mvgez4 | /r/LocalLLaMA/comments/1mvgez4/docker_now_support_ai_models_anyone_using_it/ | false | false | self | 0 | null |
What other MOE models are you using? | 1 | [removed] | 2025-08-20T14:14:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mvfzvl/what_other_moe_models_are_you_using/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvfzvl | false | null | t3_1mvfzvl | /r/LocalLLaMA/comments/1mvfzvl/what_other_moe_models_are_you_using/ | false | false | self | 1 | null |
Deploying an MCP Server on Raspberry Pi or Microcontrollers | 1 | > | 2025-08-20T14:14:38 | https://glama.ai/blog/2025-08-20-implementing-mcp-on-edge-devices | No-Abies7108 | glama.ai | 1970-01-01T00:00:00 | 0 | {} | 1mvfzv1 | false | null | t3_1mvfzv1 | /r/LocalLLaMA/comments/1mvfzv1/deploying_an_mcp_server_on_raspberry_pi_or/ | false | false | default | 1 | null |
Is there an Open LLM leaderboard that allows you to list the rankings by model weights size? | 3 | Basically, I just want to be able to check the ranking of open LLMs sorted by the size of the SafeTensor files to identify the best lightweight open LLMs. | 2025-08-20T14:10:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mvfvsg/is_there_an_open_llm_leaderboard_that_allows_you/ | zoxtech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvfvsg | false | null | t3_1mvfvsg | /r/LocalLLaMA/comments/1mvfvsg/is_there_an_open_llm_leaderboard_that_allows_you/ | false | false | self | 3 | null |
What other MOE models are you using? | 19 | I'm looking for MOE models under 50B(Active upto 5B). Our laptop has 8GB VRAM & 32GB RAM.
I know that most of us do use Qwen MOE models(Qwen3-30B-A3B particularly). Mistral, recently GPT-OSS-20B. **What else we have? Share your favorites. Recommend under appreciated/overlooked MOE models**.
It would be great to have ... | 2025-08-20T14:09:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mvfuqn/what_other_moe_models_are_you_using/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvfuqn | false | null | t3_1mvfuqn | /r/LocalLLaMA/comments/1mvfuqn/what_other_moe_models_are_you_using/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=108&crop=smart&auto=webp&s=c58faeb60d6cd1478f77717010b54d2ec5ab95aa', 'width': 108}, {'height': 116, 'url': 'h... |
IBM and NASA just dropped Surya: an open‑source AI to forecast solar storms before they hit | 376 | Solar storms don’t just make pretty auroras—they can scramble GPS, disrupt flights, degrade satellite comms, and stress power grids. To get ahead of that, IBM and NASA have open‑sourced Surya on Hugging Face: a foundation model trained on years of Solar Dynamics Observatory (SDO) data to make space‑weather forecasting ... | 2025-08-20T13:51:04 | AskGpts | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mvfdja | false | null | t3_1mvfdja | /r/LocalLLaMA/comments/1mvfdja/ibm_and_nasa_just_dropped_surya_an_opensource_ai/ | false | false | default | 376 | {'enabled': True, 'images': [{'id': 'moddapg5j6kf1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/moddapg5j6kf1.jpeg?width=108&crop=smart&auto=webp&s=c604da03eef33e1d757e7bebf551856691f89457', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/moddapg5j6kf1.jpeg?width=216&crop=smart&auto=... | |
For those who’ve built AI Voice/Avatar Bots – what’s the best approach (cost vs performance)? | 5 | Hey everyone,
I’m working on building AI voice/avatar bots (voice-to-voice with animated avatars). I’ve tested some APIs but I’m still figuring out the most cost-effective yet high-performance setup that doesn’t sound too robotic and can be structured/controlled.
I’d love to hear from people who’ve actually built and dep... | 2025-08-20T13:49:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mvfbzz/for_those_whove_built_ai_voiceavatar_bots_whats/ | Funny_Working_7490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvfbzz | false | null | t3_1mvfbzz | /r/LocalLLaMA/comments/1mvfbzz/for_those_whove_built_ai_voiceavatar_bots_whats/ | false | false | self | 5 | null |
Why are these specific languages gaining traction in MTPE? | 5 | Hi, I work at Alconost (a localization company), and we’ve just released our report on the languages with the highest demand for both overall localization services and MTPE (machine-translation post-editing) in particular. What’s interesting is that the languages topping the overall localization demand **aren’t the sam... | 2025-08-20T13:36:06 | https://www.reddit.com/gallery/1mvf042 | NataliaShu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mvf042 | false | null | t3_1mvf042 | /r/LocalLLaMA/comments/1mvf042/why_are_these_specific_languages_gaining_traction/ | false | false | 5 | null | |
Programmers, what is your local agentic coding setup? | 9 | What agentic coding tool do you use? Roo, Cline, Aider, or something else?
Which models do you find work best for interacting with MCP tools and coding? | 2025-08-20T13:03:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mve7vm/programmers_what_is_your_local_agentic_coding/ | DeviantlyPronto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mve7vm | false | null | t3_1mve7vm | /r/LocalLLaMA/comments/1mve7vm/programmers_what_is_your_local_agentic_coding/ | false | false | self | 9 | null |
Datarus-R1-14B-Preview, an adaptive multi-step reasoning LLM for automated data analysis | 52 | If you’ve used modern reasoning-focused LLMs, you’ve probably seen it happen: the model starts solving your problem, then analyzes its own reasoning, then re-analyzes that, spiraling into thousands of tokens of circular “thinking.” It’s expensive, slow, and sometimes worse than a non-reasoning model.
Today, we’re exci... | 2025-08-20T13:01:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mve5hp/datarusr114bpreview_an_adaptive_multistep/ | Educational_Cry_7951 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mve5hp | false | null | t3_1mve5hp | /r/LocalLLaMA/comments/1mve5hp/datarusr114bpreview_an_adaptive_multistep/ | false | false | 52 | null | |
Datarus-R1-14B-Preview, an adaptive multi-step reasoning LLM for automated data analysis | 1 | [removed] | 2025-08-20T12:59:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mve3y9/datarusr114bpreview_an_adaptive_multistep/ | Educational_Cry_7951 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mve3y9 | false | null | t3_1mve3y9 | /r/LocalLLaMA/comments/1mve3y9/datarusr114bpreview_an_adaptive_multistep/ | false | false | 1 | null | |
🚀 Datarus-R1-14B-Preview, an adaptive multi-step reasoning LLM for automated data analysis | 1 | [removed] | 2025-08-20T12:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mvdznv/datarusr114bpreview_an_adaptive_multistep/ | spectre_atlas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvdznv | false | null | t3_1mvdznv | /r/LocalLLaMA/comments/1mvdznv/datarusr114bpreview_an_adaptive_multistep/ | false | false | 1 | null |
amd ryzen ai max+ 395 evo-x2 run the new gpt-oss-120b | 12 | Has anyone been able to actually run the new openAI gpt-oss-120b model on the amd ryzen ai max+ 395 evo-x2? If so, what software did you use to run it? What was the maximum context window size it could run with the 96GB ram setup for the GPU? What token/sec rate did you get?
AMD is advertising that you can run that m... | 2025-08-20T12:34:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mvdk0z/amd_ryzen_ai_max_395_evox2_run_the_new_gptoss120b/ | Thumper450x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvdk0z | false | null | t3_1mvdk0z | /r/LocalLLaMA/comments/1mvdk0z/amd_ryzen_ai_max_395_evox2_run_the_new_gptoss120b/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?width=108&crop=smart&auto=webp&s=569ee38687589e1134e6e83d5b36da59d9f13822', 'width': 108}, {'height': 120, 'url': '... |
Not a model, but Open Source Memory framework claims to beat Mem0 on public benchmarks | 8 | Seems interesting - [https://github.com/prem-research/cortex](https://github.com/prem-research/cortex) | 2025-08-20T11:56:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mvcpxn/not_a_model_but_open_source_memory_framework/ | mr_jaypee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvcpxn | false | null | t3_1mvcpxn | /r/LocalLLaMA/comments/1mvcpxn/not_a_model_but_open_source_memory_framework/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'VBJhQoxocE3ZEP93aKsvXLZyCT1TUaPnxE16rrdtZo4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VBJhQoxocE3ZEP93aKsvXLZyCT1TUaPnxE16rrdtZo4.png?width=108&crop=smart&auto=webp&s=baa5eef10d72df35db78efb316664163f8f1570f', 'width': 108}, {'height': 108, 'url': 'h... |
During one of my internships I've built a fully local agent based off of Microsoft's Phi-4 model family with dynamic RAG, image + chart understanding, and other tools like arXiv/web search. Dropping a demo and the source if anybody is interested! | 13 | Source code, feel free to check out and contribute:
[https://github.com/yagizdas/phi-delta/](https://github.com/yagizdas/phi-delta/)
Currently planning to extend this project into a platform for local agents with easily switchable models.
I've used Langchain, vLLM for running the vision model, llama.cpp for the main ... | 2025-08-20T11:26:08 | https://v.redd.it/eta2ncztr5kf1 | yagellaaether | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mvc3pw | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/eta2ncztr5kf1/DASHPlaylist.mpd?a=1758281183%2CNTllYTVmNDFhNzI1MDdmMDdmODc0MzRlOTAwYmQxOWE4YjczMzI5MTkzZWI4NDYxODNkOTIxNzBjYmU2NDc1ZA%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/eta2ncztr5kf1/DASH_1080.mp4?source=fallback', '... | t3_1mvc3pw | /r/LocalLLaMA/comments/1mvc3pw/during_one_of_my_internships_ive_built_a_fully/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'bGt2Z2ljenRyNWtmMeGmsqcQQcADxEMAVnD95sMGYFYlw2zLIMFBil46ZKRE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bGt2Z2ljenRyNWtmMeGmsqcQQcADxEMAVnD95sMGYFYlw2zLIMFBil46ZKRE.png?width=108&crop=smart&format=pjpg&auto=webp&s=1abc13ed2c301b68ee1e878a61b3a15ff830a... | |
Qwen 30B Instruct vs GPT-OSS 20B for real life coding | 56 | Hi there,
Would like some opinions besides benchmarks for those 2 models (or maybe an additional one) from people who use them for production applications. Web (PHP), iOS (Swift). As I'm GPU poor and have 1x3090, these are the best local options for me now.
Both models suck with the whole codebases (qwen cli, aider), so I... | 2025-08-20T11:20:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mvbzvh/qwen_30b_instruct_vs_gptoss_20b_for_real_life/ | Mobile_Ice1759 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mvbzvh | false | null | t3_1mvbzvh | /r/LocalLLaMA/comments/1mvbzvh/qwen_30b_instruct_vs_gptoss_20b_for_real_life/ | false | false | self | 56 | null |